
Sandra Harding: “Becoming an Accidental Ontologist”

Sandra Harding: “Becoming an Accidental Ontologist: Overcoming Logical Positivism’s Antipathy to Metaphysics.” Seminar organized by Global Epistemologies and Ontologies (GEOS), February 11, 2021, 17:00–18:30 (CET).

Check future seminars here: https://www.geos-project.org/

The necessary untameability of terms such as “the Anthropocene”: epistemological challenges and relational ontology (Opinião Filosófica)

Opinião Filosófica Special Issue: https://doi.org/10.36592/opiniaofilosofica.v11.1009


The necessary untameability of terms such as “the Anthropocene”: epistemological challenges and relational ontology


Renzo Taddei[1]
Davide Scarso[2]
Nuno Pereira Castanheira[3]


Abstract
In this interview, conducted via e-mail by Davide Scarso and Nuno Pereira Castanheira between November and December 2020, Professor Renzo Taddei (Unifesp) discusses the meaning of the term Anthropocene and its implications, drawing on the theoretical contributions of Deborah Danowski, Eduardo Viveiros de Castro, Donna Haraway, Isabelle Stengers, and Bruno Latour, among others. The interviewee emphasizes the need to avoid reducing the Anthropocene and similar terms to scientific concepts, thus preserving their ability to induce new perspectives and existential transformation, and resisting the temptation of objectifying domination of a world that is more complex and messier than classical epistemology would like to acknowledge.

Keywords: Ecology. Sustainability. Ontology. Epistemology. Politics


1)  What exactly is the Anthropocene: a scientific concept, a political proposition, an alarm going off?

This question is the subject of wide and heated debate. One thing that seems clear, however, is that there is no room for the adverb “exactly” in the many ways the Anthropocene is conceptualized. In a sense, the context in which the question is posed defines its potential answers. Suggesting a name that points to the symptoms of the problem is different from trying to circumscribe its causes, and neither is equivalent to the attempt to assign responsibility. The problem is that the Anthropocene can be read as any of these things, and this causes misunderstandings. It is in this context that arguments arise in defense of the terms Capitalocene or Plantationocene[4], among others, as more appropriate alternatives. The anthropos of the Anthropocene suggests humanity taken as a whole, without attending to the amount of environmental injustice and racism involved in shaping the present context.

All these names have their uses, but they must be used with care. As Timothy Morton[5] and, in collaboration, Deborah Danowski and Eduardo Viveiros de Castro[6] have shown, each in their own way, we are not able to grasp the problem in its totality. It is an important milestone in the history of social and philosophical thought that there is effectively some consensus that the problem is larger and more complex than our conceptual systems and our categories of thought. What remains for us is to make productive use, by way of bricolage, of the imperfect conceptual tools we possess. As Donna Haraway has stated repeatedly throughout her career, the real world is more complex and messy than we tend to acknowledge. All scientific theories are wireframe models, and this obviously includes those of the social sciences. It is no different with respect to the Anthropocene.

At bottom, the anxious search for the “correct” term is a symptom of how thoroughly our minds are colonized by positivist ideas about reality. In general, we tend to fall very quickly into the trap of feeling that, once we have a name for something, we understand what it is. As a rule, the opposite is the case: names are associated with forms of semiotic regimentation of the world; they are part of our efforts to domesticate reality, to reduce it to our expectations about it. This is especially the case with “taxonomic” names such as the Anthropocene: they are totalizing frames that direct our attention to certain dimensions of the world, produced by the hegemonic ideas of the place and time in which they are in vogue. There are many ways to disarm this scheme; one is to point to the fact that thinking of a world made of “objects” or even “phenomena” is at once cause and effect of the fact that the sciences generally seek unitary causes for specific effects in the world. This works for Newtonian physics, but it does not work for what we call ecosystems, for example. The very habit of creating things like the term Anthropocene prevents us from productively addressing the problem the term tries to describe.

The term is, in this way, an attempt at objectification; what it ends up objectifying are some of our fears and anxieties. Given that we have much to fear, and that we fear in very different ways, it is no surprise that there is no consensus about what the Anthropocene is.

If a name is necessary, we need one that does something other than reduce our cognitive anxiety to manageable levels. That, let us recall, is how Latour defined the production of “truth” within the sciences[7]. In other words, what I am saying is that the Anthropocene, or whatever other term we use in its place, must not be a scientific concept if it is to be useful in any way. We need a term that destabilizes our conceptual schemes and induces us toward new perspectives and the transformation of our modes of existence. A concept of this nature must necessarily be untameable. It must therefore resist the very defining impulse of cognition. Etymologically, to define is to delimit, to set limits; it is thus a form of domestication. An untameable concept will necessarily be uncomfortable; it will be perceived as “confusion.”

In my perception, this is one of the dimensions of the concept of the Chthulucene, proposed by Donna Haraway[8]. It does not tell us what is supposedly happening to the world; rather, it proposes, simultaneously and in overlapping fashion, new ways of understanding the relations between beings and the world-constituting power of those relations, in which the human, and thought itself, are the fruit of sympoietic processes. This perspective makes it impossible to adopt, even tacitly and out of habit, the idea of the human inherited from the European Enlightenment and from liberalism as the defining element of the condition we live in the Anthropocene, and it disarticulates the speciesism embedded in such perspectives.

Another fundamental dimension associated with the concept of the Chthulucene is its rejection of totalizing metaphysics, in which abstract ideas claim to be universal and therefore to have no contextual anchoring. Haraway suggests that we need to shift our perspective on what matters, toward what she calls sensible materialism in sympoietic contexts: the capacity to perceive the relations that constitute life, in local contexts, and to act responsibly upon those relations. Every form of knowledge is partial, fragmented, and bears birthmarks. When knowledge presents itself without explicit recognition of these things, one of two alternatives is under way: either those involved recognize and accept this inescapable contextuality of knowledge, and it is no longer an issue; or the knowledge remains part of the machinery of colonialism.

The idea of grounding (aterramento), presented in Latour’s latest book[9], converges to a great extent with Haraway’s positions. In terms of philosophical traditions, in my non-specialist perception both seem to align with North American pragmatism, even if they rarely make reference to it.

2)  Often, when the most critical environmental problems of the present are discussed, but also other urgent contemporary issues, the lack of unanimity and consensus is lamented. In light of your research and reflection, what would be an “adequate” response to the many difficult questions posed by the Anthropocene?

We live in complex times, and the development of the conceptual tools available for dealing with what lies ahead of us continues at an accelerated pace, but not exactly in the direction that the twentieth century’s ideologies of scientific progress assumed to be natural. It does not seem to me that we are getting “closer” to something we could call a “solution,” even a philosophical one. That is no longer what is at stake. What the authors engaged in the debates on the Anthropocene are suggesting is that this ideal of progress has collapsed philosophically, even if it remains convenient for capitalism. Most of academia continues to work within this bankrupt paradigm, whether out of inertia or because it effectively acts to supply resources to capitalism.

What has happened is that the notions that the mind has unmediated access to reality, and that explanatory ideas about the world seek an underlying universal order of which things and contexts are only imperfect reflections (a Platonic order, therefore), have been under attack since at least Nietzsche. The most important authors in the Anthropocene debate are heirs of a twentieth-century perspectivist current that had Nietzsche in a central position, but that also included Whitehead and James, and that was later linked mainly to Deleuze and Foucault. This explains why the critique of philosophies of transcendence, and the attention given to relations of immanence, are a fundamental part of the debate on the Anthropocene. It is a fundamental contribution of Eduardo Viveiros de Castro to have shown the world that what Deleuze understood as immanence had deep relations with Amazonian Indigenous thought[10], and he was working on this long before the question of the Anthropocene imposed itself on philosophy and the social sciences. When the Anthropocene became an unavoidable theme, the question of Indigenous ways of life gained salience not only because they present themselves as a real, empirical way of living relationally, but also because Indigenous peoples have a zero carbon footprint and promote biodiversity. This last item works as a bridge between the more properly philosophical debates and the ecological ones.

All of this seems important to me for talking about what unanimity and consensus are, and what expectations surround them. The desire for perfect communication is the twin sibling of the desire for perfect naming, mentioned in the previous answer. In both cases, we are dealing with the manifestation of a conception of mind that floats in a vacuum, disconnected from the material and processual bases that make it exist. In a way, this conception underlies debates about dissent whenever dissent is understood as an epistemological problem. When that happens, the intersubjectivity understood as necessary to the process of consensus-building is seen as tied to concepts and ideas, and diagnoses of the reasons for dissent quickly fall into the common graves of “lack of education” or “mythical forms of thought.”

Despite the differences among the authors associated with the debate on the Anthropocene (Haraway, Latour, Viveiros de Castro, Stengers, among many others), one thing they all have in common is the rejection of an approach that reduces the problem to an epistemological question, in favor of a perspective that gives centrality to the more properly ontological dimension. For this reason, very fruitful exchanges have begun to occur between philosophy and anthropology, as can be seen in the work not only of Viveiros de Castro, but also of Tim Ingold, Elizabeth Povinelli, Anna Tsing, and many others.

The point here is that, once ontological questions are taken into account within relational and perspectivist philosophies, communication becomes something else. The most fully developed example of theorization about this is the theory of Amerindian perspectivism[11], developed by Viveiros de Castro and Tania Stolze Lima. This theory postulates that what beings perceive in the world is defined by the kind of bodies they have, within dangerous interspecific relations (of predation, for example), and against a cosmological background in which a large share of beings have consciousness and intentionality equivalent to those of humans. For some peoples, for example, the jaguar sees the human as a wild pig and sees blood as cauim (manioc beer), while the wild pig sees the human as a jaguar. The crucial point here is that neither of these visions is ontologically superior to the other. This means that human perception is no more “correct” than the jaguar’s; it is simply produced by a human body, while the jaguar’s is produced by a jaguar’s body. There is no absolute perspective, because there is no absolute body.

How, then, do jaguar and human communicate? The question is interesting from the outset, because among Westerners the jaguar is held to be irrational and devoid of language, and communication is understood to be impossible. In Indigenous worlds, it generally falls to the shaman, through shamanic technologies (which in Amazonia usually involve the use of substances from forest plants), to leave his human body and enter into contact with the spirit of the jaguar, or of the beings in some way associated with jaguars. But that is only part of the question; the relation between hunter and prey, more ordinary than the shamanic context, is frequently described as a relation of seduction, as a form of choreography between bodies.

Some authors in the debate on the Anthropocene have explored the philosophical implications of a new frontier of microbiology that presents beings and their bodies through other lenses. In her latest book, Haraway discusses the concept of the holobiont, an entanglement of beings in symbiotic relations that make the lives of those involved possible. The important philosophical question arising from holobionts is that, instead of speaking of beings that are in symbiosis, it seems more appropriate to say that beings emerge from relations. Symbiosis is prior to beings, so to speak. This may sound very technical, and it becomes clearer if we mention that the human body is understood as a holobiont. The body exists in a relation of symbiosis with an immense number of bacteria and other beings, such as fungi and viruses, and it is well documented that the bacteria inhabiting the human intestinal tract have an effect on the functioning of the nervous system, inducing certain moods and desires in the person. Stretching the argument to the limit of provocation, one could say that what we call consciousness is not produced in the cells that carry “our” DNA, but is an emergent phenomenon of the symbiotic association between the systems of the human body and the other beings that make up the holobiont.

If this is the case, communication does not take place between minds and immaterial semiotic systems, but between beings imbued with their own materiality and with the materiality of the contexts in which they live. More than thinking in aligned ways, the question becomes one of finding forms of relation that, as Haraway says, allow us to live and die well in sympoiesis with other beings. As I have put it elsewhere[12], we need to be capable of making alliances with those who do not think as we do, with those who do not think as humans do, and with those who do not think.

The problem is immensely larger than that of consensus, and at the same time more realistic in terms of the possibilities of materializing solutions. Two ways of approaching the question have been developed among anthropologists working in Amazonia: Viveiros de Castro proposed the theory of controlled equivocation[13], and Mauro Almeida that of pragmatic encounters[14]. Both refer to communication between beings that exist in distinct worlds, that is, whose existences are composed according to distinct presuppositions about what exists and what it means to exist. One immediately thinks of the context of contact between Indigenous and non-Indigenous peoples, but the scheme can be used to think about any relation of difference. The idea that communication necessarily presupposes epistemological alignment, in this perspective, implies processes of violence against bodies, cultures, and worlds.

We need only look around us to see that ordinary life does not presuppose epistemological alignment. A few days ago I saw a group of ants cooperating to carry a breadcrumb much larger than any of their bodies, and they were climbing a vertical wall. I was astonished by the capacity for cooperation among beings between which there is no epistemological activity. Among beings that do think, much of what exists in the world is the fruit of productive misunderstandings: one person says one thing, another understands something different, and together they transform their reality, without even being able to evaluate the result of their actions identically, yet both may still feel satisfied with the process. It is as if they were dancing: no two people dance the same way, even though their bodies are connected, nor do they understand what they are doing in the same way during the performance of the dance, and for all that it is perfectly possible for the result to be a feeling of satisfaction and the aesthetic-affective enjoyment of the situation.

Haraway has an even more provocative way of putting the question: we must build kinship relations with other beings, animate and inanimate, if we really want to make progress in addressing environmental problems.

In the context of the conflicts associated with the Anthropocene, two equivalent examples of pragmatic agreements are the demonstrations against the exploitation of bituminous shale in Canada, in 2013, and the protests against the pipeline that would cross Sioux territory in the states of South Dakota and North Dakota, in the United States, in 2016. In both cases, Indigenous people could be seen marching alongside non-Indigenous university students, activists, and television celebrities. While the Indigenous demonstrators cited the pollution of their sacred soil as the motivation for the protest, activists and celebrities shouted the slogan that we must stop emitting carbon. The fact that a Hollywood star is incapable of understanding what the sacred soil of the Sioux is did not prevent him from marching alongside Sioux elders who have nothing like the “carbon molecule” in their ontologies. This is a pedagogical example of the kind of pragmatic agreement we will need in the future.

We need to find ways of “marching” alongside Earth-system processes we do not understand, as well as alongside rocks, rivers, plants, animals, and other human beings. Of course, this does not mean giving up the use of language; it only means that we must stop attributing transcendent metaphysical powers to it (including the power to resolve all human conflicts) and understand that language is as material and relational as the other dimensions of existence.

3)  In conversations about the Anthropocene and the planetary environmental crisis, the words and acts of resistance of Indigenous communities from different places are frequently evoked. In your view, what contribution do these experiences and interventions, apparently so distant from the more technological, not to say technocratic, face of the official discourses on the Anthropocene, offer to the debate?

They are innumerable, and the ongoing transformations in the role and place of Indigenous intellectuals and leaders in Western or Westernized societies may well enable us to perceive nuances of Indigenous modes of existence that are not valued today. I am referring, in the case of Brazil, to the fact that, within a period of two years, Sonia Guajajara was a candidate for the vice-presidency of the republic, Raoni was nominated for the Nobel Peace Prize, Ailton Krenak was awarded the Juca Pato prize as intellectual of the year, and Davi Kopenawa won the Right Livelihood Award and was elected to the Brazilian Academy of Sciences. And all this in the two darkest and most retrograde years of the country’s recent political history.

Part of the answer has already been developed in the previous questions. What could be added is the fact that, as Latour argues in his latest book, one cannot sit watching events unfold in the hope that, in the end, everything will turn out well thanks to some mysterious transcendent order. The present moment is one of confrontation between those who align themselves with, and live according to, the agendas of colonial exploitation of the planet, even if they do not see themselves that way, and those who struggle for the recomposition of modes of existence in alliance with ecosystems and other beings. The official discourse on the Anthropocene is in transformation precisely because of the activism of Indigenous leaders, such as Davi Kopenawa[15] and Ailton Krenak[16], and of the thinkers I have been mentioning in my answers, vis-à-vis the more conservative circles of science and society. And I use the term activism consciously here: it is not a matter of writing books and waiting for the world to transform itself (or not) as a result. The dispute takes place inch by inch, meeting by meeting, and the end of the story is not settled. This attitude aligns more closely with the way Indigenous peoples understand reality than with modern Western thought.

One last thing worth adding here concerns the relation between Indigenous ways of life and sustainability. It is possible that all the argumentation I have presented so far will find little acceptance and repercussion among the scientists who define what the question calls “official discourses.” It so happens, however, that research in biodiversity and ecology has shown that in Indigenous territories where populations live in traditional ways, the effectiveness of biodiversity conservation is equal to, and sometimes greater than, that of the most misanthropic preservationist measures, such as the so-called integral protection areas. This has drawn the attention of biologists and ecologists, thanks to the work of anthropologists such as Manuela Carneiro da Cunha, Mauro Almeida, Eduardo Brondízio, and others[17]. Indigenous peoples, then, are good at nature conservation, even though they do not have a word for nature in their vocabularies. It is centrally important for Western efforts at biodiversity conservation to understand how this comes about among Indigenous peoples and other traditional populations. I wrote about this recently[18]: the key to understanding this phenomenon lies in the relation between the concept of care and the relational ontology inhabited by Indigenous peoples. To put it directly, in a context in which the important things of the world are persons, that is, they possess intention and agency regardless of the shape and nature of their bodies, relations between beings become social and political and, therefore, dangerous and complicated. Freedom of action is much more limited in a world in which trees, rivers, and animals are people with the power of political action. The net result of this is what we call the protection of biodiversity.

In other words, Indigenous people do not protect nature because they like it or live within it; they protect the forest precisely because nature, as the European Enlightenment shaped the concept, simply does not exist[19]. From all this it follows that care for life is a precipitate of the ontological architecture of Indigenous worlds, demanding neither voluntarism nor guilt. In Western ways of life, care is understood as will, as moral obligation[20], in a context in which infrastructures and the political game can very easily disarticulate such voluntarism. This explains the disconnect between knowledge and care in modern Western ways of life. If we take Amazonia as an example, it is very easy to see that the Amazonian biome has never been studied as much as in the last twenty years; at the same time, this has done nothing to halt the devastation of the forest. The relevant message here, and it is quite a forceful one, is that instead of blaming the world of politics for preventing scientific knowledge from turning into effective care for the environment, we need to transform the ontological bases on which knowledge about the world, and action in the world, occur, so that, in the manner of Indigenous worlds, to know is, at the same time and immediately, to care.

4)  Your work frequently touches on questions associated with interdisciplinarity, a recurring theme in many of the initiatives related to the Anthropocene and climate change. How would you summarize your experience and position in this regard?

With my colleague Sophie Haines[21] I developed an analysis of interdisciplinary relations in academia, based on the points I made above about consensus, language, and communication. The universe of interdisciplinary cooperation is permeated by conflicts of all kinds, but the most prominent results from the idea that collaboration is only possible once concepts are aligned. The amount of time, funding, and friendship wasted in the vain attempt to colonize one another’s minds is immense. That is why the symbolic walls of university departments are so thick.

One way of thinking about the problem, still using the conceptual framework of the philosophy of science, is to consider that, in epistemological terms, the world to which intellectual activity refers can be divided into three fields: that of variables, the focus of attention and investment of scientific activity, which defines the very contours of disciplines; that of axioms, assumptions about reality that are not there to be tested but to instrumentalize the work with variables; and what Pierre Bourdieu[22] called doxa, the phenomenal background of reality that is taken as unproblematic (and therefore does not have the privilege of becoming a research variable), and which is sometimes not even recognized as existing. The problem in interdisciplinary relations is that what is a variable for one discipline is part of the doxa for another, which leads academics to think that what colleagues from very different disciplines do is useless and a waste of time. I experienced this firsthand at the beginning of my doctoral fieldwork, when I told meteorologist colleagues that I was going to research the cultural dimension of climate. One of them told me that it seemed obvious that cultures react to climates, and that this therefore would not justify research that could be called scientific.

Today, more than two decades later, the major international funding agencies, such as the National Science Foundation and the Belmont Forum, require the participation of social scientists in research on environmental issues. Things have moved forward. But much remains to be done.

5)  In what may be seen as a “revisionist” gesture, Bruno Latour recently stated that the sharp decline in public trust in the “hard” sciences and in scientists may be somehow related to decades of critical work produced by the social sciences. Should we, social science researchers (and, more generally, intellectuals), retreat into a form of “strategic essentialism”? Or, to put it another way, what does a critical stance in the debate on the Anthropocene mean today?

In my perception, the idea of strategic essentialism is the product of essentialist ways of thinking, as if we had a rigid, correct answer that needed to be hidden. In pragmatic terms things may look that way, but conceptually the question is another. The idea that we are hiding the “correct” answer goes against the understanding of reality as relationally constituted. It is as if, in the arena of confrontation, reality were not being shaped right there, but knowledge about reality were something rigid, presented in the arena as a weapon to end the conversation. This is an old argument of Latour’s; it was already present in We Have Never Been Modern[23].

There is another important point to be mentioned: Latour is simply a victim of his own success in gaining a degree of attention that extends, in an unprecedented way, beyond academia. He was not the first to reveal that the mechanisms of production of Western science do not match the image that the hegemonic discourses of science present of themselves. This was already in Wittgenstein. Paul Feyerabend built his entire career on this question. Thomas Kuhn’s work on paradigms and scientific revolutions[24] had great repercussions in the academic world, and is one of the most devastating blows dealt to positivism. Lyotard[25] inaugurated what became known as the postmodern moment with a book attacking the positivist ideals of modernity. More recently, the idea that political problems can be tied to scientific ones, such that solving the latter solves the former, was attacked again by Beck’s theory of the risk society[26] and by Funtowicz and Ravetz’s post-normal science[27].

What distinguishes Latour’s work is that he actively sought interlocution outside academia. He wrote plays, organized several exhibitions, interacted creatively with artists, ran experiments around his idea of a parliament of things that brought together intellectuals, activists, and artists, and he recurrently uses a writing style that seeks to be intelligible to non-academic audiences. He began his teaching career in France at an engineering school, and has interacted intensely with the worlds of architecture and design, especially in the field of computing. Even though for many people his ideas are not exactly easy, there is no doubt that all this effort has borne fruit. And it has put him in the crosshairs of conservatives, naturally.

It so happens that, once one adopts a relational, compositionist ontological approach, as he himself called it, it does not make much sense to think that debates are won on the basis of the absolute truth value of statements. It makes much more sense to pay attention to the strategies and pragmatic effects of each debate than to defend an idea tooth and nail, regardless of who the interlocutors are. If everything is political, as feminism, the social studies of science and technology, the philosophy of science, and so many other fields of thought show us, it is politically irresponsible to adopt a positivist attitude toward the world, all the more so at such a difficult moment of transformation.

But that is not all. There are arenas of debate in which the context and the logic of the semiotic organization of the interaction can disfigure an idea in advance. Lyotard addressed this question in his book Le différend[28]; the group of anthropologists of language and semiotics linked to Michael Silverstein’s work on metapragmatics[29] has also worked extensively on the subject. At each moment of political struggle, advances come through carefully constructed alliances and movements, as a function of the path being followed, not of some transcendent metaphysical logic. That is how one moves forward: honoring alliances and walking slowly, certain that the walking itself transforms one’s perspectives.

Let me give a more concrete example: a good deal of what will happen to the environment twenty years from now is being decided right now in university classrooms. The trouble is that the people sitting in ecology, biology and related courses have less power in this process of shaping the future than those sitting in engineering, law, economics and agronomy courses. If we want the system of extensive monoculture agriculture based on pesticides to cease to exist, it is not enough for this debate to take place in courses linked to ecology and the humanities. It has to take place in agronomy courses. The same goes for mining and energy issues and the engineering courses, for environmental rights and the rights of traditional populations and the law courses, and for the idea of economic growth and the economics courses. That said, if I walk into an engineering course with Haraway's ideas about sympoiesis and sensuous materialism, at the very least I will not be taken seriously. This is where alliances and moves have to be strategic. Nothing is more important today than having this debate about the Anthropocene, as the authors I have mentioned here understand it, in the schools of engineering, economics, law, agronomy and others; but in order to do that, I need to build alliances within those communities. And these alliances, seen from afar and without an understanding of the strategic dimension of the move, may look like regression, or like strategic essentialism. An important difference here is that, in the case of strategic essentialism, there is no openness to actually listening to the other side of the conversation. In a relational, compositionist approach, alliances imply, at a minimum, mutual listening, and that has the power to transform the members of the alliance. It is this openness to life and to transformation, characteristic of relational ontologies, that is absent from the idea of strategic essentialism.

Coming back to Latour, what he seems to be trying to do in his last two books is to reorder the metapragmatic dimension of international debates, that is, to reorder the frames of reference people use to make sense of current problems. A great many people exist in a state of inertia with respect to the dominant systems and infrastructures, in how they consume or vote, for example, but are potentially (cosmo)politically aligned with what he calls "terranos" (the Earthbound). His goal is to shake these people out of their perceptual and affective inertia through a symbolic reordering of the elements that organize the debate. At the same time, Latour recognizes that it is not just a matter of ideas and rules of interaction: institutions and infrastructures are fundamental elements of the composition of worlds, and they need to be transformed. Hence the immense number of extra-academic activities to which Latour dedicates himself.

Perhaps even more controversial than this question of strategic essentialism is his recent insistence on the need to compose a common world. This defense of the composition of a common world is understood by many as a step back from Viveiros de Castro's ideas of the multiverse and multinaturalism. Perhaps it is, once again, a deceleration and a detour, made in order to build important alliances that demand such moves. We shall see. The debate is ongoing.

6) How do you see the near future of Anthropocene studies and, more generally, of ecological questions in the Portuguese-speaking world and, in particular, in Brazil? Are there new projects on the horizon you would like to mention? What forms of intervention are possible in these debates, not only within academia but also at broader political levels?

There is a lot going on; there is no doubt that we are in a moment of great transformations. For that very reason, it is very hard to make predictions.

The condition of the environment in Brazil under the Bolsonaro government is calamitous, and there is no sign that things will improve in the two years remaining before the next elections. The country is adrift. It is impressive, however, that the country manages to remain adrift without everything ending in anomie. This means that there is something beyond the structures of government and the state. We must keep fighting, with all our strength, to remove Bolsonaro from power, and at the same time we must abandon the cult of the figure of the president that exists in Brazil. Brazil's current situation is paradoxical because, at the same time as the suffering brought by the pandemic is compounded by the suffering brought by this government, the vitality of civil society, of social movements and of environmental activism is immense.

Here I think we can draw a parallel with one of the dimensions of the question of the Anthropocene: it caught the Western world by surprise, which means there are things right in front of us that we are incapable of perceiving for a long time. If we date the beginning of the Anthropocene to the nuclear detonations of the 1940s, more than 70 years have passed and there is still no institutionalized scientific recognition of the fact. It makes no sense to say that we already "knew" of its existence because Arrhenius had spoken of it in 1896. A voice lost in the academic halls cannot be taken for a collective perception of reality. Nor can the time it took to recognize the problem be reduced, anachronistically, to denialism. The fact is that the Earth system sciences show us that there are countless patterns of variation in the functioning of the planet that we do not know, and that affect us directly. Until the 1920s, science did not know of the El Niño phenomenon, which affects the climate of the entire planet. There are certainly many El Niños we do not yet know, and some that we will never be able to know with the cognitive apparatus we possess. The same goes for social phenomena. There are transformations and patterns in the functioning of collectivities that we do not know, but to which we are subject. Unforeseen things happen all the time in the social world. In Brazil, for example, no one foresaw the demonstrations of 2013, nor the degree of public recognition and projection achieved by the country's Indigenous leaders in 2020. Nor that this would be the year in which, for the first time in Brazilian history, there would be more Black and brown candidates than white ones in the municipal elections. I honestly thought I would not live to see that.

That is why I find it unproductive to reduce the current Brazilian context to Bolsonaro. That is, in a way, to keep worshipping the state, and to reproduce an anthropocentric worldview. There are important things, including in the dimensions traditionally called social, that happen neither at the scale of individuals nor at the scale of states. This is precisely one of the dimensions of the Anthropocene. Recognizing it might lessen the bitterness and negativity with which Brazil's progressive intelligentsia has been observing reality.

As for what can be seen on the horizon, the picture is confusing, but I like to keep my attention on the facts suggesting that positive changes are under way. Consider: the UN has a Secretary-General genuinely committed to the environmental agenda, and it is signaling toward including environmental indicators in the indices used to assess countries' situations, such as the HDI. Pope Francis is a left-wing environmentalist. Trump lost the elections in the US, and that may have a cascade effect on politics in the rest of the world. The pandemic, despite the unthinkable dimension of suffering it brought, forced planetary governance mechanisms to redesign themselves and improve their processes. It also showed that scientific collaboration can occur without being induced, and deformed, by capitalist competition. The pandemic likewise made the need to fight for the commons quite evident, including among more conservative groups. Conservative elites abandoned, for example, the idea of privatizing Brazil's public health system, the largest in the world.

In Brazil, while science and the university are strangled by the current government and resist bravely, social movements, the peripheries, street art and the solidarity initiatives associated with the pandemic are showing impressive energy. The agroecology movement has gained a great deal of strength in the country as well. I think that, in the short term, more progress will come from these areas than from academia. But important things are happening in the academic field, too. Closest to me is the experience of the interdisciplinary bachelor's programs, in which there is a genuine effort to overcome disciplinary barriers in addressing important questions. I teach in an interdisciplinary bachelor's program in marine science and technology, where students are prepared to deal with environmental questions in their physical and ecological dimensions, but also in their philosophical and sociological ones. This has been a very positive experience, and one that helps me stay hopeful about the future.

Received: 24/12/2020.
Approved: 26/12/2020.
Published: 26/12/2020.


[1] Professor of Anthropology at the Universidade Federal de São Paulo (Unifesp). Orcid ID: https://orcid.org/0000-0002-9935-6183. E-mail: renzo.taddei@unifesp.br

[2] Professor in the Department of Applied Social Sciences of the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa (FCT-UNL). Orcid ID: https://orcid.org/0000-0003-1111-1286. E-mail: d.scarso@fct.unl.pt

[3] PNPD/CAPES Researcher and Collaborating Professor in the Graduate Program in Philosophy at the Pontifícia Universidade Católica do Rio Grande do Sul (PUCRS). Orcid ID: https://orcid.org/0000-0003-3295-9454. E-mail: npcastanheira@gmail.com

[4] Haraway, Donna. “Anthropocene, capitalocene, plantationocene, chthulucene: Making kin.” Environmental humanities 6.1 (2015): 159-165.

[5] Morton, Timothy. Hyperobjects: Philosophy and Ecology after the End of the World. U of Minnesota Press, 2013.

[6] Danowski, Déborah, and Eduardo Viveiros de Castro. Há mundo por vir? Ensaio sobre os medos e os fins. Cultura e Barbárie Editora, 2014.

[7] Latour, Bruno, and Steve Woolgar. A vida de laboratório: a produção dos fatos científicos. Rio de Janeiro: Relume Dumará, 1997.

[8] Haraway, D. 2016. Staying with the Trouble: Making Kin in the Chthulucene. Durham: Duke University Press.

[9] Latour, Bruno. Onde aterrar? Rio de Janeiro: Bazar do Tempo, 2020.

[10] Viveiros de Castro, Eduardo. Metafísicas canibais. São Paulo: Cosac Naify, 2015.

[11] Viveiros de Castro, Eduardo. “Perspectivismo e multi-naturalismo na América indígena.” In: A inconstância da alma selvagem e outros ensaios de antropologia. São Paulo: Cosac Naify, 2002: 345-399.

[12] Taddei, Renzo. “No que está por vir, seremos todos filósofos-engenheiros-dançarinos ou não seremos nada.” Moringa 10.2 (2019): 65-90.

[13] Viveiros de Castro, Eduardo. 2004. “Perspectival Anthropology and the Method of Controlled Equivocation.” Tipití: Journal of the Society for the Anthropology of Lowland South America 2 (1): 1.

[14] Almeida, Mauro William Barbosa. Caipora e outros conflitos ontológicos. Revista de Antropologia da UFSCar, v. 5, n. 1, p. 7-28, 2013.

[15] Kopenawa, Davi e Bruce Albert, A Queda do Céu: Palavras de um Xamã Yanomami. São Paulo: Companhia das Letras, 2015.

[16] Krenak, Ailton. Ideias para adiar o fim do mundo. Editora Companhia das Letras, 2019; Krenak, Ailton. O amanhã não está à venda. Companhia das Letras, 2020.

[17] IPBES. 2019. Summary for policymakers of the global assessment report on biodiversity and ecosystem services of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. Edited by S. Díaz, J. Settele, E. S. Brondízio E.S., et al. Bonn, Germany: IPBES secretariat.

[18] Taddei, Renzo. “Kopenawa and the Environmental Sciences in the Amazon.” In Philosophy on Fieldwork: Critical Introductions to Theory and Analysis in Anthropological Practice, edited by Nils Ole Bubandt and Thomas Schwarz Wentzer. London: Routledge, in press.

[19] For a surprising analysis of the philosophical importance of Amerindian thought, especially that of Davi Kopenawa, see Valentin, M.A. Extramundanidade e Sobrenatureza. Florianópolis: Cultura e Barbárie, 2018.

[20] Puig de la Bellacasa, M. 2017. Matters of Care: Speculative Ethics in More Than Human Worlds. Minneapolis: University of Minnesota Press.

[21] Taddei, Renzo, and Sophie Haines. “Quando climatologistas encontram cientistas sociais: especulações etnográficas sobre equívocos interdisciplinares.” Sociologias 21.51 (2019).

[22] Bourdieu, Pierre. Esquisse d’une théorie de la pratique. Précédé de trois études d’ethnologie kabyle. Le Seuil, 2018.

[23] Latour, Bruno. Jamais fomos modernos. Editora 34, 1994.

[24] Kuhn, Thomas S. A estrutura das revoluções científicas. Editora Perspectiva, 2020.

[25] Lyotard, Jean-François. A condição pós-moderna. J. Olympio, 1998.

[26] Beck, Ulrich. Sociedade de risco: rumo a uma outra modernidade. Editora 34, 2011.

[27] Funtowicz, Silvio, and Jerry Ravetz. “Ciência pós-normal e comunidades ampliadas de pares face aos desafios ambientais.” História, ciências, saúde-Manguinhos 4.2 (1997): 219-230.

[28] Lyotard, Jean-François, Le différend, Paris, Éd. de Minuit, 1983.

[29] Silverstein, Michael. “Metapragmatic discourse and metapragmatic function” In Lucy, John ed. Reflexive language: Reported speech and metapragmatics. Cambridge University Press, 1993.

New Research Shocks Scientists: Human Emotion Physically Shapes Reality! (IUV)

SUNDAY, 12 MARCH 2017

published on Life Coach Code, on February 26, 2017

Three different studies, conducted by different teams of scientists, proved something really extraordinary. But when new research connected these three discoveries, something shocking was realized: something hiding in plain sight.

Human emotion literally shapes the world around us. Not just our perception of the world, but reality itself.


In the first experiment, human DNA, isolated in a sealed container, was placed near a test subject. Scientists gave the donor emotional stimulus and fascinatingly enough, the emotions affected their DNA in the other room.

In the presence of negative emotions the DNA tightened. In the presence of positive emotions the coils of the DNA relaxed.

The scientists concluded that “Human emotion produces effects which defy conventional laws of physics.”


In the second, similar but unrelated experiment, a different group of scientists extracted leukocytes (white blood cells) from donors and placed them into chambers so they could measure electrical changes.

In this experiment, the donor was placed in one room and subjected to “emotional stimulation” consisting of video clips, which generated different emotions in the donor.

The DNA was placed in a different room in the same building. Both the donor and his DNA were monitored and as the donor exhibited emotional peaks or valleys (measured by electrical responses), the DNA exhibited the IDENTICAL RESPONSES AT THE EXACT SAME TIME.


There was no lag time, no transmission time. The DNA peaks and valleys EXACTLY MATCHED the peaks and valleys of the donor in time.

The scientists wanted to see how far away they could separate the donor from his DNA and still get this effect. They stopped testing after they separated the DNA and the donor by 50 miles and STILL had the SAME result. No lag time; no transmission time.

The DNA and the donor had the same identical responses in time. The conclusion was that the donor and the DNA can communicate beyond space and time.

The third experiment proved something pretty shocking!

Scientists observed the effect of DNA on our physical world.

Light photons, which make up the world around us, were observed inside a vacuum. Their natural locations were completely random.

Human DNA was then inserted into the vacuum. Shockingly the photons were no longer acting random. They precisely followed the geometry of the DNA.


Scientists who were studying this, described the photons behaving “surprisingly and counter-intuitively”. They went on to say that “We are forced to accept the possibility of some new field of energy!”

They concluded that human DNA literally shapes the behavior of the light photons that make up the world around us!

So when new research was done, and all three of these scientific claims were connected together, scientists were shocked.

They came to a stunning realization: if our emotions affect our DNA, and our DNA shapes the world around us, then our emotions physically change the world around us.


And not just that, we are connected to our DNA beyond space and time.

We create our reality by choosing it with our feelings.

Science has already proven some pretty MINDBLOWING facts about The Universe we live in. All we have to do is connect the dots.

Sources:
– https://www.youtube.com/watch?v=pq1q58wTolk;
– Science Alert;
– Heart Math;
– Above Top Secret;
– http://www.bibliotecapleyades.net/mistic/esp_greggbraden_11.htm;

‘No doubt’ Iceland’s elves exist: anthropologist certain the creatures live alongside regular folks (South China Morning Post)

Construction sites have been moved so as not to disturb the elves, and fishermen have refused to put out to sea because of their warnings: here in Iceland, these creatures are a part of everyday life

PUBLISHED : Saturday, 14 May, 2016, 8:01am

UPDATED : Sunday, 15 May, 2016, 6:40pm

Since the beginning of time, elves have been the stuff of legend in Iceland, but locals here will earnestly tell you that elves appear regularly to those who know how to see them.



But honestly, do they really exist?

Anthropologist Magnus Skarphedinsson has spent decades collecting witness accounts, and he’s convinced the answer is yes.

He now passes on his knowledge to curious crowds as the headmaster of Reykjavik’s Elf School.

“There is no doubt that they exist!” exclaims the stout 60-year-old as he addresses his “students”, for the most part tourists fascinated by Icelanders’ belief in elves.

What exactly is an elf? A well-intentioned being, smaller than a person, who lives outdoors and normally does not talk. They are not to be confused with Iceland’s “hidden people”, who resemble humans and almost all of whom speak Icelandic.

To convince sceptics that this is not just a myth, Skarphedinsson relays two “witness accounts”, spinning the tales as an accomplished storyteller.

The first tells of a woman who knew a fisherman who was able to see elves who would also go out to sea to fish.

One morning in February 1921, he noticed they were not heading out to sea and he tried to convince the other fishermen not to go out either. But the boss would not let them stay on shore.

That day, there was an unusually violent storm in the North Atlantic, but the fishermen who had heeded his warning and stayed close to shore all returned home safe and sound.

Seven years later, in June 1928, the elves again did not put out to sea, which was confusing because there had never been a fierce storm at sea at that time of year. Forced to head out, the fishermen sailed waters that were calm but caught very few fish.

“The elves knew it,” the anthropologist claims.

The other “witness” is a woman in her eighties, who in 2002 ran into a young teen who claimed to know her. Asking him where they had met, he gave her an address where she had lived 53 years ago where her daughter claimed she had played with an invisible boy.


“But Mum, it’s Maggi!” exclaimed the daughter when her mother described the teen.

“He had aged five times slower than a human being,” said Skarphedinsson.

Surveys suggest about half of Icelanders believe in elves.

“Most people say they heard [about them] from their grandparents when they were children,” said Michael Herdon, a 29-year-old American tourist attending Elf School.

Iceland Magazine says ethnologists have noted it is rare for an Icelander to really truly believe in elves. But getting them to admit it is tricky.

“Most people tread lightly when entering into known elf territory,” the English-language publication wrote in September.

That’s also the case with construction projects.

It may prompt sniggers, but respect for the elves’ habitat is a consideration every time a construction project is started in Iceland’s magnificent countryside, which is covered with lava fields and barren, windswept lowlands.

Back in 1971, Skarphedinsson recalls how elves disrupted construction of a national highway from Reykjavik to the northeast. The project, he says, suffered repeated unusual technical difficulties because they didn’t want a big boulder that served as their home to be moved to make way for the new road.

“They made an agreement in the end that the elves would leave the stone for a week, and they would move the stone 15 metres. This is probably the only country in the world whose government officially talked with elves,” Skarphedinsson says.

But Iceland is not the only country that is home to elves, he says. It’s just that Icelanders are more receptive to accounts of their existence.

“The real reason is that the Enlightenment came very late to Iceland.

“In other countries, with western scientific arrogance [and] the denial of everything that they have not discovered themselves, they say that witnesses are subject to hallucinations.”

Words for snow revisited: Languages support efficient communication about the environment (Carnegie Mellon University)

13-APR-2016

CARNEGIE MELLON UNIVERSITY

 

The claim that Eskimo languages have many words for different types of snow is well known among the public, but it has been greatly exaggerated and is therefore often dismissed by scholars of language. However, a new study published in PLOS ONE supports the general idea behind the original claim.

Carnegie Mellon University and University of California, Berkeley researchers found that languages that use the same word for snow and ice tend to be spoken in warmer climates, reflecting a lower communicative need to talk about snow and ice.

“We wanted to broaden the investigation past Eskimo languages and look at how different languages carve up the world into words and meanings,” said Charles Kemp, associate professor of psychology in CMU’s Dietrich College of Humanities and Social Sciences.

For the study, Kemp and UC Berkeley's Terry Regier and Alexandra Carstensen analyzed the connection between local climates, patterns of language use, and words for snow and ice across nearly 300 languages. They drew on multiple sources of data, including library reference works, Twitter and large digital collections of linguistic and meteorological data.

The results revealed a connection between temperature and snow and ice terminology, suggesting that local environmental needs leave an imprint on languages. For example, English originated in a relatively cool climate and has distinct words for snow and ice. In contrast, the Hawaiian language is spoken in a warmer climate and uses the same word for snow and for ice. These cases support the claim that languages are adapted to the local communicative needs of their speakers — the same idea that lies behind the overstated claim about Eskimo words for snow. The study finds support for this idea across language families and geographic areas.
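The comparison the press release describes can be pictured quite simply: pair each language's snow/ice vocabulary with its local climate, and check whether "colexifying" languages (those using one word for both snow and ice) cluster in warmer places. The sketch below illustrates only the shape of that comparison; the language list, temperatures and colexification flags are invented for illustration and are not the study's actual dataset, which covered nearly 300 languages.

```python
# Toy illustration of the study's core comparison: do languages that use one
# word for both "snow" and "ice" (colexification) tend to come from warmer
# climates? All data below are invented for illustration only.

# (language, mean annual temperature in Celsius, colexifies snow and ice)
languages = [
    ("English",   9.0, False),
    ("Icelandic", 2.0, False),
    ("Finnish",   3.0, False),
    ("Russian",   1.0, False),
    ("Hawaiian", 24.0, True),
    ("Samoan",   26.0, True),
    ("Yoruba",   27.0, True),
    ("Malay",    26.5, True),
]

def mean(values):
    return sum(values) / len(values)

# Split the sample by vocabulary pattern and compare average climates.
colex_temps = [t for _, t, colex in languages if colex]
split_temps = [t for _, t, colex in languages if not colex]

print(f"mean temp, one-word (colexifying) languages: {mean(colex_temps):.1f} C")
print(f"mean temp, distinct-word languages: {mean(split_temps):.1f} C")
# In this toy data, as in the study's finding, colexifying languages
# sit in markedly warmer climates.
```

The published analysis went well beyond a two-group comparison, checking the pattern across language families and geographic areas; this sketch only shows the basic measurement being made.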

“These findings don’t resolve the debate about Eskimo words for snow, but we think our question reflects the spirit of the initial snow claims — that languages reflect the needs of their speakers,” said Carstensen, a psychology graduate student at UC Berkeley.

The researchers suggest that in the past, excessive focus on the specific example of Eskimo words for snow may have obscured the more general principle behind it.

Carstensen added, “Here, we deliberately asked a somewhat different question about a broader set of languages.”

The study also connects with previous work that explores how the sounds and structures of language are shaped in part by a need for efficiency in communication.

“We think our study reveals the same basic principle at work, modulated by local communicative need,” said Regier, professor of linguistics and cognitive science at UC Berkeley.

###

Read the full study at http://dx.plos.org/10.1371/journal.pone.0151138.

 

Feyerabend and the harmfulness of the ontological turn (Agent Swarm)

Posted on 

by Terence Blake

Feyerabend stands in opposition to the demand for a new construction that some thinkers have made after the supposed failure or historical obsolescence of deconstruction and of post-structuralism in general. On the contrary, he wholeheartedly endorses the continued necessity of deconstruction. Feyerabend also rejects the idea that we need an overarching system or a unified theoretical framework, arguing that in many cases a system or theoretical framework is just not necessary or even useful:

a theoretical framework may not be needed (do I need a theoretical framework to get along with my neighbor?). Even a domain that uses theories may not need a theoretical framework (in periods of revolution theories are not used as frameworks but are broken into pieces which are then arranged this way and that way until something interesting seems to arise) (Philosophy and Methodology of Military Intelligence, 13).

Further, not only is a unified framework often unnecessary, it is undesirable, as it can be a hindrance to our research and to the conduct of our lives:

“frameworks always put undue constraints on any interesting activity” (ibid, 13).

Feyerabend emphasises that our ideas must be sufficiently complex to fit in with, and to cope with, the complexity of our practices (11). More important than a new theoretical construction, which only serves “to confuse people instead of helping them”, are ideas that have the complexity and the fluidity that come from close connection with concrete practice and with its “fruitful imprecision” (11).

Lacking this connection, we get only school philosophies that “deceive people but do not help them”. They deceive people by replacing the concrete world with their own abstract construction

that gives some general and very mislead[ing] outlines but never descends to details.

The result is a simplistic set of slogans and stereotypes that

“is taken seriously only by people who have no original ideas and think that [such a school philosophy] might help them getting ideas”.

Applied to the ontological turn, this means that an ontological system is useless, a hindrance to thought and action, whereas an ontology which is not crystallised into a unified system and a closed set of fixed principles, but which limits itself to proposing an open set of rules of thumb and the free study of concrete cases, is both acceptable and desirable. The detour through ontology is both useless and harmful, according to Feyerabend, because a freer, more open, and less technical approach is possible.

Lecture: On Latour and Simondon’s Mode of Existence (Digital Milieu)

Posted by Yuk Hui – 2 Feb 2015

On Latour and Simondon’s Mode of Existence

– fragments of a fictional dialogue yet to come

Yuk Hui, intervention given at a workshop on Latour @ Denkerei, 28 Jan 2013

From the outset, this intervention seeks a dialogue between Simondon and Latour: a fictional dialogue, one that nevertheless exists even though it has not happened. It has not happened, or should I say it was once about to happen, when Latour praised Simondon's Du Mode d'existence des objets techniques and commented that it is a work that has not yet found its successor. But it does exist, this fictional dialogue, or at least we can talk about its mode of existence, if you prefer, since being fictional is also a mode of existence. We cannot draw a squared circle, but we can think of one; it has meaning. This was an example given by Edmund Husserl as a critique of formal logic. Bruno Latour's secret philosopher, Étienne Souriau, held a similar idea in his Les différents Modes d'existence. A fictional object or character does not occur in time and space like a physical object or a historical event, but it does exist in works, in the socio-psychological life and imagination of its readers and witnesses. Modes of existence are always plural; they do not follow the rule of contradiction. This is rather the key to what Latour calls ontological pluralism.

The question of the mode of existence departs from the question of Dasein posed by Martin Heidegger, and from the meaning of Sein, eliminating the ontologische Differenz between Sein and Seienden in order to de-prioritize any particular mode of existence, with a kind of ontological politeness. Modes of existence are a new organon for the analysis of modern life, and also one that revolts against the twentieth-century philosophies aiming at a unified theory of existence. To enter the modes of existence, according to Latour, one must employ a new dispositif called the diplomatic, meaning that one should be aware of oneself, resisting esoteric temptations, while being polite and trying to negotiate different terms. Hence Latour proposes to return to an anthropology that starts with a reflection on European modernity instead of starting with dialogues with others.

It is also this term “mode of existence” that, on the one hand, brings Latour and Simondon together for us, since Simondon is a philosopher of modes of existence rather than of existence as such; on the other hand, it allows us to go beyond the question of the network in actor-network theory, as Latour himself said in an interview with La Vie des idées: “what is complicated to understand, maybe, for those who know the rest of my works, is that the network is no longer the principal mode of driving, of vehicle. The world became a bit populated: there are more vehicles moving in different forms”1. That is to say, the network is only one mode of existence out of fifteen, among which we also find Reproduction, Metamorphosis, Habit, Technics, Fiction, Reference, Politics, Law, Religion, Attachment, Organization, Morality, Preposition and Double Click. The network can no longer monopolize academic social research on its own (that said, the network still seems to be the framework of the whole book2). Instead it is necessary to re-articulate this specific mode of existence with the other modes. For Latour, this new position, or preposition, on the mode of existence opens up a new field of philosophical investigation of the Moderns. The task is no longer to show how “we have never been modern”, a project completed twenty years ago; rather, according to Latour, it is an effort to complete that uniquely negative title, we have never been modern, “with a positive version this time of the same affirmation”3.

Modes of Existence and Ontological Politeness

How might one find an entrance to the question of the “mode of existence”? Philosophy always starts with dialogue, the most ancient mode of dialectics, and Socrates has always been the model of this tradition. Now we want to ask: what could this dialogue between Latour and Simondon be? How can we continue a fiction that was started by Latour? For Latour, the significance of Simondon's work is that he moved far beyond subject and object, and, more importantly, beyond the likes and dislikes of Heidegger, which still shadow research in the philosophy of technology. Latour wrote: “Simondon has grasped that the ontological question can be extracted from the search for substance, from the fascination for particular knowledge, from the obsession with the bifurcation between subject and object, and be posed rather in terms of vector.” Latour quoted a paragraph from Du Mode d'existence des objets techniques:

This de-phasing of the mediation between figural characters and background characters translates the appearance of a distance between man and the world. And mediation itself, instead of being a simple structuration of the universe, takes on a certain density; it becomes objective in the technical and subjective in religion, making the technical object appear to be the primary object and divinity the primary subject, whereas before there was only the unity of the living thing and its milieu: objectivity and subjectivity appear between the living thing and its milieu, between man and the world, at a moment where the world does not yet have a full status as object, and man a complete status as subject.4

But then he continues abruptly: “yet Simondon remains a classical thinker, obsessed as he is by original unity and future unity, deducing his modes from each other in a manner somewhat reminiscent of Hegel… Multirealism turns out to be nothing more, in the end, than a long detour that brings him back to a philosophy of being, the seventh of the modes he sketched.” Latour copied and pasted these paragraphs into numerous articles; this commentary on Simondon is only a passage toward Étienne Souriau’s Les Différents modes d’existence. For Latour, it was Souriau, not Simondon, who really showed us how one can affirm an ontological pluralism without falling back into the old and weak anthropological relativism or philosophical monism.

In this passage [passe], in Latour’s own sense, Simondon is portrayed as an original thinker who was unable to break away from “classical philosophy” and so unfortunately fell back into the shadow of “original unity and future unity”. But what does this quotation from Simondon really mean? What does “this de-phasing of the mediation between figural characters and background characters translates the appearance of a distance between man and the world” mean, and what is its context? If we allow ourselves a bit of patience: Simondon was referring to the figure–background distinction as explained in Gestalt psychology. Figural reality expresses the possibilities of human action in the world, and background reality expresses the power of nature. Simondon was trying to explain the relation between technics and religion, which originated in the incompatibility between man and the world. Simondon sees a society of magic as the moment when subject and object, human world and nature, figure and background, were not yet fully distinct. But it is also the result of the resolution of the incompatibility between the human being and its milieu; the unity described by Latour is only the possibility of incompatibility. If this counted as repeating the gesture of classical philosophy in its search for a unity, then biology, physics and chemistry would have to bear the same accusation.

What is indeed profound in Simondon’s concept of the mode of existence is that this tension or incompatibility has to be resolved constantly, both in the process of individualization of technical objects and in the individuation of living beings. It is also through the notion of incompatibility that one has to affirm the multiplicity of objects and their modes of existence. Indeed, Simondon does not think that one can grasp an object by its end; there exist “espèces techniques”, and it is more productive to think of analogies between different technical species, for example a pendulum clock and a cable winch5. We must recognize here that Simondon did not only talk about the mode of existence of technical objects; for Simondon, the theory of ontogenesis and individuation is also an inquiry into how different modes of existence interact with each other in a constant process of evolution. In other words, there is no peace for us, and there has never been a mode of existence called peace, the supposed goal of all diplomatic activity. Any pursuit of stability is only an illusion, though let us grant that such an illusion is also a mode of existence. There is no unity of identity, or of recollection, no unity composed of parts and united according to a certain method of classification6. For Latour, or for his reading of Souriau, ontological pluralism/multi-realism must affirm the existence of phenomena, things, souls, fictional beings, gods, without recourse to a phenomenological account. It must revolt against the Kantian tradition and move towards a speculative realism without correlationism. Some commentators on Simondon, such as Xavier Guchet, see a similarity between the approaches of Simondon and Souriau, especially in the word “modulation” that both use to signify internal transformation in being, which is exactly the dephasing in Simondon’s own vocabulary quoted by Latour above.
As Guchet states, for Simondon “unity of existence is not a unity of identity, of recollection from a situation of scattering [éparpillement], a unity obtained by composition of parts and according to a method of classification”7. If there is a unity in Simondon’s thought, then this unity is nothing other than tension and incompatibility. Simondon did not often use the word “realism”, but rather “reality”, and human reality is in fact always in tension with technical reality, while what is signified by technical reality is not a single unity or a single phenomenon, but a reality conditioned by many other factors: geographical, industrial, natural, and so on. For example, the production of white boots and raincoats is conditioned by the limits of materials research, the visibility of a certain colour in a given environment, and so on. Translated into Latour’s own vocabulary, these are heterogeneous actors in play with different values.

Latour did not elaborate on any of this, offering only an abrupt assertion that seems a bit brutal and, to a certain extent, lacking in ontological politeness. In the book Enquête sur les modes d’existence we can find another commentary from Latour on Simondon. The section collected in the book comes from his earlier article “Prendre le pli des techniques”, in which Latour praised Simondon but at the same time proposed to look at the mode of existence of technics instead of the mode of existence of technical objects. Latour and Simondon are like two acquaintances: you smile and say hi without shaking hands, but he has to nod his head anyway, since there must be politeness if one wants to be diplomatic. Latour thinks that the technical mode of existence cannot be found in objects themselves, but rather in technics itself, since technical objects do not give us visibility; in fact they make technics opaque to us. One can probably find a similar concern in Heidegger, especially in the question of Besorgen. We are concernful beings and we always forget what is in front of us, what we are using, especially the Being that we are and in which we dwell: we are farthest from what is closest to us.

But this dialectical movement of the visible and the invisible seems to be a general tendency of all technical objects, and it is the particular mode of existence of technical objects and of technics, widely recognized in the study of technologies. Latour was right that technics hides itself deeper than alétheia. The mode of existence of technics is only visible through technical objects, and it is also rendered invisible by technical objects, since on the one hand there is no technics without materialization, without leaving traces, while on the other hand materialization does not assure visibility; that is to say, one cannot find identity or essence in the eidos. I would say that, compared with Latour’s proposal of going back to the “transcendence” of technics, Simondon gives a more concrete account of the levels of existence of technical objects: namely usage, historical character, and the profound structure of technicity. These modes of existence also account for different levels of visibility and invisibility. For example, how can we think of the diode in your computer? Or, taking away the subject who speculates, how does the diode in your computer exist by itself, a diode that remains in a black box even if you open the case of your computer and check every component? How can we think of Mercedes-Benz, the different models that are nevertheless associated with the brand name Mercedes-Benz? When are they visible to us and invisible to us, without being reduced to the question of transcendence and immanence?

Be diplomatic without double-clicks

Another Latourian commentary on Simondon comes indirectly from Graham Harman. If we use Latour’s own vocabulary of the modes of existence, it is the overlap between Reference and Network that brings forth this mode of existence: another fictional dialogue between Latour and Simondon, in Harman’s regime of enunciation. Speaking of Latour’s relational philosophy, Harman compared it with kinds of monism that suppose “a single lump universe, a world devoid of any specific realities at all8”. Among these monisms, Harman found a peculiar one, related to Deleuze and, more specifically, to Simondon, if we now count how much Deleuze took from Simondon’s concept of individuation. In contrast to the single lump universe, these monisms “try to enjoy the best of both worlds, defining a unified realm beneath experience that is not completely unified. Instead of a total lump-world, it is one animated in advance by different ‘pre-individual’ zones that prevent the world from being purely homogeneous.”

As Alberto Toscano describes Simondon’s position, “whilst [preindividual being] is yet to be individuated, [it] can already be regarded as affected by relationality. This preindividual relationality, which takes place between heterogeneous dimensions, forces or energetic tendencies, is nevertheless also a sort of non-relation […]. Being is thus said to be more-than-one to the extent that all of its potentials cannot be actualized at once”. Simondon like DeLanda wants the world to be both heterogeneous and not yet parcelled out into individuals. In this way, specific realities lead a sort of halfhearted existence somewhere between one and many9.

Harman further explains that this is certainly not the case for Latour, since “his actors are fully individual from the start; his philosophy contains no such concept as ‘pre-individual’. His actors are not blended together in a ‘continuous yet heterogeneous’ whole, but are basically cut off from one another. There is no continuum for Latour despite his relationism, and this thankfully entails that his relationism is less radical than it is for philosophies of the virtual (note that Latour’s rare flirtations with monism seem to coincide with his equally rare flirtations with the term ‘virtual’).” In fact, perhaps because Harman did not read Simondon himself but relied on Alberto Toscano’s reading, he has a rather vague idea of individuation. Here we see another problem of not being diplomatic enough: disagreeing over a word without looking into its content. The question for us is how we can negotiate different ontologies, not in order to generate a unity, but to affirm different realisms without a double click. In other words, how does one become a professional diplomat, as Latour suggests?

For Simondon there are always individuals, but individuals do not disclose anything of operation or process, which can only be studied through individuation. Taking the individual as an isolable individual or as part of a collective is, according to Simondon, the problem of the substantialism of sociology and psychology. For Simondon, as for Latour, individuals cannot be reduced; but for Simondon, who sees further than Harman here, the individual cannot be reduced to itself. Each individual is not individual in itself, but is always accompanied by the pre-individual, the potentials and energies that provide the motivation for individuation: it is a transindividual rather than an individual. And if actor-network theory aims to look into the complexity and process of social phenomena, do Simondon and Latour not walk in parallel?

Now, if actor-network theory has to be re-articulated according to the modes of existence of the Moderns, as Latour holds, we must pay attention to a translation that is not necessarily diplomatic but sincere. We should also note how important this notion of translation is in actor-network theory: according to the annotation of Latour’s e-book, the theory is called la sociologie de la traduction, the sociology of translation. But let us be a bit careful here: with the word traduction, Latour marks a distinction from mere translation. For him, the particular mode of existence he calls “Double Click” is a translation without traduction, that is, without transformation, without process; it simply jumps from one point to another. But is not Latour’s and Harman’s reading of Simondon also such a double click?

I am not rejecting Latour and Harman because of their double clicks on a button called “Simondon”, since we have to be diplomatic and polite. But perhaps we need to recognize that there are different styles of being diplomatic, and a more productive dialogue becomes possible if we are able to negotiate like diplomats who translate different terms and requests into conditions and agreements, as Latour himself suggests. Such negotiations may allow us to glimpse a more profound investigation of the modes of existence of the Moderns. Actor-network is a concept that, according to Latour, needs to be renewed in the inquiry into the modes of existence; the remaining task is to re-situate the network within the broader framework of those modes.

Let us start, and conclude, with something lighter and more motivating, leaving the heavier and more specific material behind, so that we can find ways to start a real negotiation, even though you may later criticize this too as a double click of some kind. Instead of going into every mode of existence, let me outline a framework for such a dialogue, built on four pairs of beings: 1) Actor – Individual; 2) Network – Milieu; 3) Relations – Affective-emotive/Socio-psychological; 4) Traduction – Transduction. We cannot go through all these pairs in detail, since they deserve a work of their own. Here I can only offer a very brief detour showing how Latour’s and Simondon’s shared interest in describing processes and operations can give us a synthetic reading of both. We will see how different modes of existence can hardly be classified into fifteen categories, and how the simple overlap between these categories could already give us plenty of headaches. What seems problematic to me is that actors as individuals, in Harman’s sense, are too rigid. Of course, each individual exists: I am speaking in front of you as an individual, but I am not an individual to you as a total other, since you are listening to me and we are thinking together; at least you are thinking along with my voice. You are listening to my demands, my ontologies, with your politeness. And I am observing you: some of you smiling, some of you shaking your heads, many of you checking Facebook, and I must adjust my speech, my tone, the volume of my voice, my perception of my speech and even of myself. There are many possibilities entirely outside me, but they are the pre-individual for me as a transindividual, as Simondon proposed.

Simondon is more persistent with the trans-. Note that it is a transindividual and not an individual; a transduction and not only a traduction: transduction is at once change and exchange that triggers a transformation of structure. Latour himself wants to dissolve the network into the question of the modes of existence, and here we can see the possibility of reconstituting it in the concept of milieu. Latour’s network is too caught up in “international relations”, owing to its diplomatic nature, whereas for Simondon the milieu has to be socio-psychological and affective-emotive, which is also why Simondon was able to speak of a social psychology of technicity. This is not a simple defence of Simondon, which would not be fruitful, but a search for the possibility of a dialogue that does not dismiss the other in a double click. For an inquiry into the modes of existence to be possible, it seems one must not repeat what has happened in the history of the inquiry into existence, as when Jorge Luis Borges made fun of Bishop John Wilkins’ ontology and the funny Chinese encyclopedia; indeed, 12+310 categories do not seem much different from 15 categories, except when the “+” counts. If we dare to take this a step further, the question becomes how a metaphysics departs from its history, not only in content but also in style.

1Le diplomate de la Terre Entretien avec Bruno Latour, par Arnaud Esquerre & Jeanne Lazarus [18-09-2012], http://www.laviedesidees.fr/Le-diplomate-de-la-Terre.html

2Thanks to Jeremy James Lecomte and Markus Burkhardt for insisting on this point

3Latour, EMD, 23

4In Latour, « Reflections on Etienne Souriau’s Les Modes d’existence », in The Speculative Turn: Continental Materialism and Realism, edited by Graham Harman, Levi Bryant and Nick Srnicek, re.press, Melbourne, Australia, pp. 304-333

5Simondon, MEOT (2012), Aubier, p.21

6Guchet, Pour un humanisme technologique. Culture, technique et société dans la philosophie de Gilbert Simondon, PUF,2011, 35

7Ibid, « l’unité de l’existence n’est pas une unité d’identité, de récollection à partir d’une situation d’éparpillement, une unité obtenue par composition de parties et selon une méthode de classification »

8Harman, Prince of Networks: Bruno Latour and Metaphysics, 159

9ibid

10Latour, EMO, 477

Is the universe a hologram? (Science Daily)

Date:
April 27, 2015
Source:
Vienna University of Technology
Summary:
The ‘holographic principle,’ the idea that a universe with gravity can be described by a quantum field theory in fewer dimensions, has been used for years as a mathematical tool in strange curved spaces. New results suggest that the holographic principle also holds in flat spaces. Our own universe could in fact be two dimensional and only appear three dimensional — just like a hologram.

Is our universe a hologram? Credit: TU Wien 

At first glance, there is not the slightest doubt: to us, the universe looks three dimensional. But one of the most fruitful theories of theoretical physics in the last two decades is challenging this assumption. The “holographic principle” asserts that a mathematical description of the universe actually requires one fewer dimension than it seems. What we perceive as three dimensional may just be the image of two dimensional processes on a huge cosmic horizon.

Up until now, this principle has only been studied in exotic spaces with negative curvature. This is interesting from a theoretical point of view, but such spaces are quite different from the space in our own universe. Results obtained by scientists at TU Wien (Vienna) now suggest that the holographic principle even holds in a flat spacetime.

The Holographic Principle

Everybody knows holograms from credit cards or banknotes. They are two dimensional, but to us they appear three dimensional. Our universe could behave quite similarly: “In 1997, the physicist Juan Maldacena proposed the idea that there is a correspondence between gravitational theories in curved anti-de Sitter spaces on the one hand and quantum field theories in spaces with one fewer dimension on the other,” says Daniel Grumiller (TU Wien).

Gravitational phenomena are described in a theory with three spatial dimensions, while the behaviour of quantum particles is calculated in a theory with just two spatial dimensions — and the results of both calculations can be mapped onto each other. Such a correspondence is quite surprising. It is like finding out that equations from an astronomy textbook can also be used to repair a CD player. But this method has proven to be very successful: more than ten thousand scientific papers about Maldacena’s “AdS/CFT correspondence” have been published to date.

Correspondence Even in Flat Spaces

For theoretical physics, this is extremely important, but it does not seem to have much to do with our own universe. Apparently, we do not live in such an anti-de Sitter space. These spaces have quite peculiar properties: they are negatively curved, and any object thrown away on a straight line will eventually return. “Our universe, in contrast, is quite flat — and on astronomic distances, it has positive curvature,” says Daniel Grumiller.

However, Grumiller has suspected for quite some time that a correspondence principle could also hold true for our real universe. To test this hypothesis, gravitational theories have to be constructed, which do not require exotic anti-de-sitter spaces, but live in a flat space. For three years, he and his team at TU Wien (Vienna) have been working on that, in cooperation with the University of Edinburgh, Harvard, IISER Pune, the MIT and the University of Kyoto. Now Grumiller and colleagues from India and Japan have published an article in the journal Physical Review Letters, confirming the validity of the correspondence principle in a flat universe.

Calculated Twice, Same Result

“If quantum gravity in a flat space allows for a holographic description by a standard quantum theory, then there must be physical quantities which can be calculated in both theories — and the results must agree,” says Grumiller. One key feature of quantum mechanics in particular, quantum entanglement, has to appear in the gravitational theory.

When quantum particles are entangled, they cannot be described individually. They form a single quantum object, even if they are located far apart. There is a measure for the amount of entanglement in a quantum system, called “entropy of entanglement.” Together with Arjun Bagchi, Rudranil Basu and Max Riegler, Daniel Grumiller managed to show that this entropy of entanglement takes the same value in flat quantum gravity and in a low-dimensional quantum field theory.
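The “entropy of entanglement” mentioned here has a compact definition: take the quantum state of the whole system, trace out one part to obtain the reduced density matrix of the rest, and compute its von Neumann entropy. As a reader’s illustration only (the paper itself performs this calculation in field theory and gravity, far beyond this toy case), here is a minimal two-qubit sketch using NumPy:

```python
import numpy as np

# Entropy of entanglement of a two-qubit pure state:
# trace out qubit B to get the reduced density matrix of qubit A,
# then take the von Neumann entropy S = -Tr(rho ln rho).

def entanglement_entropy(psi):
    """psi: normalized length-4 state vector of qubits A and B."""
    m = psi.reshape(2, 2)           # row index: qubit A, column index: qubit B
    rho_A = m @ m.conj().T          # reduced density matrix of A (partial trace over B)
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]    # drop zeros, using the convention 0*ln(0) = 0
    return float(-np.sum(evals * np.log(evals))) + 0.0  # +0.0 normalizes -0.0

# An unentangled product state |00> carries no entanglement:
product = np.array([1, 0, 0, 0], dtype=complex)
print(entanglement_entropy(product))    # 0.0

# A maximally entangled Bell state (|00> + |11>)/sqrt(2) gives S = ln 2:
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print(entanglement_entropy(bell))       # ~0.6931
```

A product state gives zero, while a maximally entangled Bell pair gives ln 2 ≈ 0.693, the maximum for a single qubit. The flat-holography check reported in the article is that this kind of quantity, computed independently on the gravity side and in the lower-dimensional quantum field theory, comes out the same.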

“This calculation affirms our assumption that the holographic principle can also be realized in flat spaces. It is evidence for the validity of this correspondence in our universe,” says Max Riegler (TU Wien). “The fact that we can even talk about quantum information and entropy of entanglement in a theory of gravity is astounding in itself, and would hardly have been imaginable only a few years back. That we are now able to use this as a tool to test the validity of the holographic principle, and that this test works out, is quite remarkable,” says Daniel Grumiller.

This, however, does not yet prove that we are indeed living in a hologram — but apparently there is growing evidence for the validity of the correspondence principle in our own universe.


Journal Reference:

  1. Arjun Bagchi, Rudranil Basu, Daniel Grumiller, Max Riegler. Entanglement Entropy in Galilean Conformal Field Theories and Flat Holography. Physical Review Letters, 2015; 114 (11). DOI: 10.1103/PhysRevLett.114.111602

Out of Place: Space/Time and Quantum (In)security (The Disorder of Things)

APRIL 21, 2015 – DRLJSHEPHERD

A demon lives behind my left eye. As a migraine sufferer, I have developed a very personal relationship with my pain and its perceived causes. On a bad day, with a crippling sensitivity to light, nausea, and the feeling that the blood flowing to my brain has slowed to a crawl and is the poisoned consistency of pancake batter, I feel the presence of this demon keenly.

On the first day of the Q2 Symposium, however, which I was delighted to attend recently, the demon was in a tricksy mood, rather than out for blood: this was a vestibular migraine. The symptoms of this particular neurological condition are dizziness, loss of balance, and sensitivity to motion. Basically, when the demon manifests in this way, I feel constantly as though I am falling: falling over, falling out of place. The Q Symposium, hosted by James Der Derian and the marvellous team at the University of Sydney’s Centre for International Security Studies,  was intended, over the course of two days and a series of presentations, interventions, and media engagements,  to unsettle, to make participants think differently about space/time and security, thinking through quantum rather than classical theory, but I do not think that this is what the organisers had in mind.

photo of cabins and corridors at Q Station, Sydney

At the Q Station, located in Sydney where the Q Symposium was held, my pain and my present aligned: I felt out of place, I felt I was falling out of place. I did not expect to like the Q Station. It is the former quarantine station used by the colonial administration to isolate immigrants they suspected of carrying infectious diseases. Its location, on the North Head of Sydney and now within the Sydney Harbour National Park, was chosen for strategic reasons – it is secluded, easy to manage, a passageway point on the journey through to the inner harbour – but it has a much longer historical relationship with healing and disease. The North Head is a site of Aboriginal cultural significance; the space was used by the spiritual leaders (koradgee) of the Guringai peoples for healing and burial ceremonies.

So I did not expect to like it, as such an overt symbol of the colonisation of Aboriginal lands, but it disarmed me. It is a place of great natural beauty, and it has been revived with respect, I felt, for the rich spiritual heritage of the space that extended long prior to the establishment of the Quarantine Station in 1835. When we Q2 Symposium participants were welcomed to country and invited to participate in a smoking ceremony to protect us as we passed through the space, we were reminded of this history and thus reminded – gently, respectfully (perhaps more respectfully than we deserved) – that this is not ‘our’ place. We were out of place.

We were all out of place at the Q2 Symposium. That is the point. Positioning us thus was deliberate; we were to see whether voluntary quarantine would produce new interactions and new insights, guided by the Q Vision, to see how quantum theory ‘responds to global events like natural and unnatural disasters, regime change and diplomatic negotiations that phase-shift with media interventions from states to sub-states, local to global, public to private, organised to chaotic, virtual to real and back again, often in a single news cycle’. It was two days of rich intellectual exploration and conversation, and – as is the case when these experiments work – beautiful connections began to develop between those conversations and the people conversing, conversations about peace, security, and innovation, big conversations about space, and time.

I felt out of place. Mine is not the language of quantum theory. I learned so much from listening to my fellow participants, but I was insecure; as the migraine took hold on the first day, I was not only physically but intellectually feeling as though I was continually falling out of the moment, struggling to maintain the connections between what I was hearing and what I thought I knew.

Quantum theory departs from classical theory in the proposition of entanglement and the uncertainty principle:

This principle states the impossibility of simultaneously specifying the precise position and momentum of any particle. In other words, physicists cannot measure the position of a particle, for example, without causing a disturbance in the velocity of that particle. Knowledge about position and velocity are said to be complementary, that is, they cannot be precise at the same time.

I do not know anything about quantum theory – I found it hard to follow even the beginner’s guides provided by the eloquent speakers at the Symposium – but I know a lot about uncertainty. I also feel that I know something about entanglement, perhaps not as it is conceived of within quantum physics, but perhaps that is the point of events such as the Q Symposium: to encourage us to allow the unfamiliar to flow through and around us until the stream snags, to produce an idea or at least a moment of alternative cognition.

My moment of alternative cognition was caused by foetal microchimerism, a connection that flashed for me while I was listening to a physicist talk about entanglement. Scientists have shown that during gestation, foetal cells migrate into the body of the mother and can be found in the brain, spleen, liver, and elsewhere decades later. There are (possibly) parts of my son in my brain, literally as well as simply metaphorically (as the latter was already clear). I am entangled with him in ways that I cannot comprehend. Listening to the speakers discuss entanglement, all I could think was, This is what entanglement means to me, it is in my body.

Perhaps I am not proposing entanglement as Schrödinger does, as ‘the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought’. Perhaps I am just using the concept of entanglement to denote the inextricable, inexplicable, relationality that I have with my son, my family, my community, humanity. It is this entanglement that undoes me, to use Judith Butler’s most eloquent phrase, in the face of grief, violence, and injustice. Perhaps this is the value of the quantum: to make connections that are not possible within the confines of classical thought.

I am not a scientist. I am a messy body out of place, my ‘self’ apparently composed of bodies out of place. My world is not reducible. My uncertainty is vast. All of these things make me insecure, challenge how I move through professional time and space as I navigate the academy. But when I return home from my time in quarantine and joyfully reconnect with my family, I am grounded by how I perceive my entanglement. It is love, not science, that makes me a better scholar.

photo of sign that says 'laboratory and mortuary' from Q station, sydney.

I was inspired by what I heard, witnessed, discussed at the Q2 Symposium. I was – and remain – inspired by the vision of the organisers, the refusal to be bound by classical logics in any field that turns into a drive, a desire to push our exploration of security, peace, and war in new directions. We need new directions; our classical ideas have failed us, and failed humanity, a point made by Colin Wight during his remarks on the final panel at the Symposium. Too often we continue to act as though the world is our laboratory; we have ‘all these theories yet the bodies keep piling up…‘.

But if this is the case, I must ask: do we need a quantum turn to get us to a space within which we can admit entanglement, admit uncertainty, admit that we are out of place? We are never (only) our ‘selves’: we are always both wave and particle and all that is in between and it is our being entangled that renders us human. We know this from philosophy, from art and the humanities. Can we not learn this from art? Must we turn to science (again)? I felt diminished by the asking of these questions, insecure, but I did not feel that these questions were out of place.

Review of Thinking with Whitehead: A Free and Wild Creation of Concepts (Notre Dame Philosophical Reviews)

2012.06.21
ISABELLE STENGERS
Thinking with Whitehead: A Free and Wild Creation of Concepts
Isabelle Stengers, Thinking with Whitehead: A Free and Wild Creation of Concepts, Michael Chase (tr.), Harvard University Press, 2011, 531pp., $49.95 (hbk), ISBN 9780674048034.

Reviewed by Roland Faber, Claremont School of Theology

Isabelle Stengers’ work on Whitehead was a long time in the making — as a work on Whitehead’s work, as an outcome of her thinking with Whitehead through different instantiations of her own writing, and as a process of translation from the French original. It is an important work, unusual not only for the bold generality with which it tries to characterize Whitehead’s philosophical work in its most important manifestations, but even more importantly, for its effort to present a radical alternative mode of contemporary thinking. One is almost tempted to say that the urgency of this book’s intensity is motivated by nothing less than Stengers’ immediate feeling of the importance of Whitehead’s work for the future of (human) civilization. Since we need to make life-and-death decisions regarding the directions we might (want to) take, the explication of Whitehead’s alternatives may be vital. Hence to think with Whitehead is to think alternatives in which we “sign on in advance to an adventure that will leave none of the terms we normally use as they were.” Yet, as a rule, Stengers is “with” Whitehead not only in sorting out such alternatives, but also in his non-confrontational method of peace-making, in which nothing “will be undermined or summarily denounced as a carrier of illusion.” (24)

The two parts of the book roughly bring to light the development of Whitehead’s thought and its shifting points of gravity, circling around two of its major developments.  One of these developments could be said to be temporal, since Whitehead’s philosophical work over time can be characterized as developing from a philosophy of nature (as it was still embedded in the discussion of a philosophy of science) to a metaphysics (that included everything that a philosophy of science has excluded). The other is more spatial, since it circles around the excluded middle between the philosophy of science (excluding mind) and a general metaphysics (of all worlds), namely, a cosmology of our real universe. In an interesting twist, not so common today in any of the standard fields of discourse, we could also agree with Bruno Latour, who in his introduction suggests that both developments, the temporal — how to overcome the bifurcation of nature — and the spatial — how to understand a cosmos of creative organisms — are again (and further) de-centered by the unusual Whiteheadian reintroduction of “God.” (xiii)

The first fourteen chapters that discuss the “temporal” development of Whitehead’s thought (“From the Philosophy of Nature to Metaphysics”) begin with a hermeneutical invitation to the reader to view the Whiteheadian adventure of thought as a dislocation from all commonly held beliefs and theories about nature and the world in general because it asks “questions that will separate them from every consensus.” (7) As its major problem and point of departure, Stengers identifies Whitehead’s criticism of the “bifurcation of nature,” that is, the constitutional division of the universe into mutually exclusive sections (which are often at war with one another because of this division). One section consists of what science finds to be real, but valueless, and the other of that which constitutes mind — a setup that reduces the first section to senseless motion and the second to mere “psychic additions.” (xii) At first exploring Whitehead’s The Concept of Nature, the beginning chapters draw out the contours of Whitehead’s reformulation of the concept of nature, implying that it must not avoid “what the concept of nature designates as ultimate: knowledge.” (41) In Whitehead’s view, knowledge and conceptualization become essential to the concept of nature. While the “goal is not to define a nature that is ‘knowable’ in the philosophers’ sense,” Whitehead defines nature and knowledge “correlatively” such that “‘what’ we perceive does indeed designate nature rather than the perceiving mind.” (44) Conversely, “exactness” is no longer an ideal, but “a thickness with a plurality of experiences occurring simultaneously — like a person walking by.” (55) With Bergson, Whitehead holds that such duration — an event — is the “foothold of the mind” (67) in nature. 
Being a standpoint, a perspective, paying attention to the aspects of its own integration, such a characterization of an event is meant to generate Whitehead’s argument, as unfolded in Science and the Modern World, against the “fallacy of misplaced concreteness” (which excludes standpoints by introducing exactness in describing vacuous matter) and, thereby, the bifurcation of nature. (113)

On the way to the cosmology of Process and Reality — itself “a labyrinth-book, a book about which one no longer knows whether it has an author, or whether it is not rather the book that has fashioned its author” (122) — Stengers examines the two unexpected metaphysical chapters of Science and the Modern World — on Abstraction and God — as urged by the aesthetic question within a universe, which defines itself by some kind of harmony and a rationality, that is, by faith in the order of a nature, that does not exclude organisms as exhibiting “living values.” (130) As it resists bifurcation, it enables us to reconcile science and philosophy. This is the moment where, as Stengers shows, Whitehead finds himself in a place where he needs to introduce the concept of God. This move is, however, not motivated by a “preliminary affirmation of His existence,” but by a

fundamental experience of humanity . . . of which no religion can be the privileged expression, although each one develops and collapses, from epoch to epoch, according to whether its doctrines, its rites, its commands, or its definitions do or do not evoke this vision, revive it, or inhibit it, giving it or failing to give it adequate expression (133).

The second part (“Cosmology”) features mainly Process and Reality. Stengers probes the uniqueness and necessity of speculative philosophy and its “intellectual intuition” (234) by exploring its criterion of reciprocal presupposition. (237) This expresses the impossibility of any bifurcation: “the ambition of speculative coherence is to escape the norms to which experiences, isolated by the logical, moral, empiricist, religious, and other stakes that privilege them, are” at “risk of ignoring” the mutuality of “each dancer’s center of gravity” with the “dancer’s spin.” This mutuality of movement requires speculative philosophy, which, in its very production, brings to existence the possibility of a thought ‘without gravity,’ without a privileged direction. The ‘neutral’ metaphysical thought of Science and the Modern World had already risked the adventure of trusting others ‘precursively’ at the moment when one accepts that one’s “own body is put off balance.” (239)

What, in such a world, is ultimately given, then? While in The Concept of Nature the Ultimate was Mind and in Science and the Modern World it was God, in Process and Reality it becomes Creativity. (255) Creativity affirms a universe of accidents, for which God introduces a requirement of the reciprocity of these accidents (265). Creativity is, like Deleuze’s “plane of immanence”, that “which insists and demands to be thought by the philosopher, but of which the philosopher is not in any way the creator.” (268)

Stengers’ distinctive mode of thought tries to avoid common dichotomies and to always highlight Whitehead’s alternative, carved out of the always present aura of complexities that surrounds any activity of becoming, interpretation and reflection. Therefore, she introduces the meaning and function of the Whiteheadian organization of organisms — each event being a “social effort, employing the whole universe” (275) — and the organization of thought (the obligations of speculative philosophy) — correcting the initial surplus of chaotic subjectivity (277). Both these forms of organization lead to “the most risky interpretation” (277) of empiricism as that which makes things hold together, neither crushed nor torn apart. Further investigating how occasions and philosophies function together (by dealing with what has been excluded), Stengers presents us with the fundamental importance of how “feeling” (or the transformation of scars) can offer new ways for (concepts of) life that testify to that which has been eliminated or neglected: how decisions can reduce the cost and victims they require (334) and, in actual and conceptual becoming, transform the status quo. (335) Whiteheadian feeling, of course, precedes consciousness and (even prior to perception) is the unconstrained reception that creates the events of its passing.

In chapters 21 and 22, God again enters the picture, not as rule of generality (metaphysically, aesthetically, or ethically), but as “divine endowment [that] thus corresponds to an individual possibility, not to what individuals should accomplish in the name of interest that transcend them.” (390) Divine intervention responds to “what is best for this impasse” (421), a proposition whose actualization is indeterminate by definition. Here, Whitehead’s metaphysics has rejected the normal/normative in favor of the relevant/valuable. (422) This again is related to the concepts of expression and importance in chapter 23, as “the way living societies can simultaneously canalize and be infected by what lurks [from the future]: originality.” (429)

Most interestingly, Stengers describes this interstitial space as a “sacrament” — the “unique sacrament of expression” — that in its “call for a sacramental plurality” conveys Whitehead’s understanding of “the cosmic meaning he confers upon expression and importance” in order to develop “a sociology of life” (435) for which signs are not only functional, but expressive. It is in this context that “Whitehead’s metaphysical God does not recognize his own, he does not read our hearts, he does not understand us better than we do ourselves, he does not demand our recognition or our gratitude, and we shall never contemplate him in his truth.” Rather, God “celebrates my relation to my self and my belongings, to my body, to my feelings, my intentions, my possibilities and perception.” (448)

If there is, for Stengers, a divine function of salvation regarding Whitehead’s God, it is that which only opens through following Whitehead’s call for a secularization of the notion of the divine. (469, 477) Nothing (not a soul) is lost (in this new secularism), although it is only saved in “the unimaginable divine experience.” (469) This “does not make God the being to whom one may say ‘Thou,’ for he has no other value than the difference he will make in the occasional experience that will derive from him its initial aim.” (477) For Stengers, Whitehead wanted to save God from the role assigned to God by the theological propositions that make God the mere respondent to the religious vision. (479) Instead, God affirms the “full solemnity of the world” (493) for us through a neutral metaphysics in which God stands for all appetite, but impersonally so — saving what is affirmed and excluded alike. (490)

Stengers concludes with one of the most astonishing characteristics of Whitehead’s philosophy: namely, his missing ethics. Instead of viewing this as a lack, she conceives his philosophy as ethos, ethos as habit, and habit as aesthetics, (515) “celebrating the adventure of impermanent syntheses.” This ethos, for Stengers, is not “critical wakefulness,” but “the difference between dream and nightmare” — a dream, a storytelling from within the Platonic cave, together with those who live and argue within it, but also enjoy together the living values that can be received at the interstices. (516-7) In the end, as in the beginning, the adventure of alternative thinking in Whitehead asks us to walk with him in his vectors of disarming politeness — by asking polite questions that one creature may address to another creature. (518)

If there is a weakness in Stengers’ rendering of Whitehead’s work, it is of a more generic nature, demonstrating its embeddedness in a wider cultural spirit or zeitgeist. Anyone who has some knowledge of the history and development of the reception of, and scholarship on, Whitehead will not fail to discover that Stengers is not the only one who has rediscovered this Whitehead, the Whitehead of the alternative adventure, at least within the last twenty years. Her sporadic recourse to Deleuze functions only as a fleeting spark of light that, if slowed down, would highlight the philosophic background against which current thinkers (including Stengers) have begun to view Whitehead. Although this remains almost undetected amid the tectonic shifts of Stengers’ reconfiguration of Whitehead’s thought, one will find Stengers’ work to be the outcome of this same tradition. As with several other of these newer approaches, one of the (unfortunate) fault-lines of Stengers’ endeavor is that, when its sources remain hidden, it contradicts the Whiteheadian spirit of recollection, rediscovery and synthesis in ever new concrescences. Originality (creativity) must not suppress the traditions on which it stands; in particular, a hundred years of Whiteheadian scholarship in process theology is left in silence. It is sad that a rediscovery of Whitehead should narrow the creative synthesis down by being dominated by such a negative prehension. Granted that from afar one might not see the inner diversity and rich potential of process theology’s rhizomatic development, but to think that to name “God” (anew) in (Whitehead’s) philosophy today is original, when it in fact rehearses positions process theology has developed over the last century, still leaves me with a question: Is freedom from the past necessarily coupled with its oblivion?

In any case, Stengers’ Thinking with Whitehead is an important contribution to the current landscape of the rediscovery of Whitehead in philosophy and adjunct disciplines. It is also a gift for addressing urgent questions of survival and the “good and better life,” the envisioning of which Whitehead sees as a function of philosophy. May Stengers’ rendering of such an alternative congregation of thought for a new future of civilization steer us toward a more peaceful, polite, and less viciously violent vision.

When Exponential Progress Becomes Reality (Medium)

Niv Dror

“I used to say that this is the most important graph in all the technology business. I’m now of the opinion that this is the most important graph ever graphed.”

Steve Jurvetson

Moore’s Law

The expectation that your iPhone keeps getting thinner and faster every two years. Happy 50th anniversary.

Components get cheaper, computers get smaller, a lot of comparison tweets.

In 1965 Intel co-founder Gordon Moore made his original observation, noticing that over the history of computing hardware, the number of transistors in a dense integrated circuit doubles approximately every two years. The prediction was specific to semiconductors and stretched out for a decade. Its demise has long been predicted, and it will eventually come to an end, but it continues to hold to this day.
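As a back-of-the-envelope sketch (not from the article), the doubling rule compounds dramatically over decades. The starting point below, Intel's 1971 4004 chip with roughly 2,300 transistors, is an assumed illustrative baseline:

```python
# Sketch of Moore's Law as stated above: transistor counts double
# roughly every two years. The 1971 baseline (Intel 4004, ~2,300
# transistors) is an assumption chosen for illustration.
def transistors(start_count, start_year, year, doubling_period=2):
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

# 22 doublings between 1971 and 2015 turn 2,300 transistors
# into roughly 9.6 billion.
print(round(transistors(2300, 1971, 2015)))
```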

Expanding beyond semiconductors, and reshaping all kinds of businesses, including those not traditionally thought of as tech.

Yes, Box co-founder Aaron Levie is the official spokesperson for Moore’s Law, and we’re all perfectly okay with that. His cloud computing company would not be around without it. He’s grateful. We’re all grateful. In conversations Moore’s Law constantly gets referenced.

It has become both a prediction and an abstraction.

Expanding far beyond its origin as a transistor-centric metric.

But Moore’s Law of integrated circuits is only the most recent paradigm in a much longer and even more profound technological trend.

Humanity’s capacity to compute has been compounding for as long as we could measure it.

5 Computing Paradigms: Electromechanical computer used for the 1890 U.S. Census → Alan Turing’s relay-based computer that cracked the Nazi Enigma → Vacuum-tube computer that predicted Eisenhower’s win in 1952 → Transistor-based machines used in the first space launches → Integrated-circuit-based personal computer

The Law of Accelerating Returns

In his 1999 book The Age of Spiritual Machines Google’s Director of Engineering, futurist, and author Ray Kurzweil proposed “The Law of Accelerating Returns”, according to which the rate of change in a wide variety of evolutionary systems tends to increase exponentially. A specific paradigm, a method or approach to solving a problem (e.g., shrinking transistors on an integrated circuit as an approach to making more powerful computers) provides exponential growth until the paradigm exhausts its potential. When this happens, a paradigm shift, a fundamental change in the technological approach occurs, enabling the exponential growth to continue.

Kurzweil explains:

It is important to note that Moore’s Law of Integrated Circuits was not the first, but the fifth paradigm to provide accelerating price-performance. Computing devices have been consistently multiplying in power (per unit of time) from the mechanical calculating devices used in the 1890 U.S. Census, to Turing’s relay-based machine that cracked the Nazi enigma code, to the vacuum tube computer that predicted Eisenhower’s win in 1952, to the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer.

This graph, which venture capitalist Steve Jurvetson describes as the most important concept ever to be graphed, is Kurzweil’s 110 year version of Moore’s Law. It spans across five paradigm shifts that have contributed to the exponential growth in computing.

Each dot represents the best computational price-performance device of the day, and when plotted on a logarithmic scale, they fit on the same double exponential curve that spans over a century. This is a very long lasting and predictable trend. It enables us to plan for a time beyond Moore’s Law, without knowing the specifics of the paradigm shift that’s ahead. The next paradigm will advance our ability to compute to such a massive scale, it will be beyond our current ability to comprehend.

The Power of Exponential Growth

Human perception is linear, technological progress is exponential. Our brains are hardwired to have linear expectations because that has always been the case. Technology today progresses so fast that the past no longer looks like the present, and the present is nowhere near the future ahead. Then seemingly out of nowhere, we find ourselves in a reality quite different than what we would expect.

Kurzweil uses the overall growth of the internet as an example. The bottom chart is linear, which makes internet growth seem sudden and unexpected, whereas the top chart, with the same data graphed on a logarithmic scale, tells a very predictable story. On the exponential graph internet growth doesn’t come out of nowhere; it’s just presented in a way that is more intuitive for us to comprehend.

We are still prone to underestimate the progress that is coming, because it’s difficult to internalize the reality that we’re living in a world of exponential technological change. It is a fairly recent development. And it’s important to get an understanding of the massive scale of advancements that the technologies of the future will enable. Particularly now, as we’ve reached what Kurzweil calls the “Second Half of the Chessboard.”

(The reference is the old legend of the chessboard, in which an inventor asks an emperor to be paid in rice: one grain on the first square, with the amount doubling on every subsequent square. In the end the emperor realizes that he’s been tricked, by exponents, and has the inventor beheaded. In another version of the story the inventor becomes the new emperor.)

It’s important to note that as the emperor and inventor went through the first half of the chessboard things were fairly uneventful. The inventor was first given spoonfuls of rice, then bowls of rice, then barrels, and by the end of the first half of the chess board the inventor had accumulated one large field’s worth — 4 billion grains — which is when the emperor started to take notice. It was only as they progressed through the second half of the chessboard that the situation quickly deteriorated.

# of Grains on 1st half: 4,294,967,295

# of Grains on 2nd half: 18,446,744,069,414,584,320
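The grain counts follow directly from doubling on each square; a quick sketch, assuming one grain on square one, prints the exact totals:

```python
# Square i of the 64-square board holds 2**(i-1) grains;
# sum each half separately.
first_half = sum(2 ** i for i in range(32))       # squares 1..32
second_half = sum(2 ** i for i in range(32, 64))  # squares 33..64

print(first_half)   # 4294967295
print(second_half)  # 18446744069414584320
```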

Mind-bending nonlinear gains in computing are about to get a lot more realistic in our lifetime, as there have been slightly more than 32 doublings of performance since the first programmable computers were invented.

Kurzweil’s Predictions

Kurzweil is known for making mind-boggling predictions about the future. And his track record is pretty good.

“…Ray is the best person I know at predicting the future of artificial intelligence.” —Bill Gates

Ray’s predictions for the future may sound crazy (they do sound crazy), but it’s important to note that it’s not about the specific prediction or the exact year. What’s important to focus on is what they represent. These predictions are based on an understanding of Moore’s Law and Ray’s Law of Accelerating Returns, an awareness of the power of exponential growth, and an appreciation that information technology follows an exponential trend. They may sound crazy, but they are not pulled out of thin air.

And with that being said…

Second Half of the Chessboard Predictions

“By the 2020s, most diseases will go away as nanobots become smarter than current medical technology. Normal human eating can be replaced by nanosystems. The Turing test begins to be passable. Self-driving cars begin to take over the roads, and people won’t be allowed to drive on highways.”

“By the 2030s, virtual reality will begin to feel 100% real. We will be able to upload our mind/consciousness by the end of the decade.”

To expand image → https://twitter.com/nivo0o0/status/564309273480409088

Not quite there yet…

“By the 2040s, non-biological intelligence will be a billion times more capable than biological intelligence (a.k.a. us). Nanotech foglets will be able to make food out of thin air and create any object in the physical world at a whim.”

These clones are cute.

“By 2045, we will multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud.”

Multiplying our intelligence a billionfold by linking our neocortex to a synthetic neocortex in the cloud — what does that actually mean?

In March 2014 Kurzweil gave an excellent talk at the TED Conference. It was appropriately called: Get ready for hybrid thinking.

Here is a summary:

To expand image → https://twitter.com/nivo0o0/status/568686671983570944

These are the highlights:

Nanobots will connect our neocortex to a synthetic neocortex in the cloud, providing an extension of our neocortex.

Our thinking then will be a hybrid of biological and non-biological thinking (the non-biological portion is subject to the Law of Accelerating Returns and it will grow exponentially).

The frontal cortex and neocortex are not really qualitatively different, so it’s a quantitative expansion of the neocortex (like adding processing power).

The last time we expanded our neocortex was about two million years ago. That additional quantity of thinking was the enabling factor for us to take a qualitative leap and advance language, science, art, technology, etc.

We’re going to again expand our neocortex, only this time it won’t be limited by a fixed architecture of enclosure. It will be expanded without limits, by connecting our brain directly to the cloud.

We already carry a supercomputer in our pocket. We have unlimited access to all the world’s knowledge at our fingertips. Keeping in mind that we are prone to underestimate technological advancements (and that 2045 is not a hard deadline) is it really that far of a stretch to imagine a future where we’re always connected directly from our brain?

Progress is underway. We’ll be able to reverse engineer the neocortex within five years. Kurzweil predicts that by 2030 we’ll be able to reverse engineer the entire brain. His latest book is called How to Create a Mind… This is the reason Google hired Kurzweil.

Hybrid Human Machines


“We’re going to become increasingly non-biological…”

“We’ll also have non-biological bodies…”

“If the biological part went away it wouldn’t make any difference…”

“They* will be as realistic as real reality.”

Impact on Society

The technological singularity — “the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing civilization” — is beyond the scope of this article, but these advancements will absolutely have an impact on society. Which way is yet to be determined.

There may be some regret

Politicians will not know who/what to regulate.

Evolution may take an unexpected twist.

The rich-poor gap will expand.

The unimaginable will become reality and society will change.

The Idea of a Multiversum – Logics, Cosmology, Politics (Backdoor Broadcasting Company)

The Centre for Research in Modern European Philosophy (CRMEP) and the London Graduate School in collaboration with Art and Philosophy at Central Saint Martins present:

A Lecture of the Centre for Research in Modern European Philosophy’s 20th Anniversary Public Lecture Series, in association with the London Graduate School.

Professor Etienne Balibar (CRMEP, Kingston University/Columbia University, NY) – The Idea of a Multiversum – Logics, Cosmology, Politics


The Paradoxes That Threaten To Tear Modern Cosmology Apart (The Physics Arxiv Blog)

Some simple observations about the universe seem to contradict basic physics. Solving these paradoxes could change the way we think about the cosmos

The Physics arXiv Blog on Jan 20

Revolutions in science often come from the study of seemingly unresolvable paradoxes. An intense focus on these paradoxes, and their eventual resolution, is a process that has led to many important breakthroughs.

So an interesting exercise is to list the paradoxes associated with current ideas in science. It’s just possible that these paradoxes will lead to the next generation of ideas about the universe.

Today, Yurij Baryshev at St Petersburg State University in Russia does just this with modern cosmology. The result is a list of paradoxes associated with well-established ideas and observations about the structure and origin of the universe.

Perhaps the most dramatic, and potentially most important, of these paradoxes comes from the idea that the universe is expanding, one of the great successes of modern cosmology. It is based on a number of different observations.

The first is that other galaxies are all moving away from us. The evidence for this is that light from these galaxies is red-shifted. And the greater the distance, the bigger this red-shift.

Astrophysicists interpret this as evidence that more distant galaxies are travelling away from us more quickly. Indeed, the most recent evidence is that the expansion is accelerating.

What’s curious about this expansion is that space, and the vacuum associated with it, must somehow be created in this process. And yet how this can occur is not at all clear. “The creation of space is a new cosmological phenomenon, which has not been tested yet in physical laboratory,” says Baryshev.

What’s more, there is an energy associated with any given volume of the universe. If that volume increases, the inescapable conclusion is that this energy must increase as well. And yet physicists generally think that energy creation is forbidden.

Baryshev quotes the British cosmologist, Ted Harrison, on this topic: “The conclusion, whether we like it or not, is obvious: energy in the universe is not conserved,” says Harrison.

This is a problem that cosmologists are well aware of. And yet ask them about it and they shuffle their feet and stare at the ground. Clearly, any theorist who can solve this paradox will have a bright future in cosmology.

The nature of the energy associated with the vacuum is another puzzle. This is variously called the zero point energy or the energy of the Planck vacuum and quantum physicists have spent some time attempting to calculate it.

These calculations suggest that the energy density of the vacuum is huge, of the order of 10^94 g/cm^3. This energy, being equivalent to mass, ought to have a gravitational effect on the universe.

Cosmologists have looked for this gravitational effect and calculated its value from their observations (they call it the cosmological constant). These calculations suggest that the energy density of the vacuum is about 10^-29 g/cm^3.

Those numbers are difficult to reconcile. Taken at face value, they differ by some 123 orders of magnitude (the discrepancy is usually quoted as 120). How and why this discrepancy arises is not known and is the cause of much bemused embarrassment among cosmologists.
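Taking the two quoted densities at face value, the size of the mismatch is a one-liner (a sketch; both density figures are the rough estimates quoted above):

```python
import math

qft_density = 1e94   # g/cm^3, the quantum zero-point estimate quoted above
obs_density = 1e-29  # g/cm^3, inferred from the cosmological constant

# Orders of magnitude separating the two estimates.
print(round(math.log10(qft_density / obs_density)))  # 123
```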

Then there is the cosmological red-shift itself, which is another mystery. Physicists often talk about the red-shift as a kind of Doppler effect, like the change in frequency of a police siren as it passes by.

The Doppler effect arises from the relative movement of different objects. But the cosmological red-shift is different because galaxies are stationary in space. Instead, it is space itself that cosmologists think is expanding.

The mathematics that describes these effects is correspondingly different as well, not least because any relative velocity must always be less than the speed of light in conventional physics. And yet the velocity of expanding space can take any value.

Interestingly, the nature of the cosmological red-shift leads to the possibility of observational tests in the next few years. One interesting idea is that the red-shifts of distant objects must increase as they get further away. For a distant quasar, this change may be as much as one centimetre per second per year, something that may be observable with the next generation of extremely large telescopes.

One final paradox is also worth mentioning. This comes from one of the fundamental assumptions behind Einstein’s theory of general relativity—that if you look at the universe on a large enough scale, it must be the same in all directions.

It seems clear that this assumption of homogeneity does not hold on the local scale. Our galaxy is part of a cluster known as the Local Group which is itself part of a bigger supercluster.

This suggests a kind of fractal structure to the universe. In other words, the universe is made up of clusters regardless of the scale at which you look at it.

The problem with this is that it contradicts one of the basic ideas of modern cosmology—the Hubble law. This is the observation that the cosmological red-shift of an object is linearly proportional to its distance from Earth.

It is so profoundly embedded in modern cosmology that most currently accepted theories of universal expansion depend on its linear nature. That’s all okay if the universe is homogeneous (and therefore linear) on the largest scales.

But the evidence is paradoxical. Astrophysicists have measured the linear nature of the Hubble law at distances of a few hundred megaparsecs. And yet the clusters visible on those scales indicate that the universe is not homogeneous there.

And so the argument that the Hubble law’s linearity is a result of the homogeneity of the universe (or vice versa) does not stand up to scrutiny. Once again this is an embarrassing failure for modern cosmology.
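The linear law at issue can be sketched in a couple of lines; the Hubble constant value of 70 km/s per megaparsec is an assumed round figure, not one given in the article:

```python
H0 = 70.0  # km/s per megaparsec (assumed round value, not from the article)

def recession_velocity(distance_mpc):
    # Hubble law: recession velocity is linearly proportional to distance.
    return H0 * distance_mpc

# At a few hundred megaparsecs, the scales where the law has been verified:
print(recession_velocity(300))  # 21000.0 km/s
```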

It is sometimes tempting to think that astrophysicists have cosmology more or less sewn up, that the Big Bang model, and all that it implies, accounts for everything we see in the cosmos.

Not even close. Cosmologists may have successfully papered over the cracks in their theories in a way that keeps scientists happy for the time being. This sense of success is surely an illusion.

And that is how it should be. If scientists really think they are coming close to a final and complete description of reality, then a simple list of paradoxes can do a remarkable job of putting feet firmly back on the ground.

Ref: arxiv.org/abs/1501.01919 : Paradoxes Of Cosmological Physics In The Beginning Of The 21-St Century

Telepathic Particles (Folha de S.Paulo)

CASSIO LEITE VIEIRA

illustration by JOSÉ PATRÍCIO

28/12/2014 03h08

SUMMARY Fifty years ago, the Northern Irish physicist John Bell (1928-90) arrived at a result that demonstrates the “spooky” nature of reality in the atomic and subatomic world. His theorem is today seen as the most effective weapon against espionage, something that will guarantee, in a perhaps near future, the absolute privacy of information.

*

A South American country wants to keep its strategic information private, but finds itself forced to buy the equipment for that task from a far more technologically advanced country. These devices, however, may be “bugged.”

Then comes the almost obvious question: will there ever be 100% guaranteed privacy? Yes. And that holds even for a country that buys its anti-espionage technology from the “enemy.”

What makes this affirmative answer possible is a result that has been called the most profound in science: Bell’s theorem, which addresses one of the sharpest and most penetrating philosophical questions ever posed, one that underpins knowledge itself: what is reality? The theorem, which this year marked its 50th anniversary, guarantees that reality, in its most intimate dimension, is unimaginably strange.


The story of the theorem, of its experimental confirmation and of its modern applications has several beginnings. Perhaps the most fitting one here is a paper published in 1935 by the German-born physicist Albert Einstein (1879-1955) and two collaborators, the Russian Boris Podolsky (1896-1966) and the American Nathan Rosen (1909-95).

Known as the EPR paradox (the initials of its authors' surnames), the thought experiment described there summed up Einstein's long dissatisfaction with the direction quantum mechanics, the theory of phenomena at the atomic scale, had taken. What first left a bitter taste for the author of relativity was the fact that this theory, developed in the 1920s, provides only the probability that a phenomenon will occur. That contrasted with the "certainty" (determinism) of so-called classical physics, which governs macroscopic phenomena.

Einstein was, in fact, estranged from his own creature, for he had been one of the fathers of quantum theory. After some initial reluctance, he eventually digested the indeterminism of quantum mechanics. One thing, however, he could never swallow: nonlocality, that is, the exceedingly strange fact that something here can instantaneously influence something there, even if that "there" is very far away. Einstein believed that distant things had independent realities.

Einstein went so far as to compare nonlocality (it is worth stressing that this is only an analogy) to a kind of telepathy. But the most famous label Einstein gave this strangeness was "spooky action at a distance".

ENTANGLEMENT

The essence of the EPR argument is this: under special conditions, two particles that have interacted and then separated end up in a state called entangled, as if they were "telepathic twins". Less pictorially, the particles are said to be connected (or correlated, as physicists prefer) and remain so even after the interaction.

The greater strangeness comes now: if one of the particles in such a pair is disturbed (that is, undergoes any measurement, as physicists say), the other "feels" that disturbance instantaneously. And this is independent of the distance between the two particles. They may be light-years apart.

The authors of the EPR paradox argued that it was impossible to imagine that nature would allow an instantaneous connection between the two objects. And, through a complex chain of logical argument, Einstein, Podolsky and Rosen concluded: quantum mechanics must be incomplete. And therefore provisional.

FASTER THAN LIGHT?

A hasty (though very common) reading of the EPR paradox says that instantaneous action (nonlocal action, in the vocabulary of physics) is impossible because it would violate Einstein's relativity: nothing can travel faster than light in a vacuum, at 300,000 km/s.

Nonlocality, however, would act only at the microscopic scale; it cannot be used, for example, to send or receive messages. In the macroscopic world, if we want to do that, we must use signals that never travel faster than light in a vacuum. Relativity, in other words, is preserved.

Nonlocality has to do with persistent (and mysterious) connections between two objects: interfering with (altering, changing, etc.) one of them interferes with (alters, changes, etc.) the other. Instantaneously. The mere act of observing one of them interferes with the state of the other.

Einstein did not like the final version of the 1935 paper, which he only saw once in print (the writing had been left to Podolsky). He had envisioned a less philosophical text. A few months later came the reply to EPR from the Danish physicist Niels Bohr (1885-1962). A few years earlier, Einstein and Bohr had starred in what many consider one of the most important philosophical debates in history; its subject was the "soul of nature", in the words of one philosopher of physics.

In his reply to EPR, Bohr reaffirmed both the completeness of quantum mechanics and his antirealist view of the atomic universe: one cannot say that a quantum entity (electron, proton, photon, etc.) has a property before that property is measured. In other words, such a property would not be real; it would not be hidden somewhere, waiting for a measuring device or any interference (even a glance) from the observer. On this point, Einstein would later quip: "Does the Moon exist only when we look at it?"

AUTHORITY

One way to understand what a deterministic theory is goes as follows: it is a theory that presupposes that the property to be measured is present (or "hidden") in the object and can be determined with certainty. Physicists give this type of theory a rather fitting name: hidden-variable theory.

In a hidden-variable theory, the property in question (known or not) exists; it is real. Hence philosophers sometimes classify this scenario as realism. Einstein preferred the term "objective reality": things exist without needing to be observed.

But in the 1930s a theorem had proved that a hidden-variable version of quantum mechanics would be impossible. The feat belonged to one of the greatest mathematicians of all time, the Hungarian John von Neumann (1903-57). And, as is not rare in the history of science, the argument from authority prevailed over the authority of the argument.

Von Neumann's theorem was perfect from a mathematical point of view, but "wrong, silly" and "childish" (as it came to be called) in the realm of physics, for it started from a mistaken premise. We know today that Einstein was suspicious of that premise: "Do we have to accept this as true?", he asked two colleagues. But he went no further.

Von Neumann's theorem served, however, to all but trample the deterministic (hence hidden-variable) version of quantum mechanics put forward in 1927 by the French nobleman Louis de Broglie (1892-1987), winner of the 1929 Nobel Prize in Physics, who ended up abandoning that line of research.

For exactly two decades, von Neumann's theorem and the ideas of Bohr, who gathered around himself an influential school of notable young physicists, discouraged attempts to seek a deterministic version of quantum mechanics.

But in 1952 the American physicist David Bohm (1917-92), inspired by De Broglie's ideas, presented a hidden-variable version of quantum mechanics, today called Bohmian quantum mechanics in homage to the researcher, who worked in the 1950s at the Universidade de São Paulo (USP) while persecuted in the US by McCarthyism.

Bohmian quantum mechanics had two essential characteristics: 1) it was deterministic (that is, a hidden-variable theory); 2) it was nonlocal (that is, it admitted action at a distance), which made Einstein, a convinced localist, lose his initial interest in it.

PROTAGONIST

Enter the main character of this story: the Northern Irish physicist John Stewart Bell, who, upon learning of Bohmian mechanics, became certain of one thing: the "impossible had been done". More than that: von Neumann was wrong.

Bohm's quantum mechanics, ignored at first by the physics community, had just fallen on fertile ground: since his university days Bell had been mulling over, as a "hobby", the philosophical foundations of quantum mechanics (EPR, von Neumann, De Broglie, etc.). And he had taken sides in those debates: he was an avowed Einsteinian and found Bohr obscure.

Bell was born on June 28, 1928, in Belfast, into an Anglican family of modest means. He was expected to stop studying at 14, but at the insistence of his mother, who had noticed the intellectual gifts of the second of her four children, he was sent to a technical secondary school, where he learned practical trades (carpentry, construction, librarianship, etc.).

Having graduated at 16, he tried office jobs, but fate had him end up as a technician preparing experiments in the physics department of Queen's University, also in Belfast.

The professors there soon noticed the technician's interest in physics and began to encourage him with reading suggestions and classes. On a scholarship, Bell graduated in 1948 in experimental physics and, the following year, in mathematical physics. In both cases, with honors.

From 1949 to 1960 Bell worked at AERE (the Atomic Energy Research Establishment) in Harwell, UK. There he would meet his future wife, the physicist Mary Ross, his interlocutor in several works on physics. "When I look through these papers again, I see her everywhere", said Bell at a tribute he received in 1987, three years before dying of a cerebral hemorrhage.

He defended his doctorate in 1956, after a period at the University of Birmingham under the supervision of the German-British physicist Rudolf Peierls (1907-95). The thesis includes a proof of a very important theorem of physics (the CPT theorem), which had been discovered shortly before by a contemporary of his.

THE THEOREM

Disagreeing with the direction of research at AERE, the couple decided to trade stable jobs for temporary positions at CERN, the European center for nuclear research in Geneva, Switzerland. He went to the theoretical physics division; she to the accelerator division.

Bell spent 1963 and 1964 working in the US. There he found time to devote himself to his intellectual "hobby" and to gestate the result that would mark his career and, decades later, bring him fame.

He asked himself the following question: could the nonlocality of Bohm's hidden-variable theory be a feature of any realist theory of quantum mechanics? In other words, if things exist without being observed, must they necessarily establish among themselves that spooky action at a distance?

Bell's theorem, published in 1964, is also known as Bell's inequality. Its mathematics is not complex. Very schematically, we can think of the theorem as an inequality: x ≤ 2 (x less than or equal to two), where "x" represents, for our purposes here, the results of an experiment.

The most interesting consequences of Bell's theorem would arise if such an experiment violated the inequality, that is, showed that x > 2 (x greater than two). In that case, we would have to give up one of two assumptions: 1) realism (things exist without being observed); 2) locality (the quantum world allows no connections faster than light).
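The schematic inequality x ≤ 2 corresponds to the CHSH form of Bell's inequality, whose bound for any local hidden-variable theory is 2. As a minimal sketch (not from the article), quantum mechanics predicts a correlation E(a, b) = −cos(a − b) for measurements on spin-singlet pairs, and evaluating it at the standard optimal angles already shows the violation:

```python
import math

def correlation(a, b):
    # Quantum-mechanical prediction for entangled singlet pairs: E(a, b) = -cos(a - b)
    return -math.cos(a - b)

# Standard measurement angles (in radians) that maximize the CHSH quantity
a1, a2 = 0.0, math.pi / 2               # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two settings

# CHSH combination of the four correlations
S = abs(correlation(a1, b1) - correlation(a1, b2)
        + correlation(a2, b1) + correlation(a2, b2))

print(round(S, 3))  # 2.828: above the local bound of 2
```

Any local (realist) theory is constrained to S ≤ 2, while quantum mechanics allows up to 2√2 ≈ 2.83; experiments with entangled photons measure values near that quantum maximum.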

The theorem paper made no great splash at first. Bell had written another paper before it, essential for reaching the result, but through an error by the journal's editor it was only published in 1966.

REBELLION

The revival of Bell's ideas (and, by extension, of EPR and Bohm) gained momentum from factors external to physics. Many years later, the American physicist John Clauser would recall the turbulent late 1960s: "The Vietnam War dominated the political thoughts of my generation. Being a young physicist during that revolutionary period, I naturally wanted to shake the world."

Science, like the rest of the world, was marked by the spirit of the peace-and-love generation; by the struggle for civil rights; by May 1968; by Eastern philosophies; by psychedelic drugs; by telepathy; in a word, by rebellion. Which, translated into physics, meant devoting oneself to an area regarded as heretical in academia: the interpretations (or foundations) of quantum mechanics. But doing so considerably increased a young physicist's chances of ruining his career: EPR, Bohm and Bell were considered philosophical topics, not physics.

The final ingredient that allowed the taboo field to gain strength was the 1973 oil crisis, which shrank the supply of positions for young researchers, physicists included. Recession was thus added to rebellion.

Clauser and three colleagues, Abner Shimony, Richard Holt and Michael Horne, published their first ideas on the subject in 1969, under the title "Proposed Experiment to Test Local Hidden-Variable Theories". The quartet did so in part because they had noticed that Bell's inequality could be tested with photons, which are easier to generate; until then, more complicated experimental arrangements had been envisioned.

In 1972 the proposal became an experiment, carried out by Clauser and Stuart Freedman (1944-2012), and Bell's inequality was violated.

The world seemed to be nonlocal (ironically, Clauser was a localist!). But only seemed: for about a decade the experiment remained misunderstood and was therefore disregarded by the physics community. Still, those results reinforced something important: the foundations of quantum mechanics were not just philosophy. They were also experimental physics.
A CHANGE OF SCENE

The improvement of optical equipment (including lasers) allowed an experiment performed in 1982 to become a classic of the field.

Shortly before, the French physicist Alain Aspect had decided to begin a late doctorate, despite already being an experienced experimental physicist. He chose Bell's theorem as his topic and went to meet his Northern Irish colleague at CERN. In an interview with the physicist Ivan dos Santos Oliveira, of the Centro Brasileiro de Pesquisas Físicas in Rio de Janeiro, and with the author of this text, Aspect recounted the following exchange with Bell. "Do you have a permanent position?", Bell asked. "Yes", said Aspect. Otherwise, "you would be under great pressure not to do the experiment", Bell said.

The dialogue Aspect relates allows us to say that, almost two decades after the seminal 1964 paper, the subject was still shrouded in prejudice.

In an experiment with pairs of entangled photons, nature once again showed its nonlocal character: Bell's inequality was violated. The data showed x > 2. In 2007, for example, the group of the Austrian physicist Anton Zeilinger verified the violation of the inequality using photons separated by… 144 km.

In the interview in Brazil, Aspect said that until then the theorem was barely known among physicists, but it would become famous after his doctoral thesis; Bell, incidentally, sat on the examination committee.

STRANGE

After all, why does nature permit Einstein's "telepathy"? It is at the very least strange to think that a particle disturbed here can somehow alter the state of its companion at the far reaches of the universe.

There are several ways to interpret the consequences of what Bell did. To begin with, some (badly) mistaken ones: 1) nonlocality cannot exist, because it would violate relativity; 2) hidden-variable theories of quantum mechanics (Bohm, De Broglie, etc.) are completely ruled out; 3) quantum mechanics really is indeterministic; 4) irrealism, the view that things exist only when observed, is the final word. The list is long.

When the theorem was published, a shallow (and erroneous) reading held that it did not matter, since von Neumann's theorem had already ruled out hidden variables, so quantum mechanics would indeed be indeterministic. Among those who reject nonlocality, some go so far as to say that Einstein, Bohm and Bell did not understand what they themselves had done.

The American philosopher of physics Tim Maudlin, of New York University, offers a long list of such misconceptions in two excellent papers: "What Bell Did" (arxiv.org/abs/1408.1826) and "Reply to Werner" (arxiv.org/abs/1408.1828), a response to comments on the former.

For Maudlin, renowned in his field, Bell's theorem and its violation mean one thing only: nature is nonlocal ("spooky"), and there is no hope for locality, as Einstein would have wished; in this sense, one can say that Bell showed Einstein was wrong. Thus any deterministic (realist) theory that reproduces the experimental results obtained so far by quantum mechanics (incidentally, the most precise theory in the history of science) must necessarily be nonlocal.

From Aspect to the present, major technological developments have made possible something unthinkable a few decades ago: studying a single quantum entity (atom, electron, photon, etc.) in isolation. This gave rise to the field of quantum information, which encompasses quantum cryptography (the kind that promises absolute data security) and quantum computers, extremely fast machines. In a certain sense, it is philosophy turned into experimental physics.

Many of these advances are owed largely to the rebelliousness of a generation of young physicists who wanted to defy the "system".

A delightful account of that period is "How the Hippies Saved Physics" (W. W. Norton & Company, 2011), by the American historian of physics David Kaiser. A detailed historical analysis can be found in "Quantum Dissidents: Research on the Foundations of Quantum Theory circa 1970" (bit.ly/1xyipTJ, subscribers only), by the historian of physics Olival Freire Jr., of the Universidade Federal da Bahia.

For readers more interested in the philosophical angle, there are the two award-winning volumes of "Conceitos de Física Quântica" (Editora Livraria da Física, 2003), by the physicist and philosopher Osvaldo Pessoa Jr., of USP.

PRIVACY

At this point the reader may be wondering what Bell's theorem has to do with 100% guaranteed privacy.

In the future it is (quite) likely that information will be sent and received in the form of entangled photons. Recent research in quantum cryptography shows that it would suffice to submit these particles of light to the test of Bell's inequality. If the inequality is violated, there is no possibility that the message was eavesdropped on. And the test is independent of the equipment used to send or receive the photons. The theoretical basis for this can be found, for example, in "The Ultimate Physical Limits of Privacy", by Artur Ekert and Renato Renner (bit.ly/1gFjynG, subscribers only).
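The device-independent idea that Ekert and Renner describe can be caricatured as a simple decision rule: estimate the four CHSH correlations from a sample of the received photon pairs and accept the key material only if the Bell bound is violated. A hypothetical sketch (the function name, setting labels, and numbers are illustrative, not from the paper):

```python
def certify_channel(E):
    """Decide whether a quantum channel passed a CHSH Bell test.

    E maps the four pairs of measurement settings, (Alice, Bob),
    to correlation values estimated from a sample of photon pairs.
    """
    S = abs(E[("a1", "b1")] - E[("a1", "b2")]
            + E[("a2", "b1")] + E[("a2", "b2")])
    # Any locally explainable (hence eavesdropper-compatible) statistics
    # obey S <= 2; a clear violation certifies genuine entanglement,
    # regardless of who built the sending and receiving equipment.
    return S > 2

# Assumed example: correlations close to the ideal quantum prediction
measured = {("a1", "b1"): -0.70, ("a1", "b2"): 0.70,
            ("a2", "b1"): -0.70, ("a2", "b2"): -0.70}
print(certify_channel(measured))  # True
```

In a real protocol the threshold would also account for statistical error in the estimates; the point here is only that the security check reduces to testing the inequality itself.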

In a not-too-distant future, perhaps, Bell's theorem will become the most powerful weapon against espionage. That is a tremendous comfort for a world that seems headed toward zero privacy. It is also an immense outgrowth of a philosophical question that, according to the American physicist Henry Stapp, a specialist in the foundations of quantum mechanics, became "the most profound result of science". Deservedly so. After all, why did nature opt for "spooky action at a distance"?

The answer is a mystery. A pity that the question is not even mentioned in undergraduate physics programs in Brazil.

CÁSSIO LEITE VIEIRA, 54, a journalist at Instituto Ciência Hoje (RJ), is the author of "Einstein – O Reformulador do Universo" (Odysseus).
JOSÉ PATRÍCIO, 54, a visual artist from Pernambuco, is taking part in the exhibition "Asas a Raízes" at Caixa Cultural do Rio, from 17/1 to 15/3.

The Battle in Philosophy: Time, Substance, and the Void – Slavoj Zizek vs. Graham Harman (Dark Ecologies)

03 Wednesday Dec 2014

In my pursuit to understand poetry and philosophy in our time I've found that "time" is the key: there is a great battle that has up till now been waged between substantialist and process philosophers, as in the recent dispute over Graham Harman and Object-Oriented Philosophy (a reversion to a substantive formalism, though non-Aristotelian in intent) versus the process philosophers who descend from Whitehead and others. It is part of the wars of speculative realism…

In Harman the object is split between a sensual (phenomenal) appendage and a real (noumenal) withdrawn core, etc. For him this real can never be described, or even known directly, but must be teased out or allured from its “volcanic” hiding place, etc. While for those like Zizek there is nothing there, even less than nothing: a void that is the negation of negation: a self-reflecting nothingness. No core, no substance, no big Other.

Graham Harman will tell us that at the heart of our era there lurks a philosophical dogma, an idealism purporting to mask itself under the rubric of deflationary realism. Under the banner of deflationary realism he will align deconstruction (Jacques Derrida), Lacanian/Hegelian dialectics (Slavoj Zizek), and every dialectical philosophy "which tries to undercut any subterranean power of the things by calling this power an "essence," then claiming that essence is a naive abstraction unless it finds its proper place in the drama of human knowledge about the world."1 The point he makes is that at the center of this view of the world is the notion of a singular gap between the human and its world. (p. 123)

On the surface, Harman's works seem a revisionary turn in phenomenological thinking and philosophy, especially in their central reading of Heidegger's concept of readiness-to-hand (Zuhandenheit), which "refers to objects insofar as they withdraw from human view into a dark subterranean reality that never becomes present to practical action any more than it does to theoretical awareness" (ibid. 1). This notion of a non-utilitarian realism beyond the human, with its attendant swerve away from the linguistic turn, dialectical materialism, and the naturalism of scientific physicalism, sets the tone: Harman's base approach is to frame the withdrawal of objects against the human/world gap ontology of deflationary realism, and to decenter the anthropocentric worldview that pervades humanistic philosophy and literature, art and aesthetics.

Objects for Harman are first of all entities as formal cause, as well as the converse notion that "every set of relations is also an entity" (p. 260). Harman will argue against all naïve materialisms and naturalisms, saying:

What separates this model from all materialism is that I am not pampering one level of reality (that of infinitesimal particles) at the expense of all others. What is real in the cosmos are forms wrapped inside of forms, not durable specks of material that reduce everything else to derivative status. If this is “materialism,” then it is the first materialism in history to deny the existence of matter.(p. 293)

On this view there is no physical matter: everything, from the smallest quantum events to the largest structures in the universe, consists of forms within forms, structured entities immersed in relations, the engines of reality. Yet these very entities can unplug from those relations and enter into new and different engagements. The point here takes up the notion of intervention and the revisionary process of entities in their ongoing movements across the tiers or levels of reality. As he tells it, instead of materialism this is perhaps a new sort of "formalism," one that sides with Francis Bacon, "who lampoons efficient causation as ridiculous" (p. 293).

Anyone who has read the early works of Harman finds Zizek everywhere in their pages. Harman fights Zizek from the opposite end, holding to a new or revised substantial formalism. Zizek starts with lack (Void, Gap, Den: Democritus) at the heart of things, while for Harman there is no lack: everything is fully deployed, in an almost Platonic picture of time as vessel (our universe on a flat plane with multilevel tiers or scales). Zizek sticks with the whirlwind of nothings that Democritus termed "den": his less than nothing that gives birth to nothing and, from there, our universe (a quantum theory of subjectivity as process and emergence out of the void). This is the basic battle between opposing conceptual frameworks of reality.

Harman will openly tell us he likes Zizek, yet he totally disagrees with almost everything Zizek has written, saying of one of Zizek's key concepts:

Among the most central of these ideas is Zizek’s concept of retroactive causation—a theme in one respect very close to the present book, and in another respect diametrically opposed. (p. 205)

He will tell us that Zizek's retroactive causation brings with it the notion that the Real is not a "real world" outside of the human sphere, but the very gap between appearance and the non-appearing that is first posited by the fantasy of the human subject. (p. 207) Even a cursory reading of Zizek's two latest magnum opuses will attest to this continued drift (see Less Than Nothing and Absolute Recoil). Zizek, against all substantial formalisms, will tell us:

This last claim should be qualified, or, rather, corrected: what is retroactively called into existence is not the "hitherto formless matter" but, precisely, matter which was well articulated before the rise of the new, and whose contours were only blurred, or became invisible, from the horizon of the new historical form—with the rise of the new form, the previous form is (mis)perceived as "hitherto formless matter," that is, the "formlessness" itself is a retroactive effect, a violent erasure of the previous form. If one misses the retroactivity of such positing of presuppositions, one finds oneself in the ideological universe of evolutionary teleology: an ideological narrative thus emerges in which previous epochs are conceived as progressive stages or steps towards the present "civilized" epoch. This is why the retroactive positing of presuppositions is the materialist "substitute for that 'teleology' for which [Hegel] is ordinarily indicted."3

The point Zizek makes is that in a dialectical process, the thing becomes "what it always already was"; that is, the "eternal essence" (or, rather, concept) of a thing is not given in advance, it emerges, forms itself in an open contingent process—the eternally past essence is a retroactive result of the dialectical process. This retroactivity is what Kant was not able to think, and Hegel himself had to work long and hard to conceptualize it. Here is how the early Hegel, still struggling to differentiate himself from the legacy of the other German Idealists, qualifies Kant's great philosophical breakthrough: in the Kantian transcendental synthesis, "the determinateness of form is nothing but the identity of opposites." (ibid.)

As you can see, at the heart of the conflict between Harman and Zizek is a notion of causation: a view of time and of time's determinations in reality. For Zizek the concept or essence does not precede its history or processual movement in time, but is rather a creation of its contingent interactions in the dialectical process of that time itself. For Harman the "essence" is the core depth of every entity. In his discussion of Zubiri on essence he will tell us:

Zubiri allows common sense to pull off a bloodless coup d’état at the precise moment when he had begun to open our eyes to a zone of incomparable strangeness—- that of the essence withdrawn from all relation, even from brute causal relation (as overlooked by Heidegger, Levinas, and Whitehead alike).(p. 258)

This is a core notion of Harman's: real objects (essences) can withdraw from all relations. As he will tell us further on, "It is not only the case that every entity has a deeper essence—rather, every essence has a deeper essence as well" (p. 258). Realizing that this leads to an infinite regress, Harman will instead term it an "indefinite regress, and move on to other problems that arise from the emerging concept of substance" (p. 259). Succinctly, Harman's position is stated as follows:

I have offered the model of reality as a reversal between tool and broken tool, with the tool-being receding not just behind human awareness, but behind all relation whatsoever. This duality has been crossed by another opposition of equal power: the difference between the specific quality of a thing and its systematic union. Furthermore, the world is not split up evenly with a nation of pure tool-being on one side and a land of sheer relations on the other—every point in the cosmos is both a concealed reality and one that enters into explicit contact with others. Finally, in the strict sense, there is no such thing as a sheer “relation”; every relation turns out to be an entity in its own right. As a result, there is no cleared transcendent space that gains a distance from entities to reveal them “as” what they are. There is no exit from the density of being, no way to stand outside the brutal play of forces and vacuum-packed entities that crowd the world.(pp. 288-289).

In the above tool-being and the concept of “essence” are interchangeable. So for Harman the essence of real objects precedes its sensual appendages, and in fact for him withdraws not only from human awareness but from all relation whatsoever.

We are here back at the notion of den in Democritus: a “something cheaper than nothing,” a weird pre-ontological “something” which is less than nothing.

– Slavoj Zizek

Badiou and Zizek, from a materialist perspective, also opt for an event-based, non-substantive notion of time: a time of rupture and newness, an event.

Zizek recounts an Agatha Christie mystery in which a woman sees a murder aboard a passing train; the police find no evidence, and only Miss Jane Marple believes her and follows up:

This is an event at its purest and most minimal : something shocking, out of joint that appears to happen all of a sudden and interrupts the usual flow of things; something that emerges seemingly out of nowhere, without discernible causes, an appearance without solid being as its foundation.

It is a manifestation of a circular structure in which the evental effect retroactively determines its causes or reasons.1

As Zizek further qualifies, an event is thus the effect that seems to exceed its causes, and the space of an event is that which is opened up by the gap separating an effect from its causes. Already with this approximate definition, we find ourselves at the very heart of philosophy, since causality is one of the basic problems philosophy deals with: are all things connected with causal links? Does everything that exists have to be grounded in sufficient reasons? Or are there things that somehow happen out of nowhere? How, then, can philosophy help us to determine what an event – an occurrence not grounded in sufficient reasons – is and how it is possible? (Zizek, 5)

Zizek will see this as two approaches or opposing views of reality: the transcendental and the ontological or ontic. The first concerns the universal structure of how reality appears to us. Which conditions must be met for us to perceive something as really existing? ‘Transcendental’ is the philosopher’s technical term for such a frame, which defines the co-ordinates of reality – for example, the transcendental approach makes us aware that, for a scientific naturalist, only spatio-temporal material phenomena regulated by natural laws really exist, while for a premodern traditionalist, spirits and meanings are also part of reality, not only our human projections. The ontic approach, on the other hand, is concerned with reality itself, in its emergence and deployment: how did the universe come to be? Does it have a beginning and an end? What is our place in it?(Zizek, 5-6)

I’ve begun a long arduous process of tracing down this ancient battle between substantial formalists (object oriented) and non-substantive event (process) based philosophers, and have begun organizing a philosophical work around the great theme of Time that will tease out the current climate of Continental thought against this background.

In some ways I want to take up Zizek's philosophical materialism of non-substantial, self-relating nothingness vs. Harman's substantial formalism where they intersect in the notions of Time and Causality. We have seen work on each of these philosophers separately, but have yet to see the drama they are enacting from the two world perspectives: transcendental vs. ontological/ontic, substance vs. void or gap. I think this would be a worthwhile battle to bring to light what is lying there in fragments.

Stay tuned.

1. Harman, Graham (2011-08-31). Tool-Being: Heidegger and the Metaphysics of Objects (p. 1). Open Court. Kindle Edition.
2. Zizek, Slavoj (2014-08-26). Event: A Philosophical Journey Through A Concept (p. 4). Melville House. Kindle Edition.
3. Zizek, Slavoj (2012-04-30). Less Than Nothing: Hegel and the Shadow of Dialectical Materialism (Kindle Locations 6322-6330). Norton. Kindle Edition.

The Creepy New Wave of the Internet (NY Review of Books)

Sue Halpern

NOVEMBER 20, 2014 ISSUE

The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism
by Jeremy Rifkin
Palgrave Macmillan, 356 pp., $28.00

Enchanted Objects: Design, Human Desire, and the Internet of Things
by David Rose
Scribner, 304 pp., $28.00

Age of Context: Mobile, Sensors, Data and the Future of Privacy
by Robert Scoble and Shel Israel, with a foreword by Marc Benioff
Patrick Brewster, 225 pp., $14.45 (paper)

More Awesome Than Money: Four Boys and Their Heroic Quest to Save Your Privacy from Facebook
by Jim Dwyer
Viking, 374 pp., $27.95

A detail of Penelope Umbrico’s Sunset Portraits from 11,827,282 Flickr Sunsets on 1/7/13, 2013. For the project, Umbrico searched the website Flickr for scenes of sunsets in which the sun, not the subject, predominated. The installation, consisting of two thousand 4 x 6 C-prints, explores the idea that ‘the individual assertion of “being here” is ultimately read as a lack of individuality when faced with so many assertions that are more or less all the same.’ A collection of her work, Penelope Umbrico (photographs), was published in 2011 by Aperture.

Every day a piece of computer code is sent to me by e-mail from a website to which I subscribe called IFTTT. Those letters stand for the phrase “if this then that,” and the code is in the form of a “recipe” that has the power to animate it. Recently, for instance, I chose to enable an IFTTT recipe that read, “if the temperature in my house falls below 45 degrees Fahrenheit, then send me a text message.” It’s a simple command that heralds a significant change in how we will be living our lives when much of the material world is connected—like my thermostat—to the Internet.
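The “recipe” logic described above is simply a conditional rule attached to a sensor reading. A minimal sketch of such a rule in Python, with a hypothetical `send_text` action and a made-up threshold (this is an illustration of the if-this-then-that pattern, not IFTTT’s actual code or API):

```python
# Sketch of an "if this then that" recipe: a trigger condition
# paired with an action. All names here are hypothetical.

def make_recipe(threshold_f, action):
    """Return a rule: if a temperature reading falls below
    threshold_f, run the action; otherwise do nothing."""
    def recipe(temperature_f):
        if temperature_f < threshold_f:
            return action(temperature_f)
        return None  # condition not met: no action fires
    return recipe

def send_text(temp):
    # Stand-in for a real messaging service call.
    return f"Alert: house temperature is {temp} degrees F"

# "If the temperature in my house falls below 45 degrees
# Fahrenheit, then send me a text message."
alert_if_cold = make_recipe(45, send_text)

print(alert_if_cold(40))  # below the threshold: the text fires
print(alert_if_cold(68))  # comfortable temperature: nothing happens
```

The point of the pattern is that the homeowner composes the trigger and the action declaratively; the connected thermostat supplies the readings.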

It is already possible to buy Internet-enabled light bulbs that turn on when your car signals your home that you are a certain distance away and coffeemakers that sync to the alarm on your phone, as well as WiFi washer-dryers that know you are away and periodically fluff your clothes until you return, and Internet-connected slow cookers, vacuums, and refrigerators. “Check the morning weather, browse the web for recipes, explore your social networks or leave notes for your family—all from the refrigerator door,” reads the ad for one.

Welcome to the beginning of what is being touted as the Internet’s next wave by technologists, investment bankers, research organizations, and the companies that stand to rake in some of an estimated $14.4 trillion by 2022—what they call the Internet of Things (IoT). Cisco Systems, which is one of those companies, and whose CEO came up with that multitrillion-dollar figure, takes it a step further and calls this wave “the Internet of Everything,” which is both aspirational and telling. The writer and social thinker Jeremy Rifkin, whose consulting firm is working with businesses and governments to hurry this new wave along, describes it like this:

The Internet of Things will connect every thing with everyone in an integrated global network. People, machines, natural resources, production lines, logistics networks, consumption habits, recycling flows, and virtually every other aspect of economic and social life will be linked via sensors and software to the IoT platform, continually feeding Big Data to every node—businesses, homes, vehicles—moment to moment, in real time. Big Data, in turn, will be processed with advanced analytics, transformed into predictive algorithms, and programmed into automated systems to improve thermodynamic efficiencies, dramatically increase productivity, and reduce the marginal cost of producing and delivering a full range of goods and services to near zero across the entire economy.

In Rifkin’s estimation, all this connectivity will bring on the “Third Industrial Revolution,” poised as he believes it is to not merely redefine our relationship to machines and their relationship to one another, but to overtake and overthrow capitalism once the efficiencies of the Internet of Things undermine the market system, dropping the cost of producing goods to, basically, nothing. His recent book, The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism, is a paean to this coming epoch.

It is also deeply wishful, as many prospective arguments are, even when they start from fact. And the fact is, the Internet of Things is happening, and happening quickly. Rifkin notes that in 2007 there were ten million sensors of all kinds connected to the Internet, a number he says will increase to 100 trillion by 2030. A lot of these are small radio-frequency identification (RFID) microchips attached to goods as they crisscross the globe, but there are also sensors on vending machines, delivery trucks, cattle and other farm animals, cell phones, cars, weather-monitoring equipment, NFL football helmets, jet engines, and running shoes, among other things, generating data meant to streamline, inform, and increase productivity, often by bypassing human intervention. Additionally, the number of autonomous Internet-connected devices such as cell phones—devices that communicate directly with one another—now doubles every five years, growing from 12.5 billion in 2010 to an estimated 25 billion next year and 50 billion by 2020.

For years, a cohort of technologists, most notably Ray Kurzweil, the writer, inventor, and director of engineering at Google, have been predicting the day when computer intelligence surpasses human intelligence and merges with it in what they call the Singularity. We are not there yet, but a kind of singularity is already upon us as we swallow pills embedded with microscopic computer chips, activated by stomach acids, that will be able to report compliance with our doctor’s orders (or not) directly to our electronic medical records. Then there is the singularity that occurs when we outfit our bodies with “wearable technology” that sends data about our physical activity, heart rate, respiration, and sleep patterns to a database in the cloud as well as to our mobile phones and computers (and to Facebook and our insurance company and our employer).

Cisco Systems, for instance, which is already deep into wearable technology, is working on a platform called “the Connected Athlete” that “turns the athlete’s body into a distributed system of sensors and network intelligence…[so] the athlete becomes more than just a competitor—he or she becomes a Wireless Body Area Network, or WBAN.” Wearable technology, which generated $800 million in 2013, is expected to make nearly twice that this year. These are numbers that not only represent sales, but the public’s acceptance of, and habituation to, becoming one of the things connected to and through the Internet.

One reason that it has been easy to miss the emergence of the Internet of Things, and therefore miss its significance, is that much of what is presented to the public as its avatars seems superfluous and beside the point. An alarm clock that emits the scent of bacon, a glow ball that signals if it is too windy to go out sailing, and an “egg minder” that tells you how many eggs are in your refrigerator no matter where you are in the (Internet-connected) world, revolutionary as they may be, hardly seem the stuff of revolutions; because they are novelties, they obscure what is novel about them.

And then there is the creepiness factor. In the weeks before the general release of Google Glass, Google’s $1,500 see-through eyeglass computer that lets the wearer record what she is seeing and hearing, the press reported a number of incidents in which early adopters were physically accosted by people offended by the product’s intrusiveness. Enough is enough, the Glass opponents were saying.

Why a small cohort of people encountering Google Glass for the first time found it disturbing is the same reason that David Rose, an instructor at MIT and the founder of a company that embeds Internet connectivity into everyday devices like umbrellas and medicine vials, celebrates it and waxes nearly poetic on the potential of “heads up displays.” As he writes in Enchanted Objects: Design, Human Desire, and the Internet of Things, such devices have the potential to radically transform human encounters. Rose imagines a party where

Wearing your fashionable [heads up] display, you will instruct the device to display the people’s names and key biographical info above their heads. In the business meeting, you will call up information about previous meetings and agenda items. The HUD display will call up useful websites, tap into social networks, and dig into massive info sources…. You will fact-check your friends and colleagues…. You will also engage in real-time messaging, including videoconferencing with friends or colleagues who will participate, coach, consult, or lurk.
Whether this scenario excites or repels you, it represents the vision of more than one of the players moving us in the direction of pervasive connectivity. Rose’s company, Ambient Devices, has been at the forefront of what he calls “enchanting” objects—that is, connecting them to the Internet to make them “extraordinary.” This is a task that Glenn Lurie, the CEO of AT&T Mobility, believes is “spot on.” Among these enchanted objects are the Google Latitude Doorbell that “lets you know where your family members are and when they are approaching home,” an umbrella that turns blue when it is about to rain so you might be inspired to take it with you, and a jacket that gives you a hug every time someone likes your Facebook post.

Rose envisions “an enchanted wall in your kitchen that could display, through lines of colored light, the trends and patterns of your loved ones’ moods,” because it will offer “a better understanding of [the] hidden thoughts and emotions that are relevant to us….” If his account of a mood wall seems unduly fanciful (and nutty), it should be noted that this summer, British Airways gave passengers flying from New York to London blankets embedded with neurosensors to track how they were feeling. Apparently this was more scientific than simply asking them. According to one report:

When the fiber optics woven into the blanket turned red, flight attendants knew that the passengers were feeling stressed and anxious. Blue blankets were a sign that the passenger was feeling calm and relaxed.
Thus the airline learned that passengers were happiest when eating and drinking, and most relaxed when sleeping.

While, arguably, this “finding” is as trivial as an umbrella that turns blue when it’s going to rain, there is nothing trivial about collecting personal data, as innocuous as that data may seem. It takes very little imagination to foresee how the kitchen mood wall could lead to advertisements for antidepressants that follow you around the Web, or trigger an alert to your employer, or show up on your Facebook page because, according to Robert Scoble and Shel Israel in Age of Context: Mobile, Sensors, Data and the Future of Privacy, Facebook “wants to build a system that anticipates your needs.”

It takes even less imagination to foresee how information about your comings and goings obtained from the Google Latitude Doorbell could be used in a court of law. Cars are now outfitted with scores of sensors, including ones in the seats that determine how many passengers are in them, as well as with an “event data recorder” (EDR), which is the automobile equivalent of an airplane’s black box. As Scoble and Israel report in Age of Context, “the general legal consensus is that police will be able to subpoena car logs the same way they now subpoena phone records.”

Meanwhile, cars themselves are becoming computers on wheels, with operating system updates coming wirelessly over the air, and with increasing capacity to “understand” their owners. As Scoble and Israel tell it:

They not only adjust seat positions and mirrors automatically, but soon they’ll also know your preferences in music, service stations, dining spots and hotels…. They know when you are headed home, and soon they’ll be able to remind you to stop at the market to get a dessert for dinner.
Recent revelations from the journalist Glenn Greenwald put the number of Americans under government surveillance at a colossal 1.2 million people. Once the Internet of Things is in place, that number might easily expand to include everyone else, because a system that can remind you to stop at the market for dessert is a system that knows who you are and where you are and what you’ve been doing and with whom you’ve been doing it. And this is information we give out freely, or unwittingly, and largely without question or complaint, trading it for convenience, or what passes for convenience.

Michael Cogliantry
The journalist A.J. Jacobs wearing data-collecting sensors to keep track of his health and fitness; from Rick Smolan and Jennifer Erwitt’s The Human Face of Big Data, published in 2012 by Against All Odds
In other words, as human behavior is tracked and merchandized on a massive scale, the Internet of Things creates the perfect conditions to bolster and expand the surveillance state. In the world of the Internet of Things, your car, your heating system, your refrigerator, your fitness apps, your credit card, your television set, your window shades, your scale, your medications, your camera, your heart rate monitor, your electric toothbrush, and your washing machine—to say nothing of your phone—generate a continuous stream of data that resides largely out of reach of the individual but not of those willing to pay for it or in other ways commandeer it.

That is the point: the Internet of Things is about the “dataization” of our bodies, ourselves, and our environment. As a post on the tech website Gigaom put it, “The Internet of Things isn’t about things. It’s about cheap data.” Lots and lots of it. “The more you tell the world about yourself, the more the world can give you what you want,” says Sam Lessin, the head of Facebook’s Identity Product Group. It’s a sentiment shared by Scoble and Israel, who write:

The more the technology knows about you, the more benefits you will receive. That can leave you with the chilling sensation that big data is watching you. In the vast majority of cases, we believe the coming benefits are worth that trade-off.
So, too, does Jeremy Rifkin, who dismisses our legal, social, and cultural affinity for privacy as, essentially, a bourgeois affectation—a remnant of the enclosure laws that spawned capitalism:

Connecting everyone and everything in a neural network brings the human race out of the age of privacy, a defining characteristic of modernity, and into the era of transparency. While privacy has long been considered a fundamental right, it has never been an inherent right. Indeed, for all of human history, until the modern era, life was lived more or less publicly….
In virtually every society that we know of before the modern era, people bathed together in public, often urinated and defecated in public, ate at communal tables, frequently engaged in sexual intimacy in public, and slept huddled together en masse. It wasn’t until the early capitalist era that people began to retreat behind locked doors.
As anyone who has spent any time on Facebook knows, transparency is a fiction—literally. Social media is about presenting a curated self; it is opacity masquerading as transparency. In a sense, then, it is about preserving privacy. So when Rifkin claims that for young people, “privacy has lost much of its appeal,” he is either confusing sharing (as in sharing pictures of a vacation in Spain) with openness, or he is acknowledging that young people, especially, have become inured to the trade-offs they are making to use services like Facebook. (But they are not completely inured to it, as demonstrated by both Jim Dwyer’s painstaking book More Awesome Than Money, about the failed race to build a noncommercial social media site called Diaspora in 2010, as well as the overwhelming response—as many as 31,000 requests an hour for invitations—to the recent announcement that there soon will be a Facebook alternative, Ello, that does not collect or sell users’ data.)

These trade-offs will only increase as the quotidian becomes digitized, leaving fewer and fewer opportunities to opt out. It’s one thing to edit the self that is broadcast on Facebook and Twitter, but the Internet of Things, which knows our viewing habits, grooming rituals, medical histories, and more, allows no such interventions—unless it is our behaviors and curiosities and idiosyncrasies themselves that end up on the cutting room floor.

Even so, no matter what we do, the ubiquity of the Internet of Things is putting us squarely in the path of hackers, who will have almost unlimited portals into our digital lives. When, last winter, cybercriminals broke into more than 100,000 Internet-enabled appliances including refrigerators and sent out 750,000 spam e-mails to their users, they demonstrated just how vulnerable Internet-connected machines are.

Not long after that, Forbes reported that security researchers had come up with a $20 tool that was able to remotely control a car’s steering, brakes, acceleration, locks, and lights. It was an experiment that, again, showed how simple it is to manipulate and sabotage the smartest of machines, even though—but really because—a car is now, in the words of a Ford executive, a “cognitive device.”

More recently, a study of ten popular IoT devices by the computer company Hewlett-Packard uncovered a total of 250 security flaws among them. As Jerry Michalski, a former tech industry analyst and founder of the REX think tank, observed in a recent Pew study: “Most of the devices exposed on the internet will be vulnerable. They will also be prone to unintended consequences: they will do things nobody designed for beforehand, most of which will be undesirable.”

Breaking into a home system so that the refrigerator will send out spam that will flood your e-mail and hacking a car to trigger a crash are, of course, terrible and real possibilities, yet as bad as they may be, they are limited in scope. As IoT technology is adopted in manufacturing, logistics, and energy generation and distribution, the vulnerabilities do not have to scale up for the stakes to soar. In a New York Times article last year, Matthew Wald wrote:

If an adversary lands a knockout blow [to the energy grid]…it could black out vast areas of the continent for weeks; interrupt supplies of water, gasoline, diesel fuel and fresh food; shut down communications; and create disruptions of a scale that was only hinted at by Hurricane Sandy and the attacks of Sept. 11.
In that same article, Wald noted that though government officials, law enforcement personnel, National Guard members, and utility workers had been brought together to go through a worst-case scenario practice drill, they often seemed to be speaking different languages, which did not bode well for an effective response to what is recognized as a near inevitability. (Last year the Department of Homeland Security responded to 256 cyberattacks, half of them directed at the electrical grid. This was double the number for 2012.)

This Babel problem dogs the whole Internet of Things venture. After the “things” are connected to the Internet, they need to communicate with one another: your smart TV to your smart light bulbs to your smart door locks to your smart socks (yes, they exist). And if there is no lingua franca—which there isn’t so far—then when that television breaks or becomes obsolete (because soon enough there will be an even smarter one), your choices will be limited by what language is connecting all your stuff. Though there are industry groups trying to unify the platform, in September Apple offered a glimpse of how the Internet of Things actually might play out, when it introduced the company’s new smart watch, mobile payment system, health apps, and other, seemingly random, additions to its product line. As Mat Honan virtually shouted in Wired:

Apple is building a world in which there is a computer in your every interaction, waking and sleeping. A computer in your pocket. A computer on your body. A computer paying for all your purchases. A computer opening your hotel room door. A computer monitoring your movements as you walk through the mall. A computer watching you sleep. A computer controlling the devices in your home. A computer that tells you where you parked. A computer taking your pulse, telling you how many steps you took, how high you climbed and how many calories you burned—and sharing it all with your friends…. THIS IS THE NEW APPLE ECOSYSTEM. APPLE HAS TURNED OUR WORLD INTO ONE BIG UBIQUITOUS COMPUTER.
The ecosystem may be lush, but it will be, by design, limited. Call it the Internet of Proprietary Things.

For many of us, it is difficult to imagine smart watches and WiFi-enabled light bulbs leading to a new world order, whether that new world order is a surveillance state that knows more about us than we do about ourselves or the techno-utopia envisioned by Jeremy Rifkin, where people can make much of what they need on 3-D printers powered by solar panels and unleashed human creativity. Because home automation is likely to be expensive—it will take a lot of eggs before the egg minder pays for itself—it is unlikely that those watches and light bulbs will be the primary driver of the Internet of Things, though they will be its showcase.

Rather, the Internet’s third wave will be propelled by businesses that are able to rationalize their operations by replacing people with machines, using sensors to simplify distribution patterns and reduce inventories, deploying algorithms that eliminate human error, and so on. Those business savings are crucial to Rifkin’s vision of the Third Industrial Revolution, not simply because they have the potential to bring down the price of consumer goods, but because, for the first time, a central tenet of capitalism—that increased productivity requires increased human labor—will no longer hold. And once productivity is unmoored from labor, he argues, capitalism will not be able to support itself, either ideologically or practically.

What will rise in place of capitalism is what Rifkin calls the “collaborative commons,” where goods and property are shared, and the distinction between those who own the means of production and those who are beholden to those who own the means of production disappears. “The old paradigm of owners and workers, and of sellers and consumers, is beginning to break down,” he writes.

Consumers are becoming their own producers, eliminating the distinction. Prosumers will increasingly be able to produce, consume, and share their own goods…. The automation of work is already beginning to free up human labor to migrate to the evolving social economy…. The Internet of Things frees human beings from the market economy to pursue nonmaterial shared interests on the Collaborative Commons.
Rifkin’s vision that people will occupy themselves with more fulfilling activities like making music and self-publishing novels once they are freed from work, while machines do the heavy lifting, is offered at a moment when a new kind of structural unemployment born of robotics, big data, and artificial intelligence takes hold globally, and traditional ways of making a living disappear. Rifkin’s claims may be comforting, but they are illusory and misleading. (We’ve also heard this before, in 1845, when Marx wrote in The German Ideology that under communism people would be “free to hunt in the morning, fish in the afternoon, rear cattle in the evening, [and] criticize after dinner.”)

As an example, Rifkin points to Etsy, the online marketplace where thousands of “prosumers” sell their crafts, as a model for what he dubs the new creative economy. “Currently 900,000 small producers of goods advertise at no cost on the Etsy website,” he writes.

Nearly 60 million consumers per month from around the world browse the website, often interacting personally with suppliers…. This form of laterally scaled marketing puts the small enterprise on a level playing field with the big boys, allowing them to reach a worldwide user market at a fraction of the cost.
All that may be accurate and yet largely irrelevant if the goal is for those 900,000 small producers to make an actual living. As Amanda Hess wrote last year in Slate:

Etsy says its crafters are “thinking and acting like entrepreneurs,” but they’re not thinking or acting like very effective ones. Seventy-four percent of Etsy sellers consider their shop a “business,” including 65 percent of sellers who made less than $100 last year.
While it is true that a do-it-yourself subculture is thriving, and sharing cars, tools, houses, and other property is becoming more common, it is also true that much of this activity is happening under duress as steady employment disappears. As an article in The New York Times this past summer made clear, employment in the sharing economy, also known as the gig economy, where people piece together an income by driving for Uber and delivering groceries for Instacart, leaves them little time for hunting and fishing, unless it’s hunting for work and fishing under a shared couch for loose change.

So here comes the Internet’s Third Wave. In its wake jobs will disappear, work will morph, and a lot of money will be made by the companies, consultants, and investment banks that saw it coming. Privacy will disappear, too, and our intimate spaces will become advertising platforms—last December Google sent a letter to the SEC explaining how it might run ads on home appliances—and we may be too busy trying to get our toaster to communicate with our bathroom scale to notice. Technology, which allows us to augment and extend our native capabilities, tends to evolve haphazardly, and the future that is imagined for it—good or bad—is almost always historical, which is to say, naive.

A reader’s guide to the “ontological turn” – Parts 1 to 4 (Somatosphere)

January 15, 2014

A reader’s guide to the “ontological turn” – Part 1

Judith Farquhar

This article is part of the series: A reader’s guide to the “ontological turn”

Editor’s note: In the wake of the discussion about the ‘ontological turn’ at this year’s American Anthropological Association conference, we asked several scholars, “which texts or resources would you recommend to a student or colleague interested in the uses of ‘ontology’ as an analytical category in recent work in anthropology and science and technology studies?”  This was the reading list we received from Judith Farquhar, Max Palevsky Professor of Anthropology at the University of Chicago.  Answers from a number of other scholars will appear as separate posts in the series.

In providing a reading list, I had lots of good “ontological” resources at hand, having just taught a seminar called “Ontological Politics.”  This list is pared down from the syllabus; and the syllabus itself was just a subset of the many useful philosophical, historical, and ethnographic readings that I had been devouring during the previous year, when I was on leave.

I really like all these pieces, though I don’t actually “follow” all of them.  This is a good thing, because the field — if it can be called that — tends to go in circles, with all the usual suspects citing all the usual suspects.  In the end, as we worked our way through the course, I found the ethnographic work more exciting than most of the more theoretically inclined writing.  At the other end of the spectrum, I feel quite transformed by having read Heidegger’s “The Thing” — but I’m not sure why!

Philosophical and methodological works in anthropology and beyond:

Philippe Descola, 2013, The Ecology of Others, Chicago: Prickly Paradigm Press.

William Connolly, 2005, Pluralism. Durham: Duke University Press. (Ch. 3, “Pluralism and the Universe” [on William James], pp. 68-92.)

Eduardo Viveiros de Castro, 2004, “Perspectival Anthropology and the Method of Controlled Equivocation,” Tipiti 2 (1): 3-22.

Eduardo Viveiros de Castro, 2012, “Immanence and Fear: Stranger events and subjects in Amazonia,” HAU: Journal of Ethnographic Theory 2 (1): 27-43.

Marisol de la Cadena, 2010, “Indigenous Cosmopolitics in the Andes: Conceptual reflections beyond ‘politics’,” Cultural Anthropology 25 (2): 334-370.

Bruno Latour, 2004, “Why Has Critique Run Out of Steam? From matters of fact to matters of concern,” Critical Inquiry 30 (2): 225-248.

A dialogue from Common Knowledge 2004 (3): Ulrich Beck: “The Truth of Others: A Cosmopolitan Approach” (pp. 430-449) and Bruno Latour: “Whose Cosmos, Which Cosmopolitics? Comments on the Peace Terms of Ulrich Beck” (pp. 450-462).

Graham Harman, 2009, Prince of Networks: Bruno Latour and Metaphysics.  Melbourne: Re.Press.  (OA)

Isabelle Stengers, 2005, “The Cosmopolitical Proposal,” in Bruno Latour & Peter Weibel, eds., Making Things Public: Atmospheres of Democracy.  Cambridge MA: MIT Press, pp. 994-1003.

Martin Heidegger, 1971, “The Thing,” in Poetry, Language, Thought (Tr. Albert Hofstadter).  New York: Harper & Row, pp. 163-180

Graham Harman, 2010, “Technology, Objects and Things in Heidegger,” Cambridge Journal of Economics 34: 17-25.

Jane Bennett and William Connolly, 2012, “The Crumpled Handkerchief,” in Bernd Herzogenrath, ed., Time and History in Deleuze and Serres. London & New York: Continuum, pp. 153-171.

Tim Ingold, 2004, “A Circumpolar Night’s Dream,” in John Clammer et al., eds., Figured Worlds: Ontological Obstacles in Intercultural Relations.  Toronto: University of Toronto Press, pp. 25-57.

Annemarie Mol, 1999, “Ontological Politics: A Word and Some Questions,” in John Law and John Hassard, eds., Actor Network Theory and After.  Oxford: Blackwell, pp. 74-89.

Terrific ethnographic studies very concerned with ontologies:

Mario Blaser, 2010, Storytelling Globalization from the Chaco and Beyond.  Durham NC: Duke University Press.

Eduardo Kohn, 2013, How Forests Think: Toward an anthropology beyond the human. Berkeley: University of California Press.

Helen Verran, 2011, “On Assemblage: Indigenous Knowledge and Digital Media (2003-2006) and HMS Investigator (1800-1805).” In Tony Bennett & Chris Healy, eds., Assembling Culture.  London & New York: Routledge, pp. 163-176.

Morten Pedersen, 2011, Not Quite Shamans: Spirit worlds and Political Lives in Northern Mongolia. Ithaca: Cornell University Press.

John Law & Marianne Lien, 2013, “Slippery: Field Notes in Empirical Ontology,” Social Studies of Science 43 (3): 363-378.

Stacey A. Langwick, 2011, Bodies, Politics, and African Healing: The Matter of Maladies in Tanzania.  Bloomington: Indiana University Press.

Judith Farquhar is Max Palevsky Professor of Anthropology and Social Sciences at the University of Chicago. Her research concerns traditional medicine, popular culture, and everyday life in contemporary China. She is the author of Knowing Practice: The Clinical Encounter of Chinese Medicine (Westview 1996), Appetites: Food and Sex in Post-Socialist China (Duke 2002), and Ten Thousand Things: Nurturing Life in Contemporary Beijing (Zone 2012) (with Qicheng Zhang), and editor (with Margaret Lock) of Beyond the Body Proper: Reading the Anthropology of Material Life (Duke 2007).

*   *   *

January 17, 2014

A reader’s guide to the “ontological turn” – Part 2

Javier Lezaun

This article is part of the series: A reader’s guide to the “ontological turn”

Editor’s note: In the wake of the discussion about the ‘ontological turn’ at this year’s American Anthropological Association conference, we asked several scholars, “which texts or resources would you recommend to a student or colleague interested in the uses of ‘ontology’ as an analytical category in recent work in anthropology and science and technology studies?”  This was the answer we received from Javier Lezaun, James Martin Lecturer in Science and Technology Governance at the University of Oxford. 

Those of us who have been brought up in the science and technology studies (STS) tradition look at claims of an ‘ontological turn’ with a strange sense of familiarity: it’s déjà vu all over again! For we can read the whole history of STS (cheekily and retroactively, of course) as a ‘turn to ontology’, albeit one that was rarely thematized as such.

A key text in forming STS and giving it a proto-ontological orientation (if such a term can be invented) is Ian Hacking’s Representing and Intervening (1983). On its surface the book is an introduction to central themes and keywords in the philosophy of science. In effect, it launches a programme of research that actively blurs the lines between depictions of the world and interventions into its composition. And it does so by bringing to the fore the constitutive role of experimental practices – a key leitmotiv of what would eventually become STS.

Hacking, of course, went on to develop a highly original form of pragmatic realism, particularly in relation to the emergence of psychiatric categories and new forms of personhood. His 2002 book, Historical Ontology, captures well the main thrust of his arguments, and lays out a useful contrast with the ‘meta-epistemology’ of much of the best contemporary writing in the history of science.

But we are getting ahead of ourselves and disrespecting our good old friend Chronology. The truth is that references to ontology are scarce in the foundational texts of STS (the term is not even indexed in Representing and Intervening, for instance). This is hardly surprising: alluding to the ontological implies a neat distinction between being and representing, precisely the dichotomy that STS scholars were trying to overcome – or, more accurately, ignore – at the time. The strategy was to enrich our notion of representation, not to turn away from it in favour of a higher plane of being.

It is in the particular subfield of studies of particle physics that the discussion about ontology within STS developed, simply because matters of reality – and the reality of matter – featured much more prominently in the object of study. Andrew Pickering’s Constructing Quarks: A Sociological History of Particle Physics (1984) was one of the few texts that tackled ontological matters head on, and it shared with Hacking’s an emphasis on the role of experimental machineries in producing agreed-upon worlds. In his following book, The Mangle of Practice: Time, Agency, and Science (1995), Pickering would develop this insight into a full-fledged theory of temporal emergence based on the dialectic of resistance and accommodation.

An interesting continuation and counterpoint in this tradition is Karen Barad’s book, Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning (2007). Barad’s thesis, particularly her theory of agential realism, is avowedly and explicitly ontological, but this does not imply a return to traditional metaphysical problem-definitions. In fact, Barad speaks of ‘onto-epistemology’, or even of ‘onto-ethico-epistemology’, to describe her approach. The result is an aggregation of planes of analysis, rather than a turn from one to the other.

Arguments about the nature of quarks, bubble chambers and quantum physics might seem very distant from the sort of anthropo-somatic questions that preoccupy readers of this blog, but it is worth noting that this rarefied discussion has been the terrain where key elements of the current STS interest in ontology – the idioms of performativity and materialism in particular – were first tested.

The work that best represents this current interest in matters of ontology within STS is that of Annemarie Mol and John Law. Their papers on topologies (e.g., ‘Regions, Networks and Fluids: Anaemia and Social Topology’ in 1994; ‘Situating technoscience:  an inquiry into spatialities’, 2001) broke new ground in making explicit the argument about the multiplicity of the world(s), and served to develop a first typology of alternative modes of reality. Mol’s ethnography of atherosclerosis, The Body Multiple: Ontology in Medical Practice (2003), is of course the (provisional?) culmination of this brand of ‘empirical philosophy’, and a text that offers a template for STS-inflected anthropology (and vice versa).

One distinct contribution of this body of work – and this is a point made by Malcolm Ashmore in his review of The Body Multiple – is to extend STS modes of inquiry beyond the study of new or controversial entities, and draw the same kind of analytical intensity to realities – like that (or those) of atherosclerosis – whose univocal reality we tend to take for granted. For better and worse, STS grew out of an effort to understand how new facts and artifacts enter our world, and the field remains attached to all that is (or appears to be) new – even if the end-result of the analysis is often to challenge those claims to novelty. The current ‘ontological turn’ in STS would then represent an effort to excavate mundane layers of reality, to draw attention to the performed or enacted nature of that which appears old, settled or uncontroversial. I suspect this manoeuvre carries less value in Anthropology, where the everyday and the taken-for-granted are often the very locus of inquiry.

The other value of the ‘ontological turn’ is, in my view, to recast the question of politics – as both an object of study and a mode of engagement with the world. This recasting can take at least two different forms. There are those who argue that attending to the ontological, i.e., to the reality of plural worlds and the unavoidable condition of multinaturalism, intensifies (and clarifies) the normative implications of our analyses (see for instance the genealogical argument put forward very forcefully by Dimitris Papadopoulos in his article ‘Alter-ontologies: towards a constituent politics in technoscience’). A slightly different course of action is to think of ontology as a way of addressing the intertwining of the technological and the political. Excellent recent examples of this approach are Noortje Marres’s Material Participation: Technology, the Environment and Everyday Publics (2012) and Andrew Barry’s Material Politics: Disputes Along the Pipeline (2013).

In sum, and to stake out my own position, I think STS is best seen as a fairly tight bundle of analytical sensibilities – sensibilities that are manifested in an evolving archipelago of case studies. It is not a theory of the world (let alone a theory of being), and it quickly becomes trite and somewhat ritualistic when it is transformed into a laundry list of statements about what the world is or should be like. In this sense, an ‘ontological turn’ would run counter to the STS tradition, as I see it, if it implies asserting a particular ontology of the world, regardless of whether the claim is that that ontology is plural, multiple, fluid, relational, etc. This sort of categorical, pre-empirical position smothers the critical instincts that energize the field and have driven its evolution over the last three decades. Steve Woolgar and I have formulated this view in a recent piece for Social Studies of Science (‘The wrong bin bag: a turn to ontology in science and technology studies?’), and a similar argument has been made often and persuasively by Michael Lynch (e.g., “Ontography: investigating the production of things, deflating ontology”).

Javier Lezaun is James Martin Lecturer in Science and Technology Governance and Deputy Director at the Institute for Science, Innovation and Society in the School of Anthropology and Museum Ethnography at the University of Oxford. His research focuses on the politics of scientific research and its governance. He directs the research programme BioProperty, funded by the European Research Council, which investigates the role of property rights and new forms of ownership in biomedical research. Javier is also currently participating in research projects on the governance of climate geoengineering, and new forms of consumer mobilization in food markets.

*   *   *

February 12, 2014

A reader’s guide to the “ontological turn” – Part 3

Morten Axel Pedersen

This article is part of the series: A reader’s guide to the “ontological turn”

Editor’s note: In the wake of all the discussion about the ‘ontological turn’ at this year’s American Anthropological Association conference, we asked four scholars, “which texts or resources would you recommend to a student or colleague interested in the uses of ‘ontology’ as an analytical category in recent work in anthropology and science and technology studies?” This was the reading list we received from Morten Axel Pedersen, Professor of Anthropology at the University of Copenhagen.

As someone who has, for a decade, participated in discussions about ‘ontology’ at various European anthropology venues and departments, I share the sense of déjà-vu noted by Lezaun in Part 2 of this Reader’s Guide. In fact, it is surprising just how much interest and enthusiasm, not to mention critique and aversion, has been generated by the recent introduction of this discussion into mainstream US anthropology. Arguably, the ontological turn now faces the risk of becoming the latest ‘new thing’, so critique is inevitable, necessary and welcome. Indeed, students and scholars from some of the same institutions that spearheaded anthropology’s turn to ontology are now questioning its most deeply held assumptions and cherished arguments. That, of course, is precisely how things should be. And hopefully, the part-repetition in the US of debates that are now losing steam in Latin America, Japan and Europe will provide a new framework for experimentally transforming and productively distorting anthropology’s engagement with ontology, and thus avoid the ever lurking danger of it becoming just another orthodoxy.

What follows here is a list of predominantly anthropological readings, which does not cover the creative interfaces between STS and anthropology explored by scholars in Copenhagen, Manchester, Osaka, and elsewhere. The list is not intended to be exclusive. Indeed, many scholars who figure on it may well not consider themselves part of the ontological turn and may be critical of part or all of it. The reason why they are nevertheless included is that they all have, in my view, played a role in making the ‘turn’ what it is today.

Books

Blaser, Mario. 2010. Storytelling Globalization from the Chaco and Beyond.  Durham NC: Duke University Press.

Descola, Philippe. 2013. Beyond Nature and Culture. Trans. J. Lloyd. Chicago: University of Chicago Press.

Gell, Alfred. 1998. Art and Agency: An Anthropological Theory. Oxford: Clarendon Press.

Holbraad, Martin. 2012. Truth in Motion: The Recursive Anthropology of Cuban Divination. Chicago: University of Chicago Press.

Kohn, Eduardo. 2013. How Forests Think: Toward an anthropology beyond the human. Berkeley: University of California Press.

Krøijer, Stine. Forthcoming. Figurations of the Future: Forms and Temporality of Left Radical Politics in Northern Europe. Oxford: Berghahn Books.

Maurer, Bill. 2005. Mutual Life, Limited. Islamic Banking, Alternative Currencies, Lateral Reason. Princeton: Princeton University Press.

Miyazaki, Hirokazu. 2013. Arbitraging Japan: Dreams of Capitalism at the End of Finance. Berkeley: University of California Press.

Rio, Knut Mikjel. 2007. The Power of Perspective. Social Ontology and Agency on Ambrym Island, Vanuatu. Oxford: Berghahn Books.

Scott, Michael W. 2007. The Severed Snake: Matrilineages, Making Place, and a Melanesian Christianity in Southeast Solomon Islands. Durham NC: Carolina Academic Press.

Stasch, Rupert. 2009. Society of Others. Kinship and Mourning in a West Papuan Place. Berkeley: University of California Press.

Strathern, Marilyn. 2004. Partial Connections (Updated Edition). Walnut Creek, CA: Altamira.

Swancutt, Katherine. 2012. Fortune and the Cursed: The Sliding Scale of Time in Mongolian Divination. Oxford: Berghahn.

Wagner, Roy. 1975. The Invention of Culture. Chicago: University of Chicago Press.

Willerslev, Rane. 2007. Soul Hunters: Hunting, Animism and Personhood among the Siberian Yukaghirs. Berkeley: University of California Press.

Viveiros de Castro, Eduardo. 2009. Métaphysiques cannibales. Paris: Presses Universitaires de France.

Edited volumes/sections

Jensen, C. B., M. A. Pedersen & B. R. Winthereik, eds. 2011. “Comparative Relativism”, special issue of Common Knowledge 17 (1).

Jensen, C. B. & A. Morita, eds. 2012. “Anthropology as critique of reality: A Japanese turn“. Forum in HAU: Journal of Ethnographic Theory 2 (2): 358-405.

Candea, Matei & Lys Alcayna-Stevens, eds. 2012. “Internal Others: Ethnographies of Naturalism”, special section in Cambridge Anthropology 30 (2): 36-146.

Henare, A., M. Holbraad and S. Wastell, eds. 2007. Thinking Through Things: Theorising Artifacts Ethnographically. London: Routledge. (Here’s a pre-publication version of the Introduction).

Pedersen, M. A., R. Empson and C. Humphrey, eds. 2007. “Inner Asian Perspectivism,” special issue of Inner Asia 9 (2) (especially papers by da Col, Holbraad/Willerslev and Viveiros de Castro).

Articles engaging explicitly with “ontology”, also critically

Alberti, B., S. Fowles, M. Holbraad, Y. Marshall, C. Witmore. 2011. ‘Worlds otherwise’: Archaeology, Anthropology, and Ontological Difference forum. Current Anthropology 52(6): 896-912.

Blaser, Mario. 2013. Ontological conflicts and the stories of peoples in spite of Europe: toward a conversation on political ontology. Current Anthropology 54(5): 547-568.

Course, Magnus. 2010. Of Words and Fog. Linguistic relativity and Amerindian ontology. Anthropological Theory 10(3): 247–263.

De la Cadena, Marisol. 2010. Indigenous Cosmopolitics in the Andes: Conceptual Reflections beyond ‘Politics’. Cultural Anthropology 25 (2): 334-70.

Hage, Ghassan. 2012. Critical anthropological thought and the radical political imaginary today. Critique of Anthropology 32(3): 285–308

Heywood, Paolo. 2012. Anthropology and What There Is: Reflections on “Ontology”. Cambridge Anthropology 30 (1): 143-151.

Holbraad, Martin. 2009. Ontography and Alterity: Defining anthropological truth. Social Analysis 53 (2): 80-93.

Holbraad, Martin. 2011. Can the Thing Speak? OAC Press, Working Papers Series, #7.

Laidlaw, James. 2012. Ontologically Challenged. Anthropology of This Century, vol. 4, London, May 2012.

Laidlaw, James and Paolo Heywood, 2013. One More Turn and You’re There. Anthropology of This Century, vol. 7, London, May 2013.

Nielsen, Morten. 2013. Analogic Asphalt: Suspended value conversions among young road workers in Southern Mozambique. HAU: Journal of Ethnographic Theory 3 (2): 79-96.

Pedersen, Morten Axel. 2001. Totemism, animism and North Asian indigenous ontologies. Journal of the Royal Anthropological Institute 7 (3): 411-427.

Pedersen, Morten Axel. 2012. Common nonsense. A review of certain recent reviews of the ‘ontological turn.’ Anthropology of This Century, 5.

Salmond, Amiria. 2013. Transforming translations (part I): “The owner of these bones”. HAU: Journal of Ethnographic Theory 3(3): 1-32.

Scott, Michael W. 2013. The Anthropology of Ontology (Religious Science?). Journal of the Royal Anthropological Institute 19 (4): 859-72.

Venkatesan, Soumhya et al. 2010. Ontology Is Just Another Word for Culture: Motion Tabled at the 2008 Meeting of the Group for Debates in Anthropological Theory, University of Manchester. Critique of Anthropology 30 (2): 152-200. (The papers can also be downloaded here).

Viveiros de Castro, Eduardo. 2002. And. Manchester: Papers in Social Anthropology.

Viveiros de Castro, E. 2013. “The Relative Native.” HAU: Journal of Ethnographic Theory 3(3): 473-502.

**

Finally, there are some recent and ongoing dialogues in France between anthropologists and philosophers concerning issues of metaphysics and ontology, which may be of interest.

Morten Axel Pedersen is Professor of Anthropology at the University of Copenhagen. His publications include Not Quite Shamans: Spirit Worlds and Political Lives in Northern Mongolia (2011). He is also co-editor, with Martin Holbraad, of Times of Security: Ethnographies of Fear, Protest, and the Future (2013). A new book co-authored with Lars Højer, Urban Hunters: Dealing and Dreaming in Times of Transition, is forthcoming.

*   *   *

March 19, 2014

A reader’s guide to the “ontological turn” – Part 4

Annemarie Mol

This article is part of the series: A reader’s guide to the “ontological turn”

Editor’s note: In the wake of all the discussion about the ‘ontological turn’ at this year’s American Anthropological Association conference, we asked four scholars, “which texts or resources would you recommend to a student or colleague interested in the uses of ‘ontology’ as an analytical category in recent work in anthropology and science and technology studies?” This was the answer we received from Annemarie Mol, professor of Anthropology of the Body at the University of Amsterdam. Answers from Judith Farquhar, Javier Lezaun, and Morten Axel Pedersen appear as separate posts in the series.

The point of the use of the word ‘ontology’ in STS was that it allowed us not just to talk about the methods that were used in the sciences, but (in relation to these) also address what the sciences made of their object. E.g. rather than asking whether or not some branch of science knows ‘women’ correctly, or instead with some kind of bias, we wanted to shift to the question: what are the topics, the concerns and the questions that knowledge practices insist on; how do they interfere in practices; what do they do to/with women; etc. At first this was cast in constructivist terms as ‘what do various scientific provinces make of women’. But then we began to doubt whether ‘making’ was such a good metaphor, as it gives some ‘maker’ too much credit; as it suggests a time line with a before and an after; and materials out of which x or y might be made. So we shifted terminology and used words like perform, or do, or enact. Here we widened the idea of the staging of social realities (e.g. identities) to that of physical realities.

The idea was that there are not just many ways of knowing ‘an object’, but rather many ways of practising it. Each way of practising stages – performs, does, enacts – a different version of ‘the’ object. Hence, it is not ‘an object’, but more than one. An object multiple. That reality might be multiple goes head on against the Euroamerican tradition in which different people may each have their own perspective on reality, while there is only one reality – singular, coherent, elusive – to have ‘perspectives’ on. To underline our break with this monorealist heritage of monotheism, we imported the old-fashioned philosophical term of ontology and put it in the plural. Ontologies. That was – at the time – an unheard-of oxymoron.

Crucial in all this was the work of Donna Haraway (even if she did not particularly use the word ontology). Read it all – or pick out what seems interesting to you. Here, now. But if you don’t quite know where to start, plunge into Primate Visions.

Crucial, too, was earlier STS work on methods that had recast these as techniques of staging a world (not just of objects, but also of tools, money, readers, investors, etc.). Here Bruno Latour, Michel Callon and John Law worked in ways that later fed into the ‘ontology’ stream. See for that particular history: Annemarie Mol, “Actor-Network Theory: Sensitive Terms and Enduring Tensions.”

The branches of STS from which studies into ontology grew saw themselves as shifting the anthropological gaze from ‘the others’ to the sciences, sciences that staged themselves as universal, but weren’t. They were variously situated techno-science practices, and making them travel was hard work. “Show me a universal and I will ask how much it costs,” wrote Bruno Latour (in Irréductions, the second part of The Pasteurisation of France). Hence, going out in the world to study ‘others’ while presuming ‘the West’ (or at least (its) science) was rational, coherent, naturalist, what have you – seemed a bad idea to us. The West could do with some thorough unmasking – and taking this to what many saw as pivotal to its alleged superiority, its truth machines, seemed a good idea (even if a lot later some of the techniques involved were hijacked by climate change deniers…).

But there were also always specific relevant interventions to be made. For instance, if ontology is not singular and given, the question arises about which reality to ‘do’. Ontology does not precede or escape politics, but has a politics of its own. Not a politics of who (who gets to speak; act; etc.) but a politics of what (what is the reality that takes shape and that various people come to live with?) See: A. Mol, “Ontological politics. A word and some questions,” (in Law & Hassard, Actor Network Theory and After).

For a longer and more extensive opening up of ontologies / realities (in the plural), well, there is my book The body multiple: Ontology in medical practice (Duke University Press 2003) – that lays it all out step by step… Including the difficult aspect of ontological multiplicity that while there is more reality than one, its different versions are variously entangled with one another, so that there are less than many. (As Donna Haraway put it; and as explored by Marilyn Strathern in Partial Connections.)

For an earlier use of the term ontological that makes its relevance clear and lays out how realities being done may change over time: Cussins, Charis. “Ontological choreography: Agency through objectification in infertility clinics.” Social Studies of Science 26, no. 3 (1996): 575-610. Later reworked in Charis Thompson, Making Parents: The Ontological Choreography of Reproductive Technologies.

For an early attempt to differentiate the semiotics involved from the symbolic interactionist tradition and its perspectives see: Mol, Annemarie, and Jessica Mesman. “Neonatal food and the politics of theory: some questions of method.” Social Studies of Science 26, no. 2 (1996): 419-444.

The politics at stake come out very well in Ingunn Moser: “Making Alzheimer’s disease matter. Enacting, interfering and doing politics of nature.” Geoforum 39, no. 1 (2008): 98-110.

And for the haunting question as to what/who acts and/or what/who is enacted, see: Mol, Annemarie, and John Law. “Embodied action, enacted bodies: the example of hypoglycaemia.” Body & Society 10, no. 2-3 (2004): 43-62.

If you like realities as they get tied up with techniques, this is an exciting one, as it multiplies what it is to give birth: Akrich, Madeleine, and Bernike Pasveer. “Multiplying obstetrics: techniques of surveillance and forms of coordination.” Theoretical Medicine and Bioethics 21, no. 1 (2000): 63-83.

Remember, the multiplicity of reality does not imply its plurality. Here is a great example of that, a study that traces the task of coordinating between different versions of reality in the course of an operation: Moreira, Tiago. “Heterogeneity and coordination of blood pressure in neurosurgery.” Social Studies of Science 36, no. 1 (2006): 69-97.

But if different versions of ‘an object’ may be enacted in practice, this is not to say that they are always fused at some point into ‘an object’ – they may never quite get to hang together. For a good case of that, see: Law, John, and Vicky Singleton. “Object lessons.” Organization 12, no. 3 (2005): 331-355.

And here an obligatory one for anthropologists, as the ‘object’ being studied – and multiplied – is a ‘population’ as defined by genetics in practice: M’charek, Amâde. “Technologies of population: Forensic DNA testing practices and the making of differences and similarities.” Configurations 8, no. 1 (2000): 121-158.

Oh, and I should not forget this troubling of ‘perspectives’ that went beyond realities to also include appreciations: Pols, Jeannette. “Enacting appreciations: beyond the patient perspective.” Health Care Analysis 13, no. 3 (2005): 203-221.

More recently, there was a special issue of Social Studies of Science to do with ontologies. It has a good introduction: Woolgar, Steve, and Javier Lezaun. “The wrong bin bag: A turn to ontology in science and technology studies?” Social Studies of Science 43, no. 3 (2013): 321-340. In it, you may want to read: Law, John, and Marianne Elisabeth Lien. “Slippery: Field notes in empirical ontology.” Social Studies of Science 43, no. 3 (2013): 363-378.

And if you are still hungry for ontologies, then there is (with the example of eating and with norms explicitly added to ‘onto’): Mol, Annemarie. “Mind your plate! The ontonorms of Dutch dieting.” Social Studies of Science 43, no. 3 (2013): 379-396.

All of which is not to say that I would want to argue for such a thing as a ‘turn to ontology’ in anthropology or anywhere else. In the branch of the social studies of science, technology and medicine that I come from this term, ontology, has served quite specific purposes. It has helped to put some issues and questions on the agenda. But of course, like all terms, it has its limits. For it evokes ‘reality’ better than other things deserving our attention – norms, processes, spatialities, dangers, pleasures: what have you…

 

Annemarie Mol is professor of Anthropology of the Body at the University of Amsterdam. In her work she combines the ethnographic study of practices with the task of shifting our theoretical repertoires. She is author of The body multiple: Ontology in medical practice and The Logic of Care: Health and the Problem of Patient Choice.

An Indigenous Feminist’s take on the Ontological Turn: ‘ontology’ is just another word for colonialism (Urbane Adventurer: Amiskwacî)

Personal paradigm shifts have a way of sneaking up on you. It started, innocently enough, with a trip to Edinburgh to see the great Latour discuss his latest work in February 2013. I was giddy with excitement: a talk by the Great Latour. Live and in colour! In his talk, on that February night, he discussed the climate as sentient. Funny, I thought, this sounds an awful lot like the little bit of Inuit cosmological thought I have been taught by Inuit friends. I waited, through the whole talk, to hear the Great Latour credit Indigenous thinkers for their millennia of engagement with sentient environments, with cosmologies that enmesh people into complex relationships between themselves and all relations. 

It never came. He did not mention Inuit. Or Anishinaabe. Or Nehiyawak. Or any Indigenous thinkers at all. In fact, he spent a great deal of time interlocuting with a Scottish thinker, long dead. And with Gaia.

I left the hall early, before the questions were finished. I was unimpressed. Again, I thought with a sinking feeling in my chest, it appeared that the so-called Ontological Turn was spinning itself on the backs of non-european thinkers. And, again, the ones we credited for these incredible insights into the ‘more-than-human’, and sentience and agency, were not the people who built and maintain the knowledge systems that european and north american anthropologists and philosophers have been studying for well over a hundred years, and predicating their current ‘aha’ ontological moment upon. No, here we were celebrating and worshipping a european thinker for ‘discovering’ what many an Indigenous thinker around the world could have told you for millennia. The climate is sentient!

So, again, I was just another inconvenient Indigenous body in a room full of people excited to hear a white guy talk around Indigenous thought without giving Indigenous people credit. Doesn’t this feel familiar, I thought.

As an Indigenous woman, I have tried, over the last few years, to find thinkers who engage with Indigenous thought respectfully. Who give full credit to Indigenous laws, stories and epistemologies. Who quote and cite Indigenous people rather than anthropologists who studied them 80 years ago. This is not always easy. I am so grateful to scholars like David Anderson, Julie Cruikshank and Ann Fienup-Riordan, among others, for giving me hope amidst the despair I’ve felt as the ‘Ontological Turn’ gains steam on both sides of the Atlantic. I am so grateful, too, for the Indigenous thinkers who wrestle with the academy, who have positioned themselves to speak back to Empire despite all of the polite/hidden racism, heteropatriarchy, and let’s face it–white supremacy–of the University.

The euro-western academy is colonial. It elevates people who talk about Indigenous people above people who speak with Indigenous people as equals, or who ARE Indigenous. (Just do a body count of the number of Indigenous scholars relative to non-Indigenous scholars in the euro academy, and you’ll see that over here there are far more people talking about Indigenous issues than Indigenous people talking about those issues themselves). As scholars of the euro-western tradition, we have a whole host of non-Indigenous thinkers we turn to, in knee-jerk fashion, when we want to discuss the ‘more-than-human’ or sentient environments, or experiential learning. There are many reasons for this. I think euro scholars would benefit from reading more about Critical Race theory, intersectionality, and studying the mounting number of rebukes against the privilege of european philosophy and thought and how this silences non-white voices within and outside the academy. This philosopher, Eugene Sun Park, wrote a scathing critique of the reticence of philosophy departments in the USA to consider non-european thought as ‘credible’. I would say many of the problems he identifies in euro-western philosophy are the same problems I have experienced in european anthropology, despite efforts to decolonise and re-direct the field during the ‘reflexive turn’ of the 1970s-onwards.

As an Indigenous feminist, I think it’s time we take the Ontological Turn, and the european academy more broadly, head on. To accomplish this, I want to direct you to Indigenous thinkers who have been writing about Indigenous legal theory, human-animal relations and multiple epistemologies/ontologies for decades. Consider the links at the end of this post as a ‘cite this, not that’ cheat-sheet for people who feel dissatisfied with the current euro (and white, and quite often, male) centric discourse taking place in our disciplines, departments, conferences and journals.

My experience, as a Métis woman from the prairies of Canada currently working in the UK, is of course limited to the little bit that I know. I can only direct you to the thinkers that I have met or listened to in person, whose writing and speaking I have fallen in love with, who have shifted paradigms for me as an Indigenous person navigating the hostile halls of the academy. I cannot, nor would I try, to speak for Indigenous thinkers in other parts of the world. But I guarantee that there are myriad voices in every continent being ignored in favour of the ‘GREAT WHITE HOPES’ we currently turn to when we discuss ontological matters (I speak here, of course, of ontology as an anthropologist, so hold your horses, philosophers, if you feel my analysis of ‘the ontological’ is weak. We can discuss THAT whole pickle another day).

So why does this all matter? Why am I so fired up at the realisation that (some) european thinkers are exploiting Indigenous thought, seemingly with no remorse? Well, it’s this little matter of colonialism, see. Whereas the european academy tends to discuss the ‘post-colonial’, in Canada I assure you that we are firmly still experiencing the colonial (see Pinkoski 2008 for a cogent discussion of this issue in Anthropology). In 2009, our Prime-Minister, Stephen Harper, famously claimed that Canada has ‘no history of colonialism’. And yet, we struggle with the fact that Indigenous women experience much higher rates of violence than non-Indigenous women (1200 Indigenous women have been murdered or gone missing in the last forty years alone, prompting cries from the UN and other bodies for our government to address this horrific reality). Canada’s first Prime-Minister, proud Scotsman John A. MacDonald (I refuse to apply the ‘Sir’), famously attempted to ‘kill the Indian in the Child’ with his residential schools. Canada is only now coming around to the realisation that through things like residential schools, and the deeply racist—and still legislated!–Indian Act, that it, as a nation, was built on genocide and dispossession. Given our strong British roots in Canada, you can imagine that it’s All Very Uncomfortable and creates a lot of hand-wringing and cognitive dissonance for those who have lived blissfully unaware of these violences. But ask any Indigenous person, and you will hear that nobody from an Indigenous Nation has ever laboured under the fantasy that Canada is post-colonial. Or benevolent. Nor would we pretend that the British Empire saddled us with solely happy, beautiful, loving legacies. 
For all its excessive politeness, the British colonial moment rent and tore apart sovereign Indigenous nations and peoples in what is now Canada, and though the sun has set on Queen Victoria’s Empire, British institutions (including the academy) still benefit from that colonial moment. We are enmeshed, across the Atlantic, in ongoing colonial legacies. And in order to dismantle those legacies, we must face our complicity head on.

Similarly, with the wave of the post-colonial wand, many european thinkers seem to have absolved themselves of any implication in ongoing colonial realities throughout the globe. And yet, each one of us is embedded in systems that uphold the exploitation and dispossession of Indigenous peoples. The academy plays a role in shaping the narratives that erase ongoing colonial violence. My experience in Britain has been incredibly eye-opening: as far as the majority of Brits are concerned, their responsibility for, and implication in, colonialism in North America ended with the War of Independence (in America) or the patriation of the Canadian constitution (1982).

Is it so simple, though? To draw such arbitrary lines through intergenerational suffering and colonial trauma, to absolve the european academy and the european mind of any guilt in the genocide of Indigenous people (if and when european and north american actors are willing to admit it’s a genocide)? And then to turn around and use Indigenous cosmologies and knowledge systems in a so-called new intellectual ‘turn’, all the while ignoring the contemporary realities of Indigenous peoples vis-à-vis colonial nation-states, or the many Indigenous thinkers who are themselves writing about these issues? And is it intellectually or ethically responsible or honest to pretend that european bodies do not still oppress Indigenous ones throughout the world?

Zygmunt Bauman (1989) takes sociology to task for its role in narrating the Holocaust, and its role in erasing our collective guilt in the possibility for a future Holocaust to emerge. He argues that by framing the Holocaust as either a) a one-off atrocity never to be repeated (“a failure of modernity”) (5) or b) an inevitable outcome of modernity, sociology enables humanity to ignore its ongoing complicity in the conditions that created the horrors of the Holocaust. The rhetoric of the post-colonial is similarly complacent: it absolves the present generation of thinkers, politicians, lawyers, and policy wonks of their duty to acknowledge what came before, and, in keeping with Bauman’s insights, the possibility that it could happen again — that within all societies lurk the ‘two faces’ of humanity that can either facilitate or quash systemic and calculated human suffering and exploitation. But the reality is, as Bauman asserts, that humanity is responsible. For all of these atrocities. And humanity must be willing to face itself, to acknowledge its role in these horrors, in order to ensure we never tread the path of such destruction again.

I take Bauman’s words to heart, and ask my non-Indigenous peers to consider their roles in the ongoing colonial oppression of Indigenous peoples. The colonial moment has not passed. The conditions that fostered it have not suddenly disappeared. We talk of neo-colonialism, neo-Imperialism, but it is as if these are far away things (these days these accusations are often mounted with terse suspicion against the BRIC countries, as though the members of the G8 have not already colonized the globe through neo-liberal economic and political policies). The reality is that we are just an invasion or economic policy away from re-colonizing at any moment. So it is so important to think, deeply, about how the Ontological Turn–with its breathless ‘realisations’ that animals, the climate, water, ‘atmospheres’ and non-human presences like ancestors and spirits are sentient and possess agency, that ‘nature’ and ‘culture’, ‘human’ and ‘animal’ may not be so separate after all—is itself perpetuating the exploitation of Indigenous peoples. To paraphrase a colleague I deeply admire, Caleb Behn: first they came for the land, the water, the wood, the furs, bodies, the gold. Now, they come armed with consent forms and feeble promises of collaboration and take our laws, our stories, our philosophies. If they bother to pretend to care enough to do even that much—many simply ignore Indigenous people, laws, epistemologies altogether and re-invent the more-than-human without so much as a polite nod towards Indigenous bodies/Nations.

A point I am making in my dissertation, informed by the work of Indigenous legal theorists like John Borrows, Kahente Horn-Miller, Tracey Lindberg, and Val Napoleon, is that Indigenous thought is not just about social relations and philosophical anecdotes, as many an ethnography would suggest. These scholars have already shown that Indigenous epistemologies and ontologies represent legal orders, legal orders through which Indigenous peoples throughout the world are fighting for self-determination and sovereignty. The dispossession wrought by centuries of stop-start chaotic colonial invasion and imposition of european laws and languages is ongoing. It did not end with the patriation of constitutions or independence from colonial rule. Europe is still implicated in what it wrought through centuries of colonial exploitation. Whether it likes it or not.

My point here is that Indigenous peoples, throughout the world, are fighting for recognition. Fighting to assert their laws, philosophies and stories on their own terms. And when anthropologists and other assembled social scientists sashay in and start cherry-picking parts of Indigenous thought that appeal to them without engaging directly in (or unambiguously acknowledging) the political situation, agency and relationality of both Indigenous people and scholars, we immediately become complicit in colonial violence. When we cite european thinkers who discuss the ‘more-than-human’ but do not discuss their Indigenous contemporaries who are writing on the exact same topics, we perpetuate the white supremacy of the academy.

So, for every time you want to cite a Great Thinker who is on the public speaking circuit these days, consider digging around for others who are discussing the same topics in other ways. Decolonising the academy, both in europe and north america, means that we must consider our own prejudices, our own biases. Systems like peer-review and the subtle violence of european academies tend to privilege certain voices and silence others. Consider why, as of 2011, there were no black philosophy profs in all of the UK. Consider why it’s okay to discuss sentient climates in an Edinburgh lecture hall without a nod to Indigenous epistemologies and not have a single person openly question that. And then, familiarise yourself with the Indigenous thinkers (and more!) I am linking below and broaden the spectrum of who you cite, who you reaffirm as ‘knowledgeable’.

hiy-hiy.

Zoe Todd (Métis) is a PhD Candidate in Social Anthropology at the University of Aberdeen, Scotland. She researches human-fish relations in the community of Paulatuuq in the Inuvialuit Settlement Region, Northwest Territories, Canada. She is a 2011 Pierre Elliott Trudeau Foundation Scholar.

Is ontology making us stupid? (Theoria)

By Terence Blake

MARCH 8, 2013


(This is a translation and expansion of my paper given at Bernard Stiegler’s Summer Academy in August 2012. In it I consider the ontologies of Louis Althusser, of Graham Harman, and of Paul Feyerabend).

Abstract: I begin by “deconstructing” the title and explaining that Feyerabend does not really use the word “ontology”, though he does sometimes call his position ontological realism. I explain that he talks about his position as indifferently a “general methodology” or a “general cosmology”, and that he seems to be hostile to the very enterprise of ontology as a separate discipline forming part of what Feyerabend critiques as “school philosophy”. I then go on to say that there is perhaps a concept of a different type of ontology, which I call a “diachronic ontology”, that perhaps he would have accepted, and that is very different from ontology as ordinarily thought, which I claim to be synchronic ontology (having no room for the dialogue with Being, but just supposing that Being is already and always there without our contribution). I discuss Althusser and Graham Harman as exemplifying synchronic ontology, giving a reading of Harman’s recent book THE THIRD TABLE. I then discuss Feyerabend’s ideas as showing a different way, that of a diachronic ontology, in which there is no stable framework or fixed path. I end with Andrew Pickering, whose essay NEW ONTOLOGIES makes a similar distinction to mine, expressing it in the imagistic terms of a De Kooningian (diachronic) versus a Mondrianesque (synchronic) approach.

A) INTRODUCTION

The question posed in the title, is ontology making us stupid?, is in reference to Nicholas Carr’s book THE SHALLOWS, which is an elaboration of his earlier essay IS GOOGLE MAKING US STUPID?, and I will destroy the suspense by giving you the answer right away: Yes and No. Yes, ontology can make us more stupid if it privileges the synchronic, and I will give two examples: (1) the «marxist» ontology of Louis Althusser and (2) the object-oriented ontology of Graham Harman. No, on the contrary, it can make us less stupid if it privileges the diachronic, and here I will give the example of the pluralist ontology of Paul Feyerabend.

Normally, I should give a little definition of ontology: the study of being as being, or the study of the most fundamental categories of beings, or the general theory of objects and their relations. However, this paper ends with a presentation of the ideas of Paul Feyerabend, and it must be noted that Feyerabend himself does not use the word «ontology», preferring instead to talk, indifferently, of «general cosmology» or of «general methodology». Sometimes as well he talks of the underlying system of categories of a worldview. And towards the end of his life he began to talk of Being with a capital B, but he always emphasized that we should not get hung up on one particular word or approach because there is no «stable framework which encompasses everything», and that any name or argument or approach only «accompanies us on our journey without tying it to a fixed road» (Feyerabend’s Letter to the Reader, Against Method xvi, available here: http://www.kjf.ca/31-C2BOR.htm). Feyerabend explicitly indicated that his own «deconstructive» approach derived from his fidelity to this ambiguity and this fluidity. Thus ontology for Feyerabend implies a journey, ie a process of individuation, without a fixed road and without a stable framework.

As for «stupid», it refers to a process of «stupidification» or dumbing down, of dis-individuation, that tends to impose on us just such a fixed road and stable framework. The word «making» also calls for explanation. We are noetic creatures, and so the good news is that we can never be completely stupid, or completely disindividuated, except in case of brain death. The bad news is that we can always become stupider than we are today, just as we can always become more open, more fluid, more multiple, more differentiated, in short more individuated. Ontology is not a magic wand that can transform us into an animal or a god, but it can favour one or the other fork of the bifurcation of paths.

ARGUMENT: My argument will be very simple:

    1. traditional ontologies are based on an approach to the real that privileges the synchronic dimension, where the paths are fixed and the framework is stable. Althusser and Harman are good examples of synchronic ontology.
    2. another type of ontology is possible, and it exists sporadically: one which privileges the diachronic dimension, and thus the aspects of plurality and becoming, where the paths are multiple and the framework is fluid. Feyerabend is a good example of diachronic ontology.

NB: For the sake of brevity, I talk of synchronic and of diachronic ontologies, but in fact each type of ontology contains elements of the other type, and it is simply a matter of the primacy given to the synchronic over the diachronic, or the inverse.

Philosophy is inseparable from a series of radical conversions where our comprehension of all that exists is transformed. In itself, such a capacity for conversion or paradigm change is rather positive. A problem arises when this conversion amounts to a reduction of our vision and to an impoverishment of our life, if it makes us stupid. My conversion to a diachronic ontology took place in 1972, when I read Feyerabend’s AGAINST METHOD (NB: this was the earlier essay version, with several interesting developments that were left out of the book), where he gives an outline of a pluralist ontology and an epistemology. On reading it I was transported, transformed, converted; unfortunately, at the same period my philosophy department converted to a very different philosophy – Althusserianism.

B) ALTHUSSER AND ALTHUSSERIANISM

In fact, 1973 was a year that marked a turning point between the “diachronic tempest” of the 60s and the synchronic return to order desired by the Althusserians. I am deliberately using the expression that Bernard Stiegler uses to describe the invention of metaphysics as it was put to work in Plato’s REPUBLIC, in support of a project of synchronisation of minds and behaviours. I was the unwilling and unconsenting witness of an attempt at such a synchronisation on a small scale: my department, the Department of General Philosophy, sank into the dogmatic project, explicitly announced as such, of forming radical (ie Althusserian) intellectuals under the aegis of Althusserian Marxist Science. A small number of Althusserian militants took administrative and intellectual control of the department, and by all sorts of techniques of propaganda, intimidation, harassment and exclusion, forced all its members, or almost all, either to conform to the Althusserian party line or to leave.

Intellectually the Althusserians imposed an onto-epistemological meta-language in terms of which they affirmed the radical difference between science and ideology, and the scientificity of Marxism. It is customary to describe Althusserianism from the epistemological point of view, but it also had an ontological dimension, thanks to its distinction between real objects and theoretical objects: scientific practice produces, according to them, its own objects, theoretical objects, as a means of knowing the real objects. The objects of everyday life, the objects of common sense, and even perceptual objects, are not real objects, but ideological constructions, simulacra (as Harman will later claim, they are “utter shams”).

Faced with this negative conversion of an entire department, I tried to resist. Because I am “counter-suggestible” (as Feyerabend claimed to be) – in other words, because I am faithful to the process of individuation rather than to a party line – I devoted myself to a critique of Althusserianism. Its rudimentary ontology, the determination of Being in terms of real objects, corresponds to a transcendental point of view of first philosophy which acts as a hindrance to scientific practice, and pre-constrains the type of theoretical construction that it can elaborate. To maintain the diachronicity of the sciences one cannot retain the strict demarcation between real objects and theoretical objects, nor between science and ideology. The sciences thus risk being demoted to the same plane as any other ideological construction and having their objects demoted to the status of simulacra. This is a step that the Althusserians did not take, but that, as we shall see, Harman does, thus relieving the sciences of their privileged status.

NB: The set of interviews with Jacques Derrida, POLITICS AND FRIENDSHIP, describes the same phenomenon of intellectual pretension and intimidation supported by a theory having an aura of epistemological and ontological sophistication but which was radically deficient. Derrida emphasises that the concepts of “object” and of “objectivity” were deployed without sufficient analysis of their pertinence or of their theoretical and practical utility and groundedness.

After the period of Althusserian hegemony came a new period of “diachronic storm”, this time on the intellectual plane. Translations came out of works by Foucault and Derrida, but also of Lyotard and Deleuze. Althusserian dogmas were contested and deconstructed. But for me there still remained serious limitations on thought despite this new sophistication. There was an ontological dimension common to all these authors, and this ontological dimension was either neglected or ignored by the defenders of French Theory. Feyerabend himself seemed to be in need of an ontology to reinforce his pluralism and to protect it against dogmatic incursions of the Althusserian type and against relativist dissolutions of the post-modern type. I obtained a scholarship to go and study in Paris, and I left Australia in 1980 to continue my ontological and epistemological research.

What I retain from this experience, over and above the need to maintain and to push forward the deconstruction by elaborating a new sort of ontology to accompany its advances, is the feeling of disappointment with the contradictory sophistication in Althusserian philosophy. I had the impression that it pluralised and diachronised with one hand what it reduced and synchronised with the other. Thus, despite its initial show of sophistication it made its acolytes stupid, disindividuated. Further, as an instrument of synchronisation on the large scale it was doomed to failure by its Marxism and its scientism, both of which made securing its general adoption an impossible mission. It would have been necessary to de-marxise and de-scientise its theory to make it acceptable to the greatest number. Further, its diffusion was limited to the academic microcosm, because at that time there was no internet. These limitations to the theory’s propagation (Marxism, scientism, academic confinement) have been deconstructed and overcome by a new philosophical movement, called OOO (object-oriented ontology) which has conquered a new sort of philosophical public. Lastly, I retain a distrust of any “movement” in philosophy, and of the power tactics (propaganda, intimidation, harassment, exclusion) that are inevitably implied. Oblivious to this sort of “wariness” with respect to the sociology of homo academicus, the OOOxians publicise themselves as a movement and attribute the rapid diffusion of their ideas to their mastery of digital social technologies.

C) HARMAN AND OBJECT-ORIENTED ONTOLOGY

In THE THIRD TABLE, Harman gives a brief summary of the principal themes of his object-oriented ontology. It is a little book, published this year in a bilingual (English-German) edition, and the English text occupies a little over 11 pages (p4-15). The content is quite engaging, as Harman accomplishes the exploit of presenting his principal ideas in the form of a response to Eddington’s famous “two tables” argument. This permits him to formulate his arguments in terms of a continuous polemic against reductionism in both its humanistic and scientistic forms. All that is fine, so far as it goes. However, problems arise when we examine his presentation of each of Eddington’s two tables, and even more so with his presentation of his own contribution to the discussion: a “third table”, the only real one in Harman’s eyes.

In the introduction to his book THE NATURE OF THE PHYSICAL WORLD (1928), Eddington begins with an apparent paradox: “I have just settled down to the task of writing these lectures and have drawn up my chairs to my two tables. Two tables! Yes; there are duplicates of every object about me – two tables, two chairs, two pens” (xi). Eddington explains that there is the familiar object, the table as a substantial thing, solid and reliable, against which I can support myself. But, according to him, modern physics speaks of a quite different table: “My scientific table is mostly emptiness. Sparsely scattered in that emptiness are numerous electric charges rushing about with great speed” (xii). Eddington contrasts the substantiality of the familiar table (a solid thing, easy to visualise as such) and the abstraction of the scientific table (mostly empty space, a set of physical measures related by mathematical formulae). The familiar world of common sense is a world of illusions, whereas the scientific world, the only real world according to modern physics, is a world of shadows.

What is the relation between the two worlds? Eddington poses the question and dramatises the divergence between the two worlds, but contrary to what Harman seems to think, he gives no answer of his own. He declares that premature attempts to determine their relation are harmful, more of a hindrance than a help, to research. In fact, Eddington refuses to commit himself on the ontological question posed in his introduction because he is convinced that it is empirical research, mobilising psychology and physiology as well as physics, which must give the answer. It is clear that he would have regarded Althusserianism as just such a premature and harmful attempt. But what would he have thought of OOO? We shall return to this question in the last part of this talk.

In his little text Harman explains very succinctly the difference between the two tables. But in opposition to Eddington’s supposed scientism, Harman affirms that these two tables are “equally unreal” (p6), that they are just fakes or simulacra (“utter shams”, 6). Assigning each table to one side of the gap that separates the famous “two cultures” dear to C. P. Snow (the culture of the humanities on one side, that of the sciences on the other), he finds that both are products of reductionism, which negates the reality of the table.

“The scientist reduces the table downward to tiny particles invisible to the eye; the humanist reduces it upward to a series of effects on people and other things” (6).

Refusing reductionism and its simulacra, Harman poses the existence of a third table (the “only real” table, 10) which serves as an emblem for a third culture to come, whose paradigm could be taken from the arts, which attempt to “establish objects deeper than the features through which they are announced, or allude to objects that cannot quite be made present” (THE THIRD TABLE, 14). Philosophy itself is to abandon its scientific pretensions in order to speak at last of the real world and its objects.

In WORD AND OBJECT Quine proposes a technique called “semantic ascent” to resolve certain problems in philosophy. He invites us to formulate our philosophical problems no longer in material terms, as questions concerning the components of the world (“objects”), but rather in formal terms, as questions concerning the correct use and the correct analysis of our linguistic expressions (“words”). The idea was to find common ground on which to discuss impartially the pretensions of rival points of view. Unfortunately, this method turned out to be useless for resolving most problems, since as soon as we take up an interesting philosophical problem the important disputes concern the terms to employ and their interpretation just as much as the things themselves.

Inversely, Graham Harman with his new ontology proposes a veritable semantic descent (or we could call it an “objectal descent”), to reverse the linguistic turn and to replace it with an ontological turn. According to him the fundamental problems of ontology must be reformulated in terms of objects and their qualities. These objects are not the objects of our familiar world: let us recall that Harman declares that the familiar table is unreal, a simulacrum, an “utter sham”. The real object is a philosophical object, which “withdraws behind all its external effects” (10). We cannot touch the harmanian table (for we can never touch any real object) nor even know it.

“The real is something that cannot be known, but only loved” (12).

Thus Harman operates a reduction of the world to objects and their qualities which is intended to be in the first instance ontological and not epistemological (here Harman is mistaken: the epistemological dimension is omnipresent in his work, but as the object of a disavowal). This objectal reduction is difficult to argue for, and sometimes it is presented as a self-evident truth accessible to every person of good will and good sense, and Harman’s philosophy is trumpeted as a return to naiveté and concreteness, triumphing over post-structuralist pseudo-sophistication and its abstractions. But we shall see that this is not the case.

This reduction of the world to objects and their qualities amounts to a conversion of our philosophical vision that is disguised as a return to the real world of concrete objects:

“Instead of beginning with radical doubt, we start from naiveté. What philosophy shares with the lives of scientists, bankers, and animals is that all are concerned with objects” (THE QUADRUPLE OBJECT, 5).

“Once we begin from naiveté rather than doubt, objects immediately take center stage” (idem, 7)

This “self-evidence” of the point of view of naïveté is in fact meticulously constructed and highly philosophically motivated. We must recall that Harman’s “objects” are not at all the objects of common sense (we cannot know them nor touch them). So the “naiveté” that Harman invokes here is not some primitive openness to the world (that would only be a variant of the “bucket theory of mind” and of knowledge denounced by Karl Popper). This “naiveté” is a determinate point of view, a very particular perspective (the “naive point of view”, as the French translation so aptly calls it). Under cover of the word “naiveté”, Harman talks to us of a “naive” point of view that is nevertheless an “objectal” point of view, that is to say not naive at all but partisan. Harman deploys all his rhetorical resources to provoke in the reader the adoption of the objectal point of view as if it were self-evident. This “objectal conversion” is necessary, according to him, to escape at last the tyranny of epistemology and the linguistic turn, and to build a new ontology, a new foundation for a metaphysics capable of speaking of all objects. We have seen that this “self-evident” beginning implies both a conversion and a reduction.

We see the parallels and differences of object-oriented ontology in relation to Althusserianism. Both relegate the familiar object and the perceptual object to the status of social constructions. OOO goes even further and assigns the scientific object to the same status of simulacrum (“utter sham”): only philosophy can tell us the truth about objects. Both propose a meta-language, but OOO’s meta-language is so de-qualified that it is susceptible of different instantiations, and in fact no two members of the movement have the same concrete ontology. Finally, OOO spreads by making abundant, liberal (and here the word has all its import) use of the means that the internet makes available: blogs, discussion groups, Facebook exchanges, Twitter, podcasts, streaming.

I have spoken here principally of Graham Harman’s OOO because I do not believe that OOO exists in general, and I also think that its apparent unity is a deceitful façade. There is no substance to the movement; it is rather a matter of agreement on a shared meta-language, ie on a certain terminology and set of themes, under the aegis of which many different positions can find shelter. I have spoken here almost exclusively of THE THIRD TABLE because Harman’s formulations change from book to book, and I find that in this little brochure Harman offers us his meta-language in a pure state. In his other books Harman, without noticing, slides constantly between a meta-ontological sense of object and a sense which corresponds to one possible instantiation of this meta-language, thus producing much conceptual confusion.

My major objection to Harman’s OOO is that it is a school philosophy dealing in generalities and abstractions far from the concrete joys and struggles of real human beings (“The world is filled primarily not with electrons or human praxis, but with ghostly objects withdrawing from all human and inhuman access”, THE THIRD TABLE, 12). Despite its promises, Harman’s OOO does not bring us closer to the richness and complexity of the real world but in fact replaces the multiplicitous and variegated world with a set of bloodless and lifeless abstractions – his unknowable and untouchable, “ghostly”, objects. Not only are objects unknowable, but even whether something is a real object or not is unknowable: “we can never know for sure what is a real object and what isn’t”.

Yet Harman has legislated that his object is the only real object (cf. THE THIRD TABLE, where Harman calls his table, as compared to the table of everyday life and the scientist’s table, “the only real one”, 10, and “the only real table”, 11. As for the everyday table and the scientific table: “both are equally unreal”, both are “utter shams”, 6. “Whatever we capture, whatever we sit at or destroy is not the real table”, 12. And he accuses others of “reductionism”!). To say that the real object is unknowable (“the real is something that cannot be known”, p12) is an epistemological thesis. As is the claim that the object we know, the everyday or the scientific object, is unreal.

How can this help us in our lives? It is a doctrine of resignation and passivity: we cannot know the real object, the object we know is unreal, an “utter sham”, we cannot know what is or isn’t a real object. Harman’s objects do not withdraw, they transcend. They transcend our perception and our knowledge, they transcend all relations and interactions. As Harman reiterates, objects are deep (“objects are deeper than their appearance to the human mind but also deeper than their relations to one another”, 4, “the real table is a genuine reality deeper than any theoretical or practical encounter with it…deeper than any relations in which it might become involved”, 9-10). This “depth” is a key part of Harman’s ontology, which is not flat at all and is the negation of immanence. Rather, it is centered on this vertical dimension of depth and transcendence.

Harman practices a form of ontological critique which contains both relativist elements and dogmatic elements. At the level of explicit content Harman is freer, less dogmatic than Althusser, as he does not make science the queen of knowledge. Harman situates himself insistently “after” the linguistic turn, after the so-called “epistemologies of access”, after deconstruction and post-structuralism. He considers that the time for construction has come, that we must construct a new philosophy by means of a return to the things themselves of the world – objects. But is this the case?

D) FEYERABEND AND THE HARMFULNESS OF THE ONTOLOGICAL TURN

1) EDDINGTON’S REPLY TO HARMAN: THE WAY OF RESEARCH

Feyerabend stands in opposition to this demand for a new construction, and wholeheartedly espouses the continued necessity of deconstruction. He rejects the idea that we need a new system or theoretical framework, arguing that in many cases a unified theoretical framework is just not necessary or even useful:

“a theoretical framework may not be needed (do I need a theoretical framework to get along with my neighbor?). Even a domain that uses theories may not need a theoretical framework (in periods of revolution theories are not used as frameworks but are broken into pieces which are then arranged this way and that way until something interesting seems to arise)” (Philosophy and Methodology of Military Intelligence, 13).

Further, not only is a unified framework often unnecessary, it can be a hindrance to our research and to the conduct of our lives: “frameworks always put undue constraints on any interesting activity” (ibid, 13). He emphasises that our ideas must be sufficiently complex to fit in with and to cope with the complexity of our practices (11). Rather than a new theoretical construction, which only serves “to confuse people instead of helping them”, we need ideas that have the complexity and the fluidity that come from close connection with concrete practice and with its “fruitful imprecision” (11). Lacking this connection, we get only school philosophies that “deceive people but do not help them”. They deceive people by replacing the concrete world with their own abstract construction “that gives some general and very mislead (sic!) outlines but never descends to details”. The result is a simplistic set of slogans and stereotypes that “is taken seriously only by people who have no original ideas and think that [such a school philosophy] might help them getting ideas”.

Applied to the ontological turn, this means that an ontological system is useless, a hindrance to thought and action, whereas an ontology which is not crystallised into a system and principles, but which limits itself to an open set of rules of thumb and the free study of concrete cases, is both acceptable and desirable. The detour through ontology is useless, because according to Feyerabend a more open and less technical approach is possible. In effect, Feyerabend indicates what Eddington could have replied to Harman: just like Althusserianism, OOO must be considered a premature and harmful failure because it specifies in an a priori and dogmatic fashion what the elements of the world are. This failure is intrinsic to its transcendental approach: it is premature because it prejudges the paths and results of empirical research, and it is harmful because it tends to exclude possible avenues of research and to close people’s minds, making them stupid.

Eddington’s position is in fact very complex. He gives a dramatised description of what amounts to the incommensurability of the world of physics and the familiar world of experience. This is implicit in the whole theme of the necessary “aloofness” (xv) that scientific conceptions must maintain with respect to familiar conceptions. He then goes on to pose the question of the relation, or “linkage”, between the two. Sometimes he seems to give primacy to the familiar world, e.g.: “the whole scientific inquiry starts from the familiar world and in the end it must return to the familiar world” (xiii), and “Science aims at constructing a world which shall be symbolic of the world of commonplace experience” (xiii). Sometimes he gives primacy to the world of physics, and seems to declare that the familiar world is illusory, e.g.: “In removing our illusions we have removed the substance, for indeed we have seen that substance is one of the greatest of our illusions” (xvi), though he does attenuate this by adding: “Later perhaps we may inquire whether in our zeal to cut out all that is unreal we may not have used the knife too ruthlessly”. On the question of the relation between physics and philosophy he is no mere scientistic chauvinist. Indeed, he gives a certain primacy to the philosopher: “the scientist … has good and sufficient reasons for pursuing his investigations in the world of shadows and is content to leave to the philosopher the determination of its exact status in regard to reality” (xiv). But he considers that neither common sense nor philosophy must interfere with physical science’s “freedom for autonomous development” (xv).
His conclusion is that reflection on modern physics leads to “a feeling of open-mindedness towards a wider significance transcending scientific measurement” (xvi), and he warns against a priori closure: “After the physicist has quite finished his worldbuilding a linkage or identification is allowed; but premature attempts at linkage have been found to be entirely mischievous”.

As we can see, Graham Harman’s discussion of this text in THE THIRD TABLE makes a mess of Eddington’s position, treating him as advocating the scientistic primacy of the world of physics. Harman can then propose his own “solution”: the objects of both common sense and physics are “utter shams”, and the real object is that of (Harman’s) philosophy. This is why I think that Harman’s OOO is a contemporary example of what Eddington calls “premature attempts at linkage” and finds “mischievous”, i.e. both failed and harmful.

2) A MACHIAN CRITIQUE OF OOO

My thesis is that much of OOO is a badly flawed epistemology masquerading as an ontology. An interesting confirmation of this thesis is the touting of Roy Bhaskar’s A REALIST THEORY OF SCIENCE. For those too young to remember: this book came out initially in 1975, after the major epistemological works by Popper, Kuhn, Lakatos and Feyerabend. It was an ontologising re-appropriation of their epistemological discoveries. It was hailed as a great contribution by the Anglophone Althusserians (I kid you not!), as it gave substance to their distinction between the theoretical object (produced by the theoretical practices of the sciences) and the real object. The Althusserians used Bhaskar to legitimate their posing of Althusserian Marxism and Lacanian psychoanalysis as sciences. Their universal critique of any philosophical view that did not square with theirs was to disqualify it as demonstrably belonging, sometimes in very roundabout and tortuous ways, to the “problematic of the subject”. Does this begin to sound familiar? Real object vs theoretical object, problematic of the subject = correlationism. These themes are not new, but go back to the dogmatic reaction of the 70s! It is amusing to see that Bhaskar, who is a prime example of someone who invented an ontological correlate to epistemological insights, is now being used as the proponent of a non-correlationist “realist” position, to condemn those who supposedly give primacy to epistemology over ontology. The whole procedure is circular. That is to say, far from really asking the transcendental question of what the world must be like for science to be possible (this question is an ideological cover-up for the real historical stakes of his intervention), Bhaskar proceeds to an ontologisation of insights and advances in epistemology, and so constrains future research with an a posteriori ontology projected backwards as if it were an a priori “neutral” precondition of science.
So Harman’s supposed primacy of ontology is in fact based on his continual denegation of his de facto dependence on results imported from epistemology, and on the dogmatic freezing and imposition of what is at best only a particular historical stage of scientific research and of epistemological reflection.

One of my biggest objections to OOO concerns the question of primacy, which remains moot in contemporary philosophy. As we have seen, Harman’s ontological turn gives primacy to (transcendental, meta-level) philosophy. Feyerabend articulates an Eddingtonian position, one that gives primacy neither to philosophy nor to physics, but defends the open-mindedness of empirical (though not necessarily scientific) research. I think this can be clarified by examining Feyerabend’s defense of the “way of the scientist” as against the “way of the philosopher”. Feyerabend’s references to Mach (and to Pauli) show that this “way of the scientist” is transversal, not respecting the boundaries between scientific disciplines nor those between the sciences and the humanities and the arts. So it is more properly called the “way of research”. Eddington too seems to espouse this Machian way out of the pitfalls of primacy.

Ernst Mach is often seen as a precursor of the logical positivists, an exponent of the idea that “things” are logical constructions built up out of the sensory qualities that compose the world, mere bundles of sensations. He would thus be a key example of what Graham Harman in THE QUADRUPLE OBJECT calls “overmining”. Feyerabend has shown in a number of essays that this vision of Mach’s “philosophy” (the quotation marks are necessary, according to Feyerabend “because Mach refused to be regarded as the proponent of a new “philosophy””, SCIENCE IN A FREE SOCIETY, p192) is erroneous, based on a misreading by the logical positivists that confounds his general ontology with one specific ontological hypothesis that Mach was at pains to describe as a provisional and research-relative specification of his more general proposal.

Following Ernst Mach, Feyerabend expounds the rudiments of what he calls a general methodology or a general cosmology (this ambiguity is important: Feyerabend, on general grounds but also after close scrutiny of several important episodes in the history of physics, proceeds as if there is no clear and sharp demarcation between ontology and epistemology, whereas Harman, without the slightest case study, is convinced of the existence of such a dichotomy). Feyerabend’s discussion of Mach’s ontology can be found in SCIENCE IN A FREE SOCIETY (NLB, 1978, p196-203) and in many other places, making it clear that it is one of the enduring inspirations of his work. Mach’s ontology can be summarised, according to Feyerabend, in two points:

i) the world is composed of elements and their relations

ii) the nature of these elements and their relations is to be specified by empirical research

One may note a resemblance with Graham Harman’s ontology, summarised in his “brief SR/OOO tutorial“:

i) Individual entities of various different scales (not just tiny quarks and electrons) are the ultimate stuff of the cosmos.

ii) These entities are never exhausted by their relations. Objects withdraw from relation.

The difference is illuminating. Whereas Mach leaves the nature of these elements open, allowing for the exploration of several hypotheses, Harman transcendentally reduces these possibilities to one: elements are objects (NB: this reduction of the possibilities to one, enshrined in a transcendental principle, is one of the reasons for calling Harman’s OOO an objectal reduction). Further, by allowing empirical research to specify the relations, Mach does not give himself an a priori principle of withdrawal: here again “withdrawal” is just one possibility among many. Another advantage of this ontology of unspecified elements is that it allows us to do research across disciplinary boundaries, including that between science and philosophy. Feyerabend talks of Mach’s ontology’s “disregard for distinctions between areas of research. Any method, any type of knowledge could enter the discussion of a particular problem” (p197). In my terminology, Mach’s ontology is diachronic, evolving with and as part of empirical research. Harman’s ontology is synchronic, dictating and fixing transcendentally the elements of the world.

3) BEING IS MULTIPLE AND THUS POLITICAL

Feyerabend most often uses a dialogical method, although he was led to complain that this was often a one-sided dialogue. This was because many of his philosophical reviewers were what he called “illiterate”, what I am in this talk calling “stupid”, that is to say instances of a dogmatic and decontextualised image of thought conjugated with a disindividuated academic professionalism. Of these failed dialogues Feyerabend writes (in SCIENCE IN A FREE SOCIETY, 10):

I publish them…because even a one-sided debate is more instructive than an essay and because I want to inform the wider public of the astounding illiteracy of some “professionals”

Fortunately, not all his dialogues were so one-sided. In his encounters with interlocutors Feyerabend tends to function like a zen master, trying to get people to change their attitude, to get them to “sense chaos” where they perceive “an orderly arrangement of well behaved things and processes” (cf. his LAST LETTER). A very instructive example of this can be seen in his correspondence on military intelligence networks with Isaac Ben-Israel, over a two-year period stretching from September 1988 to October 1990.

Though Feyerabend mainly refers to the philosophy of science, after all it was his domain of specialisation for many long years, he gives sporadic indications that his remarks apply to all philosophy, to all “school philosophies”, and not just to epistemology and the philosophy of the sciences. So it is possible to see in a very general way what Feyerabend’s ideas on ontology are in this epistolary dialogue, which begins with considerations of school philosophy as a useless detour, comparing it unfavourably to a more “naive” unacademic critical approach (Feyerabend’s first letter, L1: p5-6), goes on to consider in a little more detail what an unacademic critical philosophy would look like (L2: p11-14), proceeds to plead for the “non-demarcation” of the sciences and the arts-humanities and for the need to see epistemology and ontology as parts of politics (L3: p21-23), and culminates in L4-5 (p31-33) with a sketch of Feyerabend’s own views on ontology. This is an amazing document, as the dialogue form takes Feyerabend into a domain that he has not discussed before (intelligence networks) and permits a concise yet progressive exposition of his later ideas and of their “fruitful imprecision”.

Feyerabend tells us that ontological critique, or the detour through ontology, is unnecessary, because a more open and less technical approach is possible. He gives various figurations of that unacademic approach: the educated layman, discoverers and generals, certain Kenyan tribes, a lawyer interrogating experts, the Homeric Greek worldview, his own minimalist ontology. The advantages he cites of such an unacademic approach are:

1) ability to “work in partly closed surroundings” where there is a “flow of information in some direction, not in others” (p5)

2) action that is sufficiently complex to “fit in” to the complexity of our practices (p11) and of the real world (p12)

3) ability to work without a fixed “theoretical framework”,  to “work outside well-defined frames” (p22), to break up frameworks and to rearrange the pieces as the circumstances demand, to not be limited by the “undue constraints” inherent to any particular framework (p13)

4) ability to work not just outside the traditional prejudices of a particular domain (p5) but outside the boundaries between domains, such as the putative boundary between the arts and the sciences (p21)

5) an awareness of the political origins and consequences of seemingly apolitical academic subjects: ontology “without politics is incomplete and arbitrary” (p22).

But one could object that Feyerabend is a relativist, and so that “empirical research” for him could give whatever result we want, because in his system anything goes. In fact the best gloss of this polemical slogan is “anything could work (but mostly doesn’t)”. Feyerabend’s epistemological realism is supported by an ontological realism: “reality (or Being) has no well-defined structure but reacts in different ways to different approaches”. This is one reason why he sometimes refuses the label of “relativist”: according to him, “Relativism presupposes a fixed framework”. For Feyerabend, the transversality of communication between people belonging to apparently incommensurable structures shows that the notion of a frame of reference that is fixed and impermeable has only a limited applicability:

“people with different ways of life and different conceptions of reality can learn to communicate with each other, often even without a gestalt-switch, which means, as far as I am concerned, that the concepts they use and the perceptions they have are not nailed down but are ambiguous”.

Nevertheless, he distinguishes between Being, as ultimate reality, which is unknowable, and the multiple manifest realities which are produced by our interaction with it, and which are themselves knowable. Approach Being in one way, across decades of scientific experiment, and it produces elementary particles; approach it in another way and it produces the Homeric gods:

“I now distinguish between an ultimate reality, or Being. Being cannot be known, ever (I have arguments for that). What we do know are the various manifest realities, like the world of the Greek gods, modern cosmology etc. These are the results of an interaction between Being and one of its relatively independent parts” (32).

The difference with relativism is that there is no guarantee that the approach will work: Being is independent of us and must respond positively, which is often not the case.

Feyerabend draws the conclusion that the determination of what is real and what is a simulacrum cannot be the prerogative of an abstract ontology, and thus of the intellectuals who promulgate it. There is no fixed framework, the manifest realities are multiple, and Being is unknowable. Thus the determination of what is real depends on our choice in favour of one form of life or another, ie on a political decision. This leads to Feyerabend’s conclusion: ontology “without politics is incomplete and arbitrary”.

Inversely, Harman has repeated many times that ontology has nothing to do with politics. Seen through Feyerabend’s eyes, Harman’s OOO is thus both incomplete, because it is apolitical, and arbitrary, because it is a priori and monist (as we have already said), but also because it attributes to a little tribe of intellectuals the right to tell us what is real (Harman’s “ghostly objects withdrawing from all human and inhuman access”, THE THIRD TABLE, 12) and what is unreal (the simulacra of common sense, of the humanities, and of the sciences). It is also harmful because it is based on ghostly, bloodless, merely intelligible real objects that transcend any of the régimes and practices that give us qualitatively differentiated objects in any recognisable sense. Objects withdraw from the diverse truth-régimes (the sciences, the humanities, common sense, but also religion and politics), i.e. etymologically they abstract themselves: real objects are abstractions, indeed they are abstraction itself. This is not a revolutionary new “weird” realism; this is regressive transcendent realism, cynically packaged as its opposite. I consider Harman’s OOO a purified and consensualised (i.e. demarxised, depoliticised, descientised) version of Althusser’s ontology of the real object and of his anti-humanism, exhibiting the same defects as any other synchronic ontology.

E) CONCLUSION

The structure of my argument is very classical, and very abstract, as it remains wholly in the domain of philosophy, and even worse of first philosophy. I think that a consequent philosophical pluralism has its own dynamic that leads from a pluralism inside philosophy (e.g. Feyerabend’s methodological pluralism) to a pluralising of philosophy itself as an ontological realm and a cognitive régime claiming completeness and universality (e.g. Feyerabend’s Machian “way of research” and his later ontological pluralism, whose target is “philosophy as a discourse that covers everything … an all-encompassing synthetic view of the world and what it all means”). Here, I think, comes the move of putting philosophy in relation to a non-philosophical outside (non-philosophical not meaning a negation but a wider practice, as in non-Euclidean geometries). François Laruelle has written on this sort of thing at length, but I don’t think he can claim exclusive ownership (nor even chronological priority) of this idea, nor is he even necessarily the best exemplar of the practice of such a non-philosophy. But at least his work is a gesture in the right direction. So a non-laruellian non-philosophy is a reasonable prolongation of pluralism. Feyerabend’s work is a good example of such a non-laruellian non-philosophy.

To conclude, I would like to give some indications to show that these questions are, or can be, very practical. In his article NEW ONTOLOGIES Andrew Pickering presents the two ontologies that I discuss in terms of the contrast between the painters De Kooning and Mondrian. Mondrian’s paintings are examples of a synchronic approach, where the subject distances itself from the world in order to dominate it, according to a transcendent plan which imposes its abstract representations on a passive material. The painter foresees and imposes his order on everything; there is no room for surprises that emerge during the process of painting. The canvas does nothing, it is receptive rather than agentive; there is no exchange between the painter and his canvas, no dialogue.

On the other hand, De Kooning’s canvases themselves participate in the elaboration of the work. There is a continual back-and-forth between the painter and his canvas, “between the perception of emergent effects and the attempt to intensify them”. The De Kooningian approach is diachronic; it involves an immanent, concrete, incarnated, open process of engagement in the world, whereas the Mondrianesque approach is synchronic and implies a transcendent, abstract, disincarnated, closed process of distanciation from the world. The Mondrianesque approach corresponds, according to Pickering, to Heideggerian “enframing”, while the De Kooningian approach practices aletheia, unveiling.

Pickering’s hope is that the diachronic practices which are still marginal in our society can come together and overflow or dissolve the dominant synchronic enframing. Pickering gives several concrete examples of diachronic practices, not only in art (De Kooning) but also in civil engineering (the ecological and adaptive management of a river) and in psychiatry (anti-psychiatric experiments like Kingsley Hall, institutional psychotherapy like La Borde, favouring symmetric and non-hierarchical relations). He also talks of mathematics, music and architecture, showing in each case the concrete effects of both approaches. Thus we should keep in mind that even if the discussion in this paper is situated on the conceptual plane, the differences and disputes over ontology are inseparable from our concrete daily existence.

Stefano Mancuso, pioneer in the study of plant neurobiology (La Vanguardia)

Victor-M Amela, Ima Sanchís, Lluís Amiguet

“Plants have neurons; they are intelligent beings”

29/12/2010 – 02:03

"Las plantas tienen neuronas, son seres inteligentes"

Photo: KIM MANRESA

IMA SANCHÍS

Plant brain

Thanks to our friends at Redes, Eduard Punset’s programme, tireless seekers of any scientific knowledge that widens the limits of what we know, of who we are and of what role we play in this soup of universes, we discovered Mancuso, who explains to us that plants, viewed in time-lapse, behave as if they had a brain: they have neurons, they communicate through chemical signals, they make decisions, they are altruistic and manipulative. “Five years ago it was impossible to talk about plant behaviour; today we can begin to talk about their intelligence”… Perhaps soon we will begin to talk about their feelings. Mancuso will be on Redes this coming 2nd. Don’t miss it.

Surprise me.

Plants are intelligent organisms, but they move and make decisions on a much longer timescale than humans do.

I suspected as much.

Today we know that they have family and relatives, and that they recognise their kin. They behave in a completely different way depending on whether relatives or strangers are beside them. If they are relatives they do not compete: through their roots, they divide the territory equitably.

Can a tree deliberately send sap to a small plant?

Yes. Plants need light to live, and many years must pass before a seedling reaches the light; in the meantime, it is nourished by trees of its own species.

Curious.

Parental care is found only in highly evolved animals, and it is remarkable that it occurs in plants.

So they communicate.

Yes. In a forest all the plants are in underground communication through their roots. They also produce volatile molecules that warn distant plants about what is happening.

For example?

When a plant is attacked by a pathogen, it immediately produces volatile molecules that can travel for kilometres and that warn all the others to prepare their defences.

What defences?

They produce chemical compounds that make them indigestible, and these can be very aggressive. Ten years ago in Botswana, 200,000 antelopes were introduced into a large park and began to feed intensively on the acacias. Within a few weeks many had died, and after six months more than 10,000 were dead, and nobody could work out why. Today we know that it was the plants.

Too much predation.

Yes, and the plants raised the concentration of tannins in their leaves to such a degree that the leaves became a poison.

Are plants also empathetic towards other beings?

That is hard to say, but one thing is certain: plants can manipulate animals. During pollination they produce nectar and other substances to attract insects. Orchids produce flowers that closely resemble the females of certain insects, which are deceived into visiting them. And some claim that even human beings are manipulated by plants.

…?

All the drugs that man uses (coffee, tobacco, opium, marijuana…) derive from plants. But why would plants produce a substance that makes humans dependent? Because that is how we propagate them. Plants use man as transport. There is research on this.

Incredible.

If plants disappeared from the planet tomorrow, within a month all life would be extinct, because there would be no food and no oxygen. All the oxygen we breathe comes from them. But if we disappeared, nothing would happen. We depend on plants, but plants do not depend on us. And whoever is dependent is in the inferior position, no?

Plants are far more sensitive. When something changes in the environment, since they cannot run away, they have to be able to sense any minimal change well in advance in order to adapt.

And how do they perceive?

Each root tip can continuously and simultaneously perceive at least fifteen different physical and chemical parameters (temperature, light, gravity, the presence of nutrients, oxygen).

That is your great discovery.

In each root tip there are cells similar to our neurons, and their function is the same: to transmit signals by means of electrical impulses, just as our brain does. A single plant may have millions of root tips, each with its own small community of cells; and they work as a network, like the internet.

You have found the plant brain.

Yes, its zone of computation. The question is how to measure their intelligence. But of one thing we are sure: they are very intelligent; their capacity to solve problems and to adapt is great. Today 99.6% of everything alive on the planet is plants.

… And we know only 10% of them.

And in that percentage we have all our food and our medicine. What might there be in the remaining 90%? Every day, hundreds of unknown plant species become extinct. Perhaps they held the key to an important cure; we will never know. We must protect plants for the sake of our own survival.

What moves you about plants?

Some of their behaviours are very moving. All plants sleep, wake up and seek the light with their leaves; they have an activity similar to that of animals. I filmed the growth of some sunflowers, and you can see very clearly how they play with one another.

They play?

Yes, they establish the typical play behaviour seen in so many animals. We took one of those small plants and made it grow alone. As an adult it had behavioural problems: it had trouble turning in search of the sun; it lacked the learning that comes through play. Seeing these things is moving.

Read more: http://www.lavanguardia.com/lacontra/20101229/54095622430/las-plantas-tienen-neuronas-son-seres-inteligentes.html#ixzz3A8PpebKp

It’s Time to Destroy Corporate Personhood (IO9)

July 21, 2014

The United States is the only country in the world that recognizes corporations as persons. It’s a so-called “legal fiction” that’s meant to uphold the rights of groups and to smooth business processes. But it’s a dangerous concept that’s gone too far — and could endanger social freedoms in the future.

Illustration from Judge Dredd: Mega City Two by Ulises Farinas

Corporate personhood is a legal concept that’s used in the U.S. to recognize corporations as individuals in the eyes of the law. Like actual people, corporations hold and exercise certain rights and protections under the law and the U.S. Constitution. As legal persons, they can sue and be sued, have the right to appear in court, enter into contracts, and own property — and they can do this separate from their members or shareholders. At the same time, it provides a single entity for taxation and regulation and it simplifies complex transactions — challenges that didn’t exist during the era of sole proprietorships or partnerships when the owners were held liable for the debts and affairs of the business.

That said, a corporation does not have the full suite of rights afforded to persons of flesh-and-blood. Corporations cannot vote, run for office, or bear arms — nor can they contribute to federal political campaigns. What’s more, the concept doesn’t claim that corporations are biological people in the literal sense of the term.

A “Legal Fiction”

“Corporations are ‘legal fictions’ — a fact or facts assumed or created by courts, used to create rights for convenience and to serve the ends of justice,” says ethicist and attorney-at-law Linda MacDonald Glenn. “The idea of ‘corporations as persons’, though, all started because of a headnote mistake in the 1886 case of Santa Clara County v. Pacific Railroad Co, 113, U.S. 394 — a mistake that has been perpetuated with profound consequences.”

Mistake or no mistake, the doctrine was affirmed in 1888 during Pembina Consolidated Silver Mining Co. v. Pennsylvania, when the Court stated that, “Under the designation of ‘person’ there is no doubt that a private corporation is included [in the Fourteenth Amendment]. Such corporations are merely associations of individuals united for a special purpose and permitted to do business under a particular name and have a succession of members without dissolution.”

It’s a doctrine that’s held ever since, one that works off the conviction that corporations are organizations of people, and that people should not be deprived of their constitutional rights when they act collectively.

The concept may seem strange and problematic, but UCLA Law Professor Adam Winkler says corporate personhood has had profound and beneficial economic consequences:

It means that the obligations the law imposes on the corporation, such as liability for harms caused by the firm’s operations, are not generally extended to the shareholders. Limited liability protects the owners’ personal assets, which ordinarily can’t be taken to pay the debts of the corporation. This creates incentives for investment, promotes entrepreneurial activity, and encourages corporate managers to take the risks necessary for growth and innovation. That’s why the Supreme Court, in business cases, has held that “incorporation’s basic purpose is to create a legally distinct entity, with legal rights, obligations, powers, and privileges different from those of the natural individuals who created it, who own it, or whom it employs.”

Of course, other nations don’t employ this “fiction”, yet they’ve found ways to cope with these challenges.

Living in a World of Make-believe

Moreover, the problem with evoking a fiction is that it can lead us down some strange paths. By living in a world of make-believe, courts have extended other rights to corporations beyond those necessary. It’s hardly a fiction anymore, with “person” now having a wider meaning than ever before.


Here’s what Judge O’Dell-Seneca said last year in the Hallowich v. Range case:

Corporations, companies and partnerships have no spiritual nature, feelings, intellect, beliefs, thoughts, emotions or sensations because they do not exist in the manner that humankind exists…They cannot be ‘let alone’ by government because businesses are but grapes, ripe upon the vine of the law, that the people of this Commonwealth raise, tend and prune at their pleasure and need.

To this list of attributes, MacDonald Glenn adds a lack of conscience.

“I’ve heard it said that if a corporation had a psychological profile done, it would be a psychopath,” she told io9. “The concept of corporations was created partially to shield natural persons from liability; and it allowed individuals to create something, a business, that was larger than themselves and could exist in perpetuity. But it’s twisted reasoning to allow them to have equal or higher status than ‘natural’ persons or other sentient beings. A corporation cannot laugh or love; it doesn’t enjoy the warm breezes of summer, or mourn the loss of a loved one. In short, corporations are not sentient beings; they are artifacts.”

Similarly, solicitor general Elena Kagan has warned against expanding the notion of corporate personhood. In 2009 she said: “Few of us are only our economic interests. We have beliefs. We have convictions. [Corporations] engage the political process in an entirely different way, and this is what makes them so much more damaging.”

The New York Times has also come out in condemnation of the concept:

The law also gives corporations special legal status: limited liability, special rules for the accumulation of assets and the ability to live forever. These rules put corporations in a privileged position in producing profits and aggregating wealth. Their influence would be overwhelming with the full array of rights that people have.

One of the main areas where corporations’ rights have long been limited is politics. Polls suggest that Americans are worried about the influence that corporations already have with elected officials. The drive to give corporations more rights is coming from the court’s conservative bloc — a curious position given their often-proclaimed devotion to the text of the Constitution.

The founders of this nation knew just what they were doing when they drew a line between legally created economic entities and living, breathing human beings. The court should stick to that line.

Causing Harm

I asked MacDonald Glenn if the concept of corporate personhood is demeaning or damaging to bona fide persons, particularly women.

“It’s about sentience — the ability to feel pleasure and pain,” she responded. “Corporate personhood emphasizes profits, property, assets. It should be noted that corporations were given legal status as persons before women were.”

MacDonald Glenn says that although the Declaration of Independence starts out idealistically with the words, “We hold these truths to be self-evident, that all men are created equal”, we still live in a very hierarchical, class-based society.

“Although we have made significant strides towards recognizing the value of all persons, generally speaking, the wealthier you are, the more powerful you are, the more influence you exert,” she says. “So, if corporations are the ones with the money, they become the ones who have the power and influence. The recent Supreme Court decisions reinforce that and, sadly, encourage social stratification — a system not very different from those portrayed in recent movies, such as The Hunger Games or Elysium. No notion of ‘all (wo)men are created equal’ there.”


The notion of fictitious persons can be harmful to women in other ways as well. If it can be argued that artifacts are persons — objects devoid of an inner psychological life — it’s conceivable that other crazy fictions can be devised as well — such as fetal personhood. It’s something that should make pro-life advocates very nervous.

At the same time, while corporations are treated as persons, an entire class of nonhuman animals that deserves personhood status is denied recognition as such. In the future, the concept could lead to the attribution of personhood to artificial intelligences or robots devoid of sentient capacities. Furthermore, the practice of recognizing artifacts as persons diminishes what it truly means to be a genuine person.

Clearly, corporations deserve rights and protections, but certainly not under the rubric of something as precious and cherished as personhood.

The Hobby Lobby Decision

Which brings us to the controversial Hobby Lobby case — a prime example of what can happen when corporate personhood is taken too far. In this case, the owners of a craft store claimed that their personal religious beliefs would be offended if they had to provide certain forms of birth control coverage to employees.


“The purpose of extending rights to corporations is to protect the rights of people associated with the corporation, including shareholders, officers, and employees,” Justice Samuel Alito wrote in the ensuing decision. “Protecting the free-exercise rights of closely held corporations thus protects the religious liberty of the humans who own and control them.”

Of course, the Supreme Court justices failed to acknowledge a number of considerations integral to the U.S. Constitution, including the right to be free from religion, not to mention the fact that corporate personhood was never the intention of the Founding Fathers in the first place.

Indeed, as Washington Post’s Dana Milbank recently pointed out, the decision went way too far: “…corporations enjoy rights that ‘natural persons’ do not. The act of incorporating allows officers to avoid personal responsibility for corporate actions. Corporations have the benefits of personhood without those pesky responsibilities.”

And as MacDonald Glenn told me, the decision doesn’t protect the religious liberties of individuals — it gives an artifact human rights previously reserved only for natural persons.

“It’s a form of corporate idolatry,” MacDonald Glenn told io9. “Granting the rights of citizens to corporate structures creates a disproportionate impact where the rights of those with wealth supersede the rights of those without.”


Is the universe a bubble? Let’s check: Making the multiverse hypothesis testable (Science Daily)

Date: July 17, 2014

Source: Perimeter Institute

Summary: Scientists are working to bring the multiverse hypothesis, which to some sounds like a fanciful tale, firmly into the realm of testable science. Never mind the Big Bang; in the beginning was the vacuum. The vacuum simmered with energy (variously called dark energy, vacuum energy, the inflation field, or the Higgs field). Like water in a pot, this high energy began to evaporate — bubbles formed.

Screenshot from a video of Matthew Johnson explaining the related concepts of inflation, eternal inflation, and the multiverse (see http://youtu.be/w0uyR6JPkz4). Credit: Image courtesy of Perimeter Institute

Perimeter Associate Faculty member Matthew Johnson and his colleagues are working to bring the multiverse hypothesis, which to some sounds like a fanciful tale, firmly into the realm of testable science.

Never mind the big bang; in the beginning was the vacuum. The vacuum simmered with energy (variously called dark energy, vacuum energy, the inflation field, or the Higgs field). Like water in a pot, this high energy began to evaporate — bubbles formed.

Each bubble contained another vacuum, whose energy was lower, but still not nothing. This energy drove the bubbles to expand. Inevitably, some bubbles bumped into each other. It’s possible some produced secondary bubbles. Maybe the bubbles were rare and far apart; maybe they were packed close as foam.

But here’s the thing: each of these bubbles was a universe. In this picture, our universe is one bubble in a frothy sea of bubble universes.

That’s the multiverse hypothesis in a bubbly nutshell.

It’s not a bad story. It is, as scientists say, physically motivated — not just made up, but rather arising from what we think we know about cosmic inflation.

Cosmic inflation isn’t universally accepted — most cyclical models of the universe reject the idea. Nevertheless, inflation is a leading theory of the universe’s very early development, and there is some observational evidence to support it.

Inflation holds that in the instant after the big bang, the universe expanded rapidly — so rapidly that an area of space once a nanometer square ended up more than a quarter-billion light years across in just a trillionth of a trillionth of a trillionth of a second. It’s an amazing idea, but it would explain some otherwise puzzling astrophysical observations.
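The stretch factor implied by those figures is easy to check with back-of-the-envelope arithmetic. The snippet below simply uses the two length scales quoted above (a nanometer at the start, a quarter-billion light years at the end); the variable names are my own:

```python
# Back-of-the-envelope check of the inflation figures quoted above:
# a nanometer-scale patch stretched to over a quarter-billion light years.
LIGHT_YEAR_M = 9.461e15            # meters per light year
initial_m = 1e-9                   # one nanometer, in meters
final_m = 0.25e9 * LIGHT_YEAR_M    # a quarter-billion light years, in meters

expansion_factor = final_m / initial_m
print(f"{expansion_factor:.1e}")   # prints 2.4e+33
```

So the quoted numbers correspond to a linear expansion by a factor of roughly 10³³, achieved in a trillionth of a trillionth of a trillionth of a second.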

Inflation is thought to have been driven by an inflation field — which is vacuum energy by another name. Once you postulate that the inflation field exists, it’s hard to avoid an “in the beginning was the vacuum” kind of story. This is where the theory of inflation becomes controversial — when it starts to postulate multiple universes.

Proponents of the multiverse theory argue that it’s the next logical step in the inflation story. Detractors argue that it is not physics, but metaphysics — that it is not science because it cannot be tested. After all, physics lives or dies by data that can be gathered and predictions that can be checked.

That’s where Perimeter Associate Faculty member Matthew Johnson comes in. Working with a small team that also includes Perimeter Faculty member Luis Lehner, Johnson is working to bring the multiverse hypothesis firmly into the realm of testable science.

“That’s what this research program is all about,” he says. “We’re trying to find out what the testable predictions of this picture would be, and then going out and looking for them.”

Specifically, Johnson has been considering the rare cases in which our bubble universe might collide with another bubble universe. He lays out the steps: “We simulate the whole universe. We start with a multiverse that has two bubbles in it, we collide the bubbles on a computer to figure out what happens, and then we stick a virtual observer in various places and ask what that observer would see from there.”
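The three steps Johnson lists (set up a two-bubble multiverse, collide the bubbles, place virtual observers) can be caricatured in code. The 1-D toy below is purely illustrative and entirely my own construction, not the team’s numerical-relativity simulation: bubble walls expand at a fixed speed, and an observer “sees” the collision once the overlap region has reached their position.

```python
# Toy 1-D caricature of the pipeline: two expanding bubbles, a collision
# region where they overlap, and virtual observers at chosen positions.

def bubble_extent(center, t, speed=1.0):
    """Interval occupied by a bubble whose wall expands at `speed` for time t."""
    return (center - speed * t, center + speed * t)

def collision_region(c1, c2, t, speed=1.0):
    """Overlap of the two bubbles at time t, or None if they haven't met yet."""
    a1, b1 = bubble_extent(c1, t, speed)
    a2, b2 = bubble_extent(c2, t, speed)
    lo, hi = max(a1, a2), min(b1, b2)
    return (lo, hi) if lo < hi else None

def observer_sees_collision(x, c1, c2, t, speed=1.0):
    """A virtual observer at position x 'sees' the collision if the
    overlap region has reached them by time t."""
    region = collision_region(c1, c2, t, speed)
    return region is not None and region[0] <= x <= region[1]
```

For bubbles nucleated at −5 and +5 with unit wall speed, the walls meet at t = 5; by t = 6 the collision region spans (−1, 1), so an observer at the origin sees it while one at x = 3 does not. The real simulations solve general relativity for the colliding walls, but the logic of “where you sit determines what you see” is the same.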

Simulating the whole universe — or more than one — seems like a tall order, but apparently that’s not so.

“Simulating the universe is easy,” says Johnson. Simulations, he explains, are not accounting for every atom, every star, or every galaxy — in fact, they account for none of them.

“We’re simulating things only on the largest scales,” he says. “All I need is gravity and the stuff that makes these bubbles up. We’re now at the point where if you have a favourite model of the multiverse, I can stick it on a computer and tell you what you should see.”

That’s a small step for a computer simulation program, but a giant leap for the field of multiverse cosmology. By producing testable predictions, the multiverse model has crossed the line between appealing story and real science.

In fact, Johnson says, the program has reached the point where it can rule out certain models of the multiverse: “We’re now able to say that some models predict something that we should be able to see, and since we don’t in fact see it, we can rule those models out.”

For instance, collisions of one bubble universe with another would leave what Johnson calls “a disk on the sky” — a circular bruise in the cosmic microwave background. That the search for such a disk has so far come up empty makes certain collision-filled models less likely.

Meanwhile, the team is at work figuring out what other kinds of evidence a bubble collision might leave behind. It’s the first time, the team writes in their paper, that anyone has produced a direct quantitative set of predictions for the observable signatures of bubble collisions. And though none of those signatures has so far been found, some of them are possible to look for.

The real significance of this work is as a proof of principle: it shows that the multiverse can be testable. In other words, if we are living in a bubble universe, we might actually be able to tell.

Video: https://www.youtube.com/watch?v=w0uyR6JPkz4

Journal References:

  1. Matthew C. Johnson, Hiranya V. Peiris, Luis Lehner. Determining the outcome of cosmic bubble collisions in full general relativity. Physical Review D, 2012; 85 (8). DOI: 10.1103/PhysRevD.85.083516
  2. Carroll L. Wainwright, Matthew C. Johnson, Hiranya V. Peiris, Anthony Aguirre, Luis Lehner, Steven L. Liebling. Simulating the universe(s): from cosmic bubble collisions to cosmological observables with numerical relativity. Journal of Cosmology and Astroparticle Physics, 2014; 2014 (03): 030. DOI: 10.1088/1475-7516/2014/03/030
  3. Carroll L. Wainwright, Matthew C. Johnson, Anthony Aguirre, Hiranya V. Peiris. Simulating the universe(s) II: phenomenology of cosmic bubble collisions in full general relativity. Submitted to arXiv, 2014. [link]
  4. Stephen M. Feeney, Matthew C. Johnson, Jason D. McEwen, Daniel J. Mortlock, Hiranya V. Peiris. Hierarchical Bayesian detection algorithm for early-universe relics in the cosmic microwave background. Physical Review D, 2013; 88 (4). DOI: 10.1103/PhysRevD.88.043012