Tag archive: Post-human

Scientists propose project to create a synthetic human genome (O Globo)

O Globo, June 2, 2016

Image of quadruple-helix DNA – Handout/Jean-Paul Rodriguez

WASHINGTON — A group of scientists on Thursday proposed an ambitious project to create a synthetic human genome, which would make it possible to create human beings without the need for biological parents. The possibility raises controversy over how far human life can, or should, be manipulated.

The project, which emerged from a meeting of scientists at Harvard University in the US last month, aims to develop and test the synthetic genome in cells in the laboratory over the course of ten years. A synthetic human genome involves using chemicals to create the DNA present in human chromosomes. The goal was described in the journal “Science” by the 25 experts involved.

The scientists proposed launching, later this year, what they called the Human Genome Project-Write, and said they would involve the public in the discussion, which would include ethical, legal and social questions.

The experts hope to raise US$ 100 million (the equivalent of R$ 361 million) in public and private funding to launch the project this year. They expect, however, that total costs will be less than the US$ 3 billion spent on the original Human Genome Project, which mapped human DNA for the first time.

The new project “will include whole-genome engineering of human cell lines and other organisms of significance to agriculture and public health, or those needed to interpret human biological functions,” the 25 scientists, led by the geneticist Jef Boeke of New York University’s Langone Medical Center, wrote in “Science”.

Preternatural machines (AEON)

by E R Truitt

Robots came to Europe before the dawn of the mechanical age. To a medieval world, they were indistinguishable from magic

E R Truitt is a medieval historian at Bryn Mawr College in Pennsylvania. Her book, Medieval Robots: Mechanism, Magic, Nature, and Art, is out in June.

Edited by Ed Lake

In 807 the Abbasid caliph in Baghdad, Harun al-Rashid, sent Charlemagne a gift the like of which had never been seen in the Christian empire: a brass water clock. It chimed the hours by dropping small metal balls into a bowl. Instead of a numbered dial, the clock displayed the time with 12 mechanical horsemen that popped out of small windows, rather like an Advent calendar. It was a thing of beauty and ingenuity, and the Frankish chronicler who recorded the gift marvelled how it had been ‘wondrously wrought by mechanical art’. But given the earliness of the date, what’s not clear is quite what he might have meant by that.

Certain technologies are so characteristic of their historical milieux that they serve as a kind of shorthand. The arresting title credit sequence to the TV series Game of Thrones (2011-) proclaims the show’s medieval setting with an assortment of clockpunk gears, waterwheels, winches and pulleys. In fact, despite the existence of working models such as Harun al-Rashid’s gift, it was another 500 years before similar contraptions started to emerge in Europe. That was at the turn of the 14th century, towards the end of the medieval period – the very era, in fact, whose political machinations reportedly inspired the plot of Game of Thrones.

When mechanical clockwork finally took off, it spread fast. In the first decades of the 14th century, it became so ubiquitous that, in 1324, the treasurer of Lincoln Cathedral offered a substantial donation to build a new clock, to address the embarrassing problem that ‘the cathedral was destitute of what other cathedrals, churches, and convents almost everywhere in the world are generally known to possess’. It’s tempting, then, to see the rise of the mechanical clock as a kind of overnight success.

But technological ages rarely have neat boundaries. Throughout the Latin Middle Ages we find references to many apparent anachronisms, many confounding examples of mechanical art. Musical fountains. Robotic servants. Mechanical beasts and artificial songbirds. Most were designed and built beyond the boundaries of Latin Christendom, in the cosmopolitan courts of Baghdad, Damascus, Constantinople and Karakorum. Such automata came to medieval Europe as gifts from foreign rulers, or were reported in texts by travellers to these faraway places.

In the mid-10th century, for instance, the Italian diplomat Liudprand of Cremona described the ceremonial throne room in the Byzantine emperor’s palace in Constantinople. In a building adjacent to the Great Palace complex, Emperor Constantine VII received foreign guests while seated on a throne flanked by golden lions that ‘gave a dreadful roar with open mouth and quivering tongue’ and switched their tails back and forth. Next to the throne stood a life-sized golden tree, on whose branches perched dozens of gilt birds, each singing the song of its particular species. When Liudprand performed the customary prostration before the emperor, the throne rose up to the ceiling, potentate still perched on top. At length, the emperor returned to earth in a different robe, having effected a costume change during his journey into the rafters.

The throne and its automata disappeared long ago, but Liudprand’s account echoes a description of the same marvel that appears in a Byzantine manual of courtly etiquette, written – by the Byzantine emperor himself, no less – at around the same time. The contrast between the two accounts is telling. The Byzantine one is preoccupied with how the special effects slotted into certain rigid courtly rituals. It was during the formal introduction of an ambassador, the manual explains, that ‘the lions begin to roar, and the birds on the throne and likewise those in the trees begin to sing harmoniously, and the animals on the throne stand upright on their bases’. A nice refinement of royal protocol. Liudprand, however, marvelled at the spectacle. He hazarded a guess that a machine similar to a winepress might account for the rising throne; as for the birds and lions, he admitted: ‘I could not imagine how it was done.’

Other Latin Christians, confronted with similarly exotic wonders, were more forthcoming with theories. Engineers in the West might have lacked the knowledge to copy these complex machines or invent new ones, but thanks to gifts such as Harun al-Rashid’s clock and travel accounts such as Liudprand’s, different kinds of automata became known throughout the Christian world. In time, scholars and philosophers used their own scientific ideas to account for them. Their framework did not rely on a thorough understanding of mechanics. How could it? The kind of mechanical knowledge that had flourished since antiquity in the East had been lost to Europe following the decline of the western Roman Empire.

Instead, they talked about what they knew: the hidden powers of Nature, the fundamental sympathies between celestial bodies and earthly things, and the certainty that demons existed and intervened in human affairs. Arthur C Clarke’s dictum that any sufficiently advanced technology is indistinguishable from magic was rarely more apposite. Yet the very blurriness of that boundary made it fertile territory for the medieval Christian mind. In time, the mechanical age might have disenchanted the world – but its eventual victory was much slower than the clock craze might suggest. And in the meantime, there were centuries of magical machines.

In the medieval Latin world, Nature could – and often did – act predictably. But some phenomena were sufficiently weird and rare that they could not be considered of a piece with the rest of the natural world. They therefore were classified as preternatural: literally, praeter naturalis or ‘beyond nature’.

What might fall into this category? Pretty much any freak occurrence or deviation from the ordinary course of things: a two-headed sheep, for example. Then again, some phenomena qualified as preternatural because their causes were not readily apparent and were thus difficult to know. Take certain hidden – but essential – characteristics of objects, such as the supposedly fire-retardant skin of the salamander, or the way that certain gems were thought to detect or counteract poison. Magnets were, of course, a clear case of the preternatural at work.

If the manifestations of the preternatural were various, so were its causes. Nature herself might be responsible – just because she often behaved predictably did not mean that she was required to do so – but so, equally, might demons and angels. People of great ability and learning could use their knowledge, acquired from ancient texts, to predict preternatural events such as eclipses. Or they might harness the secret properties of plants or natural laws to bring about certain desired outcomes. Magic was largely a matter of manipulating this preternatural domain: summoning demons, interpreting the stars, and preparing a physic could all fall under the same capacious heading.

All of which is to say, there were several possible explanations for the technological marvels that were arriving from the east and south. Robert of Clari, a French knight during the disastrous Fourth Crusade of 1204, described copper statues on the Hippodrome that moved ‘by enchantment’. Several decades later, Franciscan missionaries to the Mongol Empire reported on the lifelike artificial birds at the Khan’s palace and speculated that demons might be the cause (though they didn’t rule out superior engineering as an alternative theory).

Does a talking statue owe its powers to celestial influence or demonic intervention?

Moving, speaking statues might also be the result of a particular alignment of planets. While he taught at the cathedral school in Reims, Gerbert of Aurillac, later Pope Sylvester II (999-1003), introduced tools for celestial observation (the armillary sphere and the star sphere) and calculation (the abacus and Arabic numerals) to the educated elites of northern Europe. His reputation for learning was so great that, more than 100 years after his death, he was also credited with making a talking head that foretold the future. According to some accounts, he accomplished this through demonic magic, which he had learnt alongside the legitimate subjects of science and mathematics; according to others, he used his superior knowledge of planetary motion to cast the head at the precise moment of celestial conjunction so that it would reveal the future. (No doubt he did his calculations with an armillary sphere.)

Because the category of the preternatural encompassed so many objects and phenomena, and because there were competing, rationalised explanations for preternatural things, it could be difficult to discern the correct cause. Does a talking statue owe its powers to celestial influence or demonic intervention? According to one legend, Albert the Great – a 13th-century German theologian, university professor, bishop, and saint – used his knowledge to make a prophetic robot. One of Albert’s brothers in the Dominican Order went to visit him in his cell, knocked on the door, and was told to enter. When the friar went inside he saw that it was not Brother Albert who had answered his knock, but a strange, life-like android. Thinking that the creature must be some kind of demon, the monk promptly destroyed it, only to be scolded for his rashness by a weary and frustrated Albert, who explained that he had been able to create his robot because of a very rare planetary conjunction that happened only once every 30,000 years.

In legend, fiction and philosophy, writers offered explanations for the moving statues, artificial animals and musical figures that they knew were part of the world beyond Latin Christendom. Like us, they used technology to evoke particular places or cultures. The golden tree with artificial singing birds that confounded Liudprand on his visit to Constantinople appears to have been a fairly common type of automaton: it appears in the palaces of Samarra and Baghdad and, later, in the courts of central India. In the early 13th century, the sultan of Damascus sent a metal tree with mechanical songbirds as a gift to the Holy Roman Emperor Frederick II. But this same object also took root in the Western imagination: we find writers of fiction in medieval Europe including golden trees with eerily lifelike artificial birds in many descriptions of courts in Babylon and India.

In one romance from the early 13th century, sorcerers use gemstones with hidden powers combined with necromancy to make the birds hop and chirp. In another, from the late 12th century, the king harnesses the winds to make the golden branches sway and the gilt birds sing. There were several different species of birds represented on the king’s fabulous tree, each with its own birdsong, so exact that real birds flocked to the tree in hopes of finding a mate. ‘Thus the blackbirds, skylarks, jaybirds, starlings, nightingales, finches, orioles and others which flocked to the park in high spirits on hearing the beautiful birdsong, were quite unhappy if they did not find their partner!’

Of course, the Latin West did not retain its innocence of mechanical explanations forever. Three centuries after Gerbert taught his students how to understand the heavens with an armillary sphere, the enthusiasm for mechanical clocks began to sweep northern Europe. These giant timepieces could model the cosmos, chime the hour, predict eclipses and represent the totality of human history, from the fall of humankind in the Garden of Eden to the birth and death of Jesus, and his promised return.

Astronomical instruments, like astrolabes and armillary spheres, oriented the viewer in the cosmos by showing the phases of the moon, the signs of the zodiac and the movements of the planets. Carillons, programmed with melodies, audibly marked the passage of time. Large moving figures of people, weighted with Christian symbolism, appear as monks, Jesus, the Virgin Mary. They offered a master narrative that fused past, present and future (including salvation). The monumental clocks of the late medieval period employed cutting-edge technology to represent secular and sacred chronology in one single timeline.

Secular powers were no slower to embrace the new technologies. Like their counterparts in distant capitals, European rulers incorporated mechanical marvels into their courtly pageantry. The day before his official coronation in Westminster Abbey in 1377, Richard II of England was ‘crowned’ by a golden mechanical angel – made by the goldsmiths’ guild – during his coronation pageant in Cheapside.

And yet, although medieval Europeans had figured out how to build the same kinds of complex automata that people in other places had been designing and constructing for centuries, they did not stop believing in preternatural causes. They merely added ‘mechanical’ to the list of possible explanations. Just as one person’s ecstatic visions might equally be attributed to divine inspiration or diabolical trickery, a talking or moving statue could be ascribed to artisanal or engineering know-how, the science of the stars, or demonic art. Certainly the London goldsmiths in 1377 were in no doubt about how the marvellous angel worked. But because a range of possible causes could animate automata, reactions to them in this late medieval period tended to depend heavily on the perspective of the individual.

At a coronation feast for the queen at the court of Ferdinand I of Aragon in 1414, theatrical machinery – of the kind used in religious Mystery Plays – was used for part of the entertainment. A mechanical device called a cloud, used for the arrival of any celestial being (gods, angels and the like), swept down from the ceiling. The figure of Death, probably also mechanical, appeared above the audience and claimed a courtier and jester named Borra for his own. Other guests at the feast had been forewarned, but nobody told Borra. A chronicler reported on this marvel with dry exactitude:
Death threw down a rope, they [fellow guests] tied it around Borra, and Death hanged him. You would not believe the racket that he made, weeping and expressing his terror, and he urinated into his underclothes, and urine fell on the heads of the people below. He was quite convinced he was being carried off to Hell. The king marvelled at this and was greatly amused.

Such theatrical tricks sound a little gimcrack to us, but if the very stage machinery might partake of uncanny forces, no wonder Borra was afraid.

Nevertheless, as mechanical technology spread throughout Europe, mechanical explanations of automata (and machines in general) gradually prevailed over magical alternatives. By the end of the 17th century, the realm of the preternatural had largely vanished. Technological marvels were understood to operate within the boundaries of natural laws rather than at the margins of them. Nature went from being a powerful, even capricious entity to an abstract noun denoted with a lower-case ‘n’: predictable, regular, and subject to unvarying law, like the movements of a mechanical clock.

This new mechanistic world-view prevailed for centuries. But the preternatural lingered, in hidden and surprising ways. In the 19th century, scientists and artists offered a vision of the natural world that was alive with hidden powers and sympathies. Machines such as the galvanometer – to measure electricity – placed scientists in communication with invisible forces. Perhaps the very spark of life was electrical.

Even today, we find traces of belief in the preternatural, though it is found more often in conjunction with natural, rather than artificial, phenomena: the idea that one can balance an egg on end more easily at the vernal equinox, for example, or a belief in ley lines and other Earth mysteries. Yet our ongoing fascination with machines that escape our control or bridge the human-machine divide, played out countless times in books and on screen, suggests that a touch of that old medieval wonder still adheres to the mechanical realm.

30 March 2015

‘We are not lab rats,’ says Sangamo Biosciences director (O Globo)

Genetic sequencing equipment. For Lanphier, research on non-reproductive stem cells is the only acceptable kind – David Paul Morris BLOOMBERG

Edward Lanphier heads an organization devoted to regenerative medicine and is calling for a halt to research on DNA manipulation in reproductive cells

BY FABIO TEIXEIRA

RIO – Edward Lanphier heads Sangamo Biosciences, one of the entities that make up the Alliance for Regenerative Medicine (ARM), an organization bringing together more than 200 biotechnology companies and research institutions around the world. The Alliance has called for an indefinite moratorium on research into, and the practice of, manipulating the DNA of reproductive cells.

The debate on the subject, which has gone on for years, heated up with the development of techniques that make gene editing practical, opening up the possibility of made-to-order babies.

Lanphier announced the call to halt the research in a document signed by him and other members of the alliance. The text, published in “Nature”, an internationally renowned scientific journal, declares that this type of research should not go forward.

Edward Lanphier, CEO of Sangamo Biosciences – Handout

While the United States and European countries still have no practical ruling on whether genetic manipulation of reproductive cells is permitted, in Brazil such studies have already been banned. The resolution was published in 2004 by the National Research Ethics Commission (Conep), a body of the Ministry of Health. It states: “Research involving intervention to modify the human genome may only be carried out on somatic (non-reproductive) cells.” The question now would be the illicit use of techniques developed abroad.

In an interview published this Monday in the digital magazine O GLOBO a Mais, Lanphier explains why he believes even basic research should be banned.

Is the moratorium across the board?

Yes. The call for a moratorium is so that we have time for all parties to discuss the matter. It is a request. But our starting premise is that even with this discussion there is a line that cannot be crossed.

What is the main risk of editing the genome of reproductive cells (sperm and eggs)?

The big problem is ethical, although there are also safety and technical risks that limit practical use. The ethical question goes beyond each country’s legislation and policies. It is fundamental.

If it is possible to alter the genome, is it possible to choose a baby’s hair, eye or even skin color?

It goes beyond that. The problem is being able to alter not only an individual’s characteristics but also those of all future generations. We are not lab rats, much less something like transgenic corn. As a species, we humans have decided that we are unique. For decades, developed countries debated modifying genes in reproductive cells and took a stand against it.

Is it possible to alter genes that dictate characteristics such as intelligence or even behavior?

That is our concern. Because the altered individual will pass the changes on to future generations. Once this kind of research is opened up, it can be used for goals that have no therapeutic value, no value in treating disease. It is a path of no return. We, as a society, need to think about what makes us human. In the past we have taken a stand against actions of this kind, which can lead us to a society guided by eugenics.

Could you explain the difference between manipulating somatic (non-reproductive) cells and manipulating eggs and sperm?

There is a fundamental difference. It is black and white. In manipulating somatic cells, you seek to alter a gene to give the individual resistance against a specific disease. You do not alter the genes of future generations if the person has children. The only thing being attempted is to cure disease. There is, however, a line that must not be crossed. And that line is altering eggs and sperm, because they make the manipulated characteristics heritable.

If you change a single characteristic and it is passed on to future generations, isn’t it possible that other unexpected mutations will occur?

That is perfectly possible. It is one of the questions we raise. At present the nature of this and its possible consequences are completely unknown. There are many unanswered questions. We need to answer them all before even considering the larger question, which is the ethics of the process. It is still too early. That is why we have called for a moratorium.

What limits do you feel need to be set in the long term?

We proposed the moratorium precisely in order to discuss the matter. There is no justification for making genetic alterations to reproductive cells.

You mention the possibility of society rejecting this kind of research. Is the fear that this will also affect research with other cells?

It would be a rejection driven by lack of knowledge.

Which lines of study are considered promising?

The diseases most likely to be cured by this kind of research are those associated with a specific gene. That is the case with hemophilia, sickle cell anemia and various types of cancer. These are the most immediate opportunities the research opens up. Technically and theoretically it is also possible to use the technology to alter more than one gene, to cure diseases related to multiple genes.

Is there any argument in favor of altering genes in eggs and sperm?

No. Even in situations where parents carry flawed genes linked to hereditary diseases, it is not justified. There are prenatal tests and in vitro fertilization treatments to get around these problems. There is no justification for editing the human genome in reproductive cells.

If it is possible to alter the human genome, shouldn’t we question what makes us human? Wouldn’t we be creating a new species?

The big question is that if we change the DNA, we change the species.

Brazilian performs a duet with a fungus (BBC)

10 March 2015

Credit: Eduardo Miranda and Ed Braund

Biocomputer with mold plays a duet with a piano

A Brazilian musician presented an unprecedented duet in Britain: at the piano, he interacted with a fungus.

Can mold make music? In the hands of Eduardo Miranda, yes.

A specialist in computer music, he has turned decomposition into composition: his new work uses cultures of the fungus Physarum polycephalum as the central component of an interactive biocomputer, which receives sound signals and sends back responses.

“The composition, Biocomputer Music, unfolds as an interaction between me and the Physarum machine,” said Miranda.

“I play something, the system listens, plays something back, and then I respond, and so on.”

A native of Porto Alegre, Brazil, Miranda teaches at the University of Plymouth in England.

He told BBC Brasil that Heitor Villa-Lobos is a major influence on his work and that he would like to take the Biocomputer performance to Brazil, but that, for now, technical issues prevent him from travelling with the equipment.

How it works

The Physarum mold forms a living, changing electronic component in a circuit that processes sounds picked up by a microphone trained on the piano.

Credit: Eduardo Miranda and Ed Braund

The project was developed at the University of Plymouth

Small tubes formed by the Physarum have the electrical property of acting as a variable resistance that changes according to previously applied voltages, according to Ed Braund, a doctoral student at the Interdisciplinary Centre for Computer Music Research at the University of Plymouth.

“The piano notes are transformed into a complex electrical wave that we send through one of these Physarum tubules. The Physarum’s resistance changes as a function of the previous inputs, and the musical notes then become a new output that is sent back to the piano. The biocomputer acts as a memory device,” Miranda adds.

“When you tell it to play again, it will shuffle the notes it was sent. It can even generate some sounds that were not among the notes played. The machine has a bit of ‘creativity’.”

While the pianist plays the piano in the conventional way, using the keys, the biocomputer induces notes with small electromagnets that hover millimetres above the metal strings, imbuing the music with an ethereal tone.
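Braund's description (a resistance that varies with previously applied voltages, acting as a memory) can be sketched as a toy model. This is purely illustrative Python, not the Plymouth system; the class name, update coefficients and shuffling step are all assumptions:

```python
import random

class PhysarumLikeElement:
    """Toy variable resistance whose value depends on past inputs."""
    def __init__(self, resistance=1.0, decay=0.9, gain=0.2):
        self.resistance = resistance
        self.decay = decay
        self.gain = gain

    def apply(self, voltage):
        # Each input nudges the resistance; earlier inputs persist via decay,
        # so the same voltage applied twice yields different outputs.
        self.resistance = self.decay * self.resistance + self.gain * abs(voltage)
        return voltage / self.resistance

def respond(notes, element, rng):
    """Pass played notes through the element, then shuffle the result."""
    out = [round(element.apply(float(n)), 3) for n in notes]
    rng.shuffle(out)  # the 'creativity' Miranda mentions, crudely imitated
    return out

elem = PhysarumLikeElement()
reply = respond([60, 64, 67], elem, random.Random(42))  # MIDI-style note numbers
```

The essential property is in `apply`: because the resistance carries a trace of every earlier input, identical notes played at different moments come back transformed differently, which is what makes the element behave like a memory rather than a fixed filter.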

Chance

Miranda compares his use of a biocomputer to the “aleatory” techniques of the American avant-garde composer John Cage (1912-1992), who turned to the Chinese book of changes, the I Ching, and to throws of dice to control parts of his compositions.

Credit: Eduardo Miranda and Ed Braund

The computer’s sound has an ‘ethereal’ quality

“John Cage believed in chance, but not in randomness. He wanted to harness structure that was outside his control. Here we have that effect, programmed into a living machine. I think this is John Cage’s dream realized.”

Miranda has been exploring the use of computers in interactive electronic compositions for some time, but he values the simplicity of the Physarum processor.

“What I hear is very different from a digital computer programmed with strings of data. It is not intelligent, but it is alive. Which is interesting…”

Biocomputer Music premiered at the Peninsular Arts Contemporary Music Festival “Biomusic” on 1 March.

When Exponential Progress Becomes Reality (Medium)

Niv Dror

“I used to say that this is the most important graph in all the technology business. I’m now of the opinion that this is the most important graph ever graphed.”

Steve Jurvetson

Moore’s Law

The expectation that your iPhone keeps getting thinner and faster every two years. Happy 50th anniversary.

Components get cheaper. Computers get smaller. A lot of comparison tweets.

In 1965 Intel co-founder Gordon Moore made his original observation: over the history of computing hardware, the number of transistors in a dense integrated circuit had doubled approximately every two years. The prediction was specific to semiconductors and stretched out a decade. Its demise has long been predicted, and it will eventually come to an end, but it continues to hold to this day.
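The compounding behind the observation is easy to make concrete. A minimal Python sketch, using the commonly cited baseline of roughly 2,300 transistors on the 1971 Intel 4004 (the baseline and the tidy two-year doubling are illustrative simplifications, not Intel's actual roadmap):

```python
# Illustrative Moore's Law compounding: one doubling every two years,
# starting from ~2,300 transistors on the Intel 4004 (1971).
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project a transistor count assuming one doubling per `doubling_years`."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

# 50 years of two-year doublings is 2**25, about a 33-million-fold increase.
growth_50yr = transistors(2021) / transistors(1971)
```

Fifty years at that rate multiplies the count by 2**25, roughly 33.5 million, which is why the curve looks unremarkable early on and absurd later.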

Expanding beyond semiconductors, and reshaping all kinds of businesses, including those not traditionally thought of as tech.

Yes, Box co-founder Aaron Levie is the official spokesperson for Moore’s Law, and we’re all perfectly okay with that. His cloud computing company would not be around without it. He’s grateful. We’re all grateful. In conversations Moore’s Law constantly gets referenced.

It has become both a prediction and an abstraction.

Expanding far beyond its origin as a transistor-centric metric.

But Moore’s Law of integrated circuits is only the most recent paradigm in a much longer and even more profound technological trend.

Humanity’s capacity to compute has been compounding for as long as we could measure it.

5 Computing Paradigms: Herman Hollerith’s electromechanical tabulating machines used in the 1890 U.S. Census (his company later became part of IBM) → Alan Turing’s relay-based computer that cracked the Nazi Enigma → Vacuum-tube computer that predicted Eisenhower’s win in 1952 → Transistor-based machines used in the first space launches → Integrated-circuit-based personal computer

The Law of Accelerating Returns

In his 1999 book The Age of Spiritual Machines, Google’s Director of Engineering, futurist, and author Ray Kurzweil proposed “The Law of Accelerating Returns”, according to which the rate of change in a wide variety of evolutionary systems tends to increase exponentially. A specific paradigm, a method or approach to solving a problem (e.g., shrinking transistors on an integrated circuit as an approach to making more powerful computers), provides exponential growth until the paradigm exhausts its potential. When this happens, a paradigm shift (a fundamental change in the technological approach) occurs, enabling the exponential growth to continue.

Kurzweil explains:

It is important to note that Moore’s Law of Integrated Circuits was not the first, but the fifth paradigm to provide accelerating price-performance. Computing devices have been consistently multiplying in power (per unit of time) from the mechanical calculating devices used in the 1890 U.S. Census, to Turing’s relay-based machine that cracked the Nazi enigma code, to the vacuum tube computer that predicted Eisenhower’s win in 1952, to the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer.

This graph, which venture capitalist Steve Jurvetson describes as the most important concept ever to be graphed, is Kurzweil’s 110-year version of Moore’s Law. It spans the five paradigm shifts that have contributed to the exponential growth in computing.

Each dot represents the best computational price-performance device of its day, and when plotted on a logarithmic scale they fall on the same double exponential curve spanning more than a century. This is a long-lasting and predictable trend. It lets us plan for a time beyond Moore’s Law without knowing the specifics of the paradigm shift ahead. The next paradigm will advance our ability to compute to such a massive scale that it will be beyond our current ability to comprehend.

The Power of Exponential Growth

Human perception is linear; technological progress is exponential. Our brains are hardwired for linear expectations because that has always been the case. Technology today progresses so fast that the past no longer looks like the present, and the present is nowhere near the future ahead. Then, seemingly out of nowhere, we find ourselves in a reality quite different from what we would expect.

Kurzweil uses the overall growth of the internet as an example. The bottom chart is linear, which makes internet growth seem sudden and unexpected, whereas the top chart, with the same data plotted on a logarithmic scale, tells a very predictable story. On the exponential graph, internet growth doesn’t come out of nowhere; it’s just presented in a way that is more intuitive for us to comprehend.
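The contrast between the two charts can be reproduced with a toy doubling series: on a linear scale the late increments dwarf the early ones, while on a logarithmic scale every step is identical. A minimal sketch (the numbers are illustrative, not Kurzweil’s actual data):

```python
import math

# Illustrative exponential series: capacity doubling every period.
capacity = [2 ** n for n in range(11)]  # 1, 2, 4, ..., 1024

# On a linear scale the late jumps dwarf the early ones...
linear_steps = [b - a for a, b in zip(capacity, capacity[1:])]

# ...but on a log scale every step is identical: a straight line.
log_steps = [math.log2(b) - math.log2(a) for a, b in zip(capacity, capacity[1:])]

print(linear_steps[:3], linear_steps[-1])  # [1, 2, 4] 512
print(log_steps)                           # all exactly 1.0
```

The constant log-scale step is what makes a century of doublings look "predictable" when graphed logarithmically.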

We are still prone to underestimate the progress that is coming because it’s difficult to internalize the reality that we’re living in a world of exponential technological change. It is a fairly recent development. And it’s important to get an understanding of the massive scale of advancements that the technologies of the future will enable. Particularly now, as we’ve reached what Kurzweil calls the “Second Half of the Chessboard.”

Kurzweil illustrates this with the old fable of the inventor of chess, who asked the emperor to be paid in rice: one grain on the first square of the board, two on the second, four on the third, doubling on every square. (In the end the emperor realizes that he’s been tricked by exponents and has the inventor beheaded. In another version of the story, the inventor becomes the new emperor.)

It’s important to note that as the emperor and inventor went through the first half of the chessboard, things were fairly uneventful. The inventor was first given spoonfuls of rice, then bowls of rice, then barrels, and by the end of the first half of the chessboard the inventor had accumulated one large field’s worth — 4 billion grains — which is when the emperor started to take notice. It was only as they progressed through the second half of the chessboard that the situation quickly deteriorated.

# of Grains on 1st half: 4,294,967,295

# of Grains on 2nd half: 18,446,744,069,414,584,320
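The two totals follow directly from the doubling rule, since the grains form a geometric series: the first half sums to 2³² − 1 and the second to 2⁶⁴ − 2³². A quick check:

```python
# Grains on square n (1-indexed) = 2**(n-1); the totals are geometric sums.
first_half  = sum(2 ** (n - 1) for n in range(1, 33))   # squares 1-32
second_half = sum(2 ** (n - 1) for n in range(33, 65))  # squares 33-64

assert first_half  == 2 ** 32 - 1        # 4,294,967,295
assert second_half == 2 ** 64 - 2 ** 32  # 18,446,744,069,414,584,320

print(f"{first_half:,}")
print(f"{second_half:,}")
```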

Mind-bending nonlinear gains in computing are about to become far more tangible in our lifetime, as there have been slightly more than 32 doublings of performance since the first programmable computers were invented.

Kurzweil’s Predictions

Kurzweil is known for making mind-boggling predictions about the future. And his track record is pretty good.

“…Ray is the best person I know at predicting the future of artificial intelligence.” —Bill Gates

Ray’s predictions for the future may sound crazy (they do sound crazy), but it’s important to note that it’s not about the specific prediction or the exact year. What’s important to focus on is what the predictions represent. They are based on an understanding of Moore’s Law and Ray’s Law of Accelerating Returns, an awareness of the power of exponential growth, and an appreciation that information technology follows an exponential trend. They may sound crazy, but they are not pulled out of thin air.

And with that being said…

Second Half of the Chessboard Predictions

“By the 2020s, most diseases will go away as nanobots become smarter than current medical technology. Normal human eating can be replaced by nanosystems. The Turing test begins to be passable. Self-driving cars begin to take over the roads, and people won’t be allowed to drive on highways.”

“By the 2030s, virtual reality will begin to feel 100% real. We will be able to upload our mind/consciousness by the end of the decade.”



“By the 2040s, non-biological intelligence will be a billion times more capable than biological intelligence (a.k.a. us). Nanotech foglets will be able to make food out of thin air and create any object in physical world at a whim.”


“By 2045, we will multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud.”

Multiplying our intelligence a billionfold by linking our neocortex to a synthetic neocortex in the cloud — what does that actually mean?

In March 2014 Kurzweil gave an excellent talk at the TED Conference. It was appropriately called: Get ready for hybrid thinking.


These are the highlights:

Nanobots will connect our neocortex to a synthetic neocortex in the cloud, providing an extension of our neocortex.

Our thinking will then be a hybrid of biological and non-biological thinking (the non-biological portion is subject to the Law of Accelerating Returns, and it will grow exponentially).

The frontal cortex and neocortex are not really qualitatively different, so it’s a quantitative expansion of the neocortex (like adding processing power).

The last time we expanded our neocortex was about two million years ago. That additional quantity of thinking was the enabling factor for us to take a qualitative leap and advance language, science, art, technology, etc.

We’re going to expand our neocortex again, only this time it won’t be limited by a fixed architecture of enclosure. It will be expanded without limits, by connecting our brain directly to the cloud.

We already carry a supercomputer in our pocket. We have unlimited access to all the world’s knowledge at our fingertips. Keeping in mind that we are prone to underestimate technological advancements (and that 2045 is not a hard deadline), is it really that much of a stretch to imagine a future where we’re always connected directly from our brain?

Progress is underway. We’ll be able to reverse engineer the neocortex within five years. Kurzweil predicts that by 2030 we’ll be able to reverse engineer the entire brain. His latest book is called How to Create a Mind… This is the reason Google hired Kurzweil.

Hybrid Human Machines


“We’re going to become increasingly non-biological…”

“We’ll also have non-biological bodies…”

“If the biological part went away it wouldn’t make any difference…”

“They will be as realistic as real reality.”

Impact on Society

The technological singularity — “the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing civilization” — is beyond the scope of this article, but these advancements will absolutely have an impact on society. Which way is yet to be determined.

There may be some regret.

Politicians will not know who/what to regulate.

Evolution may take an unexpected twist.

The rich-poor gap will expand.

The unimaginable will become reality and society will change.

The Cathedral of Computation (The Atlantic)

We’re not living in an algorithmic culture so much as a computational theocracy.

Algorithms are everywhere, supposedly. We are living in an “algorithmic culture,” to use the author and communication scholar Ted Striphas’s name for it. Google’s search algorithms determine how we access information. Facebook’s News Feed algorithms determine how we socialize. Netflix’s and Amazon’s collaborative filtering algorithms choose products and media for us. You hear it everywhere. “Google announced a change to its algorithm,” a journalist reports. “We live in a world run by algorithms,” a TED talk exhorts. “Algorithms rule the world,” a news report threatens. Another upgrades rule to dominion: “The 10 Algorithms that Dominate Our World.”

Here’s an exercise: The next time you hear someone talking about algorithms, replace the term with “God” and ask yourself if the meaning changes. Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers people have allowed to replace gods in their minds, even as they simultaneously claim that science has made us impervious to religion.

It’s part of a larger trend. The scientific revolution was meant to challenge tradition and faith, particularly a faith in religious superstition. But today, Enlightenment ideas like reason and science are beginning to flip into their opposites. Science and technology have become so pervasive and distorted, they have turned into a new type of theology.

The worship of the algorithm is hardly the only example of the theological reversal of the Enlightenment—for another sign, just look at the surfeit of nonfiction books promising insights into “The Science of…” anything, from laughter to marijuana. But algorithms hold a special station in the new technological temple because computers have become our favorite idols.

In fact, our purported efforts to enlighten ourselves about algorithms’ role in our culture sometimes offer an unexpected view into our zealous devotion to them. The media scholar Lev Manovich had this to say about “The Algorithms of Our Lives”:

Software has become a universal language, the interface to our imagination and the world. What electricity and the combustion engine were to the early 20th century, software is to the early 21st century. I think of it as a layer that permeates contemporary societies.

This is a common account of algorithmic culture, that software is a fundamental, primary structure of contemporary society. And like any well-delivered sermon, it seems convincing at first. Until we think a little harder about the historical references Manovich invokes, such as electricity and the engine, and how selectively those specimens characterize a prior era. Yes, they were important, but is it fair to call them paramount and exceptional?

It turns out that we have a long history of explaining the present via the output of industry. These rationalizations are always grounded in familiarity, and thus they feel convincing. But mostly they are metaphors. Here’s Nicholas Carr’s take on metaphorizing progress in terms of contemporary technology, from the 2008 Atlantic cover story that he expanded into his bestselling book The Shallows:

The process of adapting to new intellectual technologies is reflected in the changing metaphors we use to explain ourselves to ourselves. When the mechanical clock arrived, people began thinking of their brains as operating “like clockwork.” Today, in the age of software, we have come to think of them as operating “like computers.”

Carr’s point is that there’s a gap between the world and the metaphors people use to describe that world. We can see how erroneous or incomplete or just plain metaphorical these metaphors are when we look at them in retrospect.

Take the machine. In his book Images of Organization, Gareth Morgan describes the way businesses are seen in terms of different metaphors, among them the organization as machine, an idea that forms the basis for Taylorism.

Gareth Morgan’s metaphors of organization (Venkatesh Rao/Ribbonfarm)

We can find similar examples in computing. For Larry Lessig, the accidental homophony between “code” as the text of a computer program and “code” as the text of statutory law becomes the fulcrum on which his argument that code is an instrument of social control balances.

Each generation, we reset a belief that we’ve reached the end of this chain of metaphors, even though history always proves us wrong precisely because there’s always another technology or trend offering a fresh metaphor. Indeed, an exceptionalism that favors the present is one of the ways that science has become theology.

In fact, Carr fails to heed his own lesson about the temporariness of these metaphors. Just after having warned us that we tend to render current trends into contingent metaphorical explanations, he offers a similar sort of definitive conclusion:

Today, in the age of software, we have come to think of them as operating “like computers.” But the changes, neuroscience tells us, go much deeper than metaphor. Thanks to our brain’s plasticity, the adaptation occurs also at a biological level.

As with the machinic and computational metaphors that he critiques, Carr settles on another seemingly transparent, truth-yielding one. The real firmament is neurological, and computers are futzing with our minds, a fact provable by brain science. And actually, software and neuroscience enjoy a metaphorical collaboration thanks to artificial intelligence’s idea that computing describes or mimics the brain. Computation-as-thought reaches the rank of religious fervor when we choose to believe, as some do, that we can simulate cognition through computation and achieve the singularity.

* * *

The metaphor of mechanical automation has always been misleading anyway, with or without the computation. Take manufacturing. The goods people buy from Walmart appear safely ensconced in their blister packs, as if magically stamped out by unfeeling, silent machines (robots—those original automata—themselves run by those tinier, immaterial robots: algorithms).

But the automation metaphor breaks down once you bother to look at how even the simplest products are really produced. The photographer Michael Wolf’s images of Chinese factory workers and the toys they fabricate show that finishing consumer goods to completion requires intricate, repetitive human effort.

Michael Wolf Photography

Eyelashes must be glued onto dolls’ eyelids. Mickey Mouse heads must be shellacked. Rubber ducky eyes must be painted white. The same sort of manual work is required to create more complex goods too. Like your iPhone—you know, the one that’s designed in California but “assembled in China.” Even though injection-molding machines and other automated devices help produce all the crap we buy, the metaphor of the factory-as-automated-machine obscures the fact that manufacturing is neither as machinic nor as automated as we think it is.

The algorithmic metaphor is just a special version of the machine metaphor, one specifying a particular kind of machine (the computer) and a particular way of operating it (via a step-by-step procedure for calculation). And when left unseen, we are able to invent a transcendental ideal for the algorithm. The canonical algorithm is not just a model sequence but a concise and efficient one. In its ideological, mythic incarnation, the ideal algorithm is thought to be some flawless little trifle of lithe computer code, processing data into tapestry like a robotic silkworm. A perfect flower, elegant and pristine, simple and singular. A thing you can hold in your palm and caress. A beautiful thing. A divine one.

But just as the machine metaphor gives us a distorted view of automated manufacture as prime mover, so the algorithmic metaphor gives us a distorted, theological view of computational action.

“The Google search algorithm” names something with an initial coherence that quickly scurries away once you really look for it. Googling isn’t a matter of invoking a programmatic subroutine—not on its own, anyway. Google is a monstrosity. It’s a confluence of physical, virtual, computational, and non-computational stuffs—electricity, data centers, servers, air conditioners, security guards, financial markets—just like the rubber ducky is a confluence of vinyl plastic, injection molding, the hands and labor of Chinese workers, the diesel fuel of ships and trains and trucks, the steel of shipping containers.

Once you start looking at them closely, every algorithm betrays the myth of unitary simplicity and computational purity. You may remember the Netflix Prize, a million-dollar competition to build a better collaborative filtering algorithm for film recommendations. In 2009, the company closed the book on the prize, adding a faux-machined “completed” stamp to its website.

But as it turns out, that method didn’t really improve Netflix’s performance very much. The company ended up downplaying the ratings and instead using something different to manage viewer preferences: very specific genres like “Emotional Hindi-Language Movies for Hopeless Romantics.” Netflix calls them “altgenres.”

An example of a Netflix altgenre in action (tumblr/Genres of Netflix)

While researching an in-depth analysis of altgenres published a year ago at The Atlantic, Alexis Madrigal scraped the Netflix site, downloading all 76,000+ micro-genres using not an algorithm but a hackneyed, long-running screen-scraping apparatus. After acquiring the data, Madrigal and I organized and analyzed it (by hand), and I built a generator that allowed our readers to fashion their own altgenres based on different grammars (like “Deep Sea Forbidden Love Mockumentaries” or “Coming-of-Age Violent Westerns Set in Europe About Cats”).
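The grammar-based generator described above simply fills fixed slots with words from small vocabularies. A toy sketch of the idea (the slot names and word lists here are invented for illustration, not The Atlantic’s actual grammar):

```python
import random

# Hypothetical slot vocabularies -- placeholders, not the real generator's lists.
GRAMMAR = {
    "adjective": ["Emotional", "Violent", "Forbidden-Love"],
    "genre":     ["Westerns", "Mockumentaries", "Dramas"],
    "modifier":  ["Set in Europe", "About Cats", "for Hopeless Romantics"],
}

def make_altgenre(rng: random.Random) -> str:
    """Fill each grammar slot with a random word and join the results."""
    return " ".join(rng.choice(GRAMMAR[slot])
                    for slot in ("adjective", "genre", "modifier"))

print(make_altgenre(random.Random(0)))  # one random slot-filled combination
```

Each call produces a phrase like the “gonzo” altgenres readers could fashion; the point is that the structure is a fixed template, not anything learned.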

Netflix VP Todd Yellin explained to Madrigal why the process of generating altgenres is no less manual than our own process of reverse engineering them. Netflix trains people to watch films, and those viewers laboriously tag the films with lots of metadata, including ratings of factors like sexually suggestive content or plot closure. These tailored altgenres are then presented to Netflix customers based on their prior viewing habits.

One of the hypothetical, “gonzo” altgenres created by The Atlantic‘s Netflix Genre Generator (The Atlantic)

Despite the initial promise of the Netflix Prize and the lurid appeal of a “million dollar algorithm,” Netflix operates by methods that look more like the Chinese manufacturing processes Michael Wolf’s photographs document. Yes, there’s a computer program matching viewing habits to a database of film properties. But the overall work of the Netflix recommendation system is distributed amongst so many different systems, actors, and processes that only a zealot would call the end result an algorithm.

The same could be said for data, the material algorithms operate upon. Data has become just as theologized as algorithms, especially “big data,” whose name is meant to elevate information to the level of celestial infinity. Today, conventional wisdom would suggest that mystical, ubiquitous sensors are collecting data by the terabyteful without our knowledge or intervention. Even if this is true to an extent, examples like Netflix’s altgenres show that data is created, not simply aggregated, and often by means of laborious, manual processes rather than anonymous vacuum-devices.

Once you adopt skepticism toward the algorithmic- and the data-divine, you can no longer construe any computational system as merely algorithmic. Think about Google Maps, for example. It’s not just mapping software running via computer—it also involves geographical information systems, geolocation satellites and transponders, human-driven automobiles, roof-mounted panoramic optical recording systems, international recording and privacy law, physical- and data-network routing systems, and web/mobile presentational apparatuses. That’s not algorithmic culture—it’s just, well, culture.

* * *

If algorithms aren’t gods, what are they instead? Like metaphors, algorithms are simplifications, or distortions. They are caricatures. They take a complex system from the world and abstract it into processes that capture some of that system’s logic and discard others. And they couple to other processes, machines, and materials that carry out the extra-computational part of their work.

Unfortunately, most computing systems don’t want to admit that they are burlesques. They want to be innovators, disruptors, world-changers, and such zeal requires sectarian blindness. The exception is games, which willingly admit that they are caricatures—and which suffer the consequences of this admission in the court of public opinion. Games know that they are faking it, which makes them less susceptible to theologization. SimCity isn’t an urban planning tool, it’s a cartoon of urban planning. Imagine the folly of thinking otherwise! Yet, that’s precisely the belief people hold of Google and Facebook and the like.

A Google Maps Street View vehicle roams the streets of Washington D.C. Google Maps entails algorithms, but also other things, like internal combustion engine automobiles. (justgrimes/Flickr)

Just as it’s not really accurate to call the manufacture of plastic toys “automated,” it’s not quite right to call Netflix recommendations or Google Maps “algorithmic.” Yes, true, there are algorithms involved, insofar as computers are involved, and computers run software that processes information. But that’s just a part of the story, a theologized version of the diverse, varied array of people, processes, materials, and machines that really carry out the work we shorthand as “technology.” The truth is as simple as it is uninteresting: The world has a lot of stuff in it, all bumping and grinding against one another.

I don’t want to downplay the role of computation in contemporary culture. Striphas and Manovich are right—there are computers in and around everything these days. But the algorithm has taken on a particularly mythical role in our technology-obsessed era, one that has allowed it to wear the garb of divinity. Concepts like “algorithm” have become sloppy shorthands, slang terms for the act of mistaking multipart complex systems for simple, singular ones. Of treating computation theologically rather than scientifically or culturally.

This attitude blinds us in two ways. First, it allows us to chalk up any kind of computational social change as pre-determined and inevitable. It gives us an excuse not to intervene in the social shifts wrought by big corporations like Google or Facebook or their kindred, to see their outcomes as beyond our influence. Second, it makes us forget that particular computational systems are abstractions, caricatures of the world, one perspective among many. The first error turns computers into gods, the second treats their outputs as scripture.

Computers are powerful devices that have allowed us to mimic countless other machines all at once. But in so doing, when pushed to their limits, that capacity to simulate anything reverses into the inability or unwillingness to distinguish one thing from anything else. In its Enlightenment incarnation, the rise of reason represented not only the ascendency of science but also the rise of skepticism, of incredulity at simplistic, totalizing answers, especially answers that made appeals to unseen movers. But today even as many scientists and technologists scorn traditional religious practice, they unwittingly invoke a new theology in so doing.

Algorithms aren’t gods. We need not believe that they rule the world in order to admit that they influence it, sometimes profoundly. Let’s bring algorithms down to earth again. Let’s keep the computer around without fetishizing it, without bowing down to it or shrugging away its inevitable power over us, without melting everything down into it as a new name for fate. I don’t want an algorithmic culture, especially if that phrase just euphemizes a corporate, computational theocracy.

But a culture with computers in it? That might be all right.

Intelligent robots could bring about the end of the human race, says Stephen Hawking (Folha de S.Paulo)

SALVADOR NOGUEIRA

SPECIAL TO FOLHA

16/12/2014 02h03

British physicist Stephen Hawking is stirring things up again. In an interview with the BBC, he warned of the dangers of developing superintelligent machines.

“The primitive forms of artificial intelligence we have now have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race,” the scientist said.

He echoes a growing number of experts (from philosophers to technologists) who point to the uncertainties raised by the development of thinking machines.

Alex Argozino/Editoria de Arte/Folhapress
Robot

Recently, another luminary to speak out was Elon Musk, the South African who made his fortune creating an internet payment system and who now develops rockets and spacecraft for the American space program.

In October, speaking to students at MIT (Massachusetts Institute of Technology), he issued a similar warning.

“I think we have to be very careful with artificial intelligence. If I had to guess what our biggest existential threat is, it’s probably that.”

For Musk, the matter is so serious that he believes control mechanisms need to be developed, perhaps at the international level, “just to make sure we don’t do something really foolish.”

SUPERINTELLIGENCE

The concern goes back a long way. In 1965, Gordon Moore, co-founder of Intel, noticed that computer capacity doubled roughly every two years.

Because the effect is exponential, in a short time we went from modest calculating machines to supercomputers capable of simulating the evolution of the Universe. No small feat.

Computers have not yet surpassed the processing capacity of the human brain. But only just.

“The brain as a whole performs about 10,000 trillion operations per second,” says physicist Paul Davies of Arizona State University. “The fastest computer reaches 360 trillion, so nature is still ahead. But not for long.”
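The gap Davies describes can be expressed in doublings: dividing the brain’s ~10,000 trillion operations per second by the computer’s 360 trillion and taking the base-2 logarithm gives the number of Moore’s-law doublings needed to close it. A back-of-the-envelope sketch (the two-year doubling cadence is the article’s assumption):

```python
import math

BRAIN_OPS    = 10_000e12  # ~10,000 trillion operations/second (Davies' figure)
COMPUTER_OPS = 360e12     # fastest computer cited: 360 trillion ops/second

doublings = math.log2(BRAIN_OPS / COMPUTER_OPS)  # doublings needed to catch up
years = doublings * 2                            # one doubling every ~2 years

print(f"{doublings:.1f} doublings, ~{years:.0f} years")  # 4.8 doublings, ~10 years
```

Fewer than five doublings separate the two figures, which is why Davies says nature will not stay ahead "for long."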

Some technologists celebrate this imminent overtaking, such as the American inventor Ray Kurzweil, who has been working in partnership with Google to advance the field of AI (artificial intelligence).

He estimates that machines with intellectual capacity similar to a human’s will appear in 2029. That is roughly the time frame Musk imagines for the threat to emerge.

“Artificial intelligence will take off on its own, redesigning itself at an ever-increasing rate,” Hawking suggested.

The result: not only would machines be more intelligent than us, they would also be constantly improving themselves. If they develop consciousness, what will they do with us?

Kurzweil prefers to think they will help us solve social problems and integrate into civilization. But even he admits there are no guarantees. “I think the best defense is to cultivate values such as democracy, tolerance, and freedom,” he told Folha.

In his view, machines created in that environment would learn the same values. “It’s not a foolproof strategy,” says Kurzweil. “But it’s the best we can do.”

While Musk suggests controls on the technology, Kurzweil believes we have already passed the point of no return: we are on our way to the “technological singularity,” when AI will radically alter civilization.

Young ‘biohackers’ implant chips in their hands to open their front doors (Folha de S.Paulo)

LETÍCIA MORI

FROM SÃO PAULO

07/12/2014 02h00

Paulo Cesar Saito, 27, no longer uses a key to enter his apartment in Pinheiros. Since last month, the door has “recognized” when he arrives. He just holds his palm in front of the lock and it opens.

The magic lies in the chip he implanted in his own hand (with the help of a friend who studies medicine). Slightly larger than a grain of rice, the chip carries radio-frequency identification technology. When it comes close, a base unit on the door triggers a preprogrammed action. In this case, opening the lock.
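The mechanism described is standard RFID access control: a reader polls for a nearby tag, checks its unique ID against a stored whitelist, and fires an action on a match. A minimal sketch of that logic (the tag IDs and functions are invented placeholders, not Saito’s actual setup):

```python
# Hypothetical whitelist of implanted-chip IDs -- placeholder values only.
AUTHORIZED_TAGS = {"04:A2:2E:5B:80"}

def unlock_door() -> str:
    """Stand-in for driving the electronic lock hardware."""
    return "door unlocked"

def on_tag_read(tag_id: str) -> str:
    """Called by the reader each time a tag comes into range."""
    if tag_id in AUTHORIZED_TAGS:
        return unlock_door()
    return "access denied"

print(on_tag_read("04:A2:2E:5B:80"))  # door unlocked
print(on_tag_read("FF:FF:FF:FF:FF"))  # access denied
```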

Installing technological modifications in one’s own body is one of the activities of a movement that emerged in the US in 2008 and is known around the world as biohacking: getting involved in biology experiments outside large laboratories.

They are basically the same nerds who build electronic contraptions in the garage and burrow deep into computer systems. Only now they are venturing into the field of biotechnology.

DIYBio (do-it-yourself biology) groups import concepts from the hacker movement: access to information, sharing of knowledge, and simple, cheap solutions for improving life. And they are open to amateur scientists: undergraduates, or people not necessarily trained in biology.

Saito, for example, began studying physics and meteorology at USP but now devotes himself entirely to his technology start-up. His involvement with biohacking comes down to body modifications; he also plans to install a small magnet in his finger. “Since I work with electronic equipment, I get a lot of shocks. The magnet lets you feel magnetic fields, so you avoid the shock,” he says.

His business partner, Erico Perrella, 23, an undergraduate in environmental chemistry at USP, is one of the main DIYBio enthusiasts in São Paulo. He too has a tiny scar from the chip he installed together with his friend. The little device is 12 mm long and has a biocompatible coating so the body does not reject it. The coating keeps the chip from shifting position and, since it does not adhere to internal tissue, makes removal easy. Perrella is also one of the organizers of a DIYBio group that meets every Monday.

The movement is only getting started in the São Paulo capital, but worldwide it is already drawing attention: there are labs in about 50 cities, most in the US and Europe. Perrella’s group is working to set up São Paulo’s first DIYBio “wetlab”: a sterile space with equipment designed for biological materials.

They meet at Garoa Hacker Clube, a space for technology enthusiasts. The venue, however, has infrastructure geared toward hardware and electronics projects. “A wetlab needs a clean area, which looks more like a kitchen than a workshop,” says chemistry student Otto Werner Heringer, 24, a member of the group. “Garoa already has an area like that; our idea is to bring more equipment in and leave it [on site].”

Taking advantage of “geek” spaces is common in the movement. Amsterdam’s Open Wetlab, for example, began as part of the Waag Society, a non-profit institute that promotes art, science, and technology.

Certain experiments require complex equipment that can cost thousands of dollars. “The solution is to build some things ourselves and repair old equipment the university was going to throw away,” Heringer explains.

Many biohackers spend more time building equipment than running experiments. Heringer has made a centrifuge from a 3D-printed part fitted onto a power drill. He is now assembling a cell counter. With help from friends, Perrella built bioreactors out of material recycled from a mining company.

For these enthusiastic young people, doing science outside academia or industry has big advantages.

Away from the university’s painstaking oversight, projects can be developed without the approval of assorted committees and boards. “The [academic] environment is very rigid. You get discouraged,” says Heringer.

The amateurs’ work even ends up contributing to “formal” science. With friends, Heringer is building an automatic pipetting machine at the InovaLab of USP’s Polytechnic School, based on a DIYBio design and financed by an alumni fund. “We would never get funding through USP’s normal channels. And if we did, it would take forever!” he says.

SAFETY

Broad access raises concerns: couldn’t amateur labs create harmful organisms? Advocates reply that DIYBio practitioners have every interest in keeping everything within safety standards: if something goes wrong, oversight will tighten and make life harder.

There is no regulation of amateur laboratories in Brazil. In the US, the FBI monitors the movement and there are restrictions on the use of some materials, but no specific regulation.

The French scientist Thomas Landrain, who studies the movement, argues in his research that the spaces are not yet sophisticated enough to cause problems.

But despite the technical limitations, the labs open up countless possibilities. “Those who take part have a deep belief in the transformative potential of these new technologies,” explains Perrella, who has a project on mining with bacteria. Some groups focus on health, creating contamination sensors for food or “biological maps” that can track the progress of diseases.

It is also possible to work with DNA barcoding, a method that identifies which species a piece of tissue belongs to. “You could check what kind of meat is in the esfiha at Habib’s,” says Perrella, citing a meat-analysis experiment already under way at the Open Wetlab in Amsterdam. You can even find out which neighbor doesn’t pick up after their dog. That is what the German Sascha Karberg did, comparing hair from neighborhood dogs with the “gift” left at his door. The method used in projects like this can be picked up by other biohackers. The risk is more quarrels between neighbors.
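DNA barcoding identifies a species by comparing a short marker sequence from the sample against a reference library and reporting the closest match. A toy sketch of the matching step (the sequences are invented and far shorter than real ~650-base-pair barcodes):

```python
# Invented, drastically shortened 'barcode' sequences -- illustration only.
REFERENCE = {
    "Bos taurus (cattle)":     "ATGGCATTCCT",
    "Sus scrofa (pig)":        "ATGGCTTTACT",
    "Gallus gallus (chicken)": "ATGACGTTCCA",
}

def identity(a: str, b: str) -> float:
    """Fraction of positions where two equal-length sequences agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def identify(sample: str) -> str:
    """Return the reference species with the highest sequence identity."""
    return max(REFERENCE, key=lambda sp: identity(sample, REFERENCE[sp]))

print(identify("ATGGCATTCCA"))  # Bos taurus (cattle)
```

Real pipelines use alignment tools and databases such as BOLD rather than naive position-by-position comparison, but the principle is the same.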

“What is ecological engineering?” (Inhabiting the Anthropocene)

“What is ecological engineering?”
by Ingo Schlupp

CITATION:
Mitsch, W.J. 2012. Ecological Engineering, Vol. 45, pp. 5-12.
ON-LINE AVAILABILITY:
doi:10.1016/j.ecoleng.2012.04.013

ABSTRACT:
Ecological engineering, defined as the design of sustainable ecosystems that integrate human society with its natural environment for the benefit of both, has developed over the last 30 years, and rapidly over the last 10 years. Its goals include the restoration of ecosystems that have been substantially disturbed by human activities and the development of new sustainable ecosystems that have both human and ecological values. It is especially needed as conventional energy sources diminish and amplification of nature's ecosystem services is needed even more. There are now several universities developing academic programs or departments called ecological engineering, ecological restoration, or similar terms; the number of manuscripts submitted to the journal Ecological Engineering continues to increase at a rapid rate; and the U.S. National Science Foundation now has a specific research focus area called ecological engineering. Many private firms are now developing, and even prospering, by specializing in the restoration of streams, rivers, lakes, forests, grasslands, and wetlands, the rehabilitation of minelands and urban brownfields, and the creation of treatment wetlands and phytoremediation sites. It appears that the perfect synchronization of academy, publishing, research resources, and practice is beginning to develop. Yet the field still does not have a formal accreditation in engineering and receives guarded acceptance in the university system and workplace alike.
William Mitsch is one of the founders of the field of Ecological Engineering, which specializes in managing and restoring ecosystems. There seems to be an obvious connection between the Anthropocene idea and this relatively new field. The Mitsch paper is a good place to start to understand the effort to be more deliberate and thoughtful about the ways we intervene in natural systems—something that has run amok in the Anthropocene.

But it is important to me to put Ecological Engineering into a biological context. One of the key concepts that comes to a biologist's mind when we think of the Anthropocene is how almost any organism manipulates its environment. (Zev Trachtenberg has posted on the related idea of “niche construction.”) This is sometimes an apparent byproduct of physiological functions, like plants releasing oxygen into the air (thereby making the planet hospitable to most animals), or a very clear, active manipulation, like the beaver dam that creates a pond. The pond directly serves the beavers, but many organisms benefit from the existence of the novel pond. Others drown, of course. This kind of large-scale and far-reaching effect is classified as ecosystem engineering and has become a key concept in ecology. We now recognize that ecosystem engineering has many consequences, including a large increase in species richness. (In the Further Reading section I list a recent meta-analysis by Romero et al. in the highly respected journal Biological Reviews which just made this point.)

So animals manipulate their environment all the time; what about humans? How are our efforts different? Often we simply mimic nature: we put artificial reefs in place of natural ones. These fake reefs have some of the same functions as natural reefs built by corals, mainly providing hard substrate for other animals to grow upon. But because corals provide more than just a substrate and are a living, breathing part of the reef, other functions cannot be mimicked.

Humans have taken ecosystem engineering to a new dimension, partly creating the very Anthropocene we are discussing here. Like almost every other species on the planet, our own species has altered the environment from Day 1, but when did we cross the threshold and become the masters of ecosystem engineering? Was it the invention of agriculture? Or some other milestone in the evolution of humanity?

Whenever it was, for our own species ecosystem engineering is obviously now very active and has resulted in planet-wide alterations. This leads me back to Ecological Engineering: it is an applied science, pioneered by Mitsch, who has promoted it since the early 1990s. What is intriguing about this field is that it is by definition transdisciplinary, but it suffers from a problem that all transdisciplinary approaches have, namely limited acceptance in the “pure” fields.

It is necessary for us to realize that Ecosystem Engineering, when done by humans, has a moral and political dimension to it, but an engineering approach has additional aspects to think about: Engineering might be a misleading term, as it implies that we have control over all the moving parts. The science of Ecology is far from having a complete understanding of the dynamics that govern ecosystems; can we manage something we don't understand all that well? At the same time, we may have already altered all “natural” systems to a point where we are unable to research them as if they were natural. Maybe this is the biological version of Heisenberg's uncertainty principle.

FURTHER READING:
Mitsch, W.J., 1993. Ecological engineering—a cooperative role with the planetary life–support systems. Environmental Science and Technology, 27, 438–445. DOI: 10.1021/es00040a600. One of Mitsch’s early papers that helped launch the field.

Romero, G.Q. et al. 2014. Ecosystem engineering effects on species diversity across ecosystems: a meta-analysis. Biological Reviews, DOI: 10.1111/brv.12138. This paper argues that ecosystem engineering increases the number of species, but the effects depend e.g. on latitude (they are stronger in the tropics) and other factors.

Geoengineering Gone Wild: Newsweek Touts Turning Humans Into Hobbits To Save Climate (Climate Progress)

POSTED ON DECEMBER 5, 2014 AT 9:37 AM

Matamata, New Zealand - "Hobbiton," site created for filming Hollywood blockbusters The Hobbit and Lord of the Rings.

A Newsweek cover story touts genetically engineering humans to be smaller, with better night vision (like, say, hobbits) to save the Earth. CREDIT: SHUTTERSTOCK

Newsweek has an entire cover story devoted to raising the question, “Can Geoengineering Save the Earth?” After reading it, though, you may not realize the answer is a resounding “no.” In part that’s because Newsweek manages to avoid quoting even one of the countless general critics of geoengineering in its 2700-word (!) piece.

Geoengineering is not a well-defined term, but at its broadest, it is the large-scale manipulation of the Earth and its biosphere to counteract the effects of human-caused global warming. Global warming itself is geoengineering — originally unintentional, but now, after decades of scientific warnings, not so much.

I have likened geoengineering to a dangerous, never tested, course of chemotherapy prescribed to treat a condition curable through diet and exercise — or, in this case, greenhouse gas emissions reduction. If your actual doctor were to prescribe such a treatment, you would get another doctor.

The media likes geoengineering stories because they are clickbait involving all sorts of eye-popping science fiction (non)solutions to climate change that don’t actually require anything of their readers (or humanity) except infinite credulousness. And so Newsweek informs us that adorable ants might solve the problem or maybe phytoplankton can if given Popeye-like superstrength with a diet of iron or, as we’ll see, maybe we humans can, if we allow ourselves to be turned into hobbit-like creatures. The only thing they left out was time-travel.

The author does talk to an unusually sober expert supporter of geoengineering, climatologist Ken Caldeira. Caldeira knows that of all the proposed geoengineering strategies, only one makes even the tiniest bit of sense — and he knows even that one doesn’t make much sense. That would be the idea of spewing vast amounts of tiny particulates (sulfate aerosols) into the atmosphere to block sunlight, mimicking the global temperature drops that follow volcanic eruptions. But they note the caveat: “that said, Caldeira doesn’t believe any method of geoengineering is really a good solution to fighting climate change — we can’t test them on a large scale, and implementing them blindly could be dangerous.”

Actually, it’s worse than that. As Caldeira told me in 2009, “If we keep emitting greenhouse gases with the intent of offsetting the global warming with ever increasing loadings of particles in the stratosphere, we will be heading to a planet with extremely high greenhouse gases and a thick stratospheric haze that we would need to maintain more-or-less indefinitely. This seems to be a dystopic world out of a science fiction story.”

And the scientific literature has repeatedly explained that the aerosol-cooling strategy — or indeed any large-scale effort to manipulate sunlight — is very dangerous. Just last month, the UK Guardian reported that the aerosol strategy “risks ‘terrifying’ consequences including droughts and conflicts,” according to recent studies.

“Billions of people would suffer worse floods and droughts if technology was used to block warming sunlight, the research found.”

And remember, this dystopic world where billions suffer is the best geoengineering strategy out there. And it still does nothing to stop the catastrophic acidification of the ocean.

There simply is no rational or moral substitute for aggressive greenhouse gas cuts. But Newsweek quickly dispenses with that supposedly “seismic shift in what has become a global value system” so it can move on to its absurdist “reimagining of what it means to be human”:

In a paper released in 2012, S. Matthew Liao, a philosopher and ethicist at New York University, and some colleagues proposed a series of human-engineering projects that could make our very existence less damaging to the Earth. Among the proposals were a patch you can put on your skin that would make you averse to the flavor of meat (cattle farms are a notorious producer of the greenhouse gas methane), genetic engineering in utero to make humans grow shorter (smaller people means fewer resources used), technological reengineering of our eyeballs to make us better at seeing at night (better night vision means lower energy consumption)….

Yes, let’s turn humans into hobbits (who are “about 3 feet tall” and “their night vision is excellent“). Anyone can see that could easily be done for billions of people in the timeframe needed to matter. Who could imagine any political or practical objection?

Now you may be thinking that Newsweek can’t possibly be serious devoting ink to such nonsense. But if not, how did the last two paragraphs of the article make it to print:

Geoengineering, Liao argues, doesn’t address the root cause. Remaking the planet simply attempts to counteract the damage that’s been done, but it does nothing to stop the burden humans put on the planet. “Human engineering is more of an upstream solution,” says Liao. “You get right to the source. If we’re smaller on average, then we can have a smaller footprint on the planet. You’re looking at the source of the problem.”

It might be uncomfortable for humans to imagine intentionally getting smaller over generations or changing their physiology to become averse to meat, but why should seeding the sky with aerosols be any more acceptable? In the end, these are all actions we would enact only in worst-case scenarios. And when we’re facing the possible devastation of all mankind, perhaps a little humanity-wide night vision won’t seem so dramatic.

Memo to Newsweek: We are already facing the devastation of all mankind. And science has already provided the means of our “rescue,” the means of reducing “the burden humans put on the planet” — the myriad carbon-free energy technologies that reduce greenhouse gas emissions. Perhaps LED lighting would make a slightly more practical strategy than reengineering our eyeballs, though perhaps not one dramatic enough to inspire one of your cover stories.

As Caldeira himself has said elsewhere of geoengineering, “I think that 99% of our effort to avoid climate change should be put on emissions reduction, and 1% of our effort should be looking into these options.” So perhaps Newsweek will consider 99 articles on the real solutions before returning to the magical thinking of Middle Earth.

Doctors who ‘resurrect the dead’ want to test technique on humans (BBC)

The technique for extending lives by a few hours has never been tested on humans

“When your body is at a temperature of 10 degrees, with no brain activity, no heartbeat and no blood, the consensus is that you are dead,” says Professor Peter Rhee, of the University of Arizona. “And yet, we can still bring you back.”

Rhee is not exaggerating. Together with Samuel Tisherman, of the University of Maryland in the United States, he has shown that it is possible to keep the body in a “suspended” state for hours.

The procedure has already been tested on animals, and it is as radical as they come. It involves draining all of the blood from the body and cooling the body to 20 degrees below its normal temperature.

Once the problem in the patient's body has been fixed, the blood is pumped back in, slowly rewarming the system. When the blood temperature reaches 30 degrees, the heart starts beating again.

The animals subjected to this test showed few side effects on waking. “They are a bit groggy for a while, but by the next day they are fine,” says Tisherman.

Tests on humans

Tisherman caused an international stir this year when he announced that he is ready to carry out trials on humans. The first subjects would be gunshot victims in Pittsburgh, Pennsylvania.

These would be patients whose hearts have already stopped beating and who would have no chance of surviving by conventional techniques. The American doctor fears that inaccurate headlines in the press have created a mistaken idea of his research.

Peter Rhee helped create the pioneering technique, which involves draining the patient's blood

“When people think about the subject, they think of space travelers being frozen and woken up on Jupiter, or of Han Solo in Star Wars,” says Tisherman.

“That doesn't help, because it is important for people to know that this is not science fiction.”

Efforts to bring people back from what is believed to be death have been under way for decades. Tisherman began his studies with Peter Safar, who in the 1960s pioneered the technique of cardiopulmonary resuscitation (CPR). With chest compressions, the heart can be kept artificially active for a time.

“We were always brought up to believe that death is an absolute moment, and that when we die there is no coming back,” says Sam Parnia, of the State University of New York.

“With the basic discovery of CPR we came to understand that the cells of the body take hours to reach irreversible death. Even after you have become a corpse, there is still a way to rescue you.”

Recently, a 40-year-old man in Texas survived for three and a half hours on CPR.

According to the doctors on duty, “everyone with two arms was called in to take turns doing compressions on the patient's chest.”

During the compressions he remained conscious and talked with the doctors, but had the procedure been interrupted, he would have died. He eventually recovered and survived.

This case of resuscitation over such a long period only worked because there was no major injury to the patient's body. But that is rare.

‘Limbo’

The technique now being developed by Tisherman is based on the idea that low temperatures keep the body alive for longer, around one or two hours.

The blood is drained and replaced with a saline solution that helps lower the body's temperature to around 10 to 15 degrees Celsius.

In experiments with pigs, about 90% of the animals recovered when the blood was pumped back in. Each one spent more than an hour in “limbo.”

The chest-compression technique already helps extend the lives of people in cardiac arrest

“It is one of the most incredible things to watch: when the heart starts beating again,” says Rhee.

After the operation, a series of tests was carried out to check for brain damage. Apparently none of the pigs showed problems.

The challenge of obtaining permission to test on humans has been enormous. Tisherman and Rhee have finally been granted permission to test their technique on gunshot victims in Pittsburgh.

One of the problems to be overcome is how patients will cope with another person's blood. The pigs got their own blood back, but in humans it will be necessary to use stock from the blood bank.

If it works, the doctors believe the technique could be applied not only to victims of injuries such as gunshot and stab wounds, but also to people who have suffered cardiac arrest.

The research is also leading to further studies on which chemical solution would be best for reducing the human body's metabolism.

Read the original English-language version of this report on the BBC Future website.

Direct brain interface between humans (Science Daily)

Date: November 5, 2014

Source: University of Washington

Summary: Researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

In this photo, UW students Darby Losey, left, and Jose Ceballos are positioned in two different buildings on campus as they would be during a brain-to-brain interface demonstration. The sender, left, thinks about firing a cannon at various points throughout a computer game. That signal is sent over the Web directly to the brain of the receiver, right, whose hand hits a touchpad to fire the cannon. Credit: Mary Levin, University of Washington

Sometimes, words just complicate things. What if our brains could communicate directly with each other, bypassing the need for language?

University of Washington researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

At the time of the first experiment in August 2013, the UW team was the first to demonstrate two human brains communicating in this way. The researchers then tested their brain-to-brain interface in a more comprehensive study, published Nov. 5 in the journal PLOS ONE.

“The new study brings our brain-to-brain interfacing paradigm from an initial demonstration to something that is closer to a deliverable technology,” said co-author Andrea Stocco, a research assistant professor of psychology and a researcher at UW’s Institute for Learning & Brain Sciences. “Now we have replicated our methods and know that they can work reliably with walk-in participants.”

Collaborator Rajesh Rao, a UW associate professor of computer science and engineering, is the lead author on this work.

The research team combined two kinds of noninvasive instruments and fine-tuned software to connect two human brains in real time. The process is fairly straightforward. One participant is hooked to an electroencephalography machine that reads brain activity and sends electrical pulses via the Web to the second participant, who is wearing a swim cap with a transcranial magnetic stimulation coil placed near the part of the brain that controls hand movements.

Using this setup, one person can send a command to move the hand of the other by simply thinking about that hand movement.
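As a rough illustration of the sender side of such a setup, the sketch below reduces an EEG window to one motor-imagery score, thresholds it, and forwards a “fire” command over the network. The feature extraction, the threshold, the host name, and the message format are all assumptions for illustration, not the UW team's actual code:

```python
# Minimal sketch of the sender side of a brain-to-brain link: threshold a
# motor-imagery score derived from EEG and forward a "fire" command over TCP.
# The receiver side would translate the command into a TMS pulse over the
# hand area of motor cortex. All signal-processing details are placeholders.
import json
import socket

FIRE_THRESHOLD = 0.7  # assumed normalized motor-imagery score


def motor_imagery_score(eeg_window):
    """Placeholder: reduce an EEG sample window to one score.

    Real systems would band-pass filter and extract mu/beta-band power;
    here we just take the mean absolute amplitude.
    """
    return sum(abs(x) for x in eeg_window) / len(eeg_window)


def sender_loop(eeg_stream, host="receiver.example.org", port=9999):
    """Stream EEG windows; send a command whenever imagery is detected."""
    with socket.create_connection((host, port)) as conn:
        for window in eeg_stream:
            if motor_imagery_score(window) > FIRE_THRESHOLD:
                conn.sendall(json.dumps({"cmd": "fire"}).encode() + b"\n")
```

The sub-second latency the study reports would come from keeping the EEG windows short and the network hop direct.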

The UW study involved three pairs of participants. Each pair included a sender and a receiver with different roles and constraints. They sat in separate buildings on campus about a half mile apart and were unable to interact with each other in any way — except for the link between their brains.

Each sender was in front of a computer game in which he or she had to defend a city by firing a cannon and intercepting rockets launched by a pirate ship. But because the senders could not physically interact with the game, the only way they could defend the city was by thinking about moving their hand to fire the cannon.

Across campus, each receiver sat wearing headphones in a dark room — with no ability to see the computer game — with the right hand positioned over the only touchpad that could actually fire the cannon. If the brain-to-brain interface was successful, the receiver’s hand would twitch, pressing the touchpad and firing the cannon that was displayed on the sender’s computer screen across campus.

Researchers found that accuracy varied among the pairs, ranging from 25 to 83 percent. Misses mostly were due to a sender failing to accurately execute the thought to send the “fire” command. The researchers also were able to quantify the exact amount of information that was transferred between the two brains.

Another research team from the company Starlab in Barcelona, Spain, recently published results in the same journal showing direct communication between two human brains, but that study only tested one sender brain instead of different pairs of study participants and was conducted offline instead of in real time over the Web.

Now, with a new $1 million grant from the W.M. Keck Foundation, the UW research team is taking the work a step further in an attempt to decode and transmit more complex brain processes.

With the new funding, the research team will expand the types of information that can be transferred from brain to brain, including more complex visual and psychological phenomena such as concepts, thoughts and rules.

They’re also exploring how to influence brain waves that correspond with alertness or sleepiness. Eventually, for example, the brain of a sleepy airplane pilot dozing off at the controls could stimulate the copilot’s brain to become more alert.

The project could also eventually lead to “brain tutoring,” in which knowledge is transferred directly from the brain of a teacher to a student.

“Imagine someone who’s a brilliant scientist but not a brilliant teacher. Complex knowledge is hard to explain — we’re limited by language,” said co-author Chantel Prat, a faculty member at the Institute for Learning & Brain Sciences and a UW assistant professor of psychology.

Other UW co-authors are Joseph Wu of computer science and engineering; Devapratim Sarma and Tiffany Youngquist of bioengineering; and Matthew Bryan, formerly of the UW.

The research published in PLOS ONE was initially funded by the U.S. Army Research Office and the UW, with additional support from the Keck Foundation.


Journal Reference:

  1. Rajesh P. N. Rao, Andrea Stocco, Matthew Bryan, Devapratim Sarma, Tiffany M. Youngquist, Joseph Wu, Chantel S. Prat. A Direct Brain-to-Brain Interface in Humans. PLoS ONE, 2014; 9 (11): e111332 DOI: 10.1371/journal.pone.0111332

Cockroach cyborgs use microphones to detect, trace sounds (Science Daily)

Date: November 6, 2014

Source: North Carolina State University

Summary: Researchers have developed technology that allows cyborg cockroaches, or biobots, to pick up sounds with small microphones and seek out the source of the sound. The technology is designed to help emergency personnel find and rescue survivors in the aftermath of a disaster.


North Carolina State University researchers have developed technology that allows cyborg cockroaches, or biobots, to pick up sounds with small microphones and seek out the source of the sound. The technology is designed to help emergency personnel find and rescue survivors in the aftermath of a disaster. Credit: Eric Whitmire.

North Carolina State University researchers have developed technology that allows cyborg cockroaches, or biobots, to pick up sounds with small microphones and seek out the source of the sound. The technology is designed to help emergency personnel find and rescue survivors in the aftermath of a disaster.

The researchers have also developed technology that can be used as an “invisible fence” to keep the biobots in the disaster area.

“In a collapsed building, sound is the best way to find survivors,” says Dr. Alper Bozkurt, an assistant professor of electrical and computer engineering at NC State and senior author of two papers on the work.

The biobots are equipped with electronic backpacks that control the cockroach’s movements. Bozkurt’s research team has created two types of customized backpacks using microphones. One type of biobot has a single microphone that can capture relatively high-resolution sound from any direction to be wirelessly transmitted to first responders.

The second type of biobot is equipped with an array of three directional microphones to detect the direction of the sound. The research team has also developed algorithms that analyze the sound from the microphone array to localize the source of the sound and steer the biobot in that direction. The system worked well during laboratory testing. Video of a laboratory test of the microphone array system is available at http://www.youtube.com/watch?v=oJXEPcv-FMw.

“The goal is to use the biobots with high-resolution microphones to differentiate between sounds that matter — like people calling for help — from sounds that don’t matter — like a leaking pipe,” Bozkurt says. “Once we’ve identified sounds that matter, we can use the biobots equipped with microphone arrays to zero in on where those sounds are coming from.”

A research team led by Dr. Edgar Lobaton has previously shown that biobots can be used to map a disaster area. Funded by the National Science Foundation's Cyber-Physical Systems Program, the long-term goal is for Bozkurt and Lobaton to merge their research efforts to both map disaster areas and pinpoint survivors. The researchers are already working with collaborator Dr. Mihail Sichitiu to develop the next generation of biobot networking and localization technology.

Bozkurt’s team also recently demonstrated technology that creates an invisible fence for keeping biobots in a defined area. This is significant because it can be used to keep biobots at a disaster site, and to keep the biobots within range of each other so that they can be used as a reliable mobile wireless network. This technology could also be used to steer biobots to light sources, so that the miniaturized solar panels on biobot backpacks can be recharged. Video of the invisible fence technology in practice can be seen at http://www.youtube.com/watch?v=mWGAKd7_fAM.

A paper on the microphone sensor research, “Acoustic Sensors for Biobotic Search and Rescue,” was presented Nov. 5 at the IEEE Sensors 2014 conference in Valencia, Spain. Lead author of the paper is Eric Whitmire, a former undergraduate at NC State. The paper was co-authored by Tahmid Latif, a Ph.D. student at NC State, and Bozkurt.

The paper on the invisible fence for biobots, “Towards Fenceless Boundaries for Solar Powered Insect Biobots,” was presented Aug. 28 at the 36th Annual International IEEE EMBS Conference in Chicago, Illinois. Latif was the lead author. Co-authors include Tristan Novak, a graduate student at NC State, Whitmire and Bozkurt.

The research was supported by the National Science Foundation under grant number 1239243.

Amputees discern familiar sensations across prosthetic hand (Science Daily)

Date: October 8, 2014

Source: Case Western Reserve University

Summary: Patients connected to a new prosthetic system said they ‘felt’ their hands for the first time since they lost them in accidents. In the ensuing months, they began feeling sensations that were familiar and were able to control their prosthetic hands with more — well — dexterity.

Medical researchers are helping restore the sense of touch in amputees. Credit: Image courtesy of Case Western Reserve University

Even before he lost his right hand to an industrial accident 4 years ago, Igor Spetic had family open his medicine bottles. Cotton balls give him goose bumps.

Now, blindfolded during an experiment, he feels his arm hairs rise when a researcher brushes the back of his prosthetic hand with a cotton ball.

Spetic, of course, can’t feel the ball. But patterns of electric signals are sent by a computer into nerves in his arm and to his brain, which tells him different. “I knew immediately it was cotton,” he said.

That’s one of several types of sensation Spetic, of Madison, Ohio, can feel with the prosthetic system being developed by Case Western Reserve University and the Louis Stokes Cleveland Veterans Affairs Medical Center.

Spetic was excited just to “feel” again, and quickly received an unexpected benefit. The phantom pain he’d suffered, which he’s described as a vice crushing his closed fist, subsided almost completely. A second patient, who had less phantom pain after losing his right hand and much of his forearm in an accident, said his, too, is nearly gone.

Despite having phantom pain, both men said that the first time they were connected to the system and received the electrical stimulation, was the first time they’d felt their hands since their accidents. In the ensuing months, they began feeling sensations that were familiar and were able to control their prosthetic hands with more — well — dexterity.

To watch a video of the research, click here: http://youtu.be/l7jht5vvzR4.

“The sense of touch is one of the ways we interact with objects around us,” said Dustin Tyler, an associate professor of biomedical engineering at Case Western Reserve and director of the research. “Our goal is not just to restore function, but to build a reconnection to the world. This is long-lasting, chronic restoration of sensation over multiple points across the hand.”

“The work reactivates areas of the brain that produce the sense of touch,” said Tyler, who is also associate director of the Advanced Platform Technology Center at the Cleveland VA. “When the hand is lost, the inputs that switched on these areas were lost.”

How the system works and the results will be published online in the journal Science Translational Medicine Oct. 8.

“The sense of touch actually gets better,” said Keith Vonderhuevel, of Sidney, Ohio, who lost his hand in 2005 and had the system implanted in January 2013. “They change things on the computer to change the sensation.

“One time,” he said, “it felt like water running across the back of my hand.”

The system, which is limited to the lab at this point, uses electrical stimulation to give the sense of feeling. But there are key differences from other reported efforts.

First, the nerves that used to relay the sense of touch to the brain are stimulated by contact points on cuffs that encircle major nerve bundles in the arm, not by electrodes inserted through the protective nerve membranes.

Surgeons Michael W. Keith, MD, and J. Robert Anderson, MD, from Case Western Reserve School of Medicine and the Cleveland VA, implanted three electrode cuffs in Spetic’s forearm, enabling him to feel 19 distinct points, and two cuffs in Vonderhuevel’s upper arm, enabling him to feel 16 distinct locations.

Second, when they began the study, the sensation Spetic felt when a sensor was touched was a tingle. To provide more natural sensations, the research team has developed algorithms that convert the input from sensors taped to a patient’s hand into varying patterns and intensities of electrical signals. The sensors themselves aren’t sophisticated enough to discern textures; they detect only pressure.

The different signal patterns, passed through the cuffs, are read as different stimuli by the brain. The scientists continue to fine-tune the patterns, and Spetic and Vonderhuevel appear to be becoming more attuned to them.
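The conversion the team describes (pressure readings in, time-varying stimulation patterns out) can be pictured with a toy sketch. Everything here, from the function name to the sinusoidal modulation, is illustrative rather than the team’s actual algorithm:

```python
import math

def stimulation_pattern(pressure, max_pressure=10.0, n_pulses=20):
    """Convert a pressure sensor reading into a list of pulse intensities.

    Hypothetical sketch: the intensity tracks the pressure level but is
    modulated over time, so the nerve receives a changing pattern rather
    than the flat signal that produced only a tingle.
    """
    # normalize pressure into [0, 1]
    level = max(0.0, min(pressure / max_pressure, 1.0))
    pattern = []
    for i in range(n_pulses):
        # vary intensity sinusoidally around the pressure level
        phase = 2 * math.pi * i / n_pulses
        intensity = level * (0.75 + 0.25 * math.sin(phase))
        pattern.append(round(intensity, 3))
    return pattern
```

The point the sketch tries to capture is that a varying pulse pattern, rather than a constant signal, is what moved the perceived sensation from a tingle toward something more natural.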

Third, the system has worked for 2½ years in Spetic and 1½ years in Vonderhuevel. Other research has reported sensation lasting one month and, in some cases, the ability to feel fading over weeks.

A blindfolded Vonderhuevel has held grapes or cherries in his prosthetic hand — the signals enabling him to gauge how tightly he’s squeezing — and pulled out the stems.

“When the sensation’s on, it’s not too hard,” he said. “When it’s off, you make a lot of grape juice.”

Different signal patterns interpreted as sandpaper, a smooth surface and a ridged surface enabled a blindfolded Spetic to discern each as they were applied to his hand. And when researchers touched two different locations with two different textures at the same time, he could discern the type and location of each.

Tyler believes that everyone creates a map of sensations from their life history that enables them to correlate an input to a given sensation.

“I don’t presume the stimuli we’re giving is hitting the spots on the map exactly, but they’re familiar enough that the brain identifies what it is,” he said.

Because of Vonderhuevel’s and Spetic’s continuing progress, Tyler is hopeful the method can lead to a lifetime of use. He’s optimistic his team can develop a system a patient could use at home within five years.

In addition to hand prosthetics, Tyler believes the technology can be used to help those using prosthetic legs receive input from the ground and adjust to gravel or uneven surfaces. Beyond that, the neural interfacing and new stimulation techniques may be useful in controlling tremors, deep brain stimulation and more.


Journal Reference:

  1. D. W. Tan, M. A. Schiefer, M. W. Keith, J. R. Anderson, J. Tyler, D. J. Tyler. A neural interface provides long-term stable natural touch perception. Science Translational Medicine, 2014; 6 (257): 257ra138. DOI: 10.1126/scitranslmed.3008669

*   *   *

Mind-controlled prosthetic arms that work in daily life are now a reality (Science Daily)

Date: October 8, 2014

Source: Chalmers University of Technology

Summary: For the first time, robotic prostheses controlled via implanted neuromuscular interfaces have become a clinical reality. A novel osseointegrated (bone-anchored) implant system gives patients new opportunities in their daily life and professional activities.


Image credit: Chalmers University of Technology

For the first time, robotic prostheses controlled via implanted neuromuscular interfaces have become a clinical reality. A novel osseointegrated (bone-anchored) implant system gives patients new opportunities in their daily life and professional activities.

In January 2013 a Swedish arm amputee was the first person in the world to receive a prosthesis with a direct connection to bone, nerves and muscles. An article about this achievement and its long-term stability will now be published in the Science Translational Medicine journal.

“Going beyond the lab to allow the patient to face real-world challenges is the main contribution of this work,” says Max Ortiz Catalan, research scientist at Chalmers University of Technology and lead author of the publication.

“We have used osseointegration to create a long-term stable fusion between man and machine, where we have integrated them at different levels. The artificial arm is directly attached to the skeleton, thus providing mechanical stability. Then the human’s biological control system, that is, nerves and muscles, is also interfaced to the machine’s control system via neuromuscular electrodes. This creates an intimate union between the body and the machine; between biology and mechatronics.”

The direct skeletal attachment is created by what is known as osseointegration, a technology in limb prostheses pioneered by associate professor Rickard Brånemark and his colleagues at Sahlgrenska University Hospital. Rickard Brånemark led the surgical implantation and collaborated closely with Max Ortiz Catalan and Professor Bo Håkansson at Chalmers University of Technology on this project.

The patient’s arm was amputated over ten years ago. Before the surgery, his prosthesis was controlled via electrodes placed over the skin. Robotic prostheses can be very advanced, but such a control system makes them unreliable and limits their functionality, and patients commonly reject them as a result.

Now, the patient has been given a control system that is directly connected to his own. He has a physically challenging job as a truck driver in northern Sweden, and since the surgery he has found that he can cope with all the situations he faces: everything from clamping his trailer load and operating machinery to unpacking eggs and tying his children’s skates, regardless of the environmental conditions (read more about the benefits of the new technology below).

The patient is also one of the first in the world to take part in an effort to achieve long-term sensation via the prosthesis. Because the implant is a bidirectional interface, it can also be used to send signals in the opposite direction — from the prosthetic arm to the brain. This is the researchers’ next step, to clinically implement their findings on sensory feedback.

“Reliable communication between the prosthesis and the body has been the missing link for the clinical implementation of neural control and sensory feedback, and this is now in place,” says Max Ortiz Catalan. “So far we have shown that the patient has a long-term stable ability to perceive touch in different locations in the missing hand. Intuitive sensory feedback and control are crucial for interacting with the environment, for example to reliably hold an object despite disturbances or uncertainty. Today, no patient walks around with a prosthesis that provides such information, but we are working towards changing that in the very short term.”

The researchers plan to treat more patients with the novel technology later this year.

“We see this technology as an important step towards more natural control of artificial limbs,” says Max Ortiz Catalan. “It is the missing link for allowing sophisticated neural interfaces to control sophisticated prostheses. So far, this has only been possible in short experiments within controlled environments.”

More about: How the technology works

The new technology is based on the OPRA treatment (osseointegrated prosthesis for the rehabilitation of amputees), where a titanium implant is surgically inserted into the bone and becomes fixated to it by a process known as osseointegration (Osseo = bone). A percutaneous component (abutment) is then attached to the titanium implant to serve as a metallic bone extension, where the prosthesis is then fixated. Electrodes are implanted in nerves and muscles as the interfaces to the biological control system. These electrodes record signals which are transmitted via the osseointegrated implant to the prostheses, where the signals are finally decoded and translated into motions.
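The final step in that chain, decoding recorded myoelectric signals into motions, can be sketched in miniature. The toy decoder below computes a mean-absolute-value feature per electrode channel and picks the nearest of a few activation templates; the template values, motion names, and feature choice are all hypothetical stand-ins for the actual signal processing:

```python
# Toy decoder: per-channel mean absolute value -> nearest motion template.
# Templates would be learned when the controller is fitted; these numbers
# are invented for illustration.

TEMPLATES = {
    "open_hand":    [0.8, 0.1, 0.2],
    "close_hand":   [0.1, 0.9, 0.3],
    "rotate_wrist": [0.2, 0.2, 0.8],
}

def mean_abs(samples):
    """Mean absolute value of one electrode channel's raw samples."""
    return sum(abs(s) for s in samples) / len(samples)

def decode(channels):
    """channels: one list of raw samples per implanted electrode.

    Returns the motion whose template best matches the measured
    per-channel activation levels (smallest squared distance).
    """
    features = [mean_abs(ch) for ch in channels]
    def distance(template):
        return sum((f - t) ** 2 for f, t in zip(features, template))
    return min(TEMPLATES, key=lambda name: distance(TEMPLATES[name]))
```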

More about: Benefits of the new technology, compared to socket prostheses

Direct skeletal attachment by osseointegration means:

  • Increased range of motion since there are no physical limitations by the socket — the patient can move the remaining joints freely
  • Elimination of sores and pain caused by the constant pressure from the socket
  • Stable and easy attachment/detachment
  • Increased sensory feedback due to the direct transmission of forces and vibrations to the bone (osseoperception)
  • The prosthesis can be worn all day, every day
  • No socket adjustments required (there is no socket)

Implanting electrodes in nerves and muscles means that:

  • Due to the intimate connection, the patients can control the prosthesis with less effort and more precisely, and can thus handle smaller and more delicate items.
  • The close proximity between source and electrode also prevents activity from other muscles from interfering (cross-talk), so that the patient can move the arm to any position and still maintain control of the prosthesis.
  • More motor signals can be obtained from muscles and nerves, so that more movements can be intuitively controlled in the prosthesis.
  • After the first fitting of the controller, little or no recalibration is required because there is no need to reposition the electrodes on every occasion the prosthesis is worn (as opposed to superficial electrodes).
  • Since the electrodes are implanted rather than placed over the skin, control is not affected by environmental conditions (cold and heat) that change the skin state, or by limb motions that displace the skin over the muscles. The control is also resilient to electromagnetic interference (noise from other electric devices or power lines) as the electrodes are shielded by the body itself.
  • Electrodes in the nerves can be used to send signals to the brain as sensations coming from the prostheses.

Journal Reference:

  1. M. Ortiz-Catalan, B. Håkansson, R. Brånemark. An osseointegrated human-machine gateway for long-term sensory feedback and motor control of artificial limbs. Science Translational Medicine, 2014; 6 (257): 257re6. DOI: 10.1126/scitranslmed.3008933

Our Microbiome May Be Looking Out for Itself (New York Times)

A highly magnified view of Enterococcus faecalis, a bacterium that lives in the human gut. Microbes may affect our cravings, new research suggests. Credit: Centers for Disease Control and Prevention

Your body is home to about 100 trillion bacteria and other microbes, collectively known as your microbiome. Naturalists first became aware of our invisible lodgers in the 1600s, but it wasn’t until the past few years that we’ve become really familiar with them.

This recent research has given the microbiome a cuddly kind of fame. We’ve come to appreciate how beneficial our microbes are — breaking down our food, fighting off infections and nurturing our immune system. It’s a lovely, invisible garden we should be tending for our own well-being.

But in the journal Bioessays, a team of scientists has raised a creepier possibility. Perhaps our menagerie of germs is also influencing our behavior in order to advance its own evolutionary success — giving us cravings for certain foods, for example.

Maybe the microbiome is our puppet master.

“One of the ways we started thinking about this was in a crime-novel perspective,” said Carlo C. Maley, an evolutionary biologist at the University of California, San Francisco, and a co-author of the new paper. “What are the means, motives and opportunity for the microbes to manipulate us? They have all three.”

The idea that a simple organism could control a complex animal may sound like science fiction. In fact, there are many well-documented examples of parasites controlling their hosts.

Some species of fungi, for example, infiltrate the brains of ants and coax them to climb plants and clamp onto the underside of leaves. The fungi then sprout out of the ants and send spores showering onto uninfected ants below.

How parasites control their hosts remains mysterious. But it appears that they release molecules that can directly or indirectly influence their hosts’ brains.

Our microbiome has the biochemical potential to do the same thing. In our guts, bacteria make some of the same chemicals that our neurons use to communicate with one another, such as dopamine and serotonin. And the microbes can deliver these neurological molecules to the dense web of nerve endings that line the gastrointestinal tract.

A number of recent studies have shown that gut bacteria can use these signals to alter the biochemistry of the brain. Compared with ordinary mice, those raised free of germs behave differently in a number of ways. They are more anxious, for example, and have impaired memory.

Adding certain species of bacteria to a normal mouse’s microbiome can reveal other ways in which they can influence behavior. Some bacteria lower stress levels in the mouse. When scientists sever the nerve relaying signals from the gut to the brain, this stress-reducing effect disappears.

Some experiments suggest that bacteria also can influence the way their hosts eat. Germ-free mice develop more receptors for sweet flavors in their intestines, for example. They also prefer to drink sweeter drinks than normal mice do.

Scientists have also found that bacteria can alter levels of hormones that govern appetite in mice.

Dr. Maley and his colleagues argue that our eating habits create a strong motive for microbes to manipulate us. “From the microbe’s perspective, what we eat is a matter of life and death,” Dr. Maley said.

Different species of microbes thrive on different kinds of food. If they can prompt us to eat more of the food they depend on, they can multiply.

Microbial manipulations might fill in some of the puzzling holes in our understandings about food cravings, Dr. Maley said. Scientists have tried to explain food cravings as the body’s way to build up a supply of nutrients after deprivation, or as addictions, much like those for drugs like tobacco and cocaine.

But both explanations fall short. Take chocolate: Many people crave it fiercely, but it isn’t an essential nutrient. And chocolate doesn’t drive people to increase their dose to get the same high. “You don’t need more chocolate at every sitting to enjoy it,” Dr. Maley said.

Perhaps, he suggests, certain kinds of bacteria that thrive on chocolate are coaxing us to feed them.

John F. Cryan, a neuroscientist at University College Cork in Ireland who was not involved in the new study, suggested that microbes might also manipulate us in ways that benefited both them and us. “It’s probably not a simple parasitic scenario,” he said.

Research by Dr. Cryan and others suggests that a healthy microbiome helps mammals develop socially. Germ-free mice, for example, tend to avoid contact with other mice.

That social bonding is good for the mammals. But it may also be good for the bacteria.

“When mammals are in social groups, they’re more likely to pass on microbes from one to the other,” Dr. Cryan said.

“I think it’s a very interesting and compelling idea,” said Rob Knight, a microbiologist at the University of Colorado, who was also not involved in the new study.

If microbes do in fact manipulate us, Dr. Knight said, we might be able to manipulate them for our own benefit — for example, by eating yogurt laced with bacteria that would make us crave healthy foods.

“It would obviously be of tremendous practical importance,” Dr. Knight said. But he warned that research on the microbiome’s effects on behavior was “still in its early stages.”

The most important thing to do now, Dr. Knight and other scientists said, was to run experiments to see if microbes really are manipulating us.

Mark Lyte, a microbiologist at the Texas Tech University Health Sciences Center who pioneered this line of research in the 1990s, is now conducting some of those experiments. He’s investigating whether particular species of bacteria can change the preferences mice have for certain foods.

“This is not a for-sure thing,” Dr. Lyte said. “It needs scientific, hard-core demonstration.”

The rise of data and the death of politics (The Guardian)

Tech pioneers in the US are advocating a new data-based approach to governance – ‘algorithmic regulation’. But if technology provides the answers to society’s problems, what happens to governments?

The Observer, Sunday 20 July 2014

US president Barack Obama with Facebook founder Mark Zuckerberg

Government by social network? US president Barack Obama with Facebook founder Mark Zuckerberg. Photograph: Mandel Ngan/AFP/Getty Images

On 24 August 1965 Gloria Placente, a 34-year-old resident of Queens, New York, was driving to Orchard Beach in the Bronx. Clad in shorts and sunglasses, the housewife was looking forward to quiet time at the beach. But the moment she crossed the Willis Avenue bridge in her Chevrolet Corvair, Placente was surrounded by a dozen patrolmen. There were also 125 reporters, eager to witness the launch of the New York Police Department’s Operation Corral – an acronym for Computer Oriented Retrieval of Auto Larcenists.

Fifteen months earlier, Placente had driven through a red light and neglected to answer the summons, an offence that Corral was going to punish with a heavy dose of techno-Kafkaesque. It worked as follows: a police car stationed at one end of the bridge radioed the licence plates of oncoming cars to a teletypist miles away, who fed them to a Univac 490 computer, an expensive $500,000 toy ($3.5m in today’s dollars) on loan from the Sperry Rand Corporation. The computer checked the numbers against a database of 110,000 cars that were either stolen or belonged to known offenders. In case of a match the teletypist would alert a second patrol car at the bridge’s other exit. It took, on average, just seven seconds.

Compared with the impressive police gear of today – automatic number plate recognition, CCTV cameras, GPS trackers – Operation Corral looks quaint. And the possibilities for control will only expand. European officials have considered requiring all cars entering the European market to feature a built-in mechanism that allows the police to stop vehicles remotely. Speaking earlier this year, Jim Farley, a senior Ford executive, acknowledged that “we know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.” That last bit didn’t sound very reassuring and Farley retracted his remarks.

As both cars and roads get “smart,” they promise nearly perfect, real-time law enforcement. Instead of waiting for drivers to break the law, authorities can simply prevent the crime. Thus, a 50-mile stretch of the A14 between Felixstowe and Rugby is to be equipped with numerous sensors that would monitor traffic by sending signals to and from mobile phones in moving vehicles. The telecoms watchdog Ofcom envisions that such smart roads connected to a centrally controlled traffic system could automatically impose variable speed limits to smooth the flow of traffic but also direct the cars “along diverted routes to avoid the congestion and even [manage] their speed”.

Other gadgets – from smartphones to smart glasses – promise even more security and safety. In April, Apple patented technology that deploys sensors inside the smartphone to analyse if the car is moving and if the person using the phone is driving; if both conditions are met, it simply blocks the phone’s texting feature. Intel and Ford are working on Project Mobil – a face recognition system that, should it fail to recognise the face of the driver, would not only prevent the car being started but also send the picture to the car’s owner (bad news for teenagers).

The car is emblematic of transformations in many other domains, from smart environments for “ambient assisted living” where carpets and walls detect that someone has fallen, to various masterplans for the smart city, where municipal services dispatch resources only to those areas that need them. Thanks to sensors and internet connectivity, the most banal everyday objects have acquired tremendous power to regulate behaviour. Even public toilets are ripe for sensor-based optimisation: the Safeguard Germ Alarm, a smart soap dispenser developed by Procter & Gamble and used in some public WCs in the Philippines, has sensors monitoring the doors of each stall. Once you leave the stall, the alarm starts ringing – and can only be stopped by a push of the soap-dispensing button.
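The Germ Alarm’s logic amounts to a two-state machine: leaving the stall arms the alarm, and only the soap-dispensing button silences it. A minimal sketch (the class and method names are mine, not Procter & Gamble’s):

```python
# Hypothetical sketch of the Germ Alarm's sensor logic as a tiny state
# machine. Only two events matter: the stall door opening (occupant
# leaves, alarm arms) and the soap button being pressed (alarm stops).
class GermAlarm:
    def __init__(self):
        self.alarm_on = False

    def stall_door_opened(self):
        # sensor event: occupant leaves the stall without washing yet
        self.alarm_on = True

    def soap_button_pressed(self):
        # only dispensing soap can silence the alarm
        self.alarm_on = False
```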

In this context, Google’s latest plan to push its Android operating system on to smart watches, smart cars, smart thermostats and, one suspects, smart everything, looks rather ominous. In the near future, Google will be the middleman standing between you and your fridge, you and your car, you and your rubbish bin, allowing the National Security Agency to satisfy its data addiction in bulk and via a single window.

This “smartification” of everyday life follows a familiar pattern: there’s primary data – a list of what’s in your smart fridge and your bin – and metadata – a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses – one recent model promises to track respiration and heart rates and how much you move during the night – and smart utensils that provide nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be – to use the buzzwords of the day – “evidence-based” and “results-oriented,” technology is here to help.

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O’Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term “web 2.0”) has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O’Reilly makes an intriguing case for the virtues of algorithmic regulation – a case that deserves close scrutiny both for what it promises policymakers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can’t write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it’s time to find another rule for finding a good rule – and so on. An algorithm can do this, but it’s the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it’s not just spam: your bank uses similar methods to spot credit-card fraud.
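The mechanism described here, a classifier whose rules come from user feedback rather than from a fixed rulebook, can be sketched with a toy Naive Bayes filter. Real spam filters are vastly more elaborate; this only illustrates how marking messages retrains the system:

```python
import math
from collections import Counter

# Toy feedback-trained spam filter (Naive Bayes with Laplace smoothing).
# Users "teach" it by marking messages; no rule is ever written by hand.
class FeedbackSpamFilter:
    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def mark(self, text, label):
        """User feedback: record this message as 'spam' or 'ham'."""
        self.counts[label].update(text.lower().split())
        self.totals[label] += 1

    def _score(self, text, label):
        # log prior + sum of smoothed log word likelihoods
        total = sum(self.totals.values())
        logp = math.log((self.totals[label] + 1) / (total + 2))
        vocab = len(set(self.counts["spam"]) | set(self.counts["ham"]))
        n = sum(self.counts[label].values())
        for word in text.lower().split():
            logp += math.log((self.counts[label][word] + 1) / (n + vocab))
        return logp

    def is_spam(self, text):
        return self._score(text, "spam") > self._score(text, "ham")
```

Every call to `mark` shifts the word statistics, so the filter’s behaviour adapts continuously to new spammer tricks without anyone editing a rule.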

In his essay, O’Reilly draws broader philosophical lessons from such technologies, arguing that they work because they rely on “a deep understanding of the desired outcome” (spam is bad!) and periodically check if the algorithms are actually working as expected (are too many legitimate emails ending up marked as spam?).

O’Reilly presents such technologies as novel and unique – we are living through a digital revolution after all – but the principle behind “algorithmic regulation” would be familiar to the founders of cybernetics – a discipline that, even in its name (it means “the science of governance”) hints at its great regulatory ambitions. This principle, which allows the system to maintain its stability by constantly learning and adapting itself to the changing circumstances, is what the British psychiatrist Ross Ashby, one of the founding fathers of cybernetics, called “ultrastability”.

To illustrate it, Ashby designed the homeostat. This clever device consisted of four interconnected RAF bomb control units – mysterious looking black boxes with lots of knobs and switches – that were sensitive to voltage fluctuations. If one unit stopped working properly – say, because of an unexpected external disturbance – the other three would rewire and regroup themselves, compensating for its malfunction and keeping the system’s overall output stable.

Ashby’s homeostat achieved “ultrastability” by always monitoring its internal state and cleverly redeploying its spare resources.

Like the spam filter, it didn’t have to specify all the possible disturbances – only the conditions for how and when it must be updated and redesigned. This is no trivial departure from how the usual technical systems, with their rigid, if-then rules, operate: suddenly, there’s no need to develop procedures for governing every contingency, for – or so one hopes – algorithms and real-time, immediate feedback can do a better job than inflexible rules out of touch with reality.

Algorithmic regulation could certainly make the administration of existing laws more efficient. If it can fight credit-card fraud, why not tax fraud? Italian bureaucrats have experimented with the redditometro, or income meter, a tool for comparing people’s spending patterns – recorded thanks to an arcane Italian law – with their declared income, so that authorities know when you spend more than you earn. Spain has expressed interest in a similar tool.
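Stripped of its data pipeline, a redditometro-style check reduces to a simple comparison. A hypothetical sketch (the field names and tolerance are mine, not the Italian tax authority’s):

```python
# Toy version of an income-vs-spending check: flag anyone whose recorded
# spending exceeds declared income by more than a tolerance. A real
# system would ingest transaction data; this only shows the comparison.
def flag_discrepancies(records, tolerance=0.2):
    """records: dicts with 'name', 'declared_income', 'spending' keys."""
    return [r["name"] for r in records
            if r["spending"] > r["declared_income"] * (1 + tolerance)]
```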

Such systems, however, are toothless against the real culprits of tax evasion – the super-rich families who profit from various offshoring schemes or simply write outrageous tax exemptions into the law. Algorithmic regulation is perfect for enforcing the austerity agenda while leaving those responsible for the fiscal crisis off the hook. To understand whether such systems are working as expected, we need to modify O’Reilly’s question: for whom are they working? If it’s just the tax-evading plutocrats, the global financial institutions interested in balanced national budgets and the companies developing income-tracking software, then it’s hardly a democratic success.

With his belief that algorithmic regulation is based on “a deep understanding of the desired outcome”, O’Reilly cunningly disconnects the means of doing politics from its ends. But the how of politics is as important as the what of politics – in fact, the former often shapes the latter. Everybody agrees that education, health, and security are all “desired outcomes”, but how do we achieve them? In the past, when we faced the stark political choice of delivering them through the market or the state, the lines of the ideological debate were clear. Today, when the presumed choice is between the digital and the analog or between the dynamic feedback and the static law, that ideological clarity is gone – as if the very choice of how to achieve those “desired outcomes” was apolitical and didn’t force us to choose between different and often incompatible visions of communal living.

By assuming that the utopian world of infinite feedback loops is so efficient that it transcends politics, the proponents of algorithmic regulation fall into the same trap as the technocrats of the past. Yes, these systems are terrifyingly efficient – in the same way that Singapore is terrifyingly efficient (O’Reilly, unsurprisingly, praises Singapore for its embrace of algorithmic regulation). And while Singapore’s leaders might believe that they, too, have transcended politics, it doesn’t mean that their regime cannot be assessed outside the linguistic swamp of efficiency and innovation – by using political, not economic benchmarks.

As Silicon Valley keeps corrupting our language with its endless glorification of disruption and efficiency – concepts at odds with the vocabulary of democracy – our ability to question the “how” of politics is weakened. Silicon Valley’s default answer to the how of politics is what I call solutionism: problems are to be dealt with via apps, sensors, and feedback loops – all provided by startups. Earlier this year Google’s Eric Schmidt even promised that startups would provide the solution to the problem of economic inequality: the latter, it seems, can also be “disrupted”. And where the innovators and the disruptors lead, the bureaucrats follow.

The intelligence services embraced solutionism before other government agencies. Thus, they reduced the topic of terrorism from a subject that had some connection to history and foreign policy to an informational problem of identifying emerging terrorist threats via constant surveillance. They urged citizens to accept that instability is part of the game, that its root causes are neither traceable nor reparable, that the threat can only be pre-empted by out-innovating and out-surveilling the enemy with better communications.

Speaking in Athens last November, the Italian philosopher Giorgio Agamben discussed an epochal transformation in the idea of government, “whereby the traditional hierarchical relation between causes and effects is inverted, so that, instead of governing the causes – a difficult and expensive undertaking – governments simply try to govern the effects”.

Nobel laureate Daniel Kahneman

Governments’ current favourite psychologist, Daniel Kahneman. Photograph: Richard Saker for the Observer

For Agamben, this shift is emblematic of modernity. It also explains why the liberalisation of the economy can co-exist with the growing proliferation of control – by means of soap dispensers and remotely managed cars – into everyday life. “If government aims for the effects and not the causes, it will be obliged to extend and multiply control. Causes demand to be known, while effects can only be checked and controlled.” Algorithmic regulation is an enactment of this political programme in technological form.

The true politics of algorithmic regulation become visible once its logic is applied to the social safety nets of the welfare state. There are no calls to dismantle them, but citizens are nonetheless encouraged to take responsibility for their own health. Consider how Fred Wilson, an influential US venture capitalist, frames the subject. “Health… is the opposite side of healthcare,” he said at a conference in Paris last December. “It’s what keeps you out of the healthcare system in the first place.” Thus, we are invited to start using self-tracking apps and data-sharing platforms and monitor our vital indicators, symptoms and discrepancies on our own.

This goes nicely with recent policy proposals to save troubled public services by encouraging healthier lifestyles. Consider a 2013 report by Westminster council and the Local Government Information Unit, a thinktank, calling for the linking of housing and council benefits to claimants’ visits to the gym – with the help of smartcards. They might not be needed: many smartphones are already tracking how many steps we take every day (Google Now, the company’s virtual assistant, keeps score of such data automatically and periodically presents it to users, nudging them to walk more).

The numerous possibilities that tracking devices offer to health and insurance industries are not lost on O’Reilly. “You know the way that advertising turned out to be the native business model for the internet?” he wondered at a recent conference. “I think that insurance is going to be the native business model for the internet of things.” Things do seem to be heading that way: in June, Microsoft struck a deal with American Family Insurance, the eighth-largest home insurer in the US, in which both companies will fund startups that want to put sensors into smart homes and smart cars for the purposes of “proactive protection”.

An insurance company would gladly subsidise the costs of installing yet another sensor in your house – as long as it can automatically alert the fire department or make front porch lights flash in case your smoke detector goes off. For now, accepting such tracking systems is framed as an extra benefit that can save us some money. But when do we reach a point where not using them is seen as a deviation – or, worse, an act of concealment – that ought to be punished with higher premiums?

Or consider a May 2014 report from 2020health, another thinktank, proposing to extend tax rebates to Britons who give up smoking, stay slim or drink less. “We propose ‘payment by results’, a financial reward for people who become active partners in their health, whereby if you, for example, keep your blood sugar levels down, quit smoking, keep weight off, [or] take on more self-care, there will be a tax rebate or an end-of-year bonus,” they state. Smart gadgets are the natural allies of such schemes: they document the results and can even help achieve them – by constantly nagging us to do what’s expected.

The unstated assumption of most such reports is that the unhealthy are not only a burden to society but that they deserve to be punished (fiscally for now) for failing to be responsible. For what else could possibly explain their health problems but their personal failings? It’s certainly not the power of food companies or class-based differences or various political and economic injustices. One can wear a dozen powerful sensors, own a smart mattress and even do a close daily reading of one’s poop – as some self-tracking aficionados are wont to do – but those injustices would still be nowhere to be seen, for they are not the kind of stuff that can be measured with a sensor. The devil doesn’t wear data. Social injustices are much harder to track than the everyday lives of the individuals whose lives they affect.

In shifting the focus of regulation from reining in institutional and corporate malfeasance to perpetual electronic guidance of individuals, algorithmic regulation offers us a good-old technocratic utopia of politics without politics. Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analog era – to be solved through data collection – and not as inevitable results of economic or ideological conflicts.

However, a politics without politics does not mean a politics without control or administration. As O’Reilly writes in his essay: “New technologies make it possible to reduce the amount of regulation while actually increasing the amount of oversight and production of desirable outcomes.” Thus, it’s a mistake to think that Silicon Valley wants to rid us of government institutions. Its dream state is not the small government of libertarians – a small state, after all, needs neither fancy gadgets nor massive servers to process the data – but the data-obsessed and data-obese state of behavioural economists.

The nudging state is enamoured of feedback technology, for its key founding principle is that while we behave irrationally, our irrationality can be corrected – if only the environment acts upon us, nudging us towards the right option. Unsurprisingly, one of the three lonely references at the end of O’Reilly’s essay is to a 2012 speech entitled “Regulation: Looking Backward, Looking Forward” by Cass Sunstein, the prominent American legal scholar who is the chief theorist of the nudging state.

And while the nudgers have already captured the state by making behavioural psychology the favourite idiom of government bureaucracy – Daniel Kahneman is in, Machiavelli is out – the algorithmic regulation lobby advances in more clandestine ways. They create innocuous non-profit organisations like Code for America which then co-opt the state – under the guise of encouraging talented hackers to tackle civic problems.

Airbnb: part of the reputation-driven economy.

Such initiatives aim to reprogramme the state and make it feedback-friendly, crowding out other means of doing politics. For all those tracking apps, algorithms and sensors to work, databases need interoperability – which is what such pseudo-humanitarian organisations, with their ardent belief in open data, demand. And when the government is too slow to move at Silicon Valley’s speed, they simply move inside the government. Thus, Jennifer Pahlka, the founder of Code for America and a protégée of O’Reilly, became the deputy chief technology officer of the US government – while pursuing a one-year “innovation fellowship” from the White House.

Cash-strapped governments welcome such colonisation by technologists – especially if it helps to identify and clean up datasets that can be profitably sold to companies who need such data for advertising purposes. Recent clashes over the sale of student and health data in the UK are just a precursor of battles to come: after all state assets have been privatised, data is the next target. For O’Reilly, open data is “a key enabler of the measurement revolution”.

This “measurement revolution” seeks to quantify the efficiency of various social programmes, as if the rationale behind the social nets that some of them provide was to achieve perfection of delivery. The actual rationale, of course, was to enable a fulfilling life by suppressing certain anxieties, so that citizens can pursue their life projects relatively undisturbed. This vision did spawn a vast bureaucratic apparatus and the critics of the welfare state from the left – most prominently Michel Foucault – were right to question its disciplining inclinations. Nonetheless, neither perfection nor efficiency were the “desired outcome” of this system. Thus, to compare the welfare state with the algorithmic state on those grounds is misleading.

But we can compare their respective visions for human fulfilment – and the role they assign to markets and the state. Silicon Valley’s offer is clear: thanks to ubiquitous feedback loops, we can all become entrepreneurs and take care of our own affairs! As Brian Chesky, the chief executive of Airbnb, told the Atlantic last year, “What happens when everybody is a brand? When everybody has a reputation? Every person can become an entrepreneur.”

Under this vision, we will all code (for America!) in the morning, drive Uber cars in the afternoon, and rent out our kitchens as restaurants – courtesy of Airbnb – in the evening. As O’Reilly writes of Uber and similar companies, “these services ask every passenger to rate their driver (and drivers to rate their passenger). Drivers who provide poor service are eliminated. Reputation does a better job of ensuring a superb customer experience than any amount of government regulation.”
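The mechanism O’Reilly describes – passengers rate drivers, and drivers whose recent ratings fall too low are removed – amounts to a moving-average filter with a cutoff. A minimal sketch, assuming an invented window size and threshold (these numbers and names are purely illustrative, not any platform’s actual policy):

```python
from collections import defaultdict, deque

# Hypothetical sketch of a reputation filter: each rating is recorded,
# and a driver stays active only while the average over their most
# recent trips clears a cutoff. WINDOW and CUTOFF are invented values.
WINDOW = 5          # number of recent trips considered
CUTOFF = 4.0        # minimum acceptable average rating (1-5 scale)

ratings = defaultdict(lambda: deque(maxlen=WINDOW))

def rate(driver, stars):
    """Record a rating and report whether the driver remains active."""
    ratings[driver].append(stars)
    recent = ratings[driver]
    return sum(recent) / len(recent) >= CUTOFF

# A driver whose recent window average sinks below the cutoff is
# filtered out, regardless of earlier good service.
for stars in (5, 5, 3, 2, 2, 2, 2):
    active = rate("driver_a", stars)
print(active)  # False: the recent average has dropped below the cutoff
```

The point of the sketch is that the “regulation” happens entirely inside the feedback loop: there is no rule about conduct, only a threshold on aggregated scores.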

The state behind the “sharing economy” does not wither away; it might be needed to ensure that the reputation accumulated on Uber, Airbnb and other platforms of the “sharing economy” is fully liquid and transferable, creating a world where our every social interaction is recorded and assessed, erasing whatever differences exist between social domains. Someone, somewhere will eventually rate you as a passenger, a house guest, a student, a patient, a customer. Whether this ranking infrastructure will be decentralised, provided by a giant like Google or rest with the state is not yet clear but the overarching objective is: to make reputation into a feedback-friendly social net that could protect the truly responsible citizens from the vicissitudes of deregulation.

Admiring the reputation models of Uber and Airbnb, O’Reilly wants governments to be “adopting them where there are no demonstrable ill effects”. But what counts as an “ill effect” and how to demonstrate it is a key question that belongs to the how of politics that algorithmic regulation wants to suppress. It’s easy to demonstrate “ill effects” if the goal of regulation is efficiency but what if it is something else? Surely, there are some benefits – fewer visits to the psychoanalyst, perhaps – in not having your every social interaction ranked?

The imperative to evaluate and demonstrate “results” and “effects” already presupposes that the goal of policy is the optimisation of efficiency. However, as long as democracy is irreducible to a formula, its composite values will always lose this battle: they are much harder to quantify.

For Silicon Valley, though, the reputation-obsessed algorithmic state of the sharing economy is the new welfare state. If you are honest and hardworking, your online reputation would reflect this, producing a highly personalised social net. It is “ultrastable” in Ashby’s sense: while the welfare state assumes the existence of specific social evils it tries to fight, the algorithmic state makes no such assumptions. The future threats can remain fully unknowable and fully addressable – on the individual level.

Silicon Valley, of course, is not alone in touting such ultrastable individual solutions. Nassim Taleb, in his best-selling 2012 book Antifragile, makes a similar, if more philosophical, plea for maximising our individual resourcefulness and resilience: don’t get one job but many, don’t take on debt, count on your own expertise. It’s all about resilience, risk-taking and, as Taleb puts it, “having skin in the game”. As Julian Reid and Brad Evans write in their new book, Resilient Life: The Art of Living Dangerously, this growing cult of resilience masks a tacit acknowledgement that no collective project could even aspire to tame the proliferating threats to human existence – we can only hope to equip ourselves to tackle them individually. “When policy-makers engage in the discourse of resilience,” write Reid and Evans, “they do so in terms which aim explicitly at preventing humans from conceiving of danger as a phenomenon from which they might seek freedom and even, in contrast, as that to which they must now expose themselves.”

What, then, is the progressive alternative? “The enemy of my enemy is my friend” doesn’t work here: just because Silicon Valley is attacking the welfare state doesn’t mean that progressives should defend it to the very last bullet (or tweet). First, even leftist governments have limited space for fiscal manoeuvres, as the kind of discretionary spending required to modernise the welfare state would never be approved by the global financial markets. And it’s the ratings agencies and bond markets – not the voters – who are in charge today.

Second, the leftist critique of the welfare state has become only more relevant today when the exact borderlines between welfare and security are so blurry. When Google’s Android powers so much of our everyday life, the government’s temptation to govern us through remotely controlled cars and alarm-operated soap dispensers will be all too great. This will expand government’s hold over areas of life previously free from regulation.

With so much data, the government’s favourite argument in fighting terror – if only the citizens knew as much as we do, they too would impose all these legal exceptions – easily extends to other domains, from health to climate change. Consider a recent academic paper that used Google search data to study obesity patterns in the US, finding significant correlation between search keywords and body mass index levels. “Results suggest great promise of the idea of obesity monitoring through real-time Google Trends data”, note the authors, which would be “particularly attractive for government health institutions and private businesses such as insurance companies.”
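The kind of analysis the paper describes can be gestured at with a toy Pearson correlation between regional search volumes and average body mass index. Both series below are fabricated for illustration only; a real study would use Google Trends data and public health statistics.

```python
import math

# Toy illustration of correlating search interest with body mass index.
# The numbers are invented, not the study's data.
search_volume = [12.0, 35.0, 28.0, 50.0, 41.0, 22.0]   # hypothetical units
mean_bmi      = [24.1, 27.8, 26.5, 29.9, 28.4, 25.7]   # hypothetical values

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(search_volume, mean_bmi)
print(round(r, 3))  # close to 1.0: the toy series move together
```

A coefficient near 1 signals the kind of linear association the authors report; it says nothing, of course, about why the two quantities move together.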

If Google senses a flu epidemic somewhere, it’s hard to challenge its hunch – we simply lack the infrastructure to process so much data at this scale. Google can be proven wrong after the fact – as has recently been the case with its flu trends data, which was shown to overestimate the number of infections, possibly because of its failure to account for the intense media coverage of flu – but so is the case with most terrorist alerts. It’s the immediate, real-time nature of computer systems that makes them perfect allies of an infinitely expanding and pre-emption-obsessed state.

Perhaps, the case of Gloria Placente and her failed trip to the beach was not just a historical oddity but an early omen of how real-time computing, combined with ubiquitous communication technologies, would transform the state. One of the few people to have heeded that omen was a little-known American advertising executive called Robert MacBride, who pushed the logic behind Operation Corral to its ultimate conclusions in his unjustly neglected 1967 book, The Automated State.

At the time, America was debating the merits of establishing a national data centre to aggregate various national statistics and make it available to government agencies. MacBride attacked his contemporaries’ inability to see how the state would exploit the metadata accrued as everything was being computerised. Instead of “a large scale, up-to-date Austro-Hungarian empire”, modern computer systems would produce “a bureaucracy of almost celestial capacity” that can “discern and define relationships in a manner which no human bureaucracy could ever hope to do”.

“Whether one bowls on a Sunday or visits a library instead is [of] no consequence since no one checks those things,” he wrote. Not so when computer systems can aggregate data from different domains and spot correlations. “Our individual behaviour in buying and selling an automobile, a house, or a security, in paying our debts and acquiring new ones, and in earning money and being paid, will be noted meticulously and studied exhaustively,” warned MacBride. Thus, a citizen will soon discover that “his choice of magazine subscriptions… can be found to indicate accurately the probability of his maintaining his property or his interest in the education of his children.” This sounds eerily similar to the recent case of a hapless father who found that his daughter was pregnant from a coupon that Target, a retailer, sent to their house. Target’s hunch was based on its analysis of products – for example, unscented lotion – usually bought by other pregnant women.

For MacBride the conclusion was obvious. “Political rights won’t be violated but will resemble those of a small stockholder in a giant enterprise,” he wrote. “The mark of sophistication and savoir-faire in this future will be the grace and flexibility with which one accepts one’s role and makes the most of what it offers.” In other words, since we are all entrepreneurs first and citizens second, we might as well make the most of it.

What, then, is to be done? Technophobia is no solution. Progressives need technologies that would stick with the spirit, if not the institutional form, of the welfare state, preserving its commitment to creating ideal conditions for human flourishing. Even some ultrastability is welcome. Stability was a laudable goal of the welfare state, but it ran into a trap: in specifying the exact protections the state was to offer against the excesses of capitalism, it could not easily deflect new, previously unspecified forms of exploitation.

How do we build welfarism that is both decentralised and ultrastable? A form of guaranteed basic income – whereby some welfare services are replaced by direct cash transfers to citizens – fits the two criteria.

Creating the right conditions for the emergence of political communities around causes and issues they deem relevant would be another good step. Full compliance with the principle of ultrastability dictates that such issues cannot be anticipated or dictated from above – by political parties or trade unions – and must be left unspecified.

What can be specified is the kind of communications infrastructure needed to abet this cause: it should be free to use, hard to track, and open to new, subversive uses. Silicon Valley’s existing infrastructure is great for fulfilling the needs of the state, not of self-organising citizens. It can, of course, be redeployed for activist causes – and it often is – but there’s no reason to accept the status quo as either ideal or inevitable.

Why, after all, appropriate what should belong to the people in the first place? While many of the creators of the internet bemoan how low their creature has fallen, their anger is misdirected. The fault is not with that amorphous entity but, first of all, with the absence of robust technology policy on the left – a policy that can counter the pro-innovation, pro-disruption, pro-privatisation agenda of Silicon Valley. In its absence, all these emerging political communities will operate with their wings clipped. Whether the next Occupy Wall Street would be able to occupy anything in a truly smart city remains to be seen: most likely, they would be out-censored and out-droned.

To his credit, MacBride understood all of this in 1967. “Given the resources of modern technology and planning techniques,” he warned, “it is really no great trick to transform even a country like ours into a smoothly running corporation where every detail of life is a mechanical function to be taken care of.” MacBride’s fear is O’Reilly’s master plan: the government, he writes, ought to be modelled on the “lean startup” approach of Silicon Valley, which is “using data to constantly revise and tune its approach to the market”. It’s this very approach that Facebook has recently deployed to maximise user engagement on the site: if showing users more happy stories does the trick, so be it.

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, roughly at the same time as The Automated State, put it best: “Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator.”

RoboCup: the World Championship of robotics!

21 July 2014

www.robocup.org

RoboCup was founded in 1997 with the main goal of “developing by 2050 a Robot Soccer team capable of winning against the human team champion of the FIFA World Cup”. In the years that followed, RoboCup introduced several soccer platforms that have become standard platforms for robotics research. The soccer domain has proved capable of capturing key aspects of complex real-world problems, stimulating the development of a wide range of technologies, including integrated electrical, mechanical and computational design techniques for autonomous robots. After more than 15 years of RoboCup, robot soccer now represents only part of the available platforms. RoboCup encompasses other leagues that, in addition to Soccer, cover Rescue (robots and simulation), @Home (assistive robots in home environments), Sponsored and @Work (industrial environments), as well as the RoboCupJunior leagues for young students. These domains offer a wide range of platforms for researchers, with the potential to speed up developments in the mobile robotics field.

RoboCup has grown into a project that attracts worldwide attention. Every year, multiple tournaments are organized in different countries, where teams from all over the world compete in various disciplines. There are tournaments in Germany, Portugal, China, Brazil and elsewhere. In 2014, RoboCup will be hosted for the first time in South America, in Brazil.

Meet Jibo, the cute social robot that knows the family (New Scientist)

14:00 16 July 2014 by Hal Hodson

It doesn’t just recognise you – it can field your phone calls and chat to you at dinner

IN SUITE 712 of the Eventi Hotel, high above the sticky June bustle of Midtown Manhattan, New York, one of the world’s most advanced consumer robots awaits command.

“Wake up, Jibo,” says Cynthia Breazeal, his creator. The robot’s round head shakes awake. He lets out a tinkling noise, then a yawn. Jibo’s two-part body twists and stretches and his face, with a single digital eye, switches on and turns to look at us. He looks like a Pixar character come to life.

Jibo is the first robot designed to be used by the whole family. He’s not a niche robot with a single purpose, like a Roomba, nor is he a toy. Available for $499 through an Indiegogo crowdfunding campaign that starts this week, Jibo is designed to tap into the social fabric of a household and help out. The first model, which will ship in 2015, will perform simple tasks like taking voice reminders and fielding phone calls and messages, connecting to the family’s phones through Wi-Fi. He will also act as the heart of the home, connecting to iPads, TVs and games consoles. More complex skills include automatically identifying the faces in a room and taking pictures on request, and reading a story to a child.

Breazeal chats casually to the robot: “How are you doing, Jibo?”

“I’m great, thanks for asking,” he says, cocking his head slightly as his digital eye curves into a grin. Jibo explains all the different things he can do, after a quick dance to Simon and Garfunkel’s 59th Street Bridge Song.

“I would say this is the first social, personal robot,” says Illah Nourbakhsh, a roboticist at Carnegie Mellon University in Pittsburgh. Jibo’s body language and expressions are designed to convey emotional states in the same way humans do, while his sensors and programming are tuned to our presence. Jibo knows when someone enters a room, and can identify who it is if he can see their face or hear their voice. The idea is that Jibo’s social skills help him to fit seamlessly into the household.

Jibo’s body and head movements are complex and smooth enough to convey convincing human-like body language but he cannot move around. For that, he relies on the humans in the household to pick him up – he weighs a mere 2.7 kilos – and move him from place to place. Jibo charges up via wireless pads plugged in around the house, or he can run on batteries for about 30 minutes away from a power source. When he joins the family at the dinner table, for instance.

Jibo turns to face whoever is talking, so an absent family member can use him to video chat as the rest of the family sit around the table. “With Jibo, you feel like you’re really part of the group dynamic,” says Breazeal.

“I think that’s enormous, I love it,” says Ken Goldberg, a roboticist from the University of California in Berkeley. Goldberg works on robots that can move around their environment and manipulate it, more in line with the traditional notion of the home robot. But such tasks are difficult to perfect: the dream of the robot butler is a long way off. “Right now, the most state-of-the-art robot still takes a good 20 minutes to fold a small towel,” Goldberg says.

Breazeal’s research at the MIT Media Lab, along with that of Bilge Mutlu at the University of Wisconsin-Madison, has shown how important it is for robot-human communication that robots can express emotion. “The ability to turn your head around and pay attention to something else has been taken for granted, but it’s huge,” says Mutlu.

Breazeal is also opening Jibo up to developers as a platform on which to build new kinds of apps, such as ones that let the robot place takeaway orders for “the usual” on request, or that control the lighting and heating in a home, or even keep an eye on activity patterns to make sure that senior household members are moving enough.

But socially aware robots raise new ethical questions. Would it be appropriate, for instance, for Jibo to announce that the senior family member he has been watching has fallen down and cannot get up? “We’re going to have a really interesting dilemma about when a robot can violate privacy to save a life,” Nourbakhsh says.

“The big deal with this is its optimisation for sociality,” says Nourbakhsh. “For the first time in history, we humans are going to have complex interactions with machines.”

This article appeared in print under the headline “The first family robot”

On the exoskeleton at the World Cup opening ceremony

JC e-mail 4973, 16 June 2014

Ciência Hoje On-line: A controversial kick

The long-awaited demonstration of the exoskeleton created by the group of the Brazilian Miguel Nicolelis receives little attention at the World Cup opening ceremony, but it may be a milestone for Brazilian science

Hundreds of millions of television viewers around the world. A great deal of media fuss over the demonstration that would have a disabled Brazilian walk using a robotic exoskeleton. The attentive eyes of a sceptical scientific community. For all these reasons, the few-second demonstration, carried out amid the other attractions of the World Cup opening ceremony, at the edge of the pitch and cut short by the official broadcast, deserved better treatment.

Even so, the initiative led by the Brazilian Miguel Nicolelis, of Duke University in the United States, brought science to one of the world’s largest sporting events and pointed to the technological possibilities of the medicine of the future – despite not delivering exactly what was promised, and despite criticism of the opaque way it was conducted.

The Walk Again Project (Andar de Novo) demonstration took place shortly before the ball started rolling at the Arena Corinthians for the tournament’s opening match. Amid the festivities, while the broadcast split its attention between the Brazilian team’s arrival at the stadium and the spectacle’s antics, the athlete Juliano Pinto used his mind to command the robotic equipment and lightly kick the Brazuca, the official tournament ball. The original plan was for the exoskeleton to walk about 20 metres, which did not happen – in fact, it did not walk, bend its knee or shift its centre of gravity significantly; it only moved its leg.

Read the full story at CH On-line, which has exclusive content updated daily: http://cienciahoje.uol.com.br/noticias/2014/06/lance-polemico

* * *

JC e-mail 4973, 16 June 2014

The exoskeleton show

Article by Roberto Lent published in O Globo on 14 June. Prior publication in specialist journals is the seal of quality of a scientific product, just as the Inmetro seal is for industrial products

It was probably no accident that Fifa reduced the display of scientist Miguel Nicolelis’s exoskeleton at the World Cup opening ceremony to a few seconds. Perhaps prudence imposed that measure.

There are several questions to consider around this controversial episode. First: it is not possible to assess the originality and the scientific and practical impact of the much-touted technology of brain control of the exoskeleton under electronic sensory feedback. The reason is simple: Nicolelis has not yet published it in specialist journals. His scientific output and capacity for work suggest he will do so shortly, for evaluation by the scientific community in the field. We shall wait. But the fact is that, so far, little can be said about the World Cup experiment beyond speculation.

Second: the public display, to millions of people around the world, of a kick at a ball by a paraplegic man wearing the exoskeleton is in itself an important initiative for raising the profile of science in society. However, done the way it was, it violates a basic ethical principle of science communication – only what has first been published in specialist journals should be communicated to the lay public. Elitism? Lack of democratic spirit? No: social responsibility. Prior publication in specialist journals is the seal of quality of a scientific product, just as the Inmetro seal is for industrial products, the Anvisa licence for medicines, and the Ministry of Agriculture stamp for agricultural products. Before publishing any article, these journals subject it to rigorous review by specialists. Moreover, the authors must present every detail of the methods they employed and the results they obtained. At the World Cup opening, the exoskeleton show broke with this principle. Perhaps that is what Fifa realised in time.

But other questions are at stake. One is the contrast between the funding the Nicolelis project obtained and what Brazilian university researchers manage to obtain, even with all the growth in resources achieved in recent years. Finep, the funding agency of the Ministry of Science, Technology and Innovation, put R$ 33 million into the exoskeleton. Nothing wrong with that: it is an innovation agency whose mission is precisely to invest in bold projects and take on the risks, which are in any case inherent in all scientific projects. But the comparison is inevitable: a call recently issued by other agencies of the same ministry for the creation of National Institutes of Science and Technology announced that it will provide at most R$ 10 million to each of the groups that win a fierce competition. Since those R$ 10 million go to groups that bring together several independent researchers, each researcher will have something on the order of R$ 1 million for their project.

Three to one was the Brazilian national team’s victory; 33 to 1 was Nicolelis’s victory over the Brazilian scientific community.

Roberto Lent is a professor at the Institute of Biomedical Sciences of the Universidade Federal do Rio de Janeiro

(O Globo)
http://oglobo.globo.com/opiniao/o-show-do-exoesqueleto-12856030#ixzz34nrMrffx

*   *   *

JC e-mail 4973, 16 June 2014

They took Brazilian science down a peg

Article by Marcelo Träsel published in Zero Hora

One of the most anticipated attractions at the opening of the 2014 World Cup was the demonstration of the exoskeleton developed by the Brazilian scientist Miguel Nicolelis, which would allow a paraplegic man to take the opening kick of the first match. He did in fact have a paraplegic man stand up and kick a football at the Itaquerão. But almost no one saw this scientific milestone – at least not live, because Rede Globo showed it for only a few seconds, on half the screen, in order to show the Brazilian national team’s bus arriving at the stadium at that very moment.

In his interviews, Nicolelis had given the impression that the kick would take place at the start of the match, not that it would be merely a side event at the edge of the pitch while J.Lo, Pitbull and Claudia Leitte fumbled the playback of the World Cup theme song. It is possible that he received promises that were not kept. There were also rumours that Fifa had barred the kick at the centre of the pitch for fear that the weight of the device would damage the grass.

In any case, the unceremonious way this enormous step for Brazilian ingenuity was treated – first by the organisers of the opening ceremony and then by the country’s main broadcaster – perfectly symbolises the place we give science in the national imagination. Up to 2012, Brazil invested about 1.3% of GDP in research and technological development, considerably less than the OECD average of 2.3%. According to a study by the NGO Battelle, in 2014 the country is expected to invest the same 1.3% as in 2012 – well above Argentina’s 0.6%, but still far from South Korea’s 3.6% or China’s 2%.

Opening ceremonies of the World Cup, the Olympics and other events of the kind serve not only to showcase a country but also to project an ideal, a society’s desire for its future. At the 2014 World Cup, Brazil could have projected itself as a future world power in research and technology production, but it preferred to remain merely a sporting and cultural power – the country of football and Carnival.

Marcelo Träsel is a researcher and professor of communication at PUCRS.

(Zero Hora)
http://wp.clicrbs.com.br/opiniaozh/2014/06/16/artigo-baixaram-a-bola-da-ciencia-brasileira/?topo=13,1,1,,,13

“The exoskeleton is a great gain,” says the young man behind the World Cup’s opening kick (Zero Hora)

JC e-mail 4974, 17 June 2014

Paraplegic man answers challenges to neuroscientist Miguel Nicolelis’s project

For three seconds last Thursday, Juliano Alves Pinto, 29, presented a R$ 33 million project to the cameras: the exoskeleton that allowed the young paraplegic man to take the World Cup’s opening kick. If neuroscientist Miguel Nicolelis’s project attracted no shortage of criticism, the patient spares no praise for the experiment.
“Those who criticise it are people with no information about the project,” Juliano argued on Monday morning in an interview with Zero Hora.

Questions about the scientific experiment center on the scale of the demonstration measured against the grandeur of the promise, billed as something close to a miracle: wearing a robotic suit, a paraplegic would rise from a wheelchair, walk onto the pitch of the Itaquerão and kick a ball powered by thought alone. That is not what happened.
– The time was far too short for that to happen – the young man noted.

Using the exoskeleton was one more lesson in the life of this resident of Gália, a town of 7,000 people some 400 kilometers from the São Paulo state capital. Seven and a half years ago, he lost the movement of his legs when he fractured his spine in a traffic accident, in which he also lost a 27-year-old brother. In his new condition, in a wheelchair, he had to reacquire the skills he had lost:

– My life changed. Before, I could do things for myself and, suddenly, I needed people to help me. I had to relearn how to do everything on my own. Today I lead a practically independent life: I drive, play sports, dress myself, shower.

With his seconds of fame and the repercussions of the World Cup opening behind him (in his hometown he was greeted with a motorcade), Juliano is returning to his usual routine. This week he takes part in a competition in one of his great motivations: athletics. For the future, he is seeking help to buy a new racing wheelchair so he can compete in tournaments and, perhaps, accumulate points to turn professional. Paralympics in mind?

– I do dream of it. I never lose hope, ever – says the Gália native.

Here are the main excerpts from the interview the young man gave Zero Hora, by telephone, on Monday:

How were you selected to take part in the Walk Again project and the World Cup opening?
I am a patient at the AACD (Association for Assistance to Disabled Children) in São Paulo, where the project was already under way and where some patients were being selected. About six months ago the invitation came and I accepted. In all, 10 patients were selected, eight stayed on and three were shortlisted for the World Cup demonstration, but all the others were prepared to use the exoskeleton. Then, about four days before the event, came the news that I was the one chosen.

How did you feel when you got the news?
I was very happy, not only to be part of the project and to represent all of them, but to represent everyone who, like me, has a disability and dreams of one day having a better quality of life. I believe all this science is here to help us; it means greater well-being for people.

What were the preparation and training like?
We were surrounded by great professionals, not only from the science side but also physiatrists and physical therapists. Everything worked out. I would leave Gália in the early hours, arrive in São Paulo at 8 a.m., spend the whole day in training and return home.

Why were you the one chosen?
I was better prepared for the day of the World Cup. Not that the others weren't, but I was a better fit for the profile they were looking for.

What did it feel like to put on the exoskeleton?
I can speak for myself, and I think for the other patients who also had the chance to walk in the exoskeleton: it is very good. You are in a wheelchair and, however much it lets you get around normally even without the use of your legs, being able to take a few steps again is a great gain. In my case, after seven and a half years, the exoskeleton brought that back. It is deeply satisfying, a moment of great joy, to once again be able to do something you lost long ago.

Was it like walking again?
The sensation, yes. I think it also depends on us getting more used to it... but, wow, it is a very real sensation indeed.

From what you felt, will it be possible, in the future, to trade the wheelchair for the exoskeleton?
I believe so. In the short time I spent with Dr. Nicolelis and his team, I saw that they have great potential to make that happen. Even with the criticism, even if people don't believe it, having been there and witnessed the project, I believe it will indeed be possible.

Initially, the expectation was that you would rise from the wheelchair, walk to the ball and kick it. That is not what happened. How do you evaluate the result of the experiment?
As Miguel Nicolelis himself explained, the time was far too short for that to happen. We had to fit into a FIFA script. Many people asked why we did in the opening exactly what we had done in rehearsals, but it was because that was the time available. To do all of it (stand up, walk and kick), we would have needed more time, and there was none. As Dr. Nicolelis said, there is no precedent in history for a robotic demonstration of this kind in 29 seconds. We managed it in 16 seconds, and even less of that made it to air. So we fit into the standard we were given; we did what we did to respect the time allotted to us. It is not that we strayed from what had been announced; we adapted to the time we had.

So can it be called a success?
Absolutely. It was a milestone, something that made history.

Despite the wide publicity around the project, the kick got only three seconds on television. Were you upset by the limited visibility at the moment?
I was not aware it had been broadcast for so short a time. When I started following the coverage, I saw that it really was brief. But afterwards it was widely covered; the media picked the subject up extensively. Still, I do think more time could have been given to the demonstration, more focus. I won't say I was sad, but I can say I wish it had been given more time.

Critics of neuroscientist Miguel Nicolelis have called the project a failure. What do you say to them?
Those who criticize are people with no information about the project. They go by what they think, but I believe that if these people had experienced what the patients experienced all this time, their thoughts and arguments would certainly be different. You cannot talk about something you do not know, just as you cannot call a product good if you have never tried it and know no details. So I believe these people do not have accurate information about what is happening.

What has changed in your routine since last Thursday?
I am trying to live a normal routine. Now I will go back to training, and I want to keep my routine normal. What changed was appearing a lot in the media; the subject became very visible, but I don't think it has gotten in my way. What I want is to make things very clear, not to hide, and to be willing to explain the project as well.

What are your plans?
The project continues, and I am chasing my qualification in the athletics competitions I take part in. I dream of getting better equipment, a racing wheelchair, to compete and achieve a qualifying mark for a national or even a world championship. It can't be found in Brazil, only through dealers, and the price soars because the chair is imported.

(Débora Ely / Zero Hora)
http://zh.clicrbs.com.br/rs/noticias/planeta-ciencia/noticia/2014/06/exoesqueleto-e-um-grande-ganho-diz-jovem-do-chute-inaugural-da-copa-4528138.html

“Bionic” bees to help monitor climate change in the Amazon (O Globo)

JC e-mail 4966, June 4, 2014

Microsensors mounted on the insects will gather data on their behavior and on the environment

In their comings and goings from the hive, bees interact with much of the environment around them, in addition to performing the vital work of pollinating plants, which contributes greatly to maintaining biodiversity and to food production worldwide. Now, swarms of them will take on another role, as “bionic” weather stations, to help monitor the effects of climate change in the Amazon and on their own behavior.

Since last week, researchers from the Vale Institute of Technology (ITV) and CSIRO, Australia's federal scientific research agency, have been fitting microsensors to 400 bees from an apiary in the municipality of Santa Bárbara do Pará, an hour from Belém, in the first phase of the experiment, which also aims to uncover the causes of so-called Colony Collapse Disorder (CCD), which in the United States alone has killed 35% of these captive-bred insects.

– We do not know how bees will behave under the projected temperature increases and climate shifts caused by global warming – says physicist Paulo de Souza, a visiting researcher at ITV and CSIRO and the experiment's lead. – So understanding how they will adapt to these changes is important if we are to estimate what may happen in the future.

Souza explains that the microsensors used in the experiment generate their own power and can capture and store data not only on the bees' behavior but also on the temperature, humidity and sunlight levels of the environment. All of this is squeezed into a small square 2.5 millimeters on a side weighing 5.4 milligrams, so that the bees, Africanized Apis mellifera averaging 70 milligrams, feel as if they were “carrying a backpack.”
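The “backpack” comparison can be checked with the two figures reported in the article (sensor mass and average bee mass); the 70 kg human body weight used for the analogy below is an illustrative assumption, not from the source:

```python
# Sanity check of the "backpack" analogy: sensor mass as a fraction
# of an average bee's body mass, plus a human-scale equivalent load.
SENSOR_MG = 5.4   # microsensor mass reported in the article
BEE_MG = 70.0     # average mass of an Africanized Apis mellifera worker

ratio = SENSOR_MG / BEE_MG
print(f"Sensor is {ratio:.1%} of the bee's body mass")  # 7.7%

# The same fraction carried by a (hypothetical) 70 kg person:
PERSON_KG = 70.0
print(f"Human-scale equivalent: a {ratio * PERSON_KG:.1f} kg backpack")
```

So the load is under 8% of body mass, roughly the weight of a modest day pack for an adult, which is consistent with the researchers' claim that the bees adapt quickly.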

– But it does not affect their behavior; they adapt very quickly to wearing the microsensors – he assures.

Starting next semester, the researchers plan to begin installing the microsensors, which cost US$ 0.30 (about R$ 0.70) each, on stingless species native to the Amazon. According to Souza, these bees are even more important for pollinating the region's plants and are also more sensitive to environmental change. The scale of the experiment will then grow, with 10,000 of the small devices deployed across several generations of bees, which live two months on average.

The size of the current sensors, however, does not allow the device to be mounted on smaller insects, such as mosquitoes. So Paulo de Souza's group is already working on a new generation of microsensors measuring a tenth of a millimeter, the size of a grain of sand. According to the researcher, the new sensors, expected to be ready in four years, will have the same capabilities as the current ones, with the advantage of being “active”: they will be able to transmit the collected data in real time.

– Once we have sensors of that size, we will be able to apply them to hives in spray form, and to use them to monitor other insect species, such as disease-carrying mosquitoes – he says. – But the main advantage is that with them we can turn bees and other insects into true walking weather stations, enabling environmental monitoring on an unprecedented scale, since each bee or mosquito will act as a field agent.

(Cesar Baima / O Globo)
http://oglobo.globo.com/sociedade/ciencia/abelhas-bionicas-vao-ajudar-monitorar-mudancas-climaticas-na-amazonia-12712798#ixzz33gDI4XQy

Scientists unveil robot that can operate on fetuses still in the womb (O Globo)

JC e-mail 4964, June 2, 2014

The machine could prevent congenital disorders

British scientists this week unveiled a small robot capable of operating on fetuses still in their mothers' wombs. The machine, which cost about R$ 30 million, could revolutionize the treatment of congenital malformations.

The tiny device can provide 3D images of babies in the womb. With a view of its “patient”, the robot carries out the medical interventions, controlled by a team of specialists behind the scenes. The invention could, for example, perform surgery or even implant stem cells in a child's malformed organs.

The project is led by engineers at University College London (UCL) and the Catholic University of Leuven, in Belgium. According to research lead Sebastien Ourselin, the machine will avoid risks to mothers and babies alike.

– The objective is to create less invasive surgical technologies to treat a wide range of diseases in the womb, with far less risk to both – Ourselin told The Guardian.

The doctors' first target is the treatment of the most severe cases of spina bifida, a malformation of the spinal column that can affect one in every thousand fetuses. It occurs when the spine does not fully close, allowing amniotic fluid to enter, carrying with it agents that can reach the brain and impair the child's development. The intention is for the new robot to close these gaps in the spine, preventing that damage.

However, scientists warn that operations of this kind carry high surgical risk, with a strong chance of lasting harm to the mothers. Medical interventions on fetuses can currently be performed only after at least 26 weeks of gestation, and the procedure remains all but impossible today.

The robot consists of a very fine, highly flexible probe. Its head would carry a wire fitted with a small camera that would use laser pulses and ultrasound detection, a combination known as photoacoustic imaging, to generate a 3D picture of the inside of the womb. These images would then be used by surgeons to guide the probe to its target: the gap in the fetus's spine.

(O Globo with news agencies)
http://oglobo.globo.com/sociedade/saude/cientistas-lancam-robo-que-pode-fazer-cirurgias-em-fetos-ainda-no-utero-12674796#ixzz33UfgLb9W