Tag archive: Cyborg

When Exponential Progress Becomes Reality (Medium)

Niv Dror

“I used to say that this is the most important graph in all the technology business. I’m now of the opinion that this is the most important graph ever graphed.”

Steve Jurvetson

Moore’s Law

The expectation that your iPhone keeps getting thinner and faster every two years. Happy 50th anniversary.

Components get cheaper, computers get smaller, and a lot of comparison tweets follow.

In 1965, Intel co-founder Gordon Moore made his original observation: over the history of computing hardware, the number of transistors in a dense integrated circuit doubles approximately every two years. The prediction was specific to semiconductors and stretched out only a decade. Its demise has long been predicted, and it will eventually come to an end, but it continues to hold to this day.
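
That arithmetic compounds quickly. A back-of-envelope sketch in Python (assuming one clean doubling every two years, which is an idealization of the real trend):

```python
# Idealized Moore's Law: transistor counts double once every two years.
years = 50
doublings = years // 2                 # 25 doublings in 50 years
factor = 2 ** doublings
print(f"{doublings} doublings over {years} years -> {factor:,}x more transistors")
# -> 25 doublings over 50 years -> 33,554,432x more transistors
```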

Moore’s Law has since expanded beyond semiconductors, reshaping all kinds of businesses, including those not traditionally thought of as tech.

Yes, Box co-founder Aaron Levie is the official spokesperson for Moore’s Law, and we’re all perfectly okay with that. His cloud computing company would not be around without it. He’s grateful. We’re all grateful. Moore’s Law constantly gets referenced in conversation.

It has become both a prediction and an abstraction.

Expanding far beyond its origin as a transistor-centric metric.

But Moore’s Law of integrated circuits is only the most recent paradigm in a much longer and even more profound technological trend.

Humanity’s capacity to compute has been compounding for as long as we have been able to measure it.

5 Computing Paradigms: electromechanical tabulating machines used in the 1890 U.S. Census (built by Herman Hollerith, whose company later became part of IBM) → Alan Turing’s relay-based computer that cracked the Nazi Enigma → the vacuum-tube computer that predicted Eisenhower’s win in 1952 → transistor-based machines used in the first space launches → the integrated-circuit-based personal computer

The Law of Accelerating Returns

In his 1999 book The Age of Spiritual Machines, futurist and author Ray Kurzweil (now a Director of Engineering at Google) proposed “The Law of Accelerating Returns”, according to which the rate of change in a wide variety of evolutionary systems tends to increase exponentially. A specific paradigm, a method or approach to solving a problem (e.g., shrinking transistors on an integrated circuit as an approach to making more powerful computers), provides exponential growth until the paradigm exhausts its potential. When this happens, a paradigm shift (a fundamental change in the technological approach) occurs, enabling the exponential growth to continue.
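
A minimal numeric sketch of that stacking of paradigms, with entirely hypothetical midpoints, ceilings, and growth rates: each paradigm follows a logistic S-curve that eventually saturates, but the envelope of the best available paradigm keeps climbing roughly exponentially.

```python
import math

def logistic(t, midpoint, ceiling, rate=0.5):
    """Price-performance of a single paradigm: an S-curve that flattens at its ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Five hypothetical paradigms: each arrives ~20 years later with a ~100x higher ceiling.
paradigms = [(midpoint, 100.0 ** (i + 1))
             for i, midpoint in enumerate(range(10, 110, 20))]

for t in range(0, 101, 10):
    best = max(logistic(t, m, c) for m, c in paradigms)   # envelope over all paradigms
    print(f"year {t:3d}: best price-performance ~ 10^{math.log10(best):.1f}")
```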

Kurzweil explains:

It is important to note that Moore’s Law of Integrated Circuits was not the first, but the fifth paradigm to provide accelerating price-performance. Computing devices have been consistently multiplying in power (per unit of time) from the mechanical calculating devices used in the 1890 U.S. Census, to Turing’s relay-based machine that cracked the Nazi enigma code, to the vacuum tube computer that predicted Eisenhower’s win in 1952, to the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer.

This graph, which venture capitalist Steve Jurvetson describes as the most important concept ever to be graphed, is Kurzweil’s 110-year version of Moore’s Law. It spans the five computing paradigms that have contributed to the exponential growth in computing.

Each dot represents the best computational price-performance device of its day, and when plotted on a logarithmic scale, the dots fit on the same double exponential curve spanning over a century. This is a very long-lasting and predictable trend. It enables us to plan for a time beyond Moore’s Law without knowing the specifics of the paradigm shift that’s ahead. The next paradigm will advance our ability to compute on such a massive scale that it will be beyond our current ability to comprehend.

The Power of Exponential Growth

Human perception is linear; technological progress is exponential. Our brains are hardwired to have linear expectations because that is all our environment has ever presented. Technology today progresses so fast that the past no longer looks like the present, and the present is nowhere near the future ahead. Then, seemingly out of nowhere, we find ourselves in a reality quite different from what we would expect.

Kurzweil uses the overall growth of the internet as an example. The bottom chart is linear, which makes internet growth seem sudden and unexpected, whereas the top chart, with the same data graphed on a logarithmic scale, tells a very predictable story. On the exponential graph, internet growth doesn’t come out of nowhere; it’s just presented in a way that is more intuitive for us to comprehend.
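
The effect is easy to reproduce. A minimal sketch (with made-up host counts, not Kurzweil’s actual internet data) plotting the same exponential series on both scales:

```python
import matplotlib.pyplot as plt

years = list(range(1980, 2001))
hosts = [100 * 2 ** ((y - 1980) / 1.5) for y in years]   # doubling every ~18 months

fig, (ax_log, ax_lin) = plt.subplots(2, 1, figsize=(6, 6))
ax_log.semilogy(years, hosts)    # logarithmic scale: a steady, predictable straight line
ax_log.set_title("Logarithmic scale: predictable trend")
ax_lin.plot(years, hosts)        # linear scale: a sudden "out of nowhere" hockey stick
ax_lin.set_title("Linear scale: growth seems sudden")
plt.tight_layout()
plt.show()
```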

We are still prone to underestimate the progress that is coming because it’s difficult to internalize the reality that we’re living in a world of exponential technological change. It is a fairly recent development. And it’s important to get an understanding of the massive scale of advancements that the technologies of the future will enable. Particularly now, as we’ve reached what Kurzweil calls the “Second Half of the Chessboard.”

The story goes that the inventor of chess asked his emperor to be paid in rice: one grain on the first square of the board, with the amount doubling on each square thereafter. (In the end the emperor realizes that he’s been tricked by exponents and has the inventor beheaded. In another version of the story the inventor becomes the new emperor.)

It’s important to note that as the emperor and inventor went through the first half of the chessboard things were fairly uneventful. The inventor was first given spoonfuls of rice, then bowls of rice, then barrels, and by the end of the first half of the chessboard the inventor had accumulated one large field’s worth — 4 billion grains — which is when the emperor started to take notice. It was only as they progressed through the second half of the chessboard that the situation quickly deteriorated.

# of Grains on 1st half: 4,294,967,295

# of Grains on 2nd half: 18,446,744,069,414,584,320
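
Both figures fall out of a two-line computation: the second-half total is 2^64 − 2^32, roughly 4.3 billion times the first half.

```python
first_half = sum(2 ** k for k in range(32))        # squares 1-32: 2**32 - 1
second_half = sum(2 ** k for k in range(32, 64))   # squares 33-64: 2**64 - 2**32
print(f"{first_half:,}")    # 4,294,967,295
print(f"{second_half:,}")   # 18,446,744,069,414,584,320
```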

Mind-bending nonlinear gains in computing are about to become far more tangible in our lifetime: there have been slightly more than 32 doublings of performance since the first programmable computers were invented, which puts us right at the start of the second half of the chessboard.

Kurzweil’s Predictions

Kurzweil is known for making mind-boggling predictions about the future. And his track record is pretty good.

“…Ray is the best person I know at predicting the future of artificial intelligence.” —Bill Gates

Ray’s predictions for the future may sound crazy (they do sound crazy), but it’s important to note that it’s not about the specific prediction or the exact year. What’s important to focus on is what they represent. These predictions are based on an understanding of Moore’s Law and Ray’s Law of Accelerating Returns, an awareness of the power of exponential growth, and an appreciation that information technology follows an exponential trend. They may sound crazy, but they are not pulled out of thin air.

And with that being said…

Second Half of the Chessboard Predictions

“By the 2020s, most diseases will go away as nanobots become smarter than current medical technology. Normal human eating can be replaced by nanosystems. The Turing test begins to be passable. Self-driving cars begin to take over the roads, and people won’t be allowed to drive on highways.”

“By the 2030s, virtual reality will begin to feel 100% real. We will be able to upload our mind/consciousness by the end of the decade.”

(Image: https://twitter.com/nivo0o0/status/564309273480409088)

Not quite there yet…

“By the 2040s, non-biological intelligence will be a billion times more capable than biological intelligence (a.k.a. us). Nanotech foglets will be able to make food out of thin air and create any object in the physical world on a whim.”

These clones are cute.

“By 2045, we will multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud.”

Multiplying our intelligence a billionfold by linking our neocortex to a synthetic neocortex in the cloud — what does that actually mean?

In March 2014 Kurzweil gave an excellent talk at the TED Conference. It was appropriately called: Get ready for hybrid thinking.

Here is a summary:

(Image: https://twitter.com/nivo0o0/status/568686671983570944)

These are the highlights:

Nanobots will connect our neocortex to a synthetic neocortex in the cloud, providing an extension of our neocortex.

Our thinking then will be a hybrid of biological and non-biological thinking (the non-biological portion is subject to the Law of Accelerating Returns and it will grow exponentially).

The frontal cortex and neocortex are not really qualitatively different, so it’s a quantitative expansion of the neocortex (like adding processing power).

The last time we expanded our neocortex was about two million years ago. That additional quantity of thinking was the enabling factor for us to take a qualitative leap and advance language, science, art, technology, etc.

We’re going to again expand our neocortex, only this time it won’t be limited by a fixed architecture of enclosure. It will be expanded without limits, by connecting our brain directly to the cloud.

We already carry a supercomputer in our pocket. We have unlimited access to all the world’s knowledge at our fingertips. Keeping in mind that we are prone to underestimate technological advancements (and that 2045 is not a hard deadline) is it really that far of a stretch to imagine a future where we’re always connected directly from our brain?

Progress is underway. We’ll be able to reverse-engineer the neocortex within five years. Kurzweil predicts that by 2030 we’ll be able to reverse-engineer the entire brain. His latest book is called How to Create a Mind… This is the reason Google hired Kurzweil.

Hybrid Human Machines

(Image: https://twitter.com/nivo0o0/status/568686671983570944)

“We’re going to become increasingly non-biological…”

“We’ll also have non-biological bodies…”

“If the biological part went away it wouldn’t make any difference…”

“They will be as realistic as real reality.”

Impact on Society

The technological singularity — “the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing civilization” — is beyond the scope of this article, but these advancements will absolutely have an impact on society. In which direction is yet to be determined.

There may be some regret.

Politicians will not know who/what to regulate.

Evolution may take an unexpected twist.

The rich-poor gap will expand.

The unimaginable will become reality and society will change.

The Cathedral of Computation (The Atlantic)

We’re not living in an algorithmic culture so much as a computational theocracy.

Algorithms are everywhere, supposedly. We are living in an “algorithmic culture,” to use the author and communication scholar Ted Striphas’s name for it. Google’s search algorithms determine how we access information. Facebook’s News Feed algorithms determine how we socialize. Netflix’s and Amazon’s collaborative filtering algorithms choose products and media for us. You hear it everywhere. “Google announced a change to its algorithm,” a journalist reports. “We live in a world run by algorithms,” a TED talk exhorts. “Algorithms rule the world,” a news report threatens. Another upgrades rule to dominion: “The 10 Algorithms that Dominate Our World.”

Here’s an exercise: The next time you hear someone talking about algorithms, replace the term with “God” and ask yourself if the meaning changes. Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers people have allowed to replace gods in their minds, even as they simultaneously claim that science has made us impervious to religion.

It’s part of a larger trend. The scientific revolution was meant to challenge tradition and faith, particularly a faith in religious superstition. But today, Enlightenment ideas like reason and science are beginning to flip into their opposites. Science and technology have become so pervasive and distorted, they have turned into a new type of theology.

The worship of the algorithm is hardly the only example of the theological reversal of the Enlightenment—for another sign, just look at the surfeit of nonfiction books promising insights into “The Science of…” anything, from laughter to marijuana. But algorithms hold a special station in the new technological temple because computers have become our favorite idols.

In fact, our purported efforts to enlighten ourselves about algorithms’ role in our culture sometimes offer an unexpected view into our zealous devotion to them. The media scholar Lev Manovich had this to say about “The Algorithms of Our Lives”:

Software has become a universal language, the interface to our imagination and the world. What electricity and the combustion engine were to the early 20th century, software is to the early 21st century. I think of it as a layer that permeates contemporary societies.

This is a common account of algorithmic culture, that software is a fundamental, primary structure of contemporary society. And like any well-delivered sermon, it seems convincing at first. Until we think a little harder about the historical references Manovich invokes, such as electricity and the engine, and how selectively those specimens characterize a prior era. Yes, they were important, but is it fair to call them paramount and exceptional?

It turns out that we have a long history of explaining the present via the output of industry. These rationalizations are always grounded in familiarity, and thus they feel convincing. But mostly they are metaphors. Here’s Nicholas Carr’s take on metaphorizing progress in terms of contemporary technology, from the 2008 Atlantic cover story that he expanded into his bestselling book The Shallows:

The process of adapting to new intellectual technologies is reflected in the changing metaphors we use to explain ourselves to ourselves. When the mechanical clock arrived, people began thinking of their brains as operating “like clockwork.” Today, in the age of software, we have come to think of them as operating “like computers.”

Carr’s point is that there’s a gap between the world and the metaphors people use to describe that world. We can see how erroneous or incomplete or just plain metaphorical these metaphors are when we look at them in retrospect.

Take the machine. In his book Images of Organization, Gareth Morgan describes the way businesses are seen in terms of different metaphors, among them the organization as machine, an idea that forms the basis for Taylorism.

Gareth Morgan’s metaphors of organization (Venkatesh Rao/Ribbonfarm)

We can find similar examples in computing. For Larry Lessig, the accidental homophony between “code” as the text of a computer program and “code” as the text of statutory law becomes the fulcrum on which his argument that code is an instrument of social control balances.

Each generation, we reset a belief that we’ve reached the end of this chain of metaphors, even though history always proves us wrong precisely because there’s always another technology or trend offering a fresh metaphor. Indeed, an exceptionalism that favors the present is one of the ways that science has become theology.

In fact, Carr fails to heed his own lesson about the temporariness of these metaphors. Just after having warned us that we tend to render current trends into contingent metaphorical explanations, he offers a similar sort of definitive conclusion:

Today, in the age of software, we have come to think of them as operating “like computers.” But the changes, neuroscience tells us, go much deeper than metaphor. Thanks to our brain’s plasticity, the adaptation occurs also at a biological level.

As with the machinic and computational metaphors that he critiques, Carr settles on another seemingly transparent, truth-yielding one. The real firmament is neurological, and computers are futzing with our minds, a fact provable by brain science. And actually, software and neuroscience enjoy a metaphorical collaboration thanks to artificial intelligence’s idea that computing describes or mimics the brain. Computation-as-thought reaches the rank of religious fervor when we choose to believe, as some do, that we can simulate cognition through computation and achieve the singularity.

* * *

The metaphor of mechanical automation has always been misleading anyway, with or without the computation. Take manufacturing. The goods people buy from Walmart appear safely ensconced in their blister packs, as if magically stamped out by unfeeling, silent machines (robots—those original automata—themselves run by those tinier, immaterial robots, algorithms).

But the automation metaphor breaks down once you bother to look at how even the simplest products are really produced. The photographer Michael Wolf’s images of Chinese factory workers and the toys they fabricate show that finishing consumer goods to completion requires intricate, repetitive human effort.

Michael Wolf Photography

Eyelashes must be glued onto dolls’ eyelids. Mickey Mouse heads must be shellacked. Rubber ducky eyes must be painted white. The same sort of manual work is required to create more complex goods too. Like your iPhone—you know, the one that’s designed in California but “assembled in China.” Even though injection-molding machines and other automated devices help produce all the crap we buy, the metaphor of the factory-as-automated-machine obscures the fact that manufacturing is neither as machinic nor as automated as we think it is.

The algorithmic metaphor is just a special version of the machine metaphor, one specifying a particular kind of machine (the computer) and a particular way of operating it (via a step-by-step procedure for calculation). And when left unseen, we are able to invent a transcendental ideal for the algorithm. The canonical algorithm is not just a model sequence but a concise and efficient one. In its ideological, mythic incarnation, the ideal algorithm is thought to be some flawless little trifle of lithe computer code, processing data into tapestry like a robotic silkworm. A perfect flower, elegant and pristine, simple and singular. A thing you can hold in your palm and caress. A beautiful thing. A divine one.

But just as the machine metaphor gives us a distorted view of automated manufacture as prime mover, so the algorithmic metaphor gives us a distorted, theological view of computational action.

“The Google search algorithm” names something with an initial coherence that quickly scurries away once you really look for it. Googling isn’t a matter of invoking a programmatic subroutine—not on its own, anyway. Google is a monstrosity. It’s a confluence of physical, virtual, computational, and non-computational stuffs—electricity, data centers, servers, air conditioners, security guards, financial markets—just like the rubber ducky is a confluence of vinyl plastic, injection molding, the hands and labor of Chinese workers, the diesel fuel of ships and trains and trucks, the steel of shipping containers.

Once you start looking at them closely, every algorithm betrays the myth of unitary simplicity and computational purity. You may remember the Netflix Prize, a million dollar competition to build a better collaborative filtering algorithm for film recommendations. In 2009, the company closed the book on the prize, adding a faux-machined “completed” stamp to its website.

But as it turns out, that method didn’t really improve Netflix’s performance very much. The company ended up downplaying the ratings and instead using something different to manage viewer preferences: very specific genres like “Emotional Hindi-Language Movies for Hopeless Romantics.” Netflix calls them “altgenres.”

An example of a Netflix altgenre in action (tumblr/Genres of Netflix)

While researching an in-depth analysis of altgenres published a year ago at The Atlantic, Alexis Madrigal scraped the Netflix site, downloading all 76,000+ micro-genres using not an algorithm but a hackneyed, long-running screen-scraping apparatus. After acquiring the data, Madrigal and I organized and analyzed it (by hand), and I built a generator that allowed our readers to fashion their own altgenres based on different grammars (like “Deep Sea Forbidden Love Mockumentaries” or “Coming-of-Age Violent Westerns Set in Europe About Cats”).
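
For a sense of how such a grammar-based generator can work, here is a toy sketch (the grammar, slot names, and vocabulary are invented for illustration, not The Atlantic’s actual code):

```python
import random

# Hypothetical grammar: each slot holds tagged fragments mined from altgenre names.
GRAMMAR = {
    "adjective": ["Emotional", "Violent", "Forbidden-Love", "Coming-of-Age"],
    "region":    ["Hindi-Language", "European", "Deep Sea"],
    "genre":     ["Movies", "Mockumentaries", "Westerns"],
    "audience":  ["for Hopeless Romantics", "About Cats", "Set in Europe"],
}

def altgenre():
    """Assemble one random altgenre phrase by filling each slot of the grammar."""
    return " ".join(random.choice(GRAMMAR[slot])
                    for slot in ("adjective", "region", "genre", "audience"))

print(altgenre())  # e.g. "Violent Deep Sea Westerns About Cats"
```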

Netflix VP Todd Yellin explained to Madrigal why the process of generating altgenres is no less manual than our own process of reverse engineering them. Netflix trains people to watch films, and those viewers laboriously tag the films with lots of metadata, including ratings of factors like sexually suggestive content or plot closure. These tailored altgenres are then presented to Netflix customers based on their prior viewing habits.

One of the hypothetical, “gonzo” altgenres created by The Atlantic’s Netflix Genre Generator (The Atlantic)

Despite the initial promise of the Netflix Prize and the lurid appeal of a “million dollar algorithm,” Netflix operates by methods that look more like the Chinese manufacturing processes Michael Wolf’s photographs document. Yes, there’s a computer program matching viewing habits to a database of film properties. But the overall work of the Netflix recommendation system is distributed amongst so many different systems, actors, and processes that only a zealot would call the end result an algorithm.

The same could be said for data, the material algorithms operate upon. Data has become just as theologized as algorithms, especially “big data,” whose name is meant to elevate information to the level of celestial infinity. Today, conventional wisdom would suggest that mystical, ubiquitous sensors are collecting data by the terabyteful without our knowledge or intervention. Even if this is true to an extent, examples like Netflix’s altgenres show that data is created, not simply aggregated, and often by means of laborious, manual processes rather than anonymous vacuum-devices.

Once you adopt skepticism toward the algorithmic- and the data-divine, you can no longer construe any computational system as merely algorithmic. Think about Google Maps, for example. It’s not just mapping software running via computer—it also involves geographical information systems, geolocation satellites and transponders, human-driven automobiles, roof-mounted panoramic optical recording systems, international recording and privacy law, physical- and data-network routing systems, and web/mobile presentational apparatuses. That’s not algorithmic culture—it’s just, well, culture.

* * *

If algorithms aren’t gods, what are they instead? Like metaphors, algorithms are simplifications, or distortions. They are caricatures. They take a complex system from the world and abstract it into processes that capture some of that system’s logic and discard others. And they couple to other processes, machines, and materials that carry out the extra-computational part of their work.

Unfortunately, most computing systems don’t want to admit that they are burlesques. They want to be innovators, disruptors, world-changers, and such zeal requires sectarian blindness. The exception is games, which willingly admit that they are caricatures—and which suffer the consequences of this admission in the court of public opinion. Games know that they are faking it, which makes them less susceptible to theologization. SimCity isn’t an urban planning tool; it’s a cartoon of urban planning. Imagine the folly of thinking otherwise! Yet, that’s precisely the belief people hold of Google and Facebook and the like.

A Google Maps Street View vehicle roams the streets of Washington D.C. Google Maps entails algorithms, but also other things, like internal combustion engine automobiles. (justgrimes/Flickr)

Just as it’s not really accurate to call the manufacture of plastic toys “automated,” it’s not quite right to call Netflix recommendations or Google Maps “algorithmic.” Yes, true, there are algorithms involved, insofar as computers are involved, and computers run software that processes information. But that’s just a part of the story, a theologized version of the diverse, varied array of people, processes, materials, and machines that really carry out the work we shorthand as “technology.” The truth is as simple as it is uninteresting: The world has a lot of stuff in it, all bumping and grinding against one another.

I don’t want to downplay the role of computation in contemporary culture. Striphas and Manovich are right—there are computers in and around everything these days. But the algorithm has taken on a particularly mythical role in our technology-obsessed era, one that has allowed it to wear the garb of divinity. Concepts like “algorithm” have become sloppy shorthands, slang terms for the act of mistaking multipart complex systems for simple, singular ones. Of treating computation theologically rather than scientifically or culturally.

This attitude blinds us in two ways. First, it allows us to chalk up any kind of computational social change as pre-determined and inevitable. It gives us an excuse not to intervene in the social shifts wrought by big corporations like Google or Facebook or their kindred, to see their outcomes as beyond our influence. Second, it makes us forget that particular computational systems are abstractions, caricatures of the world, one perspective among many. The first error turns computers into gods, the second treats their outputs as scripture.

Computers are powerful devices that have allowed us to mimic countless other machines all at once. But in so doing, when pushed to their limits, that capacity to simulate anything reverses into the inability or unwillingness to distinguish one thing from anything else. In its Enlightenment incarnation, the rise of reason represented not only the ascendency of science but also the rise of skepticism, of incredulity at simplistic, totalizing answers, especially answers that made appeals to unseen movers. But today even as many scientists and technologists scorn traditional religious practice, they unwittingly invoke a new theology in so doing.

Algorithms aren’t gods. We need not believe that they rule the world in order to admit that they influence it, sometimes profoundly. Let’s bring algorithms down to earth again. Let’s keep the computer around without fetishizing it, without bowing down to it or shrugging away its inevitable power over us, without melting everything down into it as a new name for fate. I don’t want an algorithmic culture, especially if that phrase just euphemizes a corporate, computational theocracy.

But a culture with computers in it? That might be all right.

Cockroach cyborgs use microphones to detect, trace sounds (Science Daily)

Date: November 6, 2014

Source: North Carolina State University

Summary: Researchers have developed technology that allows cyborg cockroaches, or biobots, to pick up sounds with small microphones and seek out the source of the sound. The technology is designed to help emergency personnel find and rescue survivors in the aftermath of a disaster.


(Image credit: Eric Whitmire)

North Carolina State University researchers have developed technology that allows cyborg cockroaches, or biobots, to pick up sounds with small microphones and seek out the source of the sound. The technology is designed to help emergency personnel find and rescue survivors in the aftermath of a disaster.

The researchers have also developed technology that can be used as an “invisible fence” to keep the biobots in the disaster area.

“In a collapsed building, sound is the best way to find survivors,” says Dr. Alper Bozkurt, an assistant professor of electrical and computer engineering at NC State and senior author of two papers on the work.

The biobots are equipped with electronic backpacks that control the cockroach’s movements. Bozkurt’s research team has created two types of customized backpacks using microphones. One type of biobot has a single microphone that can capture relatively high-resolution sound from any direction to be wirelessly transmitted to first responders.

The second type of biobot is equipped with an array of three directional microphones to detect the direction of the sound. The research team has also developed algorithms that analyze the sound from the microphone array to localize the source of the sound and steer the biobot in that direction. The system worked well during laboratory testing. Video of a laboratory test of the microphone array system is available at http://www.youtube.com/watch?v=oJXEPcv-FMw.
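
The article describes the localization algorithms only at a high level. As an illustration of the general idea (not NC State’s published method), a three-microphone array could pick a steering direction by weighting each microphone’s fixed heading by the sound energy it measures:

```python
import math

# Hypothetical setup: each directional microphone faces a fixed heading
# (degrees) on the biobot's backpack, spaced evenly around the body.
MIC_HEADINGS = [0.0, 120.0, 240.0]

def steer_toward_sound(energies):
    """Return a steering heading: the energy-weighted average of mic headings."""
    # Weight each microphone's heading vector by the energy it measured,
    # then take the angle of the resulting sum vector.
    x = sum(e * math.cos(math.radians(h)) for h, e in zip(MIC_HEADINGS, energies))
    y = sum(e * math.sin(math.radians(h)) for h, e in zip(MIC_HEADINGS, energies))
    return math.degrees(math.atan2(y, x)) % 360

# Example: sound mostly ahead and slightly to one side -> steer between mics 0 and 1.
print(steer_toward_sound([0.9, 0.5, 0.1]))  # ~30 degrees
```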

“The goal is to use the biobots with high-resolution microphones to differentiate between sounds that matter — like people calling for help — from sounds that don’t matter — like a leaking pipe,” Bozkurt says. “Once we’ve identified sounds that matter, we can use the biobots equipped with microphone arrays to zero in on where those sounds are coming from.”

A research team led by Dr. Edgar Lobaton has previously shown that biobots can be used to map a disaster area. The long-term goal, funded by the National Science Foundation’s Cyber-Physical Systems Program, is for Bozkurt and Lobaton to merge their research efforts to both map disaster areas and pinpoint survivors. The researchers are already working with collaborator Dr. Mihail Sichitiu to develop the next generation of biobot networking and localization technology.

Bozkurt’s team also recently demonstrated technology that creates an invisible fence for keeping biobots in a defined area. This is significant because it can be used to keep biobots at a disaster site, and to keep the biobots within range of each other so that they can be used as a reliable mobile wireless network. This technology could also be used to steer biobots to light sources, so that the miniaturized solar panels on biobot backpacks can be recharged. Video of the invisible fence technology in practice can be seen at http://www.youtube.com/watch?v=mWGAKd7_fAM.

A paper on the microphone sensor research, “Acoustic Sensors for Biobotic Search and Rescue,” was presented Nov. 5 at the IEEE Sensors 2014 conference in Valencia, Spain. Lead author of the paper is Eric Whitmire, a former undergraduate at NC State. The paper was co-authored by Tahmid Latif, a Ph.D. student at NC State, and Bozkurt.

The paper on the invisible fence for biobots, “Towards Fenceless Boundaries for Solar Powered Insect Biobots,” was presented Aug. 28 at the 36th Annual International IEEE EMBS Conference in Chicago, Illinois. Latif was the lead author. Co-authors include Tristan Novak, a graduate student at NC State, Whitmire and Bozkurt.

The research was supported by the National Science Foundation under grant number 1239243.

Scientists criticize robot exoskeleton to be shown at the World Cup (Folha de S.Paulo)

JC e-mail 4896, February 17, 2014

The interface would not capture enough brain information for a disabled person to control a structure that lets him walk

Researchers who study the transmission of information from the brain to the muscles are questioning the promise of Brazilian neuroscientist Miguel Nicolelis, who announced that he will have a young person with a spinal cord injury take the opening kick of the World Cup.

In a promotional illustration of the “Andar de Novo” (Walk Again) program led by Nicolelis, a woman wearing robotic armor is shown rising from a wheelchair, walking up to the ball and kicking it.

A scientist who once worked with Nicolelis at the IINN (Instituto Internacional de Neurociências de Natal), however, says that this scene, should it come to pass, is better described as a robot controlling a person’s movements than the other way around.

“This demonstration is premature and, at best, will be just an advertisement for what he hopes will one day happen,” says Edward Tehovnik, an American who left the IINN after an internal split in 2011.

Now a professor at UFRN (Universidade Federal do Rio Grande do Norte), he says Nicolelis has not yet published enough studies to show that his technique is ready to rehabilitate people with neuromotor problems. “I am not saying it will never happen, but at this point it is premature,” he says.

BITS PER SECOND
According to articles recently published by Tehovnik, no research group can yet extract information from the brain at a rate fast enough to control complex movements.

According to the researcher, at a lower rate of “bits” of information per second, an interface connecting a brain to a machine is already capable of simple tasks, such as switching a device on and off, but it could not control an electromechanical leg with precision.
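
A back-of-envelope illustration of that bit-rate argument (the numbers here are invented for illustration, not Tehovnik’s published figures):

```python
import math

# A binary on/off command conveys ~1 bit, issued about once per second.
onoff_bits_per_s = 1.0

# Smooth leg control: several joints, each set to one of many force levels,
# updated many times per second (all three numbers are hypothetical).
joints, force_levels, updates_per_s = 6, 16, 50
leg_bits_per_s = joints * math.log2(force_levels) * updates_per_s

print(f"on/off switch: ~{onoff_bits_per_s:.0f} bit/s")
print(f"precise leg control: ~{leg_bits_per_s:.0f} bits/s")   # ~1200 bits/s
```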

“You cannot get anything resembling a human being walking down the street,” says Tehovnik. According to the scientist, the research field of brain-machine interfaces has been “corrupted” by the money on offer to research groups, which today would be more concerned with raising funds than with solving the scientific problems that still stand as barriers to the field’s progress.

For him, public demonstrations of such an incipient technology feed false expectations in paralyzed people. “I think this should be restricted to the laboratory at this point,” he says. Tehovnik lays out the technical details of his argument in an opinion piece in the magazine “Mente&Cérebro”.

Other scientists who work in Tehovnik’s line of research are less blunt in their criticism of Nicolelis, but they too see an excess of enthusiasm.

Michael Graziano, of Princeton University, says he sees too much emphasis on the engineering side of the projects, at the expense of basic science questions. “Saying that within ten years we will solve these problems sounds very implausible to me.”

(Rafael Garcia/Folha de S.Paulo)
http://www1.folha.uol.com.br/fsp/cienciasaude/152230-cientistas-criticam-esqueleto-robo-a-ser-exibido-na-copa.shtml

When Animals Learn to Control Robots, You Know We’re in Trouble (Wired)

BY WIRED SCIENCE

03.21.13 – 6:30 AM

Unless an asteroid or deadly pandemic wipes us out first, the force we are most afraid will rob us of our place as rulers of Earth is robots. The warnings range from sarcastic to nervous to dead serious, but they all describe the same scenario: Robots become sentient, join forces and turn on us en masse.

But with all the paranoia about machines, we’ve ignored another possibility: Animals learn to control robots and decide it’s their turn to rule the planet. This would be even more dangerous than dolphins evolving opposable thumbs. And the first signs of this coming threat are already starting to appear in laboratories around the world where robots are being driven by birds, trained by moths and controlled by the minds of monkeys.

Emerging Ethical Dilemmas in Science and Technology (Science Daily)

Dec. 17, 2012 — As a new year approaches, the University of Notre Dame’s John J. Reilly Center for Science, Technology and Values has announced its inaugural list of emerging ethical dilemmas and policy issues in science and technology for 2013.

The Reilly Center explores conceptual, ethical and policy issues where science and technology intersect with society from different disciplinary perspectives. Its goal is to promote the advancement of science and technology for the common good.

The center generated its inaugural list with the help of Reilly fellows, other Notre Dame experts and friends of the center.

The center aimed to present a list of items for scientists and laypeople alike to consider in the coming months and years as new technologies develop. It will feature one of these issues on its website each month in 2013, giving readers more information, questions to ask and resources to consult.

The ethical dilemmas and policy issues are:

Personalized genetic tests/personalized medicine

Within the last 10 years, the creation of fast, low-cost genetic sequencing has given the public direct access to genome sequencing and analysis, with little or no guidance from physicians or genetic counselors on how to process the information. What are the potential privacy issues, and how do we protect this very personal and private information? Are we headed toward a new era of therapeutic intervention to increase quality of life, or a new era of eugenics?

Hacking into medical devices

Implanted medical devices, such as pacemakers, are susceptible to hackers. Barnaby Jack, of security vendor IOActive, recently demonstrated the vulnerability of a pacemaker by breaching the security of the wireless device from his laptop and reprogramming it to deliver an 830-volt shock. How do we make sure these devices are secure?

Driverless Zipcars

In three states — Nevada, Florida, and California — it is now legal for Google to operate its driverless cars. Google’s goal is to create a fully automated vehicle that is safer and more effective than a human-operated vehicle, and the company plans to marry this idea with the concept of the Zipcar. The ethics of automation and equality of access for people of different income levels are just a taste of the difficult ethical, legal and policy questions that will need to be addressed.

3-D printing

Scientists are attempting to use 3-D printing to create everything from architectural models to human organs, but we could be looking at a future in which we can print personalized pharmaceuticals or home-printed guns and explosives. For now, 3-D printing is largely the realm of artists and designers, but we can easily envision a future in which 3-D printers are affordable and patterns abound for products both benign and malicious, and that cut out the manufacturing sector completely.

Adaptation to climate change

The differential susceptibility of people around the world to climate change warrants an ethical discussion. We need to identify effective and safe ways to help people deal with the effects of climate change, as well as learn to manage and manipulate wild species and nature in order to preserve biodiversity. Some of these adaptation strategies might be highly technical (e.g., building sea walls to hold back sea-level rise), but others are social and cultural (e.g., changing agricultural practices).

Low-quality and counterfeit pharmaceuticals

Until recently, detecting low-quality and counterfeit pharmaceuticals required access to complex testing equipment, often unavailable in developing countries where these problems abound. The enormous amount of trade in pharmaceutical intermediates and active ingredients raises a number of issues, from the technical (improvements in manufacturing practices and analytical capabilities) to the ethical and legal (for example, India has ruled in favor of manufacturing life-saving drugs even when doing so violates U.S. patent law).

Autonomous systems

Machines (both for peaceful purposes and for war fighting) are increasingly evolving from human-controlled, to automated, to autonomous, with the ability to act on their own without human input. As these systems operate without human control and are designed to function and make decisions on their own, the ethical, legal, social and policy implications have grown exponentially. Who is responsible for the actions undertaken by autonomous systems? If robotic technology can potentially reduce the number of human fatalities, is it the responsibility of scientists to design these systems?

Human-animal hybrids (chimeras)

So far scientists have kept human-animal hybrids on the cellular level. According to some, even more modest experiments involving animal embryos and human stem cells violate human dignity and blur the line between species. Is interspecies research the next frontier in understanding humanity and curing disease, or a slippery slope, rife with ethical dilemmas, toward creating new species?

Ensuring access to wireless and spectrum

Mobile wireless connectivity is having a profound effect on society in both developed and developing countries. These technologies are completely transforming how we communicate, conduct business, learn, form relationships, navigate and entertain ourselves. At the same time, government agencies increasingly rely on the radio spectrum for their critical missions. This confluence of wireless technology developments and societal needs presents numerous challenges and opportunities for making the most effective use of the radio spectrum. We now need to have a policy conversation about how to make the most effective use of the precious radio spectrum, and to close the digital access divide for underserved (rural, low-income, developing areas) populations.

Data collection and privacy

How often do we consider the massive amounts of data we give to commercial entities when we use social media, store discount cards or order goods via the Internet? Now that microprocessors and permanent memory are inexpensive technology, we need to think about the kinds of information that should be collected and retained. Should we create a diabetic insulin implant that could notify your doctor or insurance company when you make poor diet choices, and should that decision make you ineligible for certain types of medical treatment? Should cars be equipped to monitor speed and other measures of good driving, and should this data be subpoenaed by authorities following a crash? These issues require appropriate policy discussions in order to bridge the gap between data collection and meaningful outcomes.

Human enhancements

Pharmaceutical, surgical, mechanical and neurological enhancements are already available for therapeutic purposes. But these same enhancements can be used to magnify human biological function beyond the societal norm. Where do we draw the line between therapy and enhancement? How do we justify enhancing human bodies when so many individuals still lack access to basic therapeutic medicine?

Should Physicians Prescribe Cognitive Enhancers to Healthy Individuals? (Science Daily)

Dec. 17, 2012 — Physicians should not prescribe cognitive enhancers to healthy individuals, states a report being published today in the Canadian Medical Association Journal (CMAJ). Dr. Eric Racine and his research team at the IRCM, the study’s authors, provide their recommendation based on the professional integrity of physicians, the drugs’ uncertain benefits and harms, and limited health care resources.

Prescription stimulants and other neuropharmaceuticals, generally prescribed to treat attention deficit disorder (ADD), are often used by healthy people to enhance concentration, memory, alertness and mood, a phenomenon described as cognitive enhancement.

“Individuals take prescription stimulants to perform better in school or at work,” says Dr. Racine, a Montréal neuroethics specialist and Director of the Neuroethics research unit at the IRCM. “However, because these drugs are available in Canada by prescription only, people must request them from their doctors. Physicians are thus important stakeholders in this debate, given the risks and regulations of prescription drugs and the potential for requests from patients for such cognitive enhancers.”

The prevalence of cognitive enhancers used by students on university campuses ranges from 1 per cent to 11 per cent. Taking such stimulants is associated with risks of dependence, cardiovascular problems, and psychosis.

“Current evidence has not shown that the desired benefits of enhanced mental performance are achieved with these substances,” explains Cynthia Forlini, first author of the study and a doctoral student in Dr. Racine’s research unit. “With uncertain benefits and clear harms, it is difficult to support the notion that physicians should prescribe a medication to a healthy individual for enhancement purposes.”

“Physicians in Canada provide prescriptions through a publicly-funded health care system with expanding demands for care,” adds Ms. Forlini. “Prescribing cognitive enhancers may therefore not be an appropriate use of resources. The concern is that those who need the medication for health reasons but cannot afford it will be at a disadvantage.”

“An international bioethics discussion has surfaced on the ethics of cognitive enhancement and the role of physicians in prescribing stimulants to healthy people,” concludes Dr. Racine. “We hope that our analysis prompts reflection in the Canadian medical community about these cognitive enhancers.”

Éric Racine’s research is funded through a New Investigator Award from the Canadian Institutes for Health Research (CIHR). The report’s co-author is Dr. Serge Gauthier from the McGill Centre for Studies in Aging.

Journal Reference:

  1. Cynthia Forlini, Serge Gauthier, and Eric Racine. Should physicians prescribe cognitive enhancers to healthy individuals? Canadian Medical Association Journal, 2012; DOI: 10.1503/cmaj.121508