Tag archive: Cybernetics

Understanding fruit fly behavior may be next step toward autonomous vehicles (Science Daily)

Could the way Drosophila use their antennae to sense heat help us teach self-driving cars to make decisions?

Date: April 6, 2021

Source: Northwestern University

Summary: With over 70% of respondents to an annual AAA survey on autonomous driving reporting they would fear being in a fully self-driving car, makers like Tesla may be back to the drawing board before rolling out fully autonomous self-driving systems. But new research shows us we may be better off putting fruit flies behind the wheel instead of robots.


With over 70% of respondents to an annual AAA survey on autonomous driving reporting they would fear being in a fully self-driving car, makers like Tesla may be back to the drawing board before rolling out fully autonomous self-driving systems. But new research from Northwestern University shows us we may be better off putting fruit flies behind the wheel instead of robots.

Drosophila have been subjects of science as long as humans have been running experiments in labs. But given their size, it’s easy to wonder what can be learned by observing them. Research published today in the journal Nature Communications demonstrates that fruit flies use decision-making, learning and memory to perform simple functions like escaping heat. And researchers are using this understanding to challenge the way we think about self-driving cars.

“The discovery that flexible decision-making, learning and memory are used by flies during such a simple navigational task is both novel and surprising,” said Marco Gallio, the corresponding author on the study. “It may make us rethink what we need to do to program safe and flexible self-driving vehicles.”

According to Gallio, an associate professor of neurobiology in the Weinberg College of Arts and Sciences, the questions behind this study are similar to those vexing engineers building cars that move on their own. How does a fruit fly (or a car) cope with novelty? How can we build a car that is flexible enough to adapt to new conditions?

This discovery reveals brain functions in the household pest that are typically associated with more complex brains like those of mice and humans.

“Animal behavior, especially that of insects, is often considered largely fixed and hard-wired — like machines,” Gallio said. “Most people have a hard time imagining that animals as different from us as a fruit fly may possess complex brain functions, such as the ability to learn, remember or make decisions.”

To study how fruit flies escape heat, the Gallio lab built a tiny plastic chamber with four floor tiles whose temperatures could be independently controlled, and confined flies inside it. They then used high-resolution video recordings to map how a fly reacted when it encountered a boundary between a warm tile and a cool tile. They found flies were remarkably good at treating heat boundaries as invisible barriers to avoid pain or harm.

Using real measurements, the team created a 3D model to estimate the exact temperature of each part of the fly’s tiny body throughout the experiment. During other trials, they opened a window in the fly’s head and recorded brain activity in neurons that process external temperature signals.

Miguel Simões, a postdoctoral fellow in the Gallio lab and co-first author of the study, said flies are able to determine with remarkable accuracy if the best path to thermal safety is to the left or right. Mapping the direction of escape, Simões said flies “nearly always” escape left when they approach from the right, “like a tennis ball bouncing off a wall.”

“When flies encounter heat, they have to make a rapid decision,” Simões said. “Is it safe to continue, or should it turn back? This decision is highly dependent on how dangerous the temperature is on the other side.”

Observing the simple response reminded the scientists of one of the classic concepts in early robotics.

“In his famous book, the cyberneticist Valentino Braitenberg imagined simple models made of sensors and motors that could come close to reproducing animal behavior,” said Josh Levy, an applied math graduate student and a member of the labs of Gallio and applied math professor William Kath. “The vehicles are a combination of simple wires, but the resulting behavior appears complex and even intelligent.”

Braitenberg argued that much of animal behavior could be explained by the same principles. But does that mean fly behavior is as predictable as that of one of Braitenberg’s imagined robots?
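
Braitenberg's wiring is simple enough to sketch in a few lines of code. What follows is a minimal illustration of the idea, not the Northwestern team's model: a made-up heat field and an uncrossed sensor-to-motor wiring (Braitenberg's "fear" vehicle), in which the hotter side drives its own wheel harder and the vehicle veers away from the warmth. Every name and parameter here is invented for illustration.

```python
import math

def heat(x, y):
    # Hypothetical heat field: a single hot spot at the origin.
    return 1.0 / (1.0 + x * x + y * y)

def step(x, y, theta, dt=0.1, base=0.5, gain=2.0, half_track=0.2):
    # Two heat sensors mounted ahead-left and ahead-right of the body.
    lx, ly = x + math.cos(theta + 0.5), y + math.sin(theta + 0.5)
    rx, ry = x + math.cos(theta - 0.5), y + math.sin(theta - 0.5)
    # Uncrossed excitatory wiring (Braitenberg's "fear" vehicle):
    # each sensor speeds up the wheel on its own side, so the hotter
    # side pushes harder and the vehicle turns away from the heat.
    v_left = base + gain * heat(lx, ly)
    v_right = base + gain * heat(rx, ry)
    v = (v_left + v_right) / 2.0
    omega = (v_right - v_left) / (2.0 * half_track)  # differential drive
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

pose = (2.0, 0.0, math.pi / 2)   # start east of the hot spot, heading north
for _ in range(200):
    pose = step(*pose)           # the vehicle steers itself away
```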

The Northwestern team built a computer simulation of a vehicle with the same wiring and algorithm as a Braitenberg vehicle to see how closely it could replicate fly behavior. After running model race simulations, the team ran a natural selection process of sorts, choosing the cars that did best and mutating them slightly before recombining them with other high-performing vehicles. Levy ran 500 generations of evolution on Northwestern's powerful computing cluster, building cars they ultimately hoped would do as well as flies at escaping the virtual heat.
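
The selection-mutation-recombination loop described above is, in outline, a classic genetic algorithm. Here is a hedged sketch of such a loop; the fitness function in particular is a stand-in (the real one scored simulated heat escapes in the virtual arena), and the population size, mutation rate and other parameters are invented:

```python
import random

def fitness(genome):
    # Stand-in score so the loop runs end to end: distance of the
    # wiring weights from an arbitrary target. The real fitness was
    # performance in simulated heat-escape trials.
    target = [0.5, 2.0, -1.0, 0.25]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(pop_size=100, n_genes=4, generations=500,
           elite_frac=0.2, mut_rate=0.1, mut_size=0.3):
    pop = [[random.uniform(-2, 2) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # evaluate
        parents = pop[:int(elite_frac * pop_size)]   # select the best
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            # Recombine wiring weights from two high performers...
            child = [random.choice(pair) for pair in zip(a, b)]
            # ...then mutate each weight with small probability.
            child = [g + random.gauss(0, mut_size)
                     if random.random() < mut_rate else g
                     for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best_vehicle = evolve()
```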

This simulation demonstrated that “hard-wired” vehicles eventually evolved to perform nearly as well as flies. But while real flies continued to improve performance over time, learning to adopt better strategies and become more efficient, the vehicles remained “dumb” and inflexible. The researchers also discovered that even as flies performed the simple task of escaping the heat, their behavior remained somewhat unpredictable, leaving space for individual decisions. Finally, the scientists observed that while flies missing an antenna adapted and figured out new strategies to escape heat, vehicles “damaged” in the same way were unable to cope with the new situation and turned in the direction of the missing part, eventually getting trapped in a spin like a dog chasing its tail.

Gallio said the idea that simple navigation contains such complexity provides fodder for future work in this area.

Work in the Gallio lab is supported by the NIH (Award Nos. R01NS086859 and R21EY031849), the Pew Scholars Program in the Biomedical Sciences and a McKnight Technological Innovation in Neuroscience Award.


Story Source:

Materials provided by Northwestern University. Original written by Lila Reynolds. Note: Content may be edited for style and length.


Journal Reference:

  1. José Miguel Simões, Joshua I. Levy, Emanuela E. Zaharieva, Leah T. Vinson, Peixiong Zhao, Michael H. Alpert, William L. Kath, Alessia Para, Marco Gallio. Robustness and plasticity in Drosophila heat avoidance. Nature Communications, 2021; 12 (1) DOI: 10.1038/s41467-021-22322-w

Multispecies Epistemes (Knowledge Ecology)

March 23, 2015

The epistemic import of camouflage vis-à-vis notions of realism is an under-researched area of inquiry.

Camouflaged critters bring to mind not just the intersubjective character of perception but also its interspecies reality.

Different organisms hide not just from us humans but also from a wide variety of other species, playing on appearances.

This means that we humans encounter phenomena in terms of specific perceptual capacities, but not in a way entirely alien to other species.

The point is not to efface differences across species but to explore multispecies entanglements in perception.

Because the aesthetic play of appearances can be life or death in multispecies epistemes.

Butterflies, Ants and the Internet of Things (Wired)

[Isn’t it scary that there are bright people who are that innocent? Or perhaps this is just a propaganda piece. – RT]

BY GEOFF WEBB, NETIQ

12.10.14  |  12:41 PM

Buckminster Fuller once wrote, “there is nothing in the caterpillar that tells you it’s going to be a butterfly.” It’s true that often our capacity to look at things and truly understand their final form is very limited. Nor can we necessarily predict what happens when many small changes combine – when small pebbles roll down a hillside and turn into a landslide that dams a river and floods a plain.

This is the situation we face now as we try to understand the final form and impact of the Internet of Things (IoT). Countless small, technological pebbles have begun to roll down the hillside from initial implementation to full realization.  In this case, the “pebbles” are the billions of sensors, actuators, and smart technologies that are rapidly forming the Internet of Things. And like the caterpillar in Fuller’s quote, the final shape of the IoT may look very different from our first guesses.

Whatever the world looks like once the IoT bears full fruit, the experience of our lives will be markedly different. The world around us will not only be aware of our presence, it will know who we are, and it will react to us, often before we are even aware of it. The day-to-day process of living will change because almost every piece of technology we touch (and many we do not) will begin to tailor its behavior to our specific needs and desires. Our car will talk to our house.

Walking into a store will be very different, as the displays around us could modify their behavior based on our preferences and buying habits.  The office of the future will be far more adaptive, less rigid, more connected – the building will know who we are and will be ready for us when we arrive.  Everything, from the way products are built and packaged and the way our buildings and cities are managed, to the simple process of travelling around, interacting with each other, will change and change dramatically. And it’s happening now.

We’re already seeing mainstream manufacturers building IoT awareness into their products, such as Whirlpool building Internet-aware washing machines, and specialized IoT consumer tech such as LIFX light bulbs which can be managed from a smartphone and will respond to events in your house. Even toys are becoming more and more connected as our children go online at even younger ages.  And while many of the consumer purchases may already be somehow “IoT” aware, we are still barely scratching the surface of the full potential of a fully connected world. The ultimate impact of the IoT will run far deeper, into the very fabric of our lives and the way we interact with the world around us.

One example is the German port of Hamburg. The Hamburg Port Authority is building what it refers to as a smartPort, literally embedding millions of sensors in everything from container-handling systems to street lights to provide the data and management capabilities to move cargo through the port more efficiently, avoid traffic snarl-ups, and even predict environmental impacts through sensors that respond to noise and air pollution.

Securing all those devices and sensors will require a new way of thinking about technology and the interactions of “things,” people, and data. What we must do, then, is to adopt an approach that scales to manage the staggering numbers of these sensors and devices, while still enabling us to identify when they are under attack or being misused.

This is essentially the same problem we already face when dealing with human beings – how do I know when someone is doing something they shouldn’t? Specifically, how can I identify a bad person in a crowd of law-abiding citizens?

The best answer is what I like to call the “Vegas Solution.” Rather than adopting a model that screens every person as they enter a casino, the security folks out in Nevada watch for behavior that indicates someone is up to no good, and then respond accordingly. It’s low impact for everyone else, but works with ruthless efficiency (as anyone who has ever tried counting cards in a casino will tell you).

This approach focuses on known behaviors and looks for anomalies. It is, at its most basic, the practical application of “identity.” If I understand the identity of the people I am watching, and as a result, their behavior, I can tell when someone is acting badly.

Now scale this up to the vast number of devices and sensors out there in the nascent IoT. If I understand the “identity” of all those washing machines, smart cars, traffic light sensors, industrial robots, and so on, I can determine what they should be doing, see when that behavior changes (even in subtle ways such as how they communicate with each other) and respond quickly when I detect something potentially bad.
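
In code, this identity-based monitoring reduces to keeping a behavioral baseline per device and flagging statistical outliers against it. A minimal sketch, assuming a single numeric signal per observation (message rate, say); a real deployment would track many signals with far smarter models, and every name here is invented:

```python
import math
from collections import defaultdict

class BehaviorMonitor:
    """Per-device behavioral baseline with simple outlier flagging."""

    def __init__(self, window=100, threshold=3.0):
        self.history = defaultdict(list)   # device_id -> recent readings
        self.window = window
        self.threshold = threshold

    def observe(self, device_id, value):
        # 'value' might be messages per minute, payload size, etc.
        hist = self.history[device_id]
        if len(hist) >= self.window:
            mean = sum(hist) / len(hist)
            std = math.sqrt(sum((x - mean) ** 2 for x in hist) / len(hist)) or 1.0
            if abs(value - mean) / std > self.threshold:
                return "anomaly"   # unlike this device's own identity
        hist.append(value)         # normal readings update the baseline
        if len(hist) > self.window:
            hist.pop(0)
        return "ok"

monitor = BehaviorMonitor()
for rate in 100 * [5.0] + [500.0]:          # steady chatter, then a spike
    status = monitor.observe("washer-42", rate)
print(status)                               # "anomaly"
```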

The approach is sound; in fact, it’s probably the only approach that will scale to meet the complexity of all those billions upon billions of “things” that make up the IoT. The challenge is that a concept of identity must now be applied to vastly more “things” than we have ever managed before. If there is to be an “Internet of Everything,” there will need to be an “Identity of Everything” to go with it. And those identities will tell us what each device is, when it was created, how it should behave, what it is capable of, and so on. There are already proposed standards for this kind of thing, such as the UK’s HyperCat standard, which lets one device figure out what another device it can talk to actually does and therefore what kind of information it might want to share.
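
To make the “identity of things” idea concrete, here is a toy device catalogue and lookup. It is loosely inspired by the notion of a machine-readable catalogue such as HyperCat, but the keys and structure below are invented for illustration and are not the actual HyperCat JSON vocabulary:

```python
# A toy catalogue: each item is a resource plus rel/val metadata pairs.
catalogue = {
    "items": [
        {
            "href": "https://example.org/devices/washer-42",
            "metadata": [
                {"rel": "type", "val": "washing-machine"},
                {"rel": "capability", "val": "report-cycle-status"},
                {"rel": "capability", "val": "remote-start"},
            ],
        },
    ],
}

def capabilities(cat, href):
    # What does the device behind 'href' claim it can do?
    for item in cat["items"]:
        if item["href"] == href:
            return [m["val"] for m in item["metadata"]
                    if m["rel"] == "capability"]
    return []

print(capabilities(catalogue, "https://example.org/devices/washer-42"))
```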

Where things get really interesting, however, is when we start to watch the interactions of all these identities – and especially the interactions of the “thing” identities and our own. How we humans, as Internet users, interact with all the devices around us will provide even more insight into our lives, wants, and behaviors. Watching how I interact with my car, and the car with the road, and so on, will help manage city traffic far more efficiently than broad-brush traffic studies. Likewise, as the wearable technology I have on my person (or in my person) interacts with the sensors around me, so my experience of almost everything, from shopping to public services, can be tailored and managed more efficiently. This, ultimately, is the promise of the IoT: a world that is responsive, intelligent and tailored for every situation.

As we continue to add more and more sensors and smart devices, the potential power of the IoT grows.  Many small, slightly smart things have a habit of combining to perform amazing feats. Taking another example from nature, leaf-cutter ants (tiny in the extreme) nevertheless combine to form the second most complex social structures on earth (after humans) and can build staggeringly large homes.

When we combine the billions of smart devices into the final IoT, we should expect to be surprised by the final form all those interactions take, and by the complexity of the thing we create.  Those things can and will work together, and how they behave will be defined by the identities we give them today.

Geoff Webb is Director of Solution Strategy at NetIQ.

Intelligent robots could bring about the end of the human race, says Stephen Hawking (Folha de S.Paulo)

SALVADOR NOGUEIRA

CONTRIBUTING REPORTER FOR FOLHA

16/12/2014 02h03

British physicist Stephen Hawking is making waves again. In an interview with the BBC, he warned of the dangers of developing superintelligent machines.

“The primitive forms of artificial intelligence we have now have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race,” the scientist said.

He echoes a growing number of experts – from philosophers to technologists – who point to the uncertainties brought on by the development of thinking machines.

Recently, another luminary to speak out was Elon Musk, the South African who made his fortune creating an internet payment system and who now develops rockets and spacecraft for the American space program.

In October, speaking to students at MIT (Massachusetts Institute of Technology), he issued a similar warning.

“I think we need to be very careful with artificial intelligence. If I had to guess at our biggest existential threat, it’s probably that.”

For Musk, the matter is so serious that he believes control mechanisms need to be developed, perhaps at an international level, “just to make sure we don’t do something really stupid.”

SUPERINTELLIGENCE

The concern goes back a long way. In 1965, Gordon Moore, co-founder of Intel, noticed that the capacity of computers doubled roughly every two years.

Because the effect is exponential, in a short time we went from modest calculating machines to supercomputers capable of simulating the evolution of the Universe. That is no small feat.

Computers have not yet surpassed the processing capacity of the human brain. But only just.

“The brain as a whole performs about 10,000 trillion operations per second,” says physicist Paul Davies of Arizona State University. “The fastest computer reaches 360 trillion, so nature is still ahead. But not for long.”
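
Taking the article’s own figures at face value, the remaining gap is smaller than it sounds. A back-of-the-envelope calculation, assuming clean Moore’s-law doubling every two years:

```python
import math

brain_ops = 10_000e12   # "10,000 trillion operations per second"
computer_ops = 360e12   # "the fastest computer reaches 360 trillion"
years_per_doubling = 2  # Moore's observation, roughly

doublings_needed = math.log2(brain_ops / computer_ops)   # about 4.8
years = doublings_needed * years_per_doubling            # about 9.6
print(f"Raw parity in roughly {years:.1f} years at that pace")
```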

Some technologists celebrate this imminent overtaking, among them the American inventor Ray Kurzweil, who has lately been working in partnership with Google to advance the field of AI (artificial intelligence).

He estimates that machines with an intellectual capacity similar to a human’s will appear in 2029. That is roughly the time frame Musk imagines for the threat to emerge.

“Artificial intelligence will take off on its own, redesigning itself at an ever-increasing rate,” Hawking suggested.

The result: not only would machines be more intelligent than we are, they would also be constantly improving themselves. Should they develop consciousness, what will they do with us?

Kurzweil prefers to think they will help us solve social problems and will integrate into civilization. But even he admits there are no guarantees. “I think the best defense is to cultivate values such as democracy, tolerance, freedom,” he told Folha.

For him, machines created in that environment would learn the same values. “It is not a foolproof strategy,” says Kurzweil. “But it is the best we can do.”

While Musk suggests controls on the technology, Kurzweil believes we have already passed the point of no return – we are on our way to the “technological singularity,” when AI will radically transform civilization.

The rise of data and the death of politics (The Guardian)

Tech pioneers in the US are advocating a new data-based approach to governance – ‘algorithmic regulation’. But if technology provides the answers to society’s problems, what happens to governments?

The Observer, Sunday 20 July 2014

Government by social network? US president Barack Obama with Facebook founder Mark Zuckerberg. Photograph: Mandel Ngan/AFP/Getty Images

On 24 August 1965 Gloria Placente, a 34-year-old resident of Queens, New York, was driving to Orchard Beach in the Bronx. Clad in shorts and sunglasses, the housewife was looking forward to quiet time at the beach. But the moment she crossed the Willis Avenue bridge in her Chevrolet Corvair, Placente was surrounded by a dozen patrolmen. There were also 125 reporters, eager to witness the launch of New York police department’s Operation Corral – an acronym for Computer Oriented Retrieval of Auto Larcenists.

Fifteen months earlier, Placente had driven through a red light and neglected to answer the summons, an offence that Corral was going to punish with a heavy dose of techno-Kafkaesque. It worked as follows: a police car stationed at one end of the bridge radioed the licence plates of oncoming cars to a teletypist miles away, who fed them to a Univac 490 computer, an expensive $500,000 toy ($3.5m in today’s dollars) on loan from the Sperry Rand Corporation. The computer checked the numbers against a database of 110,000 cars that were either stolen or belonged to known offenders. In case of a match the teletypist would alert a second patrol car at the bridge’s other exit. It took, on average, just seven seconds.

Compared with the impressive police gear of today – automatic number plate recognition, CCTV cameras, GPS trackers – Operation Corral looks quaint. And the possibilities for control will only expand. European officials have considered requiring all cars entering the European market to feature a built-in mechanism that allows the police to stop vehicles remotely. Speaking earlier this year, Jim Farley, a senior Ford executive, acknowledged that “we know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.” That last bit didn’t sound very reassuring and Farley retracted his remarks.

As both cars and roads get “smart,” they promise nearly perfect, real-time law enforcement. Instead of waiting for drivers to break the law, authorities can simply prevent the crime. Thus, a 50-mile stretch of the A14 between Felixstowe and Rugby is to be equipped with numerous sensors that would monitor traffic by sending signals to and from mobile phones in moving vehicles. The telecoms watchdog Ofcom envisions that such smart roads connected to a centrally controlled traffic system could automatically impose variable speed limits to smooth the flow of traffic but also direct the cars “along diverted routes to avoid the congestion and even [manage] their speed”.

Other gadgets – from smartphones to smart glasses – promise even more security and safety. In April, Apple patented technology that deploys sensors inside the smartphone to analyse if the car is moving and if the person using the phone is driving; if both conditions are met, it simply blocks the phone’s texting feature. Intel and Ford are working on Project Mobil – a face recognition system that, should it fail to recognise the face of the driver, would not only prevent the car being started but also send the picture to the car’s owner (bad news for teenagers).

The car is emblematic of transformations in many other domains, from smart environments for “ambient assisted living” where carpets and walls detect that someone has fallen, to various masterplans for the smart city, where municipal services dispatch resources only to those areas that need them. Thanks to sensors and internet connectivity, the most banal everyday objects have acquired tremendous power to regulate behaviour. Even public toilets are ripe for sensor-based optimisation: the Safeguard Germ Alarm, a smart soap dispenser developed by Procter & Gamble and used in some public WCs in the Philippines, has sensors monitoring the doors of each stall. Once you leave the stall, the alarm starts ringing – and can only be stopped by a push of the soap-dispensing button.

In this context, Google’s latest plan to push its Android operating system on to smart watches, smart cars, smart thermostats and, one suspects, smart everything, looks rather ominous. In the near future, Google will be the middleman standing between you and your fridge, you and your car, you and your rubbish bin, allowing the National Security Agency to satisfy its data addiction in bulk and via a single window.

This “smartification” of everyday life follows a familiar pattern: there’s primary data – a list of what’s in your smart fridge and your bin – and metadata – a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses – one recent model promises to track respiration and heart rates and how much you move during the night – and smart utensils that provide nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be – to use the buzzwords of the day – “evidence-based” and “results-oriented,” technology is here to help.

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O’Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term “web 2.0”) has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O’Reilly makes an intriguing case for the virtues of algorithmic regulation – a case that deserves close scrutiny both for what it promises policymakers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can’t write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it’s time to find another rule for finding a good rule – and so on. An algorithm can do this, but it’s the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it’s not just spam: your bank uses similar methods to spot credit-card fraud.
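
The feedback loop O’Reilly points to is easy to caricature in code. Below is a minimal sketch of a word-counting spam filter that retrains on user flags; real filters are far more elaborate, and this toy naive-Bayes scorer is only meant to show how the effective “rules” shift with each piece of feedback:

```python
import math
from collections import Counter

class FeedbackSpamFilter:
    """Toy word-based filter retrained by real-time user feedback."""

    def __init__(self):
        self.words = {"spam": Counter(), "ham": Counter()}
        self.msgs = {"spam": 1, "ham": 1}   # smoothed message counts

    def spam_logodds(self, tokens):
        # Log-odds that a message is spam, given the words it contains.
        score = math.log(self.msgs["spam"] / self.msgs["ham"])
        for t in tokens:
            p_spam = (self.words["spam"][t] + 1) / (self.msgs["spam"] + 2)
            p_ham = (self.words["ham"][t] + 1) / (self.msgs["ham"] + 2)
            score += math.log(p_spam / p_ham)
        return score

    def feedback(self, tokens, label):
        # Every "mark as spam" / "not spam" click is a training signal:
        # the effective rules shift without anyone rewriting them.
        for t in tokens:
            self.words[label][t] += 1
        self.msgs[label] += 1

f = FeedbackSpamFilter()
f.feedback("win cash now".split(), "spam")
f.feedback("lunch at noon".split(), "ham")
print(f.spam_logodds("cash now".split()) > 0)   # True
```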

In his essay, O’Reilly draws broader philosophical lessons from such technologies, arguing that they work because they rely on “a deep understanding of the desired outcome” (spam is bad!) and periodically check if the algorithms are actually working as expected (are too many legitimate emails ending up marked as spam?).

O’Reilly presents such technologies as novel and unique – we are living through a digital revolution after all – but the principle behind “algorithmic regulation” would be familiar to the founders of cybernetics – a discipline that, even in its name (it means “the science of governance”) hints at its great regulatory ambitions. This principle, which allows the system to maintain its stability by constantly learning and adapting itself to the changing circumstances, is what the British psychiatrist Ross Ashby, one of the founding fathers of cybernetics, called “ultrastability”.

To illustrate it, Ashby designed the homeostat. This clever device consisted of four interconnected RAF bomb control units – mysterious looking black boxes with lots of knobs and switches – that were sensitive to voltage fluctuations. If one unit stopped working properly – say, because of an unexpected external disturbance – the other three would rewire and regroup themselves, compensating for its malfunction and keeping the system’s overall output stable.

Ashby’s homeostat achieved “ultrastability” by always monitoring its internal state and cleverly redeploying its spare resources.

Like the spam filter, it didn’t have to specify all the possible disturbances – only the conditions for how and when it must be updated and redesigned. This is no trivial departure from how the usual technical systems, with their rigid, if-then rules, operate: suddenly, there’s no need to develop procedures for governing every contingency, for – or so one hopes – algorithms and real-time, immediate feedback can do a better job than inflexible rules out of touch with reality.
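
A homeostat-flavoured sketch of ultrastability: the system does not know what disturbed it, only that an essential variable has left its safe bounds, and it rewires itself at random until stability returns. (Toy code; Ashby’s device did this electromechanically, with uniselectors stepping through wiring configurations.)

```python
import random

def output(state, weights):
    # The "essential variable" the homeostat must keep within bounds.
    return sum(s * w for s, w in zip(state, weights))

def ultrastable(state, weights, bounds=(-1.0, 1.0)):
    """Rewire at random until the essential variable is back in range.
    The system never models the disturbance itself, only the symptom."""
    lo, hi = bounds
    while not lo <= output(state, weights) <= hi:
        # A random step-change of the wiring, like a uniselector
        # clicking over to a new configuration.
        weights = [random.uniform(-1, 1) for _ in weights]
    return weights

state = [0.8, -0.3, 0.5, 0.9]       # voltages from four coupled units
weights = [2.0, 1.5, -2.5, 3.0]     # a deliberately unstable wiring
weights = ultrastable(state, weights)
```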

Algorithmic regulation could certainly make the administration of existing laws more efficient. If it can fight credit-card fraud, why not tax fraud? Italian bureaucrats have experimented with the redditometro, or income meter, a tool for comparing people’s spending patterns – recorded thanks to an arcane Italian law – with their declared income, so that authorities know when you spend more than you earn. Spain has expressed interest in a similar tool.
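
The redditometro’s core test is simple to state in code; the tolerance and figures below are invented for illustration:

```python
def redditometro_flag(declared_income, recorded_spending, tolerance=0.2):
    # Flag a taxpayer whose recorded spending exceeds declared income
    # by more than a tolerance margin.
    return recorded_spending > declared_income * (1 + tolerance)

redditometro_flag(30_000, 55_000)   # True: spending far outruns income
```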

Such systems, however, are toothless against the real culprits of tax evasion – the super-rich families who profit from various offshoring schemes or simply write outrageous tax exemptions into the law. Algorithmic regulation is perfect for enforcing the austerity agenda while leaving those responsible for the fiscal crisis off the hook. To understand whether such systems are working as expected, we need to modify O’Reilly’s question: for whom are they working? If it’s just the tax-evading plutocrats, the global financial institutions interested in balanced national budgets and the companies developing income-tracking software, then it’s hardly a democratic success.

With his belief that algorithmic regulation is based on “a deep understanding of the desired outcome”, O’Reilly cunningly disconnects the means of doing politics from its ends. But the how of politics is as important as the what of politics – in fact, the former often shapes the latter. Everybody agrees that education, health, and security are all “desired outcomes”, but how do we achieve them? In the past, when we faced the stark political choice of delivering them through the market or the state, the lines of the ideological debate were clear. Today, when the presumed choice is between the digital and the analog or between the dynamic feedback and the static law, that ideological clarity is gone – as if the very choice of how to achieve those “desired outcomes” was apolitical and didn’t force us to choose between different and often incompatible visions of communal living.

By assuming that the utopian world of infinite feedback loops is so efficient that it transcends politics, the proponents of algorithmic regulation fall into the same trap as the technocrats of the past. Yes, these systems are terrifyingly efficient – in the same way that Singapore is terrifyingly efficient (O’Reilly, unsurprisingly, praises Singapore for its embrace of algorithmic regulation). And while Singapore’s leaders might believe that they, too, have transcended politics, it doesn’t mean that their regime cannot be assessed outside the linguistic swamp of efficiency and innovation – by using political, not economic benchmarks.

As Silicon Valley keeps corrupting our language with its endless glorification of disruption and efficiency – concepts at odds with the vocabulary of democracy – our ability to question the “how” of politics is weakened. Silicon Valley’s default answer to the how of politics is what I call solutionism: problems are to be dealt with via apps, sensors, and feedback loops – all provided by startups. Earlier this year Google’s Eric Schmidt even promised that startups would provide the solution to the problem of economic inequality: the latter, it seems, can also be “disrupted”. And where the innovators and the disruptors lead, the bureaucrats follow.

The intelligence services embraced solutionism before other government agencies. Thus, they reduced the topic of terrorism from a subject that had some connection to history and foreign policy to an informational problem of identifying emerging terrorist threats via constant surveillance. They urged citizens to accept that instability is part of the game, that its root causes are neither traceable nor reparable, that the threat can only be pre-empted by out-innovating and out-surveilling the enemy with better communications.

Speaking in Athens last November, the Italian philosopher Giorgio Agamben discussed an epochal transformation in the idea of government, “whereby the traditional hierarchical relation between causes and effects is inverted, so that, instead of governing the causes – a difficult and expensive undertaking – governments simply try to govern the effects”.

Governments’ current favourite psychologist, Daniel Kahneman. Photograph: Richard Saker for the Observer

For Agamben, this shift is emblematic of modernity. It also explains why the liberalisation of the economy can co-exist with the growing proliferation of control – by means of soap dispensers and remotely managed cars – into everyday life. “If government aims for the effects and not the causes, it will be obliged to extend and multiply control. Causes demand to be known, while effects can only be checked and controlled.” Algorithmic regulation is an enactment of this political programme in technological form.

The true politics of algorithmic regulation become visible once its logic is applied to the social nets of the welfare state. There are no calls to dismantle them, but citizens are nonetheless encouraged to take responsibility for their own health. Consider how Fred Wilson, an influential US venture capitalist, frames the subject. “Health… is the opposite side of healthcare,” he said at a conference in Paris last December. “It’s what keeps you out of the healthcare system in the first place.” Thus, we are invited to start using self-tracking apps and data-sharing platforms and monitor our vital indicators, symptoms and discrepancies on our own.

This goes nicely with recent policy proposals to save troubled public services by encouraging healthier lifestyles. Consider a 2013 report by Westminster council and the Local Government Information Unit, a thinktank, calling for the linking of housing and council benefits to claimants’ visits to the gym – with the help of smartcards. They might not be needed: many smartphones are already tracking how many steps we take every day (Google Now, the company’s virtual assistant, keeps score of such data automatically and periodically presents it to users, nudging them to walk more).

The numerous possibilities that tracking devices offer to health and insurance industries are not lost on O’Reilly. “You know the way that advertising turned out to be the native business model for the internet?” he wondered at a recent conference. “I think that insurance is going to be the native business model for the internet of things.” Things do seem to be heading that way: in June, Microsoft struck a deal with American Family Insurance, the eighth-largest home insurer in the US, in which both companies will fund startups that want to put sensors into smart homes and smart cars for the purposes of “proactive protection”.

An insurance company would gladly subsidise the costs of installing yet another sensor in your house – as long as it can automatically alert the fire department or make front porch lights flash in case your smoke detector goes off. For now, accepting such tracking systems is framed as an extra benefit that can save us some money. But when do we reach a point where not using them is seen as a deviation – or, worse, an act of concealment – that ought to be punished with higher premiums?

Or consider a May 2014 report from 2020health, another thinktank, proposing to extend tax rebates to Britons who give up smoking, stay slim or drink less. “We propose ‘payment by results’, a financial reward for people who become active partners in their health, whereby if you, for example, keep your blood sugar levels down, quit smoking, keep weight off, [or] take on more self-care, there will be a tax rebate or an end-of-year bonus,” they state. Smart gadgets are the natural allies of such schemes: they document the results and can even help achieve them – by constantly nagging us to do what’s expected.

The unstated assumption of most such reports is that the unhealthy are not only a burden to society but that they deserve to be punished (fiscally for now) for failing to be responsible. For what else could possibly explain their health problems but their personal failings? It’s certainly not the power of food companies or class-based differences or various political and economic injustices. One can wear a dozen powerful sensors, own a smart mattress and even do a close daily reading of one’s poop – as some self-tracking aficionados are wont to do – but those injustices would still be nowhere to be seen, for they are not the kind of stuff that can be measured with a sensor. The devil doesn’t wear data. Social injustices are much harder to track than the everyday lives of the individuals whose lives they affect.

In shifting the focus of regulation from reining in institutional and corporate malfeasance to perpetual electronic guidance of individuals, algorithmic regulation offers us a good-old technocratic utopia of politics without politics. Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analog era – to be solved through data collection – and not as inevitable results of economic or ideological conflicts.

However, a politics without politics does not mean a politics without control or administration. As O’Reilly writes in his essay: “New technologies make it possible to reduce the amount of regulation while actually increasing the amount of oversight and production of desirable outcomes.” Thus, it’s a mistake to think that Silicon Valley wants to rid us of government institutions. Its dream state is not the small government of libertarians – a small state, after all, needs neither fancy gadgets nor massive servers to process the data – but the data-obsessed and data-obese state of behavioural economists.

The nudging state is enamoured of feedback technology, for its key founding principle is that while we behave irrationally, our irrationality can be corrected – if only the environment acts upon us, nudging us towards the right option. Unsurprisingly, one of the three lonely references at the end of O’Reilly’s essay is to a 2012 speech entitled “Regulation: Looking Backward, Looking Forward” by Cass Sunstein, the prominent American legal scholar who is the chief theorist of the nudging state.

And while the nudgers have already captured the state by making behavioural psychology the favourite idiom of government bureaucracy – Daniel Kahneman is in, Machiavelli is out – the algorithmic regulation lobby advances in more clandestine ways. They create innocuous non-profit organisations like Code for America which then co-opt the state – under the guise of encouraging talented hackers to tackle civic problems.

Airbnb: part of the reputation-driven economy.

Such initiatives aim to reprogramme the state and make it feedback-friendly, crowding out other means of doing politics. For all those tracking apps, algorithms and sensors to work, databases need interoperability – which is what such pseudo-humanitarian organisations, with their ardent belief in open data, demand. And when the government is too slow to move at Silicon Valley’s speed, they simply move inside the government. Thus, Jennifer Pahlka, the founder of Code for America and a protege of O’Reilly, became the deputy chief technology officer of the US government – while pursuing a one-year “innovation fellowship” from the White House.

Cash-strapped governments welcome such colonisation by technologists – especially if it helps to identify and clean up datasets that can be profitably sold to companies who need such data for advertising purposes. Recent clashes over the sale of student and health data in the UK are just a precursor of battles to come: after all state assets have been privatised, data is the next target. For O’Reilly, open data is “a key enabler of the measurement revolution”.

This “measurement revolution” seeks to quantify the efficiency of various social programmes, as if the rationale behind the social nets that some of them provide was to achieve perfection of delivery. The actual rationale, of course, was to enable a fulfilling life by suppressing certain anxieties, so that citizens can pursue their life projects relatively undisturbed. This vision did spawn a vast bureaucratic apparatus and the critics of the welfare state from the left – most prominently Michel Foucault – were right to question its disciplining inclinations. Nonetheless, neither perfection nor efficiency were the “desired outcome” of this system. Thus, to compare the welfare state with the algorithmic state on those grounds is misleading.

But we can compare their respective visions for human fulfilment – and the role they assign to markets and the state. Silicon Valley’s offer is clear: thanks to ubiquitous feedback loops, we can all become entrepreneurs and take care of our own affairs! As Brian Chesky, the chief executive of Airbnb, told the Atlantic last year, “What happens when everybody is a brand? When everybody has a reputation? Every person can become an entrepreneur.”

Under this vision, we will all code (for America!) in the morning, drive Uber cars in the afternoon, and rent out our kitchens as restaurants – courtesy of Airbnb – in the evening. As O’Reilly writes of Uber and similar companies, “these services ask every passenger to rate their driver (and drivers to rate their passenger). Drivers who provide poor service are eliminated. Reputation does a better job of ensuring a superb customer experience than any amount of government regulation.”

The state behind the “sharing economy” does not wither away; it might be needed to ensure that the reputation accumulated on Uber, Airbnb and other platforms of the “sharing economy” is fully liquid and transferable, creating a world where our every social interaction is recorded and assessed, erasing whatever differences exist between social domains. Someone, somewhere will eventually rate you as a passenger, a house guest, a student, a patient, a customer. Whether this ranking infrastructure will be decentralised, provided by a giant like Google or rest with the state is not yet clear but the overarching objective is: to make reputation into a feedback-friendly social net that could protect the truly responsible citizens from the vicissitudes of deregulation.

Admiring the reputation models of Uber and Airbnb, O’Reilly wants governments to be “adopting them where there are no demonstrable ill effects”. But what counts as an “ill effect” and how to demonstrate it is a key question that belongs to the how of politics that algorithmic regulation wants to suppress. It’s easy to demonstrate “ill effects” if the goal of regulation is efficiency but what if it is something else? Surely, there are some benefits – fewer visits to the psychoanalyst, perhaps – in not having your every social interaction ranked?

The imperative to evaluate and demonstrate “results” and “effects” already presupposes that the goal of policy is the optimisation of efficiency. However, as long as democracy is irreducible to a formula, its composite values will always lose this battle: they are much harder to quantify.

For Silicon Valley, though, the reputation-obsessed algorithmic state of the sharing economy is the new welfare state. If you are honest and hardworking, your online reputation would reflect this, producing a highly personalised social net. It is “ultrastable” in Ashby’s sense: while the welfare state assumes the existence of specific social evils it tries to fight, the algorithmic state makes no such assumptions. The future threats can remain fully unknowable and fully addressable – on the individual level.

Silicon Valley, of course, is not alone in touting such ultrastable individual solutions. Nassim Taleb, in his best-selling 2012 book Antifragile, makes a similar, if more philosophical, plea for maximising our individual resourcefulness and resilience: don’t get one job but many, don’t take on debt, count on your own expertise. It’s all about resilience, risk-taking and, as Taleb puts it, “having skin in the game”. As Julian Reid and Brad Evans write in their new book, Resilient Life: The Art of Living Dangerously, this growing cult of resilience masks a tacit acknowledgement that no collective project could even aspire to tame the proliferating threats to human existence – we can only hope to equip ourselves to tackle them individually. “When policy-makers engage in the discourse of resilience,” write Reid and Evans, “they do so in terms which aim explicitly at preventing humans from conceiving of danger as a phenomenon from which they might seek freedom and even, in contrast, as that to which they must now expose themselves.”

What, then, is the progressive alternative? “The enemy of my enemy is my friend” doesn’t work here: just because Silicon Valley is attacking the welfare state doesn’t mean that progressives should defend it to the very last bullet (or tweet). First, even leftist governments have limited space for fiscal manoeuvres, as the kind of discretionary spending required to modernise the welfare state would never be approved by the global financial markets. And it’s the ratings agencies and bond markets – not the voters – who are in charge today.

Second, the leftist critique of the welfare state has become only more relevant today when the exact borderlines between welfare and security are so blurry. When Google’s Android powers so much of our everyday life, the government’s temptation to govern us through remotely controlled cars and alarm-operated soap dispensers will be all too great. This will expand government’s hold over areas of life previously free from regulation.

With so much data, the government’s favourite argument in fighting terror – if only the citizens knew as much as we do, they too would impose all these legal exceptions – easily extends to other domains, from health to climate change. Consider a recent academic paper that used Google search data to study obesity patterns in the US, finding significant correlation between search keywords and body mass index levels. “Results suggest great promise of the idea of obesity monitoring through real-time Google Trends data”, note the authors, which would be “particularly attractive for government health institutions and private businesses such as insurance companies.”

If Google senses a flu epidemic somewhere, it’s hard to challenge its hunch – we simply lack the infrastructure to process so much data at this scale. Google can be proven wrong after the fact – as has recently been the case with its flu trends data, which was shown to overestimate the number of infections, possibly because of its failure to account for the intense media coverage of flu – but so is the case with most terrorist alerts. It’s the immediate, real-time nature of computer systems that makes them perfect allies of an infinitely expanding and pre-emption‑obsessed state.

Perhaps, the case of Gloria Placente and her failed trip to the beach was not just a historical oddity but an early omen of how real-time computing, combined with ubiquitous communication technologies, would transform the state. One of the few people to have heeded that omen was a little-known American advertising executive called Robert MacBride, who pushed the logic behind Operation Corral to its ultimate conclusions in his unjustly neglected 1967 book, The Automated State.

At the time, America was debating the merits of establishing a national data centre to aggregate various national statistics and make it available to government agencies. MacBride attacked his contemporaries’ inability to see how the state would exploit the metadata accrued as everything was being computerised. Instead of “a large scale, up-to-date Austro-Hungarian empire”, modern computer systems would produce “a bureaucracy of almost celestial capacity” that can “discern and define relationships in a manner which no human bureaucracy could ever hope to do”.

“Whether one bowls on a Sunday or visits a library instead is [of] no consequence since no one checks those things,” he wrote. Not so when computer systems can aggregate data from different domains and spot correlations. “Our individual behaviour in buying and selling an automobile, a house, or a security, in paying our debts and acquiring new ones, and in earning money and being paid, will be noted meticulously and studied exhaustively,” warned MacBride. Thus, a citizen will soon discover that “his choice of magazine subscriptions… can be found to indicate accurately the probability of his maintaining his property or his interest in the education of his children.” This sounds eerily similar to the recent case of a hapless father who found that his daughter was pregnant from a coupon that Target, a retailer, sent to their house. Target’s hunch was based on its analysis of products – for example, unscented lotion – usually bought by other pregnant women.

For MacBride the conclusion was obvious. “Political rights won’t be violated but will resemble those of a small stockholder in a giant enterprise,” he wrote. “The mark of sophistication and savoir-faire in this future will be the grace and flexibility with which one accepts one’s role and makes the most of what it offers.” In other words, since we are all entrepreneurs first – and citizens second – we might as well make the most of it.

What, then, is to be done? Technophobia is no solution. Progressives need technologies that would stick with the spirit, if not the institutional form, of the welfare state, preserving its commitment to creating ideal conditions for human flourishing. Even some ultrastability is welcome. Stability was a laudable goal of the welfare state before it had encountered a trap: in specifying the exact protections that the state was to offer against the excesses of capitalism, it could not easily deflect new, previously unspecified forms of exploitation.

How do we build welfarism that is both decentralised and ultrastable? A form of guaranteed basic income – whereby some welfare services are replaced by direct cash transfers to citizens – fits the two criteria.

Creating the right conditions for the emergence of political communities around causes and issues they deem relevant would be another good step. Full compliance with the principle of ultrastability dictates that such issues cannot be anticipated or dictated from above – by political parties or trade unions – and must be left unspecified.

What can be specified is the kind of communications infrastructure needed to abet this cause: it should be free to use, hard to track, and open to new, subversive uses. Silicon Valley’s existing infrastructure is great for fulfilling the needs of the state, not of self-organising citizens. It can, of course, be redeployed for activist causes – and it often is – but there’s no reason to accept the status quo as either ideal or inevitable.

Why, after all, appropriate what should belong to the people in the first place? While many of the creators of the internet bemoan how low their creature has fallen, their anger is misdirected. The fault is not with that amorphous entity but, first of all, with the absence of robust technology policy on the left – a policy that can counter the pro-innovation, pro-disruption, pro-privatisation agenda of Silicon Valley. In its absence, all these emerging political communities will operate with their wings clipped. Whether the next Occupy Wall Street would be able to occupy anything in a truly smart city remains to be seen: most likely, they would be out-censored and out-droned.

To his credit, MacBride understood all of this in 1967. “Given the resources of modern technology and planning techniques,” he warned, “it is really no great trick to transform even a country like ours into a smoothly running corporation where every detail of life is a mechanical function to be taken care of.” MacBride’s fear is O’Reilly’s master plan: the government, he writes, ought to be modelled on the “lean startup” approach of Silicon Valley, which is “using data to constantly revise and tune its approach to the market”. It’s this very approach that Facebook has recently deployed to maximise user engagement on the site: if showing users more happy stories does the trick, so be it.

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, roughly at the same time as The Automated State, put it best: “Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator.”

Facebook Data Scientists Prove Memes Mutate And Adapt Like DNA (TechCrunch)

Posted Jan 8, 2014 by Josh Constine (@joshconstine)

Richard Dawkins likened memes to genes, but a new study by Facebook shows just how accurate that analogy is. Memes adapt to their surroundings in order to survive, just like organisms. Post a liberal meme saying no one should die for lack of healthcare, and conservatives will mutate it to say no one should die because Obamacare rations their healthcare. And nerds will make it about Star Wars.

Facebook’s data scientists used anonymized data to determine that “Just as certain genetic mutations can be advantageous in specific environments, meme mutations can be propagated differentially if the variant matches the subpopulation’s beliefs or culture.”

Take this meme:

“No one should die because they cannot afford health care, and no one should go broke because they get sick. If you agree, post this as your status for the rest of the day”.

In September 2009, 470,000 Facebook users posted this exact phrase as a status update. But a total of 1.14 million status updates containing 121,605 variants of the meme were spawned, such as “No one should be frozen in carbonite because they can’t pay Jabba The Hut”. Why? Because humans help bend memes to better fit their audience.

In the chart below you can see how people of different political leanings adapted the meme to fit their own views, and likely the views of people they’re friends with. As Facebook’s data scientists explain, “the original variant in support of Affordable Care Act (aka Obamacare) was propagated primarily by liberals, while those mentioning government and taxes slanted conservative. Sci-fi variants were slightly liberal, alcohol-related ones slightly conservative”. That matches theories by Dawkins and Malcolm Gladwell.

Average political bias (-2 being very liberal, +2 being very conservative) of users reposting different variants of the “no one should” meme.
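
Facebook’s analysis pairs each textual variant with the average political alignment of the users who posted it. A rough sketch of that kind of pipeline, assuming posts already carry a self-reported bias score; the template regex and the sample data are illustrative only:

```python
import re
from collections import defaultdict

TEMPLATE = re.compile(r"no one should .+? because .+", re.IGNORECASE)

def variant_bias(posts):
    """Average the political alignment (-2 very liberal .. +2 very
    conservative) of the users behind each distinct meme variant."""
    scores = defaultdict(list)
    for text, user_bias in posts:
        m = TEMPLATE.search(text)
        if m:
            scores[m.group(0).lower()].append(user_bias)
    return {variant: sum(b) / len(b) for variant, b in scores.items()}

posts = [
    ("No one should die because they cannot afford health care", -1.5),
    ("No one should die because Obamacare rations their healthcare", 1.2),
    ("No one should be frozen in carbonite because they can't pay Jabba", -0.4),
]
print(variant_bias(posts))
```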

As I wrote in my Stanford Cybersociology Master’s program research paper, memes are more shareable if they’re easy to remix. When a meme has a clear template with substitutable variables, people recognize how to put their own spin on it. They’re then more likely to share their own modified creations, which drives awareness of the original. When I recognized this back in 2009, I based my research on Lolcats and Soulja Boy, but more recently The Harlem Shake meme proved me right.

Facebook’s findings and my own have significant implications for marketers or anyone looking to make a message go viral. Once you know memes are naturally inclined to mutate, and that these mutations increase sharing, you can try to purposefully structure your message in a remixable way. By creating and seeding a few variants of your own, you can crystallize how the template works and encourage your audience to make their own remixes.

As you can see in this graph from my research paper, usage of the word “haz” as in the Lolcat phrase “I can haz cheezburger” grew increasingly popular for several years. Meanwhile, less remixable memes often only create a spike in mentions for a few days. I posit that high remixability — or adaptability — keeps memes popular for a much longer period of time.

Rise in mentions of the word “haz” in Facebook wall posts, indicating sustained popularity of the highly remixable Lolcats memes – as shown on the now defunct Facebook Lexicon tool

For social networks like Facebook, understanding how memes evolve could make sure we continue to see fresh content. Rather than showing us the exact copies of a meme over and over again in the News Feed, Facebook’s algorithms could purposefully search for and promote mutated variations.

That way instead of hearing about healthcare over and over, you might see that “No one should twerk just because they can’t avoid hearing Miley Cyrus on the radio. If you agree, sit perfectly still with your tongue safely inside your mouth for the rest of the day.”

Predicting the Future Could Improve Remote-Control of Space Robots (Wired)

BY ADAM MANN

10.15.13

A new system could make space exploration robots faster and more efficient by predicting where they will be in the very near future.

The engineers behind the program hope to overcome a particular snarl affecting our probes out in the solar system: that pesky delay caused by the speed of light. Any commands sent to a robot on a distant body take a certain amount of time to travel and won’t be executed for a while. By building a model of the terrain surrounding a rover and providing an interface that lets operators forecast how the probe will move around within it, engineers can identify potential obstacles and make decisions nearer to real time.

“You’re reacting quickly, and the rover is staying active more of the time,” said computer scientist Jeff Norris, who leads mission operation innovations at the Jet Propulsion Laboratory’s Ops Lab.

As an example, the distance between Earth and Mars creates round-trip lags of up to 40 minutes. Nowadays, engineers send a long string of commands once a day to robots like NASA’s Curiosity rover. These get executed, but then the rover has to stop and wait until the next instructions are beamed down.

Because space exploration robots are multi-million or even multi-billion-dollar machines, they have to work very carefully. One day’s commands might tell Curiosity to drive up to a rock. It will then check that it has gotten close enough. Then, the following day, it will be instructed to place its arm on that rock. Later on, it might be directed to drill into or probe this rock with its instruments. While safe, this method is very inefficient.

“When we only send commands once a day, we’re not dealing with 10- or 20-minute delays. We’re dealing with a 24-hour round trip,” said Norris.

Norris’ lab wants to improve the speed and productivity of distant probes. Their interface simulates more or less where a robot would be given a particular time delay. This is represented by a small ghostly machine — called the “committed state” — moving just ahead of a rover. The ghosted robot is the software’s best guess of where the probe would end up if operators hit the emergency stop button right then.

By looking slightly into the future, the interface allows a rover driver to update decisions and commands at a much faster rate than is currently possible. Say a robot on Mars is commanded to drive forward 100 meters. But halfway there, its sensors notice an interesting rock that scientists want to investigate. Rather than waiting for the rover to finish its drive and then commanding it to go back, this new interface would give operators the ability to write and rewrite their directions on the fly.

The simulation can’t know every detail around a probe and so provides a small predictive envelope as to where the robot might be. Different terrains have different uncertainties.

“If you’re on loose sand, that might be different than hard rock,” said software engineer Alexander Menzies, who works on the interface.
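The article does not publish the Ops Lab’s actual model, but the committed state and its terrain-dependent envelope can be sketched with simple kinematics. Everything below — the braking time, speed, delay, and uncertainty factors — is an illustrative assumption:

```python
from dataclasses import dataclass

# Illustrative terrain-dependent uncertainty: the fraction of distance
# traveled by which the prediction may be off (sand is less predictable).
TERRAIN_UNCERTAINTY = {"hard rock": 0.02, "loose sand": 0.15}

@dataclass
class RoverState:
    position_m: float  # distance along the commanded path
    speed_mps: float   # current commanded speed

def committed_state(state, one_way_delay_s, stop_time_s=2.0):
    """Best guess of where the rover stops if we hit e-stop right now.

    The stop command takes one light-travel delay to arrive, and the
    rover then needs stop_time_s (an assumed constant) to brake.
    """
    return state.position_m + state.speed_mps * (one_way_delay_s + stop_time_s)

def envelope(state, one_way_delay_s, terrain):
    """Predicted stop position with a +/- terrain-dependent band."""
    center = committed_state(state, one_way_delay_s)
    slack = (center - state.position_m) * TERRAIN_UNCERTAINTY[terrain]
    return center - slack, center + slack

rover = RoverState(position_m=50.0, speed_mps=0.04)  # a Curiosity-like crawl
for terrain in TERRAIN_UNCERTAINTY:
    lo, hi = envelope(rover, one_way_delay_s=600, terrain=terrain)  # ~10 min
    print(f"{terrain}: stops between {lo:.1f} m and {hi:.1f} m")
```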

Menzies added that when they tested the interface, users had an “almost game-like experience” trying to optimize commands for a robot. He designed an actual video game where participants were given points for commanding a time-delayed robot through a slalom-like terrain. (Norris lamented that he had the highest score on that game until the last day of testing, when Menzies beat him.)

The team thinks that aspects of this new interface could start to be used in the near future, perhaps even with the current Mars rovers Curiosity and Opportunity. At this point, though, Mars operations are limited by bandwidth. Because there are only a few communicating satellites in orbit around the Red Planet, commands can only be sent a few times a day, reducing a lot of the efficiency that would be gained from this new system. But operations on the moon or a potential asteroid capture and exploration mission – such as the one NASA is currently planning – would likely be in more constant communication with Earth, allowing faster and more efficient operations that could take advantage of this new time-delay-reducing system.

Video: OPSLabJPL/YouTube

Robots Take Over Economy: Sudden Rise of Global Ecology of Interacting Robots Trade at Speeds Too Fast for Humans (Science Daily)

Sep. 11, 2013 — Recently, the global financial market experienced a series of computer glitches that abruptly brought operations to a halt. One reason for these “flash freezes” may be the sudden emergence of mobs of ultrafast robots, which trade on the global markets and operate at speeds beyond human capability, thus overwhelming the system. The appearance of this “ultrafast machine ecology” is documented in a new study published on September 11 in the journal Scientific Reports.

Typical ultrafast extreme events caused by mobs of computer algorithms operating faster than humans can react. (Credit: Neil Johnson, University of Miami)

The findings suggest that for time scales less than one second, the financial world makes a sudden transition into a cyber jungle inhabited by packs of aggressive trading algorithms. “These algorithms can operate so fast that humans are unable to participate in real time, and instead, an ultrafast ecology of robots rises up to take control,” explains Neil Johnson, professor of physics in the College of Arts and Sciences at the University of Miami (UM), and corresponding author of the study.

“Our findings show that, in this new world of ultrafast robot algorithms, the behavior of the market undergoes a fundamental and abrupt transition to another world where conventional market theories no longer apply,” Johnson says.

Society’s push for faster systems that outpace competitors has led to the development of algorithms capable of operating faster than the response time for humans. For instance, the quickest a person can react to potential danger is approximately one second. Even a chess grandmaster takes around 650 milliseconds to realize that he is in trouble — yet microchips for trading can operate in a fraction of a millisecond (1 millisecond is 0.001 second).

In the study, the researchers assembled and analyzed a high-throughput, millisecond-resolution price stream of multiple stocks and exchanges. From January 2006 through February 2011, they found 18,520 extreme events lasting less than 1.5 seconds, including both crashes and spikes.
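The article does not give the study’s precise operational definition of an extreme event, but a detector in that spirit might scan for monotone runs of ticks that complete a large move in under 1.5 seconds. The run-length and move thresholds below are assumptions for the sketch:

```python
def extreme_events(ticks, min_run=10, min_move=0.008, max_dur_s=1.5):
    """Find ultrafast spikes/crashes in a list of (timestamp_s, price).

    Illustrative definition only: a run of at least min_run consecutive
    same-direction price moves, totalling more than min_move (fractional
    change) and completed in under max_dur_s seconds.
    """
    events = []
    i = 0
    while i < len(ticks) - 1:
        direction = None
        j = i + 1
        while j < len(ticks):
            step = ticks[j][1] - ticks[j - 1][1]
            d = 1 if step > 0 else -1 if step < 0 else 0
            if d == 0 or (direction is not None and d != direction):
                break
            direction = d
            j += 1
        moves = (j - 1) - i
        (t0, p0), (t1, p1) = ticks[i], ticks[j - 1]
        if (moves >= min_run and abs(p1 - p0) / p0 > min_move
                and (t1 - t0) < max_dur_s):
            events.append(("spike" if p1 > p0 else "crash", t0, t1))
        i = max(j - 1, i + 1)
    return events

# Hypothetical data: a 1.2% crash spread over 12 ticks in 0.6 seconds,
# followed by a slow, harmless drift upward.
ticks = [(k * 0.05, 100.0 - 0.1 * k) for k in range(13)]
ticks += [(1.0 + k * 0.05, 98.8 + 0.01 * k) for k in range(5)]
print(extreme_events(ticks))
```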

The team realized that as the duration of these ultrafast extreme events fell below human response times, the number of crashes and spikes increased dramatically. They created a model to understand the behavior and concluded that the events were the product of ultrafast computer trading and not attributable to other factors, such as regulations or mistaken trades. Johnson, who is head of the inter-disciplinary research group on complexity at UM, compares the situation to an ecological environment.

“As long as you have the normal combination of prey and predators, everything is in balance, but if you introduce predators that are too fast, they create extreme events,” Johnson says. “What we see with the new ultrafast computer algorithms is predatory trading. In this case, the predator acts before the prey even knows it’s there.”

Johnson explains that in order to regulate these ultrafast computer algorithms, we need to understand their collective behavior. This is a daunting task, but is made easier by the fact that the algorithms that operate below human response times are relatively simple, because simplicity allows faster processing.

“There are relatively few things that an ultrafast algorithm will do,” Johnson says. “This means that they are more likely to start adopting the same behavior, and hence form a cyber crowd or cyber mob which attacks a certain part of the market. This is what gives rise to the extreme events that we observe,” he says. “Our math model is able to capture this collective behavior by modeling how these cyber mobs behave.”

In fact, Johnson believes this new understanding of cyber-mobs may have other important applications outside of finance, such as dealing with cyber-attacks and cyber-warfare.

Journal Reference:

  1. Neil Johnson, Guannan Zhao, Eric Hunsader, Hong Qi, Nicholas Johnson, Jing Meng, Brian Tivnan. Abrupt rise of new machine ecology beyond human response time. Scientific Reports, 2013; 3 DOI: 10.1038/srep02627

When Will My Computer Understand Me? (Science Daily)

June 10, 2013 — It’s not hard to tell the difference between the “charge” of a battery and criminal “charges.” But for computers, distinguishing between the various meanings of a word is difficult.

A “charge” can be a criminal charge, an accusation, a battery charge, or a person in your care. Some of those meanings are closer together, others further apart. (Credit: Image courtesy of University of Texas at Austin, Texas Advanced Computing Center)

For more than 50 years, linguists and computer scientists have tried to get computers to understand human language by programming semantics as software. Driven initially by efforts to translate Russian scientific texts during the Cold War (and more recently by the value of information retrieval and data analysis tools), these efforts have met with mixed success. IBM’s Jeopardy-winning Watson system and Google Translate are high-profile, successful applications of language technologies, but the humorous answers and mistranslations they sometimes produce are evidence of the continuing difficulty of the problem.

Our ability to easily distinguish between multiple word meanings is rooted in a lifetime of experience. Using the context in which a word is used, an intrinsic understanding of syntax and logic, and a sense of the speaker’s intention, we intuit what another person is telling us.

“In the past, people have tried to hand-code all of this knowledge,” explained Katrin Erk, a professor of linguistics at The University of Texas at Austin focusing on lexical semantics. “I think it’s fair to say that this hasn’t been successful. There are just too many little things that humans know.”

Other efforts have tried to use dictionary meanings to train computers to better understand language, but these attempts have also faced obstacles. Dictionaries have their own sense distinctions, which are crystal clear to the dictionary-maker but murky to the dictionary reader. Moreover, no two dictionaries provide the same set of meanings.

Watching annotators struggle to make sense of conflicting definitions led Erk to try a different tactic. Instead of hard-coding human logic or deciphering dictionaries, why not mine a vast body of texts (which are a reflection of human knowledge) and use the implicit connections between the words to create a weighted map of relationships — a dictionary without a dictionary?

“An intuition for me was that you could visualize the different meanings of a word as points in space,” she said. “You could think of them as sometimes far apart, like a battery charge and criminal charges, and sometimes close together, like criminal charges and accusations (“the newspaper published charges…”). The meaning of a word in a particular context is a point in this space. Then we don’t have to say how many senses a word has. Instead we say: ‘This use of the word is close to this usage in another sentence, but far away from the third use.'”
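Erk’s picture of meanings as points in space can be illustrated with a toy example: represent each usage of “charge” by counts of words seen around it, and measure closeness with cosine similarity. The vectors below are invented for the illustration; real models derive them from millions of sentences:

```python
import math

# Toy context vectors over the dimensions
# [court, police, battery, electric, newspaper].
usages = {
    "criminal charge":   [4, 5, 0, 0, 1],
    "published charges": [3, 2, 0, 0, 5],
    "battery charge":    [0, 0, 5, 4, 0],
}

def cosine(u, v):
    """Cosine similarity: near 1.0 for close meanings, 0.0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

# Criminal charges and published charges sit close together in the space;
# a battery charge sits far from both.
for a, b in [("criminal charge", "published charges"),
             ("criminal charge", "battery charge")]:
    print(f"{a} vs. {b}: {cosine(usages[a], usages[b]):.2f}")
```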

Creating a model that can accurately recreate the intuitive ability to distinguish word meanings requires a lot of text and a lot of analytical horsepower.

“The lower end for this kind of research is a text collection of 100 million words,” she explained. “If you can give me a few billion words, I’d be much happier. But how can we process all of that information? That’s where supercomputers and Hadoop come in.”

Applying Computational Horsepower

Erk initially conducted her research on desktop computers, but around 2009, she began using the parallel computing systems at the Texas Advanced Computing Center (TACC). Access to a special Hadoop-optimized subsystem on TACC’s Longhorn supercomputer allowed Erk and her collaborators to expand the scope of their research. Hadoop is a software architecture well suited to text analysis and the data mining of unstructured data, one that can also take advantage of large computer clusters. Computational models that take weeks to run on a desktop computer can run in hours on Longhorn. This opened up new possibilities.

“In a simple case we count how often a word occurs in close proximity to other words. If you’re doing this with one billion words, do you have a couple of days to wait to do the computation? It’s no fun,” Erk said. “With Hadoop on Longhorn, we could get the kind of data that we need to do language processing much faster. That enabled us to use larger amounts of data and develop better models.”
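The simple case Erk describes, counting how often words occur near one another, is easy to sketch in miniature; the three-sentence corpus below stands in for the billions of words a real Hadoop job would process:

```python
from collections import Counter

corpus = [
    "the criminal charge was dropped in court",
    "the battery charge lasted all day",
    "police filed a charge after the accusation",
]

def cooccurrences(sentences, window=3):
    """Count how often each pair of words appears within `window` words.

    At Hadoop scale, each sentence would be mapped to pair counts on a
    different machine, and the reduce step would sum counts per pair.
    """
    counts = Counter()
    for s in sentences:
        words = s.split()
        for i, w in enumerate(words):
            for v in words[i + 1 : i + 1 + window]:
                counts[tuple(sorted((w, v)))] += 1
    return counts

for pair, n in cooccurrences(corpus).most_common(5):
    print(pair, n)
```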

Treating words in a relational, non-fixed way corresponds to emerging psychological notions of how the mind deals with language and concepts in general, according to Erk. Instead of rigid definitions, concepts have “fuzzy boundaries” where the meaning, value and limits of the idea can vary considerably according to the context or conditions. Erk takes this idea of language and recreates a model of it from hundreds of thousands of documents.

Say That Another Way

So how can we describe word meanings without a dictionary? One way is to use paraphrases. A good paraphrase is one that is “close to” the word meaning in that high-dimensional space that Erk described.

“We use a gigantic 10,000-dimensional space with all these different points for each word to predict paraphrases,” Erk explained. “If I give you a sentence such as, ‘This is a bright child,’ the model can tell you automatically what are good paraphrases (‘an intelligent child’) and what are bad paraphrases (‘a glaring child’). This is quite useful in language technology.”
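A toy version of that paraphrase task might look like the sketch below, which crudely “contextualizes” a word by averaging its vector with its neighbor’s. The four invented dimensions stand in for Erk’s 10,000:

```python
import math

# Toy distributional vectors over made-up context dimensions
# [smart, light, child, lamp]; real models use thousands of dimensions.
VECS = {
    "bright":      [3, 3, 1, 1],   # ambiguous: intellect or light
    "intelligent": [4, 0, 2, 0],
    "glaring":     [0, 4, 0, 2],
    "child":       [2, 0, 4, 0],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def in_context(word, context_word):
    """Crude contextualization: average the word's vector with its context."""
    return [(a + b) / 2 for a, b in zip(VECS[word], VECS[context_word])]

# Rank candidate paraphrases of "bright" in the phrase "a bright child";
# "intelligent" should score higher than "glaring".
target = in_context("bright", "child")
for cand in ("intelligent", "glaring"):
    print(cand, round(cosine(target, VECS[cand]), 2))
```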

Language technology already helps millions of people perform practical and valuable tasks every day via web searches and question-answer systems, but it is poised for even more widespread applications.

Automatic information extraction is an application where Erk’s paraphrasing research may be critical. Say, for instance, you want to extract a list of diseases, their causes, symptoms and cures from millions of pages of medical information on the web.

“Researchers use slightly different formulations when they talk about diseases, so knowing good paraphrases would help,” Erk said.

In a paper to appear in ACM Transactions on Intelligent Systems and Technology, Erk and her collaborators showed that they could achieve state-of-the-art results with their automatic paraphrasing approach.

Recently, Erk and Ray Mooney, a computer science professor also at The University of Texas at Austin, were awarded a grant from the Defense Advanced Research Projects Agency to combine Erk’s distributional, high-dimensional representation of word meanings with a method of determining the structure of sentences based on Markov logic networks.

“Language is messy,” said Mooney. “There is almost nothing that is true all the time. When we ask, ‘How similar is this sentence to another sentence?’ our system turns that question into a probabilistic theorem-proving task, and that task can be very computationally complex.”

In their paper, “Montague Meets Markov: Deep Semantics with Probabilistic Logical Form,” presented at the Second Joint Conference on Lexical and Computational Semantics (STARSEM2013) in June, Erk, Mooney and colleagues announced their results on a number of challenge problems from the field of artificial intelligence.

In one problem, Longhorn was given a sentence and had to infer whether another sentence was true based on the first. Using an ensemble of different sentence parsers, word meaning models and Markov logic implementations, Mooney and Erk’s system predicted the correct answer with 85% accuracy, near the top of the results for this challenge. They continue to work to improve the system.

There is a common saying in the machine-learning world that goes: “There’s no data like more data.” While more data helps, taking advantage of that data is key.

“We want to get to a point where we don’t have to learn a computer language to communicate with a computer. We’ll just tell it what to do in natural language,” Mooney said. “We’re still a long way from having a computer that can understand language as well as a human being does, but we’ve made definite progress toward that goal.”

Cyborg America: inside the strange new world of basement body hackers (The Verge)

The Verge, 8 August 2012

Shawn Sarver took a deep breath and stared at the bottle of Listerine on the counter. “A minty fresh feeling for your mouth… cures bad breath,” he repeated to himself, as the scalpel sliced open his ring finger. His left arm was stretched out on the operating table, his sleeve rolled up past the elbow, revealing his first tattoo, the Air Force insignia he got at age 18, a few weeks after graduating from high school. Sarver was trying a technique he learned in the military to block out the pain, since it was illegal to administer anesthetic for his procedure.

“A minty fresh feeling… cures bad breath,” Sarver muttered through gritted teeth, his eyes staring off into a void.

Tim, the proprietor of Hot Rod Piercing in downtown Pittsburgh, put down the scalpel and picked up an instrument called an elevator, which he used to separate the flesh inside Sarver’s finger, creating a small empty pocket of space. Then, with practiced hands, he slid a tiny rare earth metal inside the open wound, the width of a pencil eraser and thinner than a dime. When he tried to remove his tool, however, the metal disc stuck to the tweezers. “Let’s try this again,” Tim said. “Almost done.”

The implant stayed put the second time. Tim quickly stitched the cut shut and cleaned off the blood. “Want to try it out?” he asked Sarver, who nodded with excitement. Tim dangled a needle from a string of suture next to Sarver’s finger, closer and closer, until suddenly, it jumped through the air and stuck to his flesh, attracted by the magnetic pull of the mineral implant.

“I’m a cyborg!” Sarver cried, getting up to join his friends in the waiting room outside. Tim started prepping a new tray of clean surgical tools. Now it was my turn.

PART.01

With the advent of the smartphone, many Americans have grown used to the idea of having a computer on their person at all times. Wearable technologies like Google’s Project Glass are narrowing the boundary between us and our devices even further by attaching a computer to a person’s face and integrating the software directly into a user’s field of vision. The paradigm shift is reflected in the names of our dominant operating systems. Gone are Microsoft’s Windows into the digital world, replaced by a union of man and machine: the iPhone or Android.

For a small, growing community of technologists, none of this goes far enough. I first met Sarver at the home of his best friend, Tim Cannon, in Oakdale, a Pennsylvania suburb about 30 minutes from Pittsburgh where Cannon, a software developer, lives with his longtime girlfriend and their three dogs. The two-story house sits next to a beer dispensary and an abandoned motel, a reminder that the city’s best days are far behind it. In the last two decades, Pittsburgh has been gutted of its population, which plummeted from a high of more than 700,000 in the 1980s to less than 350,000 today. For its future, the city has pinned many of its hopes on the biomedical and robotics research being done at local universities like Carnegie Mellon. “The city was dying and so you have this element of anti-authority freaks are welcome,” said Cannon. “When you have technology and biomedical research and a pissed-off angry population that loves tattoos, this is bound to happen. Why Pittsburgh? It’s got the right amount of fuck you.”

Cannon led me down into the basement, which he and Sarver have converted into a laboratory. A long work space was covered with Arduino motherboards, soldering irons, and electrodes. Cannon had recently captured a garter snake, which eyed us from inside a plastic jar. “Ever since I was a kid, I’ve been telling people that I want to be a robot,” said Cannon. “These days, that doesn’t seem so impossible anymore.” The pair call themselves grinders — homebrew biohackers obsessed with the idea of human enhancement — who are looking for new ways to put machines into their bodies. They are joined by hundreds of aspiring biohackers who populate the movement’s online forums and a growing number, now several dozen, who have gotten the magnetic implants in real life.

Cannon looks and moves a bit like Shaggy from Scooby Doo, a languid rubberband of a man in baggy clothes and a newsboy cap. Sarver, by contrast, stands ramrod-straight, wearing a dapper three-piece suit and waxed mustache, a dandy steampunk with a high-pitched laugh. There is a distinct division of labor between the two: Cannon is the software developer and Sarver, who learned electrical engineering as a mechanic in the Air Force, does the hardware. The moniker for their working unit is Grindhouse Wetwares. Computers are hardware. Apps are software. Humans are wetware.

Cannon, like Sarver, served in the military, but the two didn’t meet until they had both left the service, introduced by a mutual friend in the Pittsburgh area. Politics brought them together. “We were both kind of libertarians, really strong anti-authority people, but we didn’t fit into the two common strains here: idiot anarchist who’s unrealistic or right-wing crazy Christian. Nobody was incorporating technology into it. So there was no political party but just a couple like-minded individuals, who were like… techno-libertarians!”

Cannon got his own neodymium magnetic implant a year before Sarver. Putting these rare earth metals into the body was pioneered by artists on the bleeding edge of piercing culture and transhumanists interested in experimenting with a sixth sense. Steve Haworth, who specializes in extreme body modification and considers himself a “human evolution artist,” is regarded as one of the originators, and helped inspire a generation of practitioners to perform magnetic implants, including the owner of Hot Rod Piercing in Pittsburgh. (Using surgical tools like a scalpel is a grey area for piercers. Operating with these instruments, or any kind of anesthesia, could be classified as practicing medicine. Without a medical license, a piercer who does this is technically committing assault on the person getting the implant.) On its own, the implant allows a person to feel electromagnetic fields: a microwave oven in their kitchen, a subway passing beneath the ground, or high-tension power lines overhead.

While this added perception is interesting, it has little utility. But the magnet, explains Cannon, is more of a stepping stone toward bigger things. “It can be done cheaply, with minimally invasive surgery. You get used to the idea of having something alien in your body, and kinda begin to see how much more the human body could do with a little help. Sure, feeling other magnets around you is fucking cool, but the real key is, you’re giving the human body a simple, digital input.”

As an example of how that might work, Cannon showed me a small device he and Sarver created called the Bottlenose. It’s a rectangle of black metal about half the size of a pack of cigarettes that slips over your finger. Named after the echolocation used by dolphins, it sends out an electromagnetic pulse and measures the time it takes to bounce back. Cannon slips it over his finger and closes his eyes. “I can kind of sweep the room and get this picture of where things are.” He twirls around the half-empty basement, eyes closed, then stops, pointing directly at my chest. “The magnet in my finger is extremely sensitive to these waves. So the Bottlenose can tell me the shape of things around me and how far away they are.”
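The article does not detail the Bottlenose’s electronics, but the echolocation principle it names is plain time-of-flight arithmetic, sketched here:

```python
SPEED_OF_LIGHT = 3.0e8  # m/s; electromagnetic pulses travel at light speed

def echo_distance(round_trip_s, wave_speed=SPEED_OF_LIGHT):
    """Distance to an object from a pulse's round-trip time.

    The pulse travels out and back, so halve the total path. The same
    arithmetic covers a dolphin's sonar with wave_speed of ~1500 m/s.
    """
    return wave_speed * round_trip_s / 2

# A 20-nanosecond round trip implies an object about 3 meters away.
print(f"{echo_distance(20e-9):.1f} m")
```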

The way Cannon sees it, biohacking is all around us. “In a way, eyeglasses are a body hack, a piece of equipment that enhances your senses and pretty quickly becomes like a part of your body,” says Cannon. He took a pair of electrodes off the workbench and attached them to my temples. “Your brain works through electricity, so why not help to boost that?” A sharp pinch ran across my forehead as the first volts flowed into my skull. He and Sarver laughed as my face involuntarily twitched. “You’re one of us now,” he said.

HISTORY.01

In one sense, Mary Shelley’s Frankenstein, part man, part machine, animated by electricity and with superhuman abilities, might be the first dark, early vision of what human bodies would become when modern science was brought to bear. A more utopian version was put forward in 1960, a year before man first travelled into space, by the scientist and inventor Manfred Clynes. Clynes was considering the problem of how mankind would survive in our new lives as outer space dwellers, and concluded that only by augmenting our physiology with drugs and machines could we thrive in extraterrestrial environs. It was Clynes and his co-author Nathan Kline, writing on this subject, who coined the term cyborg.

At its simplest, a cyborg is a being with both biological and artificial parts: metal, electrical, mechanical, or robotic. The construct is familiar to almost everyone through popular culture, perhaps most spectacularly in the recent Iron Man films. Tony Stark is surely our greatest contemporary cyborg: a billionaire businessman who designed his own mechanical heart, a dapper bachelor who can transform into a one-man fighter jet, then shed his armour as easily as a suit of clothes.

Britain is the birthplace of 21st-century biohacking, and the movement’s two foundational figures present a similar Jekyll and Hyde duality. One is Lepht Anonym, a DIY punk who was one of the earliest, and certainly the most dramatic, to throw caution to the wind and implant metal and machines into her flesh. The other is Kevin Warwick, an academic at the University of Reading’s department of cybernetics. Warwick relies on a trained staff of medical technicians when doing his implants. Lepht has been known to say that all she requires is a potato peeler and a bottle of vodka. In an article on h+, Anonym wrote:

I’m sort of inured to pain by this point. Anesthetic is illegal for people like me, so we learn to live without it; I’ve made scalpel incisions in my hands, pushed five-millimeter diameter needles through my skin, and once used a vegetable knife to carve a cavity into the tip of my index finger. I’m an idiot, but I’m an idiot working in the name of progress: I’m Lepht Anonym, scrapheap transhumanist. I work with what I can get.

Anonym’s essay, a series of YouTube videos, and a short profile in Wired established her as the face of the budding biohacking movement. It was Anonym who proved, with herself as the guinea pig, that it was possible to implant RFID chips and powerful magnets into one’s body, without the backing of an academic institution or help from a team of doctors.

“She is an inspiration to all of us,” said a biohacker who goes by the name of Sovereign Bleak. “To anyone who was frustrated with the human condition, who felt we had been promised more from the future, she said that it was within our grasp, and our rights, to evolve our bodies however we saw fit.” Over the last decade grinders have begun to form a loose culture, connected mostly by online forums like biohack.me, where hundreds of aspiring cyborgs congregate to swap tips about the best bio-resistant coatings to prevent the body from rejecting magnetic implants and how to get illegal anesthetics shipped from Canada to the United States. There is another strain of biohacking which focuses on the possibilities for DIY genetics, but their work is far more theoretical than the hands-on experiments performed by grinders.

But while Anonym’s renegade approach to bettering her own flesh birthed a new generation of grinders, it seems to have had some serious long-term consequences for her own health. “I’m a wee bit frightened right now,” Anonym wrote on her blog early this year. “I’m hearing things that aren’t there. Sure I see things that aren’t real from time to time because of the stupid habits I had when I was a teenager and the permanent, very mild damage I did to myself experimenting like that, but I don’t usually hear anything and this is not a flashback.”

MEDICAL NEED VERSUS HUMAN ENHANCEMENT

Neil Harbisson was born with a condition that allows him to see only in black and white. He became interested in cybernetics, and eventually began wearing the Eyeborg, a head-mounted camera which translated colors into vibrations that Harbisson could hear. The addition of the Eyeborg to his passport photo has led some to dub him the first cyborg officially recognized by a government. He now plans to extend and improve this cybernetic synesthesia by having the Eyeborg permanently surgically attached to his skull.
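Harbisson’s actual color-to-sound mapping is not described in this piece, so the sketch below just illustrates the general idea of mapping a hue onto an audible pitch; the two-octave range is an arbitrary choice:

```python
def hue_to_pitch(hue_deg, low_hz=220.0, high_hz=880.0):
    """Map a color hue (0-360 degrees) onto an audible frequency.

    Illustrative only: hue sweeps logarithmically across two octaves,
    which is not necessarily how the real Eyeborg works.
    """
    fraction = (hue_deg % 360) / 360
    return low_hz * (high_hz / low_hz) ** fraction

for name, hue in [("red", 0), ("green", 120), ("blue", 240)]:
    print(f"{name}: {hue_to_pitch(hue):.0f} Hz")
```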

Getting a medical team to help him was no easy task. “Their position was that ‘doctors usually repair or fix humans’ and that my operation was not about fixing nor repairing myself but about creating a new sense: the perception of visual elements via bone-conducted sounds,” Harbisson told me by email. “The other main issue was that the operation would allow me to perceive outside the ability of human vision and human hearing (hearing via the bone allows you to hear a wider range of sounds, from infrasounds to ultrasounds, and some lenses can detect ultraviolets and infrareds). It took me over a year to convince them.”

In the end, the bio-ethical community still relies on promises of medical need to justify cybernetic enhancement. “I think I convinced them when I told them that this kind of operation could help ‘fix and repair’ blind people. If you use a different type of chip, a chip that translates words into sound, or distances into sound, for instance, the same electronic eye implant could be used to read or to detect obstacles which could mean the end of Braille and sticks. I guess hospitals and governments will soon start publishing their own laws about which kind of cybernetic implants they find are ethical/legal and which ones they find are not.”

PART.02

I had Lepht Anonym in the back of my mind as I stretched my arm out on the operating table at Hot Rod Piercing. The fingertip is an excellent place for a magnet because it is full of sensitive nerve tissue, fertile ground for a nascent sixth sense to pick up on the electromagnetic fields all around us. It is also an exceptionally painful spot to have sliced open with a scalpel, especially when no painkillers are available. The experience ranked alongside breaking my arm and having my appendix removed, a level of pain that opens your mind to parts of your body you were not conscious of before.

For the first few days after the surgery, it was difficult to separate out my newly implanted sense from the bits of pain and sensation created by the trauma of having the magnet jammed in my finger. Certain things were clear: microwave ovens gave off a steady field that was easy to perceive, like a pulsating wave of invisible water, or air heavy from heat coming off a fan. And other magnets, of course, were easy to identify. They lurked like landmines in everyday objects — my earbuds, my messenger bag — sending my finger ringing with a deep, sort of probing force field that shifted around in my flesh.

High-tension wires seemed to give off a sort of pulsating current, but it was often hard to tell, since my finger often began throbbing for no reason, as it healed from the trauma of surgery. Playing with strong, stand-alone magnets was a game of chicken. The party trick of making one leap across a table towards my finger was thrilling, but the awful squirming it caused inside my flesh made me regret it hours later. Grasping a colleague’s stylus too near the magnetic tip put a sort of freezing probe into my finger that I thought about for days afterwards.

Within a few weeks, the sensation began to fade. I noticed fewer and fewer instances of a sixth sense, beyond other magnets, which were quite obvious. I was glad that the implant didn’t interfere with my life, or prevent me from exercising, but I also grew a bit disenchanted, after all the hype and excitement the grinders I interviewed had shared about their newfound way of interacting with the world.

HISTORY.02

If Lepht Anonym is the cautionary tale, Prof. Kevin Warwick is the one bringing academic respectability to cybernetics. He was one of the first to experiment with implants, putting an RFID chip into his body back in 1998, and has also taken the techniques the farthest. In 2002, Prof. Warwick had cybernetic sensors implanted into the nerves of his arm. Unlike the grinders in Pittsburgh, he had the benefits of anesthesia and a full medical team, but he was still putting himself at great risk, as there was no research on the long-term effects of having these devices grafted onto his nervous system. “In a way that is what I like most about this,” he told me. “From an academic standpoint, it’s wide-open territory.”

I chatted with Warwick from his office at The University of Reading, stacked floor to ceiling with books and papers. He has light brown hair that falls over his forehead and an easy laugh. With his long sleeve shirt on, you would never know that his arm is full of complex machinery. The unit allows Warwick to manipulate a robot hand, a mirror of his own fingers and flesh. What’s more, the impulse could flow both ways. Warwick’s wife, Irena, had a simpler cybernetic implant done on herself. When someone grasped her hand, Prof. Warwick was able to experience the same sensation in his hand, from across the Atlantic. It was, Warwick writes, a sort of cybernetic telepathy, or empathy, in which his nerves were made to feel what she felt, via bits of data travelling over the internet.

The work was hailed by the mainstream media as a major step forward in helping amputees and victims of paralysis to regain a full range of abilities. But Prof. Warwick says that misses the point. “I quite like the fact that new medical therapies could potentially come out of this work, but what I am really interested in is not getting people back to normal; it’s enhancement of fully functioning humans to a higher level.”

It’s a sentiment that can take some getting used to. “A decade ago, if you talked about human enhancement, you upset quite a lot of people. Unless the end goal was helping the disabled, people really were not open to it.” With the advent of smartphones, says Prof. Warwick, all that has changed. “Normal folks really see the value of ubiquitous technology. In fact the social element has almost created the reverse. Now, you must be connected all the time.”

While he is an accomplished academic, Prof. Warwick has embraced biohackers and grinders as fellow travelers on the road to exploring our cybernetic future. “A lot of the time, when it comes to putting magnets into your body or RFID chips, there is more information on YouTube than in the peer-reviewed journals. There are artists and geeks pushing the boundaries, sharing information, a very renegade thing. My job is to take that, and apply some more rigorous scientific analysis.”

To that end, Prof. Warwick and one of his PhD students, Ian Harrison, are beginning a series of studies on biohackers with magnetic implants. “When it comes to sticking sensors into your nerve endings, so much is subjective,” says Harrison. “What one person feels, another may not. So we are trying to establish some baselines for future research.”

The end goal for Prof. Warwick, as it was for the team at Grindhouse Wetwares in Pittsburgh, is still the stuff of science fiction. “When it comes to communication, humans are still so far behind what computers are capable of,” Prof. Warwick explained. “Bringing about brain-to-brain communication is something I hope to achieve in my lifetime.”

For Warwick, this will advance not just the human body and the field of cybernetics, but allow for a more practical evaluation of the entire canon of Western thought. “I would like to ask the questions that the philosopher Ludwig Wittgenstein asked, but in practice, not in theory.” It would be another attempt to study the mind, from inside and out, as Wittgenstein proposed, but with access to objective data. “Perhaps he was bang on, or maybe we will rubbish his whole career, but either way, it’s something we should figure out.”

As the limits of space exploration become increasingly clear, a generation of scientists who might once have turned to the stars are seeking to expand humanity’s horizons much closer to home. “Jamming stuff into your body, merging machines with your nerves and brain, it’s brand new,” said Warwick. “It’s like this last, unexplored continent staring us in the face.”

On a hot day in mid-July, I went for a walk around Manhattan with Dann Berg, who had a magnet implanted in his pinky three years earlier. I told him I was a little disappointed by how rarely I noticed anything with my implant. “Actually, your experience is pretty common,” he told me. “I didn’t feel much for the first six months, as the nerves were healing from surgery. It took a long time for me to gain this kind of ambient awareness.”

Berg worked for a while in a piercing and tattoo studio, which brought him into contact with the body modification community experimenting with implants. At the same time, he was teaching himself to code and finding work as a front-end developer building websites. “To me, these two things, the implant and the programming, they are both about finding new ways to see and experience the world.”

Berg took me to an intersection at Broadway and Bleecker. In the middle of the crosswalk, he stopped and began moving his hand over a metal grate. “You feel that?” he asked. “It’s a dome, right here, about a foot off the ground, that just sets my finger off. Somewhere down there, part of the subway system or the power grid is working. We’re touching something other people can’t see; they don’t know it exists. That’s amazing to me.” People passing by gave us odd stares as Berg and I stood next to each other in the street, waving our hands around inside an invisible field, like mystics groping blindly for a ghost.

CYBORGS IN SOCIETY

Last month, a Canadian professor named Steve Mann was eating at a McDonald’s with his family. Mann wears a pair of computerized glasses at all times, similar to Google’s Project Glass. One of the employees asked him to take them off. When he refused, Mann says, an employee tried to rip the glasses off, an alleged attack made more brutal because the device is permanently attached and does not come off his skull without special tools.

On biohacking websites and transhumanist forums, the event was a warning sign of the battle to come. Some dubbed it the first hate crime against cyborgs. That would imply the employees knew Mann’s device was part of him, which is still largely unclear. But it was certainly a harbinger of the friction that will emerge between people whose bodies contain powerful machines and society at large.

PART.03

After zapping my brain with a few dozen volts, the boys from Grindhouse Wetwares offered to cook me dinner. Cannon popped a tray of mashed potatoes in the microwave and showed me where he put his finger to feel the electromagnetic waves streaming off. We stepped out onto the back porch and let his three little puggles run wild. The sound of cars passing on the nearby highway and the crickets warming up for sunset relaxed everyone. I asked what they thought the potential was for biohacking to become part of the mainstream.

“That’s the thing, it’s not that much of a leap,” said Cannon. “We’ve had pacemakers since the ’70s.” Brain implants are now being used to treat Parkinson’s disease and depression. Scientists hope that brain implants might soon restore mobility to paralyzed limbs. The crucial difference is that grinders are pursuing this technology for human enhancement, without any medical need. “How is this any different than plastic surgery, which like half the fucking country gets?” asked Cannon. “Look, you know the military is already working on stuff like this, right? And it won’t be too long before the corporations start following suit.”

Sarver joined the Air Force just weeks after 9/11. “I was a dyed-in-the-wool Roman Catholic Republican. I wasn’t thinking about the military, but after 9/11, I just believed the dogma.” In place of college, he got an education in electronics, repairing fighter jets and attack helicopters. He left the war a very different man. “There were no terrorists in Iraq. We were the terrorists. These were scared people, already scared of their own government.”

Yet, while he rejected the conflict in the Middle East, Sarver’s time in the military gave him a new perspective on the human body. “I’ve been in the special forces,” said Sarver. “I know what the limits of the human body are like. Once you’ve seen the capabilities of a 5000psi hydraulic system, it’s no comparison.”

The boys from Grindhouse Wetwares both sucked down Parliament menthols the whole time we talked. There was no irony for them in dreaming of the possibilities for one’s body and willfully destroying it. “For me, the end game is my brain and spinal column in a jar, and a robot body out in the world doing my bidding,” said Sarver. “I would really prefer not to have to rely on an inefficient four-valve pump that sends liquid through these fragile hoses. Fuck cheetahs. I want to punch through walls.”

Flesh and blood are easily shed in grinder circles, at least theoretically speaking. “People recoil from the idea of tampering inside the body,” said Tim. “I am lost when it comes to people’s unhealthy connections to your body. This is just a decaying lump of flesh that gets old, it’s leaking fluid all the time, it’s obscene to think this is me. I am my ideas and the sum of my experiences.” As far as the biohackers are concerned, we are the best argument against intelligent design.

Neither man has any illusions about how fringe biohacking is now. But technology marches on. “People say nobody is going to want to get surgery for this stuff,” admits Cannon. But he believes that will change. “They will or they will be left behind. They have no choice. It’s going to be weird and uncomfortable and scary. But you can do that, or you can become obsolete.”

We came back into the kitchen for dinner. As I wolfed down steak and potatoes, Cannon broke into a nervous grin. “I want to show you something. It’s not quite ready, but this is what we’re working on.” He disappeared down into the basement lab and returned with a small device the size of a cigarette lighter, a simple circuit board with a display attached. This was the HELEDD, the next step in the Grindhouse Wetwares plan to unite man and machine. “This is just a prototype, but when we get it small enough, the idea is to have this beneath my skin,” he said, holding it up against his inner forearm.

The smartphone in your pocket would act as the brain for this implant, communicating via bluetooth with the HELEDD, which would use a series of LED lights to display the time, a text message, or the user’s heart rate. “We’re looking to get sensors in there for the big three,” said Tim. “Heart rate, body temperature, and blood pressure. Because then you are looking at this incredible data. Most people don’t know the effect on a man’s heart when he finds out his wife is cheating on him.”
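Grindhouse has not published a HELEDD protocol, so the following is a purely hypothetical sketch of how a phone might frame a sensor reading for an implant’s display over a Bluetooth-style link; the frame layout and mode codes are invented:

```python
import struct

# Hypothetical frame format for a phone-to-implant message: one byte for
# the display mode, one for payload length, then the payload itself.
MODES = {"time": 0, "text": 1, "heart_rate": 2}

def make_frame(mode, payload: bytes) -> bytes:
    """Length-prefixed frame: [mode][len][payload]."""
    return struct.pack("BB", MODES[mode], len(payload)) + payload

def parse_frame(frame: bytes):
    mode, length = struct.unpack("BB", frame[:2])
    return mode, frame[2 : 2 + length]

# The phone frames a 72 bpm reading for the implant's LEDs to render.
frame = make_frame("heart_rate", struct.pack("B", 72))
print(parse_frame(frame))
```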

Cannon hopes to have the operation in the next few months. A big part of what drives the duo to move so fast is the idea that there is no hierarchy established in this space. “We want to be doing this before the FDA gets involved and starts telling us what we can and cannot do. Someday this will be commercially feasible and Apple will design an implant which will sync with your phone, but that is not going to be for us. We like to open things up and break them.”

I point out that Steve Jobs may have died in large part because he was reluctant to get surgery, afraid that if doctors opened him up, they might not be able to put him back together as good as new. “We’re grinders,” said Cannon. “I view it as kind of taking the pain for the people who are going to come after me. We’re paying now so that it will become socially acceptable later.”

3rdi, 2010-2011. Photographed by Wafaa Bilal. Copyright: Wafaa Bilal
Image of Prof. Kevin Warwick courtesy of Prof. Kevin Warwick
Portrait of Prof. Kevin Warwick originally shot for Time Magazine by Jim Naughten