Tag Archives: Tecnofetichismo

Global warming pioneer calls for carbon dioxide to be taken from atmosphere and stored underground (Science Daily)

Date: August 28, 2014

Source: European Association of Geochemistry

Summary: Wally Broecker, the first person to alert the world to global warming, has called for atmospheric carbon dioxide to be captured and stored underground.


Wally Broecker, the first person to alert the world to global warming, has called for atmospheric CO2 to be captured and stored underground. He says that carbon capture, combined with limits on fossil fuel emissions, is the best way to avoid global warming getting out of control over the next fifty years. Professor Broecker (Columbia University, New York) made the call during his presentation to the International Carbon Conference in Reykjavik, Iceland, where 150 scientists are meeting to discuss carbon capture and storage.

He was presenting an analysis showing that the world has been cooling very slowly over the last 51 million years, but that human activity is causing a rise in temperature which will lead to problems over the next 100,000 years.

“We have painted ourselves into a tight corner. We can’t reduce our reliance on fossil fuels quickly enough, so we need to look at alternatives.

“One of the best ways to deal with this is likely to be carbon capture — in other words, putting the carbon back where it came from, underground. There has been great progress in capturing carbon from industrial processes, but to really make a difference we need to begin to capture atmospheric CO2. Ideally, we could reach a stage where we could control the levels of CO2 in the atmosphere, like you control your central heating. Continually increasing CO2 levels means that we will need to actively manage CO2 levels in the environment, not just stop more being produced. The technology is proven, it just needs to be brought to a stage where it can be implemented.”

Wally Broecker was speaking at the International Carbon Conference in Reykjavik, where 150 scientists are meeting to discuss how best CO2 can be removed from the atmosphere as part of a programme to reduce global warming.

Meeting co-convener Professor Eric Oelkers (University College London and University of Toulouse) commented: “Capture is now at a crossroads; we have proven methods to store carbon in the Earth but are limited in our ability to capture this carbon directly from the atmosphere. We are very good at capturing carbon from factories and power stations, but because roughly two-thirds of our carbon originates from disperse sources, implementing direct air capture is key to solving this global challenge.”

European Association of Geochemistry. “Global warming pioneer calls for carbon dioxide to be taken from atmosphere and stored underground.” ScienceDaily. ScienceDaily, 28 August 2014. <www.sciencedaily.com/releases/2014/08/140828110915.htm>.

Carbon dioxide ‘sponge’ could ease transition to cleaner energy (Science Daily)

Date: August 10, 2014

Source: American Chemical Society (ACS)

Summary: A plastic sponge that sops up the greenhouse gas carbon dioxide might ease our transition away from polluting fossil fuels to new energy sources like hydrogen. A relative of food container plastics could play a role in President Obama’s plan to cut carbon dioxide emissions. The material might also someday be integrated into power plant smokestacks.


Plastic that soaks up carbon dioxide could someday be used in plant smokestacks.
Credit: American Chemical Society

A sponge-like plastic that sops up the greenhouse gas carbon dioxide (CO2) might ease our transition away from polluting fossil fuels and toward new energy sources, such as hydrogen. The material — a relative of the plastics used in food containers — could play a role in President Obama’s plan to cut CO2 emissions 30 percent by 2030, and could also be integrated into power plant smokestacks in the future.

The report on the material is one of nearly 12,000 presentations at the 248th National Meeting & Exposition of the American Chemical Society (ACS), the world’s largest scientific society, taking place here through Thursday.

“The key point is that this polymer is stable, it’s cheap, and it adsorbs CO2 extremely well. It’s geared toward function in a real-world environment,” says Andrew Cooper, Ph.D. “In a future landscape where fuel-cell technology is used, this adsorbent could work toward zero-emission technology.”

CO2 adsorbents are most commonly used to remove the greenhouse gas pollutant from smokestacks at power plants where fossil fuels like coal or gas are burned. However, Cooper and his team intend the adsorbent, a microporous organic polymer, for a different application — one that could lead to reduced pollution.

The new material would be a part of an emerging technology called an integrated gasification combined cycle (IGCC), which can convert fossil fuels into hydrogen gas. Hydrogen holds great promise for use in fuel-cell cars and electricity generation because it produces almost no pollution. IGCC is a bridging technology that is intended to jump-start the hydrogen economy, or the transition to hydrogen fuel, while still using the existing fossil-fuel infrastructure. But the IGCC process yields a mixture of hydrogen and CO2 gas, which must be separated.

Cooper, who is at the University of Liverpool, says that the sponge works best under the high pressures intrinsic to the IGCC process. Just like a kitchen sponge swells when it takes on water, the adsorbent swells slightly when it soaks up CO2 in the tiny spaces between its molecules. When the pressure drops, he explains, the adsorbent deflates and releases the CO2, which they can then collect for storage or convert into useful carbon compounds.
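
As an aside, this pressure-swing behaviour can be made concrete with a generic adsorption isotherm. The sketch below is a minimal illustration assuming a simple Langmuir isotherm with made-up parameters; it is not based on measurements of Cooper's polymer, and the 30 bar adsorption pressure is only a stand-in for a high-pressure IGCC stream.

```python
# Illustrative pressure-swing adsorption cycle, NOT data for Cooper's polymer.
# Generic Langmuir isotherm: uptake q(P) = q_max * K * P / (1 + K * P).
# All parameters below are placeholder assumptions for demonstration only.

q_max = 2.0   # mmol CO2 per gram adsorbent at saturation (assumed)
K = 0.08      # Langmuir affinity constant, 1/bar (assumed)

def uptake(pressure_bar: float) -> float:
    """CO2 loading (mmol/g) at a given pressure (bar)."""
    return q_max * K * pressure_bar / (1.0 + K * pressure_bar)

adsorption_p = 30.0   # stand-in for the high-pressure (IGCC-like) stream
desorption_p = 1.0    # near-ambient pressure at which the CO2 is released

working_capacity = uptake(adsorption_p) - uptake(desorption_p)
print(f"Loading at {adsorption_p} bar: {uptake(adsorption_p):.2f} mmol/g")
print(f"Loading at {desorption_p} bar:  {uptake(desorption_p):.2f} mmol/g")
print(f"Working capacity per cycle:     {working_capacity:.2f} mmol/g")
```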

The material, which is a brown, sand-like powder, is made by linking together many small carbon-based molecules into a network. Cooper explains that the idea to use this structure was inspired by polystyrene, a plastic used in styrofoam and other packaging material. Polystyrene can adsorb small amounts of CO2 by the same swelling action.

One advantage of using polymers is that they tend to be very stable. The material can even withstand being boiled in acid, proving it should tolerate the harsh conditions in power plants where CO2 adsorbents are needed. Other CO2 scrubbers — whether made from plastics or metals or in liquid form — do not always hold up so well, he says. Another advantage of the new adsorbent is its ability to adsorb CO2 without also taking on water vapor, which can clog up other materials and make them less effective. Its low cost also makes the sponge polymer attractive. “Compared to many other adsorbents, they’re cheap,” Cooper says, mostly because the carbon molecules used to make them are inexpensive. “And in principle, they’re highly reusable and have long lifetimes because they’re very robust.”

Cooper also will describe ways to adapt his microporous polymer for use in smokestacks and other exhaust streams. He explains that it is relatively simple to embed the spongy polymers in the kinds of membranes already being evaluated to remove CO2 from power plant exhaust, for instance. Combining two types of scrubbers could make much better adsorbents by harnessing the strengths of each, he explains.

The research was funded by the Engineering and Physical Sciences Research Council and E.ON Energy.

Geoengineering the Earth’s climate sends policy debate down a curious rabbit hole (The Guardian)

Many of the world’s major scientific establishments are discussing the concept of modifying the Earth’s climate to offset global warming

Monday 4 August 2014

Many leading scientific institutions are now looking at proposed ways to engineer the planet’s climate to offset the impacts of global warming. Photograph: NASA/REUTERS

There’s a bit in Alice’s Adventures in Wonderland where things get “curiouser and curiouser” as the heroine tries to reach a garden at the end of a rat-hole sized corridor that she’s just way too big for.

She drinks a potion and eats a cake with no real clue what the consequences might be. She grows to nine feet tall, shrinks to ten inches high and cries literal floods of frustrated tears.

I spent a couple of days at a symposium in Sydney last week that looked at the moral and ethical issues around the concept of geoengineering the Earth’s climate as a “response” to global warming.

No metaphor is ever quite perfect (climate impacts are no ‘wonderland’), but Alice’s curious experiences down the rabbit hole seem to fit the idea of medicating the globe out of a possible catastrophe.

And yes, the fact that in some quarters geoengineering is now on the table shows how the debate over climate change policy is itself becoming “curiouser and curiouser” still.

It’s tempting too to dismiss ideas like pumping sulphate particles into the atmosphere or making clouds whiter as some sort of surrealist science fiction.

But beyond the curiosity lie actions being countenanced and discussed by some of the world’s leading scientific institutions.

What is geoengineering?

Geoengineering – also known as climate engineering or climate modification – comes in as many flavours as might have been on offer at the Mad Hatter’s Tea Party.

Professor Jim Falk, of the Melbourne Sustainable Society Institute at the University of Melbourne, has a list of more than 40 different techniques that have been suggested.

They generally take two approaches.

Carbon Dioxide Removal (CDR) is pretty self-explanatory. Think tree planting, algae farming, increasing the carbon in soils, fertilising the oceans or capturing emissions from power stations. Anything that cuts the amount of CO2 in the atmosphere.

Solar Radiation Management (SRM) techniques are concepts that try to reduce the amount of solar energy reaching the Earth. Think pumping sulphate particles into the atmosphere (this mimics major volcanic eruptions that have a cooling effect on the planet), trying to whiten clouds, or more benign ideas like painting roofs white.

Geoengineering on the table

In 2008 an Australian Government-backed research group issued a report on the state of play of ocean fertilisation, recording that 12 experiments of various kinds had been carried out, with limited to zero evidence of “success”.

This priming of the “biological pump”, as it’s known, promotes the growth of organisms (phytoplankton) that store carbon and then sink to the bottom of the ocean.

The report raised the prospect that larger-scale experiments could interfere with the oceanic food chain, create oxygen-depleted “dead zones” (no fish, folks), affect corals and plants, and bring various other unknowns.

The Royal Society – the world’s oldest scientific institution – released a report in 2009, also reviewing various geoengineering technologies.

In 2011, Australian scientists gathered at a geoengineering symposium organised by the Australian Academy of Science and the Australian Academy of Technological Sciences and Engineering.

The London Protocol – a maritime convention relating to dumping at sea – was amended last year to try and regulate attempts at “ocean fertilisation” – where substances, usually iron, are dumped into the ocean to artificially raise the uptake of carbon dioxide.

The latest major report of the United Nations Intergovernmental Panel on Climate Change also addressed the geoengineering issue in several chapters. The IPCC summarised geoengineering this way:

CDR methods have biogeochemical and technological limitations to their potential on a global scale. There is insufficient knowledge to quantify how much CO2 emissions could be partially offset by CDR on a century timescale. Modelling indicates that SRM methods, if realizable, have the potential to substantially offset a global temperature rise, but they would also modify the global water cycle, and would not reduce ocean acidification. If SRM were terminated for any reason, there is high confidence that global surface temperatures would rise very rapidly to values consistent with the greenhouse gas forcing. CDR and SRM methods carry side effects and long-term consequences on a global scale.

Towards the end of this year, the US National Academy of Sciences will be publishing a major report on the “technical feasibility” of some geoengineering techniques.

Fighting Fire With Fire

The symposium in Sydney was co-hosted by the University of New South Wales and the Sydney Environment Institute at the University of Sydney (for full disclosure here, they paid my travel costs and one night stay).

Dr Matthew Kearnes, one of the organisers of the workshop from UNSW, told me there was “nervousness among many people about even thinking or talking about geoengineering.” He said:

I would not want to dismiss that nervousness, but this is an agenda that’s now out there and it seems to be gathering steam and credibility in some elite establishments.

Internationally geoengineering tends to be framed pretty narrowly as just a case of technical feasibility, cost and efficacy. Could it be done? What would it cost? How quickly would it work?

We wanted to get away from the arguments about the pros and cons and instead think much more carefully about what this tells us about the climate change debate more generally.

The symposium covered a range of frankly exhausting philosophical, social and political considerations – each of them jumbo-sized cans full of worms ready to open.

Professor Stephen Gardiner, of the University of Washington, Seattle, pushed for the wider community to think about the ethical and moral consequences of geoengineering. He drew a parallel between the way, he said, that current fossil fuel combustion takes benefits now at the expense of impacts on future generations. Geoengineering risked making the same mistake.

Clive Hamilton’s book Earthmasters notes “in practice any realistic assessment of how the world works must conclude that geoengineering research is virtually certain to reduce incentives to pursue emission reductions”.

Odd advocates

Curiouser still is that some of the world’s think tanks that shout the loudest that human-caused climate change might not even be a thing, or at least not a thing worth worrying about, are happy to countenance geoengineering as a solution to the problem they think is overblown.

For example, in January this year the Copenhagen Consensus Center, a US-based think tank founded by Danish political scientist Bjorn Lomborg, issued a submission to an Australian Senate inquiry looking at overseas aid and development.

Lomborg’s center has for many years argued that cutting greenhouse gas emissions is too expensive and that action on climate change should be a low priority compared with other issues around the world.

Lomborg himself says human-caused climate change will not turn into an economic negative until near the end of this century.

Yet Lomborg’s submission to the Australian Senate suggested that every dollar spent on “investigat[ing] the feasibility of planetary cooling through geoengineering technologies” could yield “$1000 of benefits”, although this, Lomborg wrote, was a “rough estimate”.

But these investigations, Lomborg submitted, “would serve to better understand risks, costs, and benefits, but also act as an important potential insurance against global warming”.

Engineering another excuse

Several academics I’ve spoken with have voiced fears that holding out unproven and potentially disastrous geoengineering technologies as a way to shield societies from the impacts of climate change could distract policy makers and the public from the core of the climate change issue – that is, curbing emissions in the first place.

But if the idea of some future nation, or group of nations, or even corporations, embarking on a major project to modify the Earth’s climate systems leaves you feeling like you’ve fallen down a surreal rabbit hole, then perhaps we should also ask ourselves this.

Since the year 1750, the world has added something in the region of 1,339,000,000,000 tonnes of carbon dioxide (that’s 1.34 trillion tonnes) to the atmosphere from fossil fuel and cement production.

Raising the level of CO2 in the atmosphere by 40 per cent could be seen as accidental geoengineering.
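
For readers who want to check the arithmetic, the short sketch below reproduces the roughly 40 per cent figure, assuming a pre-industrial concentration of about 280 parts per million and a 2014 level of about 397 ppm; neither number appears in the article itself.

```python
# Back-of-envelope check of the "about 40 per cent" rise in atmospheric CO2.
# The 280 ppm pre-industrial baseline and ~397 ppm 2014 level are assumptions
# added here; the article only states the percentage.
pre_industrial_ppm = 280.0
year_2014_ppm = 397.0

rise = (year_2014_ppm - pre_industrial_ppm) / pre_industrial_ppm
print(f"Relative increase since ~1750: {rise:.0%}")  # roughly 40 per cent
```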

Time to crawl out of the rabbit hole?

The rise of data and the death of politics (The Guardian)

Tech pioneers in the US are advocating a new data-based approach to governance – ‘algorithmic regulation’. But if technology provides the answers to society’s problems, what happens to governments?

The Observer, Sunday 20 July 2014

Government by social network? US president Barack Obama with Facebook founder Mark Zuckerberg. Photograph: Mandel Ngan/AFP/Getty Images

On 24 August 1965 Gloria Placente, a 34-year-old resident of Queens, New York, was driving to Orchard Beach in the Bronx. Clad in shorts and sunglasses, the housewife was looking forward to quiet time at the beach. But the moment she crossed the Willis Avenue bridge in her Chevrolet Corvair, Placente was surrounded by a dozen patrolmen. There were also 125 reporters, eager to witness the launch of New York police department’s Operation Corral – an acronym for Computer Oriented Retrieval of Auto Larcenists.

Fifteen months earlier, Placente had driven through a red light and neglected to answer the summons, an offence that Corral was going to punish with a heavy dose of techno-Kafkaesque. It worked as follows: a police car stationed at one end of the bridge radioed the licence plates of oncoming cars to a teletypist miles away, who fed them to a Univac 490 computer, an expensive $500,000 toy ($3.5m in today’s dollars) on loan from the Sperry Rand Corporation. The computer checked the numbers against a database of 110,000 cars that were either stolen or belonged to known offenders. In case of a match the teletypist would alert a second patrol car at the bridge’s other exit. It took, on average, just seven seconds.
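
Stripped of the teletype and the Univac, Corral's logic is a membership check of incoming plates against a hot list. The sketch below is a loose illustration with invented plate numbers; the real database held roughly 110,000 records.

```python
# Minimal sketch of the Operation Corral workflow described above: plates
# radioed from the bridge are checked against a "hot list" of stolen cars and
# known offenders. Plate numbers here are invented for illustration.

hot_list = {"1B-2345", "7X-9921", "4K-0007"}   # stands in for the 110,000-record database

def check_plate(plate: str) -> bool:
    """Return True if the plate should trigger an alert to the second patrol car."""
    return plate in hot_list

for plate in ["3A-1111", "7X-9921"]:
    if check_plate(plate):
        print(f"{plate}: ALERT - stop vehicle at the far end of the bridge")
    else:
        print(f"{plate}: no match, let it pass")
```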

Compared with the impressive police gear of today – automatic number plate recognition, CCTV cameras, GPS trackers – Operation Corral looks quaint. And the possibilities for control will only expand. European officials have considered requiring all cars entering the European market to feature a built-in mechanism that allows the police to stop vehicles remotely. Speaking earlier this year, Jim Farley, a senior Ford executive, acknowledged that “we know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.” That last bit didn’t sound very reassuring and Farley retracted his remarks.

As both cars and roads get “smart,” they promise nearly perfect, real-time law enforcement. Instead of waiting for drivers to break the law, authorities can simply prevent the crime. Thus, a 50-mile stretch of the A14 between Felixstowe and Rugby is to be equipped with numerous sensors that would monitor traffic by sending signals to and from mobile phones in moving vehicles. The telecoms watchdog Ofcom envisions that such smart roads connected to a centrally controlled traffic system could automatically impose variable speed limits to smooth the flow of traffic but also direct the cars “along diverted routes to avoid the congestion and even [manage] their speed”.

Other gadgets – from smartphones to smart glasses – promise even more security and safety. In April, Apple patented technology that deploys sensors inside the smartphone to analyse if the car is moving and if the person using the phone is driving; if both conditions are met, it simply blocks the phone’s texting feature. Intel and Ford are working on Project Mobil – a face recognition system that, should it fail to recognise the face of the driver, would not only prevent the car being started but also send the picture to the car’s owner (bad news for teenagers).

The car is emblematic of transformations in many other domains, from smart environments for “ambient assisted living” where carpets and walls detect that someone has fallen, to various masterplans for the smart city, where municipal services dispatch resources only to those areas that need them. Thanks to sensors and internet connectivity, the most banal everyday objects have acquired tremendous power to regulate behaviour. Even public toilets are ripe for sensor-based optimisation: the Safeguard Germ Alarm, a smart soap dispenser developed by Procter & Gamble and used in some public WCs in the Philippines, has sensors monitoring the doors of each stall. Once you leave the stall, the alarm starts ringing – and can only be stopped by a push of the soap-dispensing button.

In this context, Google’s latest plan to push its Android operating system on to smart watches, smart cars, smart thermostats and, one suspects, smart everything, looks rather ominous. In the near future, Google will be the middleman standing between you and your fridge, you and your car, you and your rubbish bin, allowing the National Security Agency to satisfy its data addiction in bulk and via a single window.

This “smartification” of everyday life follows a familiar pattern: there’s primary data – a list of what’s in your smart fridge and your bin – and metadata – a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses – one recent model promises to track respiration and heart rates and how much you move during the night – and smart utensils that provide nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be – to use the buzzwords of the day – “evidence-based” and “results-oriented,” technology is here to help.

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O’Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term “web 2.0”) has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O’Reilly makes an intriguing case for the virtues of algorithmic regulation – a case that deserves close scrutiny both for what it promises policymakers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can’t write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it’s time to find another rule for finding a good rule – and so on. An algorithm can do this, but it’s the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it’s not just spam: your bank uses similar methods to spot credit-card fraud.
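
The feedback loop described here can be sketched with a toy word-count filter that learns only from user reports. This is an illustration of the principle, not Google's actual system; the scoring is a bare-bones, naive Bayes-style log-odds with invented training messages.

```python
# Toy feedback-trained spam filter: the classifier is only as good as the
# stream of "mark as spam / not spam" signals users keep sending back.
from collections import defaultdict
import math

spam_counts = defaultdict(int)
ham_counts = defaultdict(int)
spam_total, ham_total = 0, 0

def learn(message: str, is_spam: bool) -> None:
    """Incorporate one piece of user feedback."""
    global spam_total, ham_total
    counts = spam_counts if is_spam else ham_counts
    for word in message.lower().split():
        counts[word] += 1
    if is_spam:
        spam_total += 1
    else:
        ham_total += 1

def spam_score(message: str) -> float:
    """Log-odds that the message is spam, with add-one smoothing."""
    score = math.log((spam_total + 1) / (ham_total + 1))
    for word in message.lower().split():
        p_spam = (spam_counts[word] + 1) / (sum(spam_counts.values()) + 1)
        p_ham = (ham_counts[word] + 1) / (sum(ham_counts.values()) + 1)
        score += math.log(p_spam / p_ham)
    return score

# Users "teach" the filter by reporting messages.
learn("cheap pills buy now", is_spam=True)
learn("meeting notes attached", is_spam=False)
print(spam_score("buy cheap pills"))         # positive: looks like spam
print(spam_score("notes from the meeting"))  # negative: looks legitimate
```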

In his essay, O’Reilly draws broader philosophical lessons from such technologies, arguing that they work because they rely on “a deep understanding of the desired outcome” (spam is bad!) and periodically check if the algorithms are actually working as expected (are too many legitimate emails ending up marked as spam?).

O’Reilly presents such technologies as novel and unique – we are living through a digital revolution after all – but the principle behind “algorithmic regulation” would be familiar to the founders of cybernetics – a discipline that, even in its name (it means “the science of governance”) hints at its great regulatory ambitions. This principle, which allows the system to maintain its stability by constantly learning and adapting itself to the changing circumstances, is what the British psychiatrist Ross Ashby, one of the founding fathers of cybernetics, called “ultrastability”.

To illustrate it, Ashby designed the homeostat. This clever device consisted of four interconnected RAF bomb control units – mysterious looking black boxes with lots of knobs and switches – that were sensitive to voltage fluctuations. If one unit stopped working properly – say, because of an unexpected external disturbance – the other three would rewire and regroup themselves, compensating for its malfunction and keeping the system’s overall output stable.

Ashby’s homeostat achieved “ultrastability” by always monitoring its internal state and cleverly redeploying its spare resources.

Like the spam filter, it didn’t have to specify all the possible disturbances – only the conditions for how and when it must be updated and redesigned. This is no trivial departure from how the usual technical systems, with their rigid, if-then rules, operate: suddenly, there’s no need to develop procedures for governing every contingency, for – or so one hopes – algorithms and real-time, immediate feedback can do a better job than inflexible rules out of touch with reality.
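
Ashby's notion of ultrastability can be illustrated with a toy simulation: when a disturbance pushes the monitored variable out of its safe band, the unit abandons its current "wiring" and tries random new settings until stability returns. The sketch below is conceptual only, not a model of the actual RAF hardware, and all parameters are invented.

```python
# Conceptual sketch of Ashby-style "ultrastability": when a disturbance pushes
# the output out of bounds, the unit re-randomises its own wiring rather than
# following a pre-programmed if-then rule. Parameters are invented.
import random

random.seed(1)

def simulate(steps: int = 200, disturbance_at: int = 100) -> None:
    gain = -0.5          # current "wiring": negative feedback keeps output small
    environment = 1.0    # how the environment couples back into the unit
    output = 0.0

    for t in range(steps):
        if t == disturbance_at:
            environment = -1.0   # unexpected external change flips the coupling
        output = output + environment * gain * output + random.uniform(-0.1, 0.1)

        # Ultrastable step: if the essential variable leaves its safe band,
        # try a new random parameter setting and damp the excursion.
        if abs(output) > 2.0:
            gain = random.uniform(-1.0, 1.0)
            output = max(min(output, 2.0), -2.0)

    # With enough steps the unit typically stumbles onto a stabilising gain again.
    print(f"final output {output:+.2f}, final gain {gain:+.2f}")

simulate()
```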

Algorithmic regulation could certainly make the administration of existing laws more efficient. If it can fight credit-card fraud, why not tax fraud? Italian bureaucrats have experimented with the redditometro, or income meter, a tool for comparing people’s spending patterns – recorded thanks to an arcane Italian law – with their declared income, so that authorities know when you spend more than you earn. Spain has expressed interest in a similar tool.
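
In essence, an income-meter style check reduces to comparing recorded spending against declared income and flagging large gaps. A minimal sketch, with invented figures and an arbitrary 20 per cent tolerance, might look like this:

```python
# Minimal sketch of an income-meter style check: flag taxpayers whose recorded
# spending exceeds declared income by more than some tolerance. All figures
# and the 20% tolerance are invented for illustration.

taxpayers = [
    {"name": "A", "declared_income": 30_000, "recorded_spending": 29_000},
    {"name": "B", "declared_income": 25_000, "recorded_spending": 48_000},
    {"name": "C", "declared_income": 90_000, "recorded_spending": 70_000},
]

TOLERANCE = 0.20  # allow spending up to 20% above declared income

for t in taxpayers:
    limit = t["declared_income"] * (1 + TOLERANCE)
    if t["recorded_spending"] > limit:
        gap = t["recorded_spending"] - t["declared_income"]
        print(f"Taxpayer {t['name']}: flagged, spends {gap:,} more than declared")
    else:
        print(f"Taxpayer {t['name']}: no flag")
```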

Such systems, however, are toothless against the real culprits of tax evasion – the super-rich families who profit from various offshoring schemes or simply write outrageous tax exemptions into the law. Algorithmic regulation is perfect for enforcing the austerity agenda while leaving those responsible for the fiscal crisis off the hook. To understand whether such systems are working as expected, we need to modify O’Reilly’s question: for whom are they working? If it’s just the tax-evading plutocrats, the global financial institutions interested in balanced national budgets and the companies developing income-tracking software, then it’s hardly a democratic success.

With his belief that algorithmic regulation is based on “a deep understanding of the desired outcome”, O’Reilly cunningly disconnects the means of doing politics from its ends. But the how of politics is as important as the what of politics – in fact, the former often shapes the latter. Everybody agrees that education, health, and security are all “desired outcomes”, but how do we achieve them? In the past, when we faced the stark political choice of delivering them through the market or the state, the lines of the ideological debate were clear. Today, when the presumed choice is between the digital and the analog or between the dynamic feedback and the static law, that ideological clarity is gone – as if the very choice of how to achieve those “desired outcomes” was apolitical and didn’t force us to choose between different and often incompatible visions of communal living.

By assuming that the utopian world of infinite feedback loops is so efficient that it transcends politics, the proponents of algorithmic regulation fall into the same trap as the technocrats of the past. Yes, these systems are terrifyingly efficient – in the same way that Singapore is terrifyingly efficient (O’Reilly, unsurprisingly, praises Singapore for its embrace of algorithmic regulation). And while Singapore’s leaders might believe that they, too, have transcended politics, it doesn’t mean that their regime cannot be assessed outside the linguistic swamp of efficiency and innovation – by using political, not economic benchmarks.

As Silicon Valley keeps corrupting our language with its endless glorification of disruption and efficiency – concepts at odds with the vocabulary of democracy – our ability to question the “how” of politics is weakened. Silicon Valley’s default answer to the how of politics is what I call solutionism: problems are to be dealt with via apps, sensors, and feedback loops – all provided by startups. Earlier this year Google’s Eric Schmidt even promised that startups would provide the solution to the problem of economic inequality: the latter, it seems, can also be “disrupted”. And where the innovators and the disruptors lead, the bureaucrats follow.

The intelligence services embraced solutionism before other government agencies. Thus, they reduced the topic of terrorism from a subject that had some connection to history and foreign policy to an informational problem of identifying emerging terrorist threats via constant surveillance. They urged citizens to accept that instability is part of the game, that its root causes are neither traceable nor reparable, that the threat can only be pre-empted by out-innovating and out-surveilling the enemy with better communications.

Speaking in Athens last November, the Italian philosopher Giorgio Agamben discussed an epochal transformation in the idea of government, “whereby the traditional hierarchical relation between causes and effects is inverted, so that, instead of governing the causes – a difficult and expensive undertaking – governments simply try to govern the effects”.

Governments’ current favourite psychologist, Daniel Kahneman. Photograph: Richard Saker for the Observer

For Agamben, this shift is emblematic of modernity. It also explains why the liberalisation of the economy can co-exist with the growing proliferation of control – by means of soap dispensers and remotely managed cars – into everyday life. “If government aims for the effects and not the causes, it will be obliged to extend and multiply control. Causes demand to be known, while effects can only be checked and controlled.” Algorithmic regulation is an enactment of this political programme in technological form.

The true politics of algorithmic regulation become visible once its logic is applied to the social nets of the welfare state. There are no calls to dismantle them, but citizens are nonetheless encouraged to take responsibility for their own health. Consider how Fred Wilson, an influential US venture capitalist, frames the subject. “Health… is the opposite side of healthcare,” he said at a conference in Paris last December. “It’s what keeps you out of the healthcare system in the first place.” Thus, we are invited to start using self-tracking apps and data-sharing platforms and monitor our vital indicators, symptoms and discrepancies on our own.

This goes nicely with recent policy proposals to save troubled public services by encouraging healthier lifestyles. Consider a 2013 report by Westminster council and the Local Government Information Unit, a thinktank, calling for the linking of housing and council benefits to claimants’ visits to the gym – with the help of smartcards. They might not be needed: many smartphones are already tracking how many steps we take every day (Google Now, the company’s virtual assistant, keeps score of such data automatically and periodically presents it to users, nudging them to walk more).

The numerous possibilities that tracking devices offer to health and insurance industries are not lost on O’Reilly. “You know the way that advertising turned out to be the native business model for the internet?” he wondered at a recent conference. “I think that insurance is going to be the native business model for the internet of things.” Things do seem to be heading that way: in June, Microsoft struck a deal with American Family Insurance, the eighth-largest home insurer in the US, in which both companies will fund startups that want to put sensors into smart homes and smart cars for the purposes of “proactive protection”.

An insurance company would gladly subsidise the costs of installing yet another sensor in your house – as long as it can automatically alert the fire department or make front porch lights flash in case your smoke detector goes off. For now, accepting such tracking systems is framed as an extra benefit that can save us some money. But when do we reach a point where not using them is seen as a deviation – or, worse, an act of concealment – that ought to be punished with higher premiums?

Or consider a May 2014 report from 2020health, another thinktank, proposing to extend tax rebates to Britons who give up smoking, stay slim or drink less. “We propose ‘payment by results’, a financial reward for people who become active partners in their health, whereby if you, for example, keep your blood sugar levels down, quit smoking, keep weight off, [or] take on more self-care, there will be a tax rebate or an end-of-year bonus,” they state. Smart gadgets are the natural allies of such schemes: they document the results and can even help achieve them – by constantly nagging us to do what’s expected.

The unstated assumption of most such reports is that the unhealthy are not only a burden to society but that they deserve to be punished (fiscally for now) for failing to be responsible. For what else could possibly explain their health problems but their personal failings? It’s certainly not the power of food companies or class-based differences or various political and economic injustices. One can wear a dozen powerful sensors, own a smart mattress and even do a close daily reading of one’s poop – as some self-tracking aficionados are wont to do – but those injustices would still be nowhere to be seen, for they are not the kind of stuff that can be measured with a sensor. The devil doesn’t wear data. Social injustices are much harder to track than the everyday lives of the individuals whose lives they affect.

In shifting the focus of regulation from reining in institutional and corporate malfeasance to perpetual electronic guidance of individuals, algorithmic regulation offers us a good-old technocratic utopia of politics without politics. Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analog era – to be solved through data collection – and not as inevitable results of economic or ideological conflicts.

However, a politics without politics does not mean a politics without control or administration. As O’Reilly writes in his essay: “New technologies make it possible to reduce the amount of regulation while actually increasing the amount of oversight and production of desirable outcomes.” Thus, it’s a mistake to think that Silicon Valley wants to rid us of government institutions. Its dream state is not the small government of libertarians – a small state, after all, needs neither fancy gadgets nor massive servers to process the data – but the data-obsessed and data-obese state of behavioural economists.

The nudging state is enamoured of feedback technology, for its key founding principle is that while we behave irrationally, our irrationality can be corrected – if only the environment acts upon us, nudging us towards the right option. Unsurprisingly, one of the three lonely references at the end of O’Reilly’s essay is to a 2012 speech entitled “Regulation: Looking Backward, Looking Forward” by Cass Sunstein, the prominent American legal scholar who is the chief theorist of the nudging state.

And while the nudgers have already captured the state by making behavioural psychology the favourite idiom of government bureaucracy – Daniel Kahneman is in, Machiavelli is out – the algorithmic regulation lobby advances in more clandestine ways. They create innocuous non-profit organisations like Code for America which then co-opt the state – under the guise of encouraging talented hackers to tackle civic problems.

Airbnb: part of the reputation-driven economy.

Such initiatives aim to reprogramme the state and make it feedback-friendly, crowding out other means of doing politics. For all those tracking apps, algorithms and sensors to work, databases need interoperability – which is what such pseudo-humanitarian organisations, with their ardent belief in open data, demand. And when the government is too slow to move at Silicon Valley’s speed, they simply move inside the government. Thus, Jennifer Pahlka, the founder of Code for America and a protege of O’Reilly, became the deputy chief technology officer of the US government – while pursuing a one-year “innovation fellowship” from the White House.

Cash-strapped governments welcome such colonisation by technologists – especially if it helps to identify and clean up datasets that can be profitably sold to companies who need such data for advertising purposes. Recent clashes over the sale of student and health data in the UK are just a precursor of battles to come: after all state assets have been privatised, data is the next target. For O’Reilly, open data is “a key enabler of the measurement revolution”.

This “measurement revolution” seeks to quantify the efficiency of various social programmes, as if the rationale behind the social nets that some of them provide was to achieve perfection of delivery. The actual rationale, of course, was to enable a fulfilling life by suppressing certain anxieties, so that citizens can pursue their life projects relatively undisturbed. This vision did spawn a vast bureaucratic apparatus and the critics of the welfare state from the left – most prominently Michel Foucault – were right to question its disciplining inclinations. Nonetheless, neither perfection nor efficiency were the “desired outcome” of this system. Thus, to compare the welfare state with the algorithmic state on those grounds is misleading.

But we can compare their respective visions for human fulfilment – and the role they assign to markets and the state. Silicon Valley’s offer is clear: thanks to ubiquitous feedback loops, we can all become entrepreneurs and take care of our own affairs! As Brian Chesky, the chief executive of Airbnb, told the Atlantic last year, “What happens when everybody is a brand? When everybody has a reputation? Every person can become an entrepreneur.”

Under this vision, we will all code (for America!) in the morning, drive Uber cars in the afternoon, and rent out our kitchens as restaurants – courtesy of Airbnb – in the evening. As O’Reilly writes of Uber and similar companies, “these services ask every passenger to rate their driver (and drivers to rate their passenger). Drivers who provide poor service are eliminated. Reputation does a better job of ensuring a superb customer experience than any amount of government regulation.”

The state behind the “sharing economy” does not wither away; it might be needed to ensure that the reputation accumulated on Uber, Airbnb and other platforms of the “sharing economy” is fully liquid and transferable, creating a world where our every social interaction is recorded and assessed, erasing whatever differences exist between social domains. Someone, somewhere will eventually rate you as a passenger, a house guest, a student, a patient, a customer. Whether this ranking infrastructure will be decentralised, provided by a giant like Google or rest with the state is not yet clear but the overarching objective is: to make reputation into a feedback-friendly social net that could protect the truly responsible citizens from the vicissitudes of deregulation.

Admiring the reputation models of Uber and Airbnb, O’Reilly wants governments to be “adopting them where there are no demonstrable ill effects”. But what counts as an “ill effect” and how to demonstrate it is a key question that belongs to the how of politics that algorithmic regulation wants to suppress. It’s easy to demonstrate “ill effects” if the goal of regulation is efficiency but what if it is something else? Surely, there are some benefits – fewer visits to the psychoanalyst, perhaps – in not having your every social interaction ranked?

The imperative to evaluate and demonstrate “results” and “effects” already presupposes that the goal of policy is the optimisation of efficiency. However, as long as democracy is irreducible to a formula, its composite values will always lose this battle: they are much harder to quantify.

For Silicon Valley, though, the reputation-obsessed algorithmic state of the sharing economy is the new welfare state. If you are honest and hardworking, your online reputation would reflect this, producing a highly personalised social net. It is “ultrastable” in Ashby’s sense: while the welfare state assumes the existence of specific social evils it tries to fight, the algorithmic state makes no such assumptions. The future threats can remain fully unknowable and fully addressable – on the individual level.

Silicon Valley, of course, is not alone in touting such ultrastable individual solutions. Nassim Taleb, in his best-selling 2012 book Antifragile, makes a similar, if more philosophical, plea for maximising our individual resourcefulness and resilience: don’t get one job but many, don’t take on debt, count on your own expertise. It’s all about resilience, risk-taking and, as Taleb puts it, “having skin in the game”. As Julian Reid and Brad Evans write in their new book, Resilient Life: The Art of Living Dangerously, this growing cult of resilience masks a tacit acknowledgement that no collective project could even aspire to tame the proliferating threats to human existence – we can only hope to equip ourselves to tackle them individually. “When policy-makers engage in the discourse of resilience,” write Reid and Evans, “they do so in terms which aim explicitly at preventing humans from conceiving of danger as a phenomenon from which they might seek freedom and even, in contrast, as that to which they must now expose themselves.”

What, then, is the progressive alternative? “The enemy of my enemy is my friend” doesn’t work here: just because Silicon Valley is attacking the welfare state doesn’t mean that progressives should defend it to the very last bullet (or tweet). First, even leftist governments have limited space for fiscal manoeuvres, as the kind of discretionary spending required to modernise the welfare state would never be approved by the global financial markets. And it’s the ratings agencies and bond markets – not the voters – who are in charge today.

Second, the leftist critique of the welfare state has become only more relevant today when the exact borderlines between welfare and security are so blurry. When Google’s Android powers so much of our everyday life, the government’s temptation to govern us through remotely controlled cars and alarm-operated soap dispensers will be all too great. This will expand government’s hold over areas of life previously free from regulation.

With so much data, the government’s favourite argument in fighting terror – if only the citizens knew as much as we do, they too would impose all these legal exceptions – easily extends to other domains, from health to climate change. Consider a recent academic paper that used Google search data to study obesity patterns in the US, finding significant correlation between search keywords and body mass index levels. “Results suggest great promise of the idea of obesity monitoring through real-time Google Trends data”, note the authors, which would be “particularly attractive for government health institutions and private businesses such as insurance companies.”
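
The method behind such studies boils down to correlating regional search interest with average body mass index. The sketch below shows that calculation on synthetic numbers, not the paper's data; it uses Python's statistics.correlation, available from Python 3.10.

```python
# Sketch of the kind of correlation the obesity-monitoring paper reports:
# regional search interest for certain keywords vs average BMI. The numbers
# below are synthetic; only the calculation is illustrated.
from statistics import correlation  # Python 3.10+

search_interest = [42, 55, 61, 48, 70, 66, 53]             # relative Trends-style scores
average_bmi     = [26.1, 27.4, 28.0, 26.8, 29.3, 28.7, 27.2]

r = correlation(search_interest, average_bmi)
print(f"Pearson r between search interest and BMI: {r:.2f}")
```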

If Google senses a flu epidemic somewhere, it’s hard to challenge its hunch – we simply lack the infrastructure to process so much data at this scale. Google can be proven wrong after the fact – as has recently been the case with its flu trends data, which was shown to overestimate the number of infections, possibly because of its failure to account for the intense media coverage of flu – but so is the case with most terrorist alerts. It’s the immediate, real-time nature of computer systems that makes them perfect allies of an infinitely expanding and pre-emption‑obsessed state.

Perhaps, the case of Gloria Placente and her failed trip to the beach was not just a historical oddity but an early omen of how real-time computing, combined with ubiquitous communication technologies, would transform the state. One of the few people to have heeded that omen was a little-known American advertising executive called Robert MacBride, who pushed the logic behind Operation Corral to its ultimate conclusions in his unjustly neglected 1967 book, The Automated State.

At the time, America was debating the merits of establishing a national data centre to aggregate various national statistics and make it available to government agencies. MacBride attacked his contemporaries’ inability to see how the state would exploit the metadata accrued as everything was being computerised. Instead of “a large scale, up-to-date Austro-Hungarian empire”, modern computer systems would produce “a bureaucracy of almost celestial capacity” that can “discern and define relationships in a manner which no human bureaucracy could ever hope to do”.

“Whether one bowls on a Sunday or visits a library instead is [of] no consequence since no one checks those things,” he wrote. Not so when computer systems can aggregate data from different domains and spot correlations. “Our individual behaviour in buying and selling an automobile, a house, or a security, in paying our debts and acquiring new ones, and in earning money and being paid, will be noted meticulously and studied exhaustively,” warned MacBride. Thus, a citizen will soon discover that “his choice of magazine subscriptions… can be found to indicate accurately the probability of his maintaining his property or his interest in the education of his children.” This sounds eerily similar to the recent case of a hapless father who found that his daughter was pregnant from a coupon that Target, a retailer, sent to their house. Target’s hunch was based on its analysis of products – for example, unscented lotion – usually bought by other pregnant women.

For MacBride the conclusion was obvious. “Political rights won’t be violated but will resemble those of a small stockholder in a giant enterprise,” he wrote. “The mark of sophistication and savoir-faire in this future will be the grace and flexibility with which one accepts one’s role and makes the most of what it offers.” In other words, since we are all entrepreneurs first and citizens second, we might as well make the most of it.

What, then, is to be done? Technophobia is no solution. Progressives need technologies that would stick with the spirit, if not the institutional form, of the welfare state, preserving its commitment to creating ideal conditions for human flourishing. Even some ultrastability is welcome. Stability was a laudable goal of the welfare state before it had encountered a trap: in specifying the exact protections that the state was to offer against the excesses of capitalism, it could not easily deflect new, previously unspecified forms of exploitation.

How do we build welfarism that is both decentralised and ultrastable? A form of guaranteed basic income – whereby some welfare services are replaced by direct cash transfers to citizens – fits the two criteria.

Creating the right conditions for the emergence of political communities around causes and issues they deem relevant would be another good step. Full compliance with the principle of ultrastability dictates that such issues cannot be anticipated or dictated from above – by political parties or trade unions – and must be left unspecified.

What can be specified is the kind of communications infrastructure needed to abet this cause: it should be free to use, hard to track, and open to new, subversive uses. Silicon Valley’s existing infrastructure is great for fulfilling the needs of the state, not of self-organising citizens. It can, of course, be redeployed for activist causes – and it often is – but there’s no reason to accept the status quo as either ideal or inevitable.

Why, after all, appropriate what should belong to the people in the first place? While many of the creators of the internet bemoan how low their creature has fallen, their anger is misdirected. The fault is not with that amorphous entity but, first of all, with the absence of robust technology policy on the left – a policy that can counter the pro-innovation, pro-disruption, pro-privatisation agenda of Silicon Valley. In its absence, all these emerging political communities will operate with their wings clipped. Whether the next Occupy Wall Street would be able to occupy anything in a truly smart city remains to be seen: most likely, they would be out-censored and out-droned.

To his credit, MacBride understood all of this in 1967. “Given the resources of modern technology and planning techniques,” he warned, “it is really no great trick to transform even a country like ours into a smoothly running corporation where every detail of life is a mechanical function to be taken care of.” MacBride’s fear is O’Reilly’s master plan: the government, he writes, ought to be modelled on the “lean startup” approach of Silicon Valley, which is “using data to constantly revise and tune its approach to the market”. It’s this very approach that Facebook has recently deployed to maximise user engagement on the site: if showing users more happy stories does the trick, so be it.

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, roughly at the same time as The Automated State, put it best: “Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator.”

Geoengineering Approaches to Reduce Climate Change Unlikely to Succeed (Science Daily)

Dec. 5, 2013 — Reducing the amount of sunlight reaching the planet’s surface by geoengineering may not undo climate change after all. Two German researchers used a simple energy balance analysis to explain how Earth’s water cycle responds differently to heating by sunlight than it does to warming due to a stronger atmospheric greenhouse effect. Further, they show that this difference implies that reflecting sunlight to reduce temperatures may have unwanted effects on Earth’s rainfall patterns.

Heavy rainfall events can be more common in a warmer world. (Credit: Annett Junginger, distributed via imaggeo.egu.eu)

The results are now published in Earth System Dynamics, an open access journal of the European Geosciences Union (EGU).

Global warming alters Earth’s water cycle since more water evaporates to the air as temperatures increase. Increased evaporation can dry out some regions while, at the same time, resulting in more rain falling in other areas due to the excess moisture in the atmosphere. The more water evaporates per degree of warming, the stronger the influence of increasing temperature on the water cycle. But the new study shows the water cycle does not react the same way to different types of warming.

Axel Kleidon and Maik Renner of the Max Planck Institute for Biogeochemistry in Jena, Germany, used a simple energy balance model to determine how sensitive the water cycle is to an increase in surface temperature due to a stronger greenhouse effect and to an increase in solar radiation. They predicted the response of the water cycle for the two cases and found that, in the former, evaporation increases by 2% per degree of warming while in the latter this number reaches 3%. This prediction confirmed results of much more complex climate models.

“These different responses to surface heating are easy to explain,” says Kleidon, who uses a pot on the kitchen stove as an analogy. “The temperature in the pot is increased by putting on a lid or by turning up the heat — but these two cases differ by how much energy flows through the pot,” he says. A stronger greenhouse effect puts a thicker ‘lid’ over Earth’s surface but, if there is no additional sunlight (if we don’t turn up the heat on the stove), extra evaporation takes place solely due to the increase in temperature. Turning up the heat by increasing solar radiation, on the other hand, enhances the energy flow through Earth’s surface because of the need to balance the greater energy input with stronger cooling fluxes from the surface. As a result, there is more evaporation and a stronger effect on the water cycle.

In the new Earth System Dynamics study the authors also show how these findings can have profound consequences for geoengineering. Many geoengineering approaches aim to reduce global warming by reducing the amount of sunlight reaching Earth’s surface (or, in the pot analogy, reduce the heat from the stove). But when Kleidon and Renner applied their results to such a geoengineering scenario, they found out that simultaneous changes in the water cycle and the atmosphere cannot be compensated for at the same time. Therefore, reflecting sunlight by geoengineering is unlikely to restore the planet’s original climate.

“It’s like putting a lid on the pot and turning down the heat at the same time,” explains Kleidon. “While in the kitchen you can reduce your energy bill by doing so, in the Earth system this slows down the water cycle with wide-ranging potential consequences,” he says.
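
A back-of-envelope reading of the sensitivities quoted above makes the point concrete: if dimming the sun exactly cancels a greenhouse-driven warming, evaporation still ends up lower than it started. The 2 K of warming used in the sketch is an assumed figure for illustration only.

```python
# Back-of-envelope use of the sensitivities quoted above: evaporation changes
# by ~2% per K for greenhouse warming and ~3% per K for solar changes.
# The 2 K greenhouse warming chosen here is an assumption for illustration.

GREENHOUSE_SENSITIVITY = 0.02   # fractional evaporation change per K (greenhouse)
SOLAR_SENSITIVITY      = 0.03   # fractional evaporation change per K (solar)

greenhouse_warming = 2.0        # K of warming to be offset (assumed)

# Geoengineering dims the sun just enough to cancel the temperature rise:
solar_cooling = -greenhouse_warming

evap_change = (GREENHOUSE_SENSITIVITY * greenhouse_warming
               + SOLAR_SENSITIVITY * solar_cooling)

print("Net temperature change: 0.0 K (by construction)")
print(f"Net evaporation change: {evap_change:+.1%}")   # about -2%: a weaker water cycle
```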

Kleidon and Renner’s insight comes from looking at the processes that heat and cool Earth’s surface and how they change when the surface warms. Evaporation from the surface plays a key role, but the researchers also took into account how the evaporated water is transported into the atmosphere. They combined simple energy balance considerations with a physical assumption for the way water vapour is transported, and separated the contributions of surface heating from solar radiation and from increased greenhouse gases in the atmosphere to obtain the two sensitivities. One of the referees for the paper commented: “it is a stunning result that such a simple analysis yields the same results as the climate models.”

Journal Reference:

  1. A. Kleidon, M. Renner. A simple explanation for the sensitivity of the hydrologic cycle to global climate change. Earth System Dynamics Discussions, 2013; 4 (2): 853. DOI: 10.5194/esdd-4-853-2013

Water management in the country is critical, researchers say (Fapesp)

The assessment was made by participants in a seminar on water resources and agriculture, held at FAPESP as part of the activities of the 2013 Fundação Bunge Award (Wikipedia)

October 9, 2013

By Elton Alisson

Agência FAPESP – Water resources management in Brazil is a critical problem, owing to the lack of mechanisms, technologies and, above all, of sufficient human resources to manage the country’s river basins adequately. The assessment was made by researchers taking part in the “Seminar on Water Resources and Agriculture”, held on October 2 at FAPESP.

The event was part of the activities of the 58th Fundação Bunge Award and the 34th Fundação Bunge Youth Award, which this year covered the areas of Water Resources and Agriculture and Literary Criticism. In Water Resources and Agriculture, the awards went, respectively, to professors Klaus Reichardt, of the Center for Nuclear Energy in Agriculture (CENA) at the University of São Paulo (USP), and Samuel Beskow, of the Federal University of Pelotas (UFPel).

“Brazil has problems managing its water resources because there are no mechanisms, instruments, technologies and, above all, not enough human resources with sufficient training and an interdisciplinary background to confront and solve water management problems,” said José Galizia Tundisi, a researcher at the International Institute of Ecology (IIE) who was invited to take part in the event.

“We need to generate methods, concepts and mechanisms applicable to the country’s conditions,” said the researcher, who currently directs the worldwide training programme for water resources managers run by the Global Network of Science Academies (IAP), an institution that represents more than one hundred science academies around the world.

According to Tundisi, river basins were adopted as the priority units for managing water use by the National Water Resources Policy, enacted in 1997. All of the country’s river basins, however, lack the instruments needed for adequate management, the researcher pointed out.

“It is very hard to find a river basin committee [a collegiate body made up of civil society representatives, responsible for managing the water resources of a given basin] that is fully equipped in terms of techniques and programmes to improve the performance of water use management,” he said.

Modelagem hidrológica

According to Tundisi, among the instruments that can facilitate management and decision-making for Brazilian river basins are computational models that simulate basin behavior, such as the one developed by Beskow, a professor in the Department of Water Resources Engineering at UFPel and winner of this year’s Fundação Bunge Youth Award in Water Resources and Agriculture.

Named Lavras Simulation of Hydrology (Lash), the hydrological model was developed by Beskow during his doctorate at the Federal University of Lavras (Ufla), in Minas Gerais, with a research period at Purdue University in the United States.

“There are several hydrological models developed in different parts of the world, especially in the United States and Europe, and they are extremely valuable tools for river basin management and decision-making,” Beskow said.

“These hydrological models are useful for designing hydraulic structures such as bridges and reservoirs, for making real-time flood forecasts, and for measuring the impacts of actions such as deforestation or changes in land use in the areas surrounding river basins,” he said.

According to the researcher, the first version of Lash was completed in 2009 and applied in research on rainfall-runoff modeling to evaluate the potential for electricity generation in small river basins, such as that of the Ribeirão Jaguará, in Minas Gerais, which covers 32 square kilometers.
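As a rough illustration of what rainfall-runoff simulation involves, here is a minimal single-reservoir sketch. It is not the Lash model, and every number in it is invented for the example.

```python
# Minimal rainfall-runoff sketch (a single linear reservoir), meant only to
# illustrate the kind of simulation such models perform; this is not Lash,
# and all parameter values and rainfall figures below are invented.
rainfall_mm = [0, 12, 30, 5, 0, 0, 18, 2, 0, 0]   # daily rainfall over the basin
k = 0.3      # fraction of stored water released as runoff each day
loss = 0.4   # fraction of rainfall lost to evapotranspiration and deep infiltration

storage = 0.0
for day, rain in enumerate(rainfall_mm):
    storage += rain * (1 - loss)      # effective rainfall enters the reservoir
    runoff = k * storage              # linear-reservoir outflow
    storage -= runoff
    print(f"day {day}: rain={rain:4.1f} mm  runoff={runoff:5.2f} mm")
# Calibrated against observed streamflow, even a simple structure like this can
# support flood forecasting or small-hydro potential studies.
```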

Encouraged by those results, in 2011 the researcher began developing a second version of the hydrological simulation model, which he intends to make available to managers of river basins of different sizes.

“The model now includes a database through which users can import and store rainfall, temperature, humidity and land-use data, among other parameters, generated at the different stations of a given basin’s monitoring network, which makes water resources management possible,” he said.

One of the main motivations for developing hydrological models and simulations in Brazil, according to the researcher, is the lack of fluviometric data (measurements of river water levels, velocity and discharge) for the country’s river basins.

The number of fluviometric stations registered in the Hydrological Information System (HidroWeb), operated by the National Water Agency (ANA), is low, and many of them are out of operation, Beskow said.

“There are little more than one hundred fluviometric stations in Rio Grande do Sul registered in this system that give us time series of up to ten years,” the researcher said. “That number of stations is far too low to manage the water resources of a state like Rio Grande do Sul.”

Rational water use

Beskow and Klaus Reichardt, who is also a professor at the Luiz de Queiroz College of Agriculture (Esalq), stressed the need to develop technologies to use water ever more rationally in agriculture, since the sector consumes most of the readily available fresh water in the world today.

Of all the water on Earth, which covers about 70% of the planet, 97.5% is salt water and only 2.5% is fresh. Of that small fraction of fresh water, 69% is locked up in glaciers and permanent snow, 29.8% in aquifers and 0.9% in reservoirs. Of the 0.3% that is readily available, 65% is used by agriculture, 22% by industry and 7% for human consumption, while 6% is lost, Reichardt pointed out.
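A quick arithmetic check shows how these shares chain together, using only the figures quoted above (the rounding is illustrative).

```python
# Chaining the percentages quoted above to see how small the slice really is.
fresh_fraction    = 0.025   # 2.5% of Earth's water is fresh
readily_available = 0.003   # 0.3% of that fresh water is not locked in ice, aquifers or reservoirs
agriculture_share = 0.65    # 65% of the readily available water goes to agriculture

ag_share_of_all_water = fresh_fraction * readily_available * agriculture_share
print(f"Agriculture uses about {ag_share_of_all_water:.4%} of all water on Earth")
# -> roughly 0.005% of the planet's water sustains most of its irrigation.
```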

“In Brazil, we have the Amazon and the Guarani Aquifer, which may yet be tapped,” said the researcher, who has had projects supported by FAPESP.

Reichardt received the award for his contributions to soil physics, studying and developing ways to calculate the movement of water in sandy, clayey and other soils, which vary widely. “This has been applied to several soil types, for example relating saturated hydraulic conductivity to moisture content,” he said.

In recent years the researcher has been working, in collaboration with colleagues at the Brazilian Agricultural Research Corporation (Embrapa), on computed tomography for measuring soil water. “With this technique we have been able to uncover very interesting phenomena that occur in the soil,” Reichardt said.

The cost of inaction

The event was attended by Eduardo Moacyr Krieger and Carlos Henrique de Brito Cruz, respectively vice-president and scientific director of FAPESP; Jacques Marcovitch, president of Fundação Bunge; Ardaillon Simões, president of the Pernambuco State Science and Technology Support Foundation (Facepe); and José Antônio Frizzone, a professor at Esalq, among other authorities.

In his remarks, Krieger noted that Fundação Bunge and FAPESP have many characteristics in common. “By rewarding the best researchers in selected areas each year, Fundação Bunge shows its concern for scientific merit and research quality,” Krieger said.

“FAPESP, in a way, does the same when it ‘rewards’ researchers through fellowships, grants and other forms of support, taking into account the quality of the research carried out.”

Brito Cruz stressed that the award given by Fundação Bunge helps create in Brazil the possibility for researchers to stand out in Brazilian society for their ability and intellectual achievements.

“That is essential for building a country that is master of its own destiny, capable of creating its future and facing new challenges of any kind,” said Brito Cruz. “A country can only advance if it has people with the intellectual capacity to understand problems and create solutions for them.”

Marcovitch, in turn, argued that the problem of managing water use in the country can be approached in two ways. The first starts from the premise that the country lies in a splendid cradle, has abundant natural resources and therefore need not worry about the problem. The second warns of the consequences of inaction in the face of the need for adequate management of the country’s water resources, as Tundisi has been doing, so as to spur researchers such as Beskow and Reichardt to find answers.

“[We researchers] have a responsibility to raise society’s awareness of the risks and the cost of inaction in managing the country’s water resources,” he said.

Is War Really Disappearing? New Analysis Suggests Not (Science Daily)

Aug. 29, 2013 — While some researchers have claimed that war between nations is in decline, a new analysis suggests we shouldn’t be too quick to celebrate a more peaceful world.

The study finds that there is no clear trend indicating that nations are less eager to wage war, said Bear Braumoeller, author of the study and associate professor of political science at The Ohio State University.

Conflict does appear to be less common than it had been in the past, he said. But that’s due more to an inability to fight than to an unwillingness to do so.

“As empires fragment, the world has split up into countries that are smaller, weaker and farther apart, so they are less able to fight each other,” Braumoeller said.

“Once you control for their ability to fight each other, the proclivity to go to war hasn’t really changed over the last two centuries.”

Braumoeller presented his research Aug. 29 in Chicago at the annual meeting of the American Political Science Association.

Several researchers have claimed in recent years that war is in decline, most notably Steven Pinker in his 2011 book The Better Angels of Our Nature: Why Violence Has Declined.

As evidence, Pinker points to a decline in war deaths per capita. But Braumoeller said he believes that is a flawed measure.

“That accurately reflects the average citizen’s risk from death in war, but countries’ calculations in war are more complicated than that,” he said.

Moreover, since population grows exponentially, it would be hard for war deaths to keep up with the booming number of people in the world.

Because we cannot predict whether wars will be quick and easy or long and drawn-out (“Remember ‘Mission Accomplished’?” Braumoeller says), a better measure of how warlike we are is how often countries use force — such as missile strikes or armed border skirmishes — against other countries, he said.

“Any one of these uses of force could conceivably start a war, so their frequency is a good indication of how war prone we are at any particular time,” he said.

Braumoeller used the Correlates of War Militarized Interstate Dispute database, which scholars from around the world study to measure uses of force up to and including war.

The data shows that the uses of force held more or less constant through World War I, but then increased steadily thereafter.

This trend is consistent with the growth in the number of countries over the course of the last two centuries.

But just looking at the number of conflicts per pair of countries is misleading, he said, because countries won’t go to war if they aren’t “politically relevant” to each other.

Military power and geography play a big role in relevance; it is unlikely that a small, weak country in South America would start a war with a small, weak country in Africa.

Once Braumoeller took into account both the number of countries and their political relevance to one another, the results showed essentially no change to the trend of the use of force over the last 200 years.
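A toy illustration of that normalization follows; it is not Braumoeller’s actual data or method, and every number in it is invented. The point is simply that raw counts of force use can rise because the number of country pairs grows roughly with the square of the number of countries, so a fairer gauge is uses of force per politically relevant pair.

```python
# Toy illustration of normalizing force use by politically relevant dyads.
# Not Braumoeller's analysis: the period labels, shares and counts are invented.
def n_dyads(n_countries: int) -> int:
    """Number of unordered country pairs."""
    return n_countries * (n_countries - 1) // 2

periods = [
    # (label, countries, politically relevant share of dyads, uses of force)
    ("early 19th c.", 40, 0.20, 30),
    ("early 20th c.", 60, 0.15, 55),
    ("late 20th c.", 180, 0.05, 160),
]

for label, n, relevant_share, uses in periods:
    relevant_dyads = relevant_share * n_dyads(n)
    print(f"{label:14s} raw uses = {uses:4d}   per relevant dyad = {uses / relevant_dyads:.3f}")
# Raw counts climb as the number of states grows, while the per-relevant-dyad
# rate can stay roughly flat -- the pattern the article describes.
```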

While researchers such as Pinker have suggested that countries are actually less inclined to fight than they once were, Braumoeller said these results suggest a different reason for the recent decline in war.

“With countries being smaller, weaker and more distant from each other, they certainly have less ability to fight. But we as humans shouldn’t get credit for being more peaceful just because we’re not as able to fight as we once were,” he said.

“There is no indication that we actually have less proclivity to wage war.”

When Will My Computer Understand Me? (Science Daily)

June 10, 2013 — It’s not hard to tell the difference between the “charge” of a battery and criminal “charges.” But for computers, distinguishing between the various meanings of a word is difficult.

A “charge” can be a criminal charge, an accusation, a battery charge, or a person in your care. Some of those meanings are closer together, others further apart. (Credit: Image courtesy of University of Texas at Austin, Texas Advanced Computing Center)

For more than 50 years, linguists and computer scientists have tried to get computers to understand human language by programming semantics as software. Driven initially by efforts to translate Russian scientific texts during the Cold War (and more recently by the value of information retrieval and data analysis tools), these efforts have met with mixed success. IBM’s Jeopardy-winning Watson system and Google Translate are high-profile, successful applications of language technologies, but the humorous answers and mistranslations they sometimes produce are evidence of the continuing difficulty of the problem.

Our ability to easily distinguish between multiple word meanings is rooted in a lifetime of experience. Using the context in which a word is used, an intrinsic understanding of syntax and logic, and a sense of the speaker’s intention, we intuit what another person is telling us.

“In the past, people have tried to hand-code all of this knowledge,” explained Katrin Erk, a professor of linguistics at The University of Texas at Austin focusing on lexical semantics. “I think it’s fair to say that this hasn’t been successful. There are just too many little things that humans know.”

Other efforts have tried to use dictionary meanings to train computers to better understand language, but these attempts have also faced obstacles. Dictionaries have their own sense distinctions, which are crystal clear to the dictionary-maker but murky to the dictionary reader. Moreover, no two dictionaries provide exactly the same set of meanings.

Watching annotators struggle to make sense of conflicting definitions led Erk to try a different tactic. Instead of hard-coding human logic or deciphering dictionaries, why not mine a vast body of texts (which are a reflection of human knowledge) and use the implicit connections between the words to create a weighted map of relationships — a dictionary without a dictionary?

“An intuition for me was that you could visualize the different meanings of a word as points in space,” she said. “You could think of them as sometimes far apart, like a battery charge and criminal charges, and sometimes close together, like criminal charges and accusations (“the newspaper published charges…”). The meaning of a word in a particular context is a point in this space. Then we don’t have to say how many senses a word has. Instead we say: ‘This use of the word is close to this usage in another sentence, but far away from the third use.'”

To create a model that can accurately recreate the intuitive ability to distinguish word meaning requires a lot of text and a lot of analytical horsepower.

“The lower end for this kind of research is a text collection of 100 million words,” she explained. “If you can give me a few billion words, I’d be much happier. But how can we process all of that information? That’s where supercomputers and Hadoop come in.”

Applying Computational Horsepower

Erk initially conducted her research on desktop computers, but around 2009 she began using the parallel computing systems at the Texas Advanced Computing Center (TACC). Access to a special Hadoop-optimized subsystem on TACC’s Longhorn supercomputer allowed Erk and her collaborators to expand the scope of their research. Hadoop is a software framework well suited to text analysis and to mining unstructured data, and it can take advantage of large computer clusters. Computational models that take weeks to run on a desktop computer can run in hours on Longhorn. This opened up new possibilities.

“In a simple case we count how often a word occurs in close proximity to other words. If you’re doing this with one billion words, do you have a couple of days to wait to do the computation? It’s no fun,” Erk said. “With Hadoop on Longhorn, we could get the kind of data that we need to do language processing much faster. That enabled us to use larger amounts of data and develop better models.”
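Here is a minimal sketch of the kind of proximity counting Erk describes, in plain Python rather than Hadoop; the toy corpus and the two-word window are arbitrary choices for the example.

```python
# Minimal co-occurrence counting sketch (a plain-Python stand-in for the Hadoop
# jobs described above; the toy corpus, window size and lack of weighting are
# illustrative choices, not details from Erk's system).
from collections import defaultdict

corpus = [
    "the police filed criminal charges against the suspect",
    "the newspaper published charges of corruption",
    "the battery charge lasted all day",
]

window = 2  # words to the left and right that count as "close proximity"
cooc = defaultdict(lambda: defaultdict(int))

for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                cooc[word][tokens[j]] += 1

# The context counts for one word become one row of the word-meaning space.
print(dict(cooc["charges"]))
```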

Treating words in a relational, non-fixed way corresponds to emerging psychological notions of how the mind deals with language and concepts in general, according to Erk. Instead of rigid definitions, concepts have “fuzzy boundaries” where the meaning, value and limits of the idea can vary considerably according to the context or conditions. Erk takes this idea of language and recreates a model of it from hundreds of thousands of documents.

Say That Another Way

So how can we describe word meanings without a dictionary? One way is to use paraphrases. A good paraphrase is one that is “close to” the word meaning in that high-dimensional space that Erk described.

“We use a gigantic 10,000-dimensional space with all these different points for each word to predict paraphrases,” Erk explained. “If I give you a sentence such as, ‘This is a bright child,’ the model can tell you automatically what are good paraphrases (‘an intelligent child’) and what are bad paraphrases (‘a glaring child’). This is quite useful in language technology.”
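A toy illustration of ranking paraphrase candidates by closeness in such a space: the three-dimensional, hand-made vectors below merely stand in for the 10,000-dimensional ones built from real corpora.

```python
# Toy paraphrase ranking by cosine similarity. The vectors are tiny and invented;
# in the work described above they come from co-occurrence counts over billions
# of words and have thousands of dimensions.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Hypothetical context vectors for "bright" as used in "a bright child".
vectors = {
    "bright(child)": [0.1, 0.9, 0.2],
    "intelligent":   [0.2, 0.8, 0.3],
    "glaring":       [0.9, 0.1, 0.7],
}

target = vectors["bright(child)"]
for candidate in ("intelligent", "glaring"):
    print(candidate, round(cosine(target, vectors[candidate]), 3))
# "intelligent" scores far higher than "glaring", so it is the better paraphrase
# for this use of "bright" -- the behaviour described in the quote above.
```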

Language technology already helps millions of people perform practical and valuable tasks every day via web searches and question-answer systems, but it is poised for even more widespread applications.

Automatic information extraction is an application where Erk’s paraphrasing research may be critical. Say, for instance, you want to extract a list of diseases, their causes, symptoms and cures from millions of pages of medical information on the web.

“Researchers use slightly different formulations when they talk about diseases, so knowing good paraphrases would help,” Erk said.

In a paper to appear in ACM Transactions on Intelligent Systems and Technology, Erk and her collaborators showed that they could achieve state-of-the-art results with their automatic paraphrasing approach.

Recently, Erk and Ray Mooney, a computer science professor also at The University of Texas at Austin, were awarded a grant from the Defense Advanced Research Projects Agency to combine Erk’s distributional, high dimensional space representation of word meanings with a method of determining the structure of sentences based on Markov logic networks.

“Language is messy,” said Mooney. “There is almost nothing that is true all the time. When we ask, ‘How similar is this sentence to another sentence?’ our system turns that question into a probabilistic theorem-proving task, and that task can be very computationally complex.”

In their paper, “Montague Meets Markov: Deep Semantics with Probabilistic Logical Form,” presented at the Second Joint Conference on Lexical and Computational Semantics (STARSEM2013) in June, Erk, Mooney and colleagues announced their results on a number of challenge problems from the field of artificial intelligence.

In one problem, Longhorn was given a sentence and had to infer whether another sentence was true based on the first. Using an ensemble of different sentence parsers, word meaning models and Markov logic implementations, Mooney and Erk’s system predicted the correct answer with 85% accuracy, which is near the top of the results in this challenge. They continue to work to improve the system.

There is a common saying in the machine-learning world that goes: “There’s no data like more data.” While more data helps, taking advantage of that data is key.

“We want to get to a point where we don’t have to learn a computer language to communicate with a computer. We’ll just tell it what to do in natural language,” Mooney said. “We’re still a long way from having a computer that can understand language as well as a human being does, but we’ve made definite progress toward that goal.”

Brain Scans Predict Which Criminals Are Most Likely to Reoffend (Wired)

BY GREG MILLER

03.26.13 – 3:40 PM

Photo: Erika Kyte/Getty Images

Brain scans of convicted felons can predict which ones are most likely to get arrested after they get out of prison, scientists have found in a study of 96 male offenders.

“It’s the first time brain scans have been used to predict recidivism,” said neuroscientist Kent Kiehl of the Mind Research Network in Albuquerque, New Mexico, who led the new study. Even so, Kiehl and others caution that the method is nowhere near ready to be used in real-life decisions about sentencing or parole.

Generally speaking, brain scans or other neuromarkers could be useful in the criminal justice system if the benefits in terms of better accuracy outweigh the likely higher costs of the technology compared to conventional pencil-and-paper risk assessments, says Stephen Morse, a legal scholar specializing in criminal law and neuroscience at the University of Pennsylvania. The key questions to ask, Morse says, are: “How much predictive accuracy does the marker add beyond usually less expensive behavioral measures? How subject is it to counter-measures if a subject wishes to ‘defeat’ a scan?”

Those are still open questions with regard to the new method, which Kiehl and colleagues, including postdoctoral fellow Eyal Aharoni, describe in a paper to be published this week in the Proceedings of the National Academy of Sciences.

The test targets impulsivity. In a mobile fMRI scanner the researchers trucked in to two state prisons, they scanned inmates’ brains as they did a simple impulse control task. Inmates were instructed to press a button as quickly as possible whenever they saw the letter X pop up on a screen inside the scanner, but not to press it if they saw the letter K. The task is rigged so that X pops up 84 percent of the time, which predisposes people to hit the button and makes it harder to suppress the impulse to press the button on the rare trials when a K pops up.
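For concreteness, here is a short sketch of how such a go/no-go trial sequence might be generated; apart from the 84 percent go rate quoted above, the parameters are invented.

```python
# Sketch of a go/no-go trial sequence like the one described. Only the 84% go
# rate comes from the article; the trial count and seed are arbitrary.
import random

def make_trials(n_trials=200, go_prob=0.84, seed=1):
    """Return a list of 'X' (press) and 'K' (withhold) trials."""
    rng = random.Random(seed)
    return ["X" if rng.random() < go_prob else "K" for _ in range(n_trials)]

trials = make_trials()
print(trials[:20], "go rate:", trials.count("X") / len(trials))
# The rare "K" trials are where impulse control is tested: the habit of pressing
# on "X" has to be suppressed.
```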

Based on previous studies, the researchers focused on the anterior cingulate cortex, one of several brain regions thought to be important for impulse control. Inmates with relatively low activity in the anterior cingulate made more errors on the task, suggesting a correlation with poor impulse control.

They were also more likely to get arrested after they were released. Inmates with relatively low anterior cingulate activity were roughly twice as likely as inmates with high anterior cingulate activity to be rearrested for a felony offense within 4 years of their release, even after controlling for other behavioral and psychological risk factors.

“This is an exciting new finding,” said Essi Viding, a professor of developmental psychopathology at University College London. “Interestingly, this brain activity measure appears to be a more robust predictor, in particular of non-violent offending, than psychopathy or drug use scores, which we know to be associated with a risk of reoffending.” However, Viding notes that Kiehl’s team hasn’t yet tried to compare their fMRI test head to head against pencil-and-paper tests specifically designed to assess the risk of recidivism. “It would be interesting to see how the anterior cingulate cortex activity measure compares against these measures,” she said.

“It’s a great study because it brings neuroimaging into the realm of prediction,” said clinical psychologist Dustin Pardini of the University of Pittsburgh. The study’s design is an improvement over previous neuroimaging studies that compared groups of offenders with groups of non-offenders, he says. All the same, he’s skeptical that brain scans could be used to predict the behavior of a given individual. “In general we’re horrible at predicting human behavior, and I don’t see this as being any different, at least not in the near future.”

Even if the findings hold up in a larger study, there would be limitations, Pardini adds. “In a practical sense, there are just too many ways an offender could get around having an accurate representation of his brain activity taken,” he said. For example, if an offender moves his head while inside the scanner, that would render the scan unreadable. Even more subtle strategies, such as thinking about something unrelated to the task, or making mistakes on purpose, could also thwart the test.

Kiehl isn’t convinced either that this type of fMRI test will ever prove useful for assessing the risk to society posed by individual criminals. But his group is collecting more data — lots more — as part of a much larger study in the New Mexico state prisons. “We’ve scanned 3,000 inmates,” he said. “This is just the first 100.”

Kiehl hopes this work will point to new strategies for reducing criminal behavior. If low activity in the anterior cingulate does in fact turn out to be a reliable predictor of recidivism, perhaps therapies that boost activity in this region would improve impulse control and prevent future crimes, Kiehl says. He admits it’s speculative, but his group is already thinking up experiments to test the idea. “Cognitive exercises is where we’ll start,” he said. “But I wouldn’t rule out pharmaceuticals.”

Chemistry Nobel laureate speaks on the ‘magic of science’ in São Carlos (Fapesp)

In the opening lecture of the symposium honoring MIT professor Daniel Kleppner, Dudley Herschbach, winner of the 1986 Chemistry prize, presented parables to illustrate what chemistry can do (photo: Silvio Pires/FAPESP)

February 28, 2013

By Karina Toledo

Agência FAPESP – With a lecture entitled “Glimpses of Chemical Wizardry,” the American Dudley Herschbach, winner of the 1986 Nobel Prize in Chemistry, opened the activities of a symposium that is bringing together leading names in world science this week in São Carlos, in the interior of São Paulo state.

To an auditorium packed with students, mainly from the Physics, Chemistry and Biological Sciences programs of the Federal University of São Carlos (UFSCar), Herschbach presented three “molecular parables” intended to show some of the spectacular things science is capable of doing.

In one of the stories, entitled “Life on tour inside the cell,” Herschbach spoke about advanced super-resolution microscopy techniques developed by Xiaowei Zhuang, a researcher at Harvard University, which make it possible, for example, to study the interaction between cells and gene expression in real time.

“Science does things that truly seemed impossible before they happened. Every now and then, someone, somewhere in the world, does something magical and changes things. It is wonderful to know that you are part of this. That is part of the reward of science that you don’t get in most professions,” Herschbach told Agência FAPESP.

A mathematics graduate of Stanford University, Herschbach earned master’s degrees in physics and in chemistry, as well as a doctorate in physical chemistry from Harvard University, where he is now a professor.

“I was the first in my family to go to university. I was offered a scholarship to play [American] football, but I ended up exchanging it for an academic scholarship, because the coach had forbidden me from attending laboratory classes so I wouldn’t be late for practice. The truth is that I found science far more fascinating,” he said.

In the 1960s, the scientist conducted pioneering experiments with the crossed molecular beam technique to study chemical reactions and the dynamics of the atoms in molecules in real time. For his research in this field he received the 1986 Nobel Prize in Chemistry, together with Taiwan’s Yuan Lee and Canada’s John Polanyi.

The results were of great importance for the development of a new field of research, reaction dynamics, and provided a detailed understanding of how chemical reactions take place.

“When I look in the mirror while shaving, I realize that winning the Nobel hasn’t changed anything in me. The only difference is that people have become more interested in what I have to say. They invite me to give talks and interviews. And that has ended up turning me into a kind of ambassador for science,” Herschbach said.

Poetry in the classroom

Throughout his presentation, Herschbach pushed back against the myth that science is very difficult, reserved for the highly intelligent. “I often hear people say that you have to be very good at mathematics to be a good researcher, but most scientists use the same mathematics as a supermarket cashier. You don’t have to be good at everything, just at one thing; find a niche,” he said.

Comparing science with other human activities, Herschbach said that in no other profession can you fail countless times and still be applauded when you manage to get something right. “A musician can play almost every note correctly in a concert and still be criticized for getting just a few wrong,” he noted.

Herschbach said he used to ask his students to write poems, to show them that it is more important to worry about asking the right questions than about finding the right answer.

“That, more than solving equations, is what doing real science is like. Nobody says whether a poem is right or wrong, but rather how much it is able to open your eyes to something that seemed ordinary and make you see it in another way. It is the same with science. If you do frontier research, new things, it is very artistic. I want students to realize that they too can be wizards,” he concluded.

The Symposium in Honor of Prof. Daniel Kleppner, “Atomic physics and related areas,” which runs through March 1, is organized by the Optics and Photonics Research Center (Cepof) of São Carlos, one of the Research, Innovation and Dissemination Centers (CEPID) funded by FAPESP.

The aim of the meeting is to pay tribute to the American physicist Daniel Kleppner, of the Massachusetts Institute of Technology (MIT), who will receive the title of honorary professor from the São Carlos Institute of Physics of the University of São Paulo (IFSC-USP).

Besides Herschbach, a friend of Kleppner’s since their undergraduate days, four other Nobel laureates are also taking part in the event: Serge Haroche (Physics, 2012), David Wineland (Physics, 2012), Eric Cornell (Physics, 2001) and William Phillips (Physics, 1997).

Will we ever have cyborg brains? (IO9)


DEC 19, 2012 2:40 PM

By George Dvorsky

Over at BBC Future, computer scientist Martin Angler has put together a provocative piece about humanity’s collision course with cybernetic technologies. Today, says Angler, we’re using neural interface devices and other assistive technologies to help the disabled. But in short order we’ll be able to radically enhance human capacities — prompting him to wonder about the extent to which we might cyborgize our brains.

Angler points to two recent and equally remarkable breakthroughs: a paralyzed stroke victim who was able to guide a robot arm to deliver a hot drink, and a thought-controlled prosthetic hand that could grasp a variety of objects.

Admitting that it’s still early days, Angler speculates about the future:

Yet it’s still a far cry from the visions of man fused with machine, or cyborgs, that grace computer games or sci-fi. The dream is to create the type of brain augmentations we see in fiction that provide cyborgs with advantages or superhuman powers. But the ones being made in the lab only aim to restore lost functionality – whether it’s brain implants that restore limb control, or cochlear implants for hearing.

Creating implants that improve cognitive capabilities, such as an enhanced vision “gadget” that can be taken from a shelf and plugged into our brain, or implants that can restore or enhance brain function is understandably a much tougher task. But some research groups are beginning to make some inroads.

For instance, neuroscientists Matti Mintz from Tel Aviv University and Paul Verschure from Universitat Pompeu Fabra in Barcelona, Spain, are trying to develop an implantable chip that can restore lost movement through the ability to learn new motor functions, rather than regaining limb control. Verschure’s team has developed a mathematical model that mimics the flow of signals in the cerebellum, the region of the brain that plays an important role in movement control. The researchers programmed this model onto a circuit and connected it with electrodes to a rat’s brain. If they tried to teach the rat a conditioned motor reflex – to blink its eye when it sensed an air puff – while its cerebellum was “switched off” by being anaesthetised, it couldn’t respond. But when the team switched the chip on, this recorded the signal from the air puff, processed it, and sent electrical impulses to the rat’s motor neurons. The rat blinked, and the effect lasted even after it woke up.

Be sure to read the entire article, as Angler discusses uplifted monkeys, the tricky line that divides a human brain from a cybernetic one, and the all-important question of access.

Image: BBC/Science Photo Library.

Emerging Ethical Dilemmas in Science and Technology (Science Daily)

Dec. 17, 2012 — As a new year approaches, the University of Notre Dame’s John J. Reilly Center for Science, Technology and Values has announced its inaugural list of emerging ethical dilemmas and policy issues in science and technology for 2013.

The Reilly Center explores conceptual, ethical and policy issues where science and technology intersect with society from different disciplinary perspectives. Its goal is to promote the advancement of science and technology for the common good.

The center generated its inaugural list with the help of Reilly fellows, other Notre Dame experts and friends of the center.

The center aimed to present a list of items for scientists and laypeople alike to consider in the coming months and years as new technologies develop. It will feature one of these issues on its website each month in 2013, giving readers more information, questions to ask and resources to consult.

The ethical dilemmas and policy issues are:

Personalized genetic tests/personalized medicine

Within the last 10 years, the creation of fast, low-cost genetic sequencing has given the public direct access to genome sequencing and analysis, with little or no guidance from physicians or genetic counselors on how to process the information. What are the potential privacy issues, and how do we protect this very personal and private information? Are we headed toward a new era of therapeutic intervention to increase quality of life, or a new era of eugenics?

Hacking into medical devices

Implanted medical devices, such as pacemakers, are susceptible to hackers. Barnaby Jack, of security vendor IOActive, recently demonstrated the vulnerability of a pacemaker by breaching the security of the wireless device from his laptop and reprogramming it to deliver an 830-volt shock. How do we make sure these devices are secure?

Driverless Zipcars

In three states — Nevada, Florida, and California — it is now legal for Google to operate its driverless cars. Google’s goal is to create a fully automated vehicle that is safer and more effective than a human-operated vehicle, and the company plans to marry this idea with the concept of the Zipcar. The ethics of automation and equality of access for people of different income levels are just a taste of the difficult ethical, legal and policy questions that will need to be addressed.

3-D printing

Scientists are attempting to use 3-D printing to create everything from architectural models to human organs, but we could be looking at a future in which we can print personalized pharmaceuticals or home-printed guns and explosives. For now, 3-D printing is largely the realm of artists and designers, but we can easily envision a future in which 3-D printers are affordable and patterns abound for products both benign and malicious, and that cut out the manufacturing sector completely.

Adaptation to climate change

The differential susceptibility of people around the world to climate change warrants an ethical discussion. We need to identify effective and safe ways to help people deal with the effects of climate change, as well as learn to manage and manipulate wild species and nature in order to preserve biodiversity. Some of these adaptation strategies might be highly technical (e.g. building sea walls to stem off sea level rise), but others are social and cultural (e.g., changing agricultural practices).

Low-quality and counterfeit pharmaceuticals

Until recently, detecting low-quality and counterfeit pharmaceuticals required access to complex testing equipment, often unavailable in developing countries where these problems abound. The enormous amount of trade in pharmaceutical intermediates and active ingredients raises a number of issues, from the technical (improvements in manufacturing practices and analytical capabilities) to the ethical and legal (for example, India ruled in favor of manufacturing life-saving drugs even when doing so violates U.S. patent law).

Autonomous systems

Machines (both for peaceful purposes and for war fighting) are increasingly evolving from human-controlled, to automated, to autonomous, with the ability to act on their own without human input. As these systems operate without human control and are designed to function and make decisions on their own, the ethical, legal, social and policy implications have grown exponentially. Who is responsible for the actions undertaken by autonomous systems? If robotic technology can potentially reduce the number of human fatalities, is it the responsibility of scientists to design these systems?

Human-animal hybrids (chimeras)

So far scientists have kept human-animal hybrids on the cellular level. According to some, even more modest experiments involving animal embryos and human stem cells violate human dignity and blur the line between species. Is interspecies research the next frontier in understanding humanity and curing disease, or a slippery slope, rife with ethical dilemmas, toward creating new species?

Ensuring access to wireless and spectrum

Mobile wireless connectivity is having a profound effect on society in both developed and developing countries. These technologies are completely transforming how we communicate, conduct business, learn, form relationships, navigate and entertain ourselves. At the same time, government agencies increasingly rely on the radio spectrum for their critical missions. This confluence of wireless technology developments and societal needs presents numerous challenges and opportunities for making the most effective use of the radio spectrum. We now need to have a policy conversation about how to make the most effective use of the precious radio spectrum, and to close the digital access divide for underserved (rural, low-income, developing areas) populations.

Data collection and privacy

How often do we consider the massive amounts of data we give to commercial entities when we use social media, store discount cards or order goods via the Internet? Now that microprocessors and permanent memory are inexpensive, we need to think about the kinds of information that should be collected and retained. Should we create a diabetic insulin implant that could notify your doctor or insurance company when you make poor diet choices, and should that decision make you ineligible for certain types of medical treatment? Should cars be equipped to monitor speed and other measures of good driving, and should this data be subpoenaed by authorities following a crash? These issues require appropriate policy discussions in order to bridge the gap between data collection and meaningful outcomes.

Human enhancements

Pharmaceutical, surgical, mechanical and neurological enhancements are already available for therapeutic purposes. But these same enhancements can be used to magnify human biological function beyond the societal norm. Where do we draw the line between therapy and enhancement? How do we justify enhancing human bodies when so many individuals still lack access to basic therapeutic medicine?

Reading history through genetics (Columbia University)

5-Dec-2012, by Holly Evarts

New method analyzes recent history of Ashkenazi and Masai populations, paving the way to personalized medicine

New York, NY—December 5, 2012—Computer scientists at Columbia’s School of Engineering and Applied Science have published a study in the November 2012 issue of The American Journal of Human Genetics (AJHG) that demonstrates a new approach used to analyze genetic data to learn more about the history of populations. The authors are the first to develop a method that can describe in detail events in recent history, over the past 2,000 years. They demonstrate this method in two populations, the Ashkenazi Jews and the Masai people of Kenya, who represent two kinds of histories and relationships with neighboring populations: one that remained isolated from surrounding groups, and one that grew from frequent cross-migration across nearby villages.

“Through this work, we’ve been able to recover very recent and refined demographic history, within the last few centuries, in contrast to previous methods that could only paint broad brushstrokes of the much deeper past, many thousands of years ago,” says Computer Science Associate Professor Itsik Pe’er, who led the research. “This means that we can now use genetics as an objective source of information regarding history, as opposed to subjective written texts.”

Pe’er’s group uses computational genetics to develop methods to analyze DNA sequence variants. Understanding the history of a population, knowing which populations had a shared origin and when, which groups have been isolated for a long time, or resulted from admixture of multiple original groups, and being able to fully characterize their genetics is, he explains, “essential in paving the way for personalized medicine.”

For this study, the team developed the mathematical framework and software tools to describe and analyze the histories of the two populations and discovered that, for instance, Ashkenazi Jews are descendants of a small number—in the hundreds—of individuals from late medieval times, and since then have remained genetically isolated while their population has expanded rapidly to several million today.

“Knowing that the Ashkenazi population has expanded so recently from a very small number has practical implications,” notes Pe’er. “If we can obtain data on only a few hundreds of individuals from this population, a perfectly feasible task in today’s technology, we will have effectively collected the genomes of millions of current Ashkenazim.” He and his team are now doing just that, and have already begun to analyze a first group of about 150 Ashkenazi genomes.
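A back-of-the-envelope sketch shows what such an expansion implies for per-generation growth; the founder size, present-day size and generation count below are illustrative assumptions, not figures from the study.

```python
# Back-of-the-envelope growth-rate sketch for the expansion described above.
# The founder size, present-day size and generation count are illustrative
# assumptions, not numbers from the paper.
founders    = 500          # "a small number -- in the hundreds"
present_day = 10_000_000   # "several million today"
generations = 25           # roughly 700 years at ~28 years per generation

growth_per_generation = (present_day / founders) ** (1 / generations)
print(f"implied growth factor per generation: {growth_per_generation:.2f}")
# A sustained ~1.5x per generation is enough to go from hundreds to millions,
# which is why such a recent bottleneck leaves a strong genetic signature.
```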

The genetic data of the Masai, a semi-nomadic people, indicates the village-by-village structure of their population. Unlike the isolated Ashkenazi group, the Masai live in small villages but regularly interact and intermarry across village boundaries. The ancestors of each village therefore typically come from many different places, and a single village hosts an effective gene pool that is much larger than the village itself.

Previous work in population genetics was focused on mutations that occurred very long ago, say the researchers, and therefore able to only describe population changes that occurred at that timescale, typically before the agricultural revolution. Pe’er’s research has changed that, enabling scientists to learn more about recent changes in populations and start to figure out, for instance, how to pinpoint severe mutations in personal genomes of specific individuals—mutations that are more likely to be associated with disease.

“This is a thrilling time to be working in computational genetics,” adds Pe’er, citing the speed in which data acquisition has been accelerating; much faster than the ability of computing hardware to process such data. “While the deluge of big data has forced us to develop better algorithms to analyze them, it has also rewarded us with unprecedented levels of understanding.”


Pe’er’s team worked closely on this research with study co-authors Ariel Darvasi, PhD, of the Hebrew University of Jerusalem, who was responsible for collecting most of the study samples, and Todd Lencz, PhD, of the Feinstein Institute for Medical Research, who handled genotyping of the DNA samples. The team’s computing and analysis took place in the Columbia Initiative in Systems Biology (CISB).

This research is supported by the National Science Foundation (NSF). The computing facility of CISB is supported by the National Institutes of Health (NIH).

“Belo Monte is a monster of developmentalism” (O Globo)

JC e-mail 4604, October 16, 2012

An anthropologist criticizes the construction of the hydroelectric dam on the Xingu, arguing that it will cause more harm than benefit and could, in the long run, even affect indigenous traditions.

While indigenous people of various ethnic groups, along with fishermen, riverside dwellers and small farmers, occupied the construction site of the Belo Monte hydroelectric plant, in southern Pará, in a protest against the dam, the anthropologist Carmen Junqueira was taking stock of her research with the peoples of the region in the auditorium of PUC São Paulo, during the Colloquium on Transformations of Biopolitics, on October 10.

It was shortly after the creation of the Xingu Indigenous Park, by then-president Jânio Quadros and under pressure from the Villas-Boas brothers, that she set foot in the region for the first time and encountered a passion that would enter her studies, her private life and her home: the indigenous peoples of the Upper Xingu, especially the Kamaiurá, to this day an important reference point for indigenous culture. Since 1965 she has visited them regularly and even hosts them at her home in São Paulo, following the changes they have undergone, mainly as a result of the economic development that ended the villages’ isolation.

In nearly 50 years studying the ethnic group, she has analyzed their ever-greater contact with white culture, and she sees part of the new features of village life as inherent to that process. But the anthropologist’s analytical tone changes when the subject is Belo Monte. Within half a second she answers: “I am against it.” Carmen describes the construction of the dam, which is being carried out by the Norte Energia consortium, responsible for building and operating Belo Monte, as part of a developmentalist government project that tramples on the historical and cultural value of local populations.

How do you see the occupation taking place right now at the Norte Energia construction sites, in southwestern Pará?

The occupiers are defending their own survival. Most of us are unaware of the knowledge held by indigenous peoples, riverside dwellers and so many others who care for nature. They represent a break with monotonous consumerist subservience, offering diversity and originality. We do not realize it, but they are defending us as well.

Because of progress, which has been the watchword in the country?

They are defending us from a developmentalist fury. I am totally against large hydroelectric dams. I know we have to generate energy, but the impact of these monstrous projects is very damaging. I believe in another model, a more local one, with small plants and energy from ocean waves and the sun. I know the Tucuruí dam (on the Tocantins River, in Pará) and, when I was there, I couldn’t even fit it into a photograph, given the size of the monster. And everyone already knows the story of Balbina (in the state of Amazonas), a project whose huge impacts are incompatible with its benefits.

You have been to Altamira, near where the Belo Monte works are being carried out. When was that, and what was the scene like there?

I was there as soon as the commotion began; I saw the project up close and talked to people. What is happening there is development at any cost. It will affect many indigenous peoples, such as the Kayapó and Juruna, as well as rural populations. I don’t know what it will be like when it is finished, but the impacts on those populations will be immense. And the energy generated will go to industry; ordinary people will get almost nothing out of it.

And the Kamaiurá, the main focus of your studies since the 1960s, will they be affected?

The Kamaiurá are a little farther down in Pará. They will not be affected directly, but today whatever happens in the Xingu region affects everyone. They are no longer isolated. There will be secondary consequences. The flora changes, the fauna changes, and that affects the populations, who are driven out and have no part in the process. When people say, “oh, but only a small part of the territory will be used,” that is a feeble argument. The territory is theirs, and, after everything we have done to indigenous peoples, we owe them at least a moral debt.

What are the main changes you have recorded in the behavior of the Kamaiurá people since your visits began?

Like all indigenous peoples, they are very fond of honey. Then suddenly sugar appeared as a very cheap product. This began back in the days of the Villas-Boas brothers, who, even so, did not like the Indians acquiring all of our habits. After the 1990s, the Kamaiurá began buying a lot of sugar, in five-kilo bags. Today there is already a young woman with diabetes in the village. What do I mean by this? The arrival of capitalism changed behavior in the villages. Tastes change. Capitalism colonizes even the Indians’ appetite: they have come to consume what we consume. Today, the gift lists they give me for my next visit include hair-removal products, hair-care items and other things I file under the category of novelties, ranging from dogs to electronic gadgets. Young people now have access to social networks, which brings even more changes. They have a radio program and, the other day, they interviewed me again. For the first time, one of them handed the microphone over saying, “Over to you, Bené.” Isn’t that straight out of football broadcasting? Today they charge for lodging and they charge for image rights. They have already grasped the monetary economy, although for them barter is still what matters most.

Ecopolitics, the focus of today’s colloquium, sets out to analyze management practices that include mechanisms for controlling populations within participatory democracy. What are the impacts of this control on indigenous peoples?

They are going through many changes and in theory have more power to participate, but that power is not real. As for changes in everyday life, they see them as natural. They only become defensive when their traditions might be affected. I conducted interviews in the village to find out what they call tradition. It is rites and myths that they value most. It is not yet money. Whereas tradition for us has to do with work routines, family, children, for them it is art. That is what they value most.

And all of this could be affected by these large projects?

This cultural loss should enter into the accounting of these big projects. But it does not; nobody attaches value to it. Large projects alter the flora and the fauna; the fish regime may change. The survival of the Xingu peoples may be affected. The greatest danger is that, with their food supply made precarious, they begin to turn to ecological tourism. It would be disastrous if they started performing their ceremonies just for show. Over time, those ceremonies could lose their unifying character and their memory, becoming mere spectacle.

(O Globo)