Tag archive: Technocracy

Alckmin’s useless science (OESP)

14 May 2016 | 03:00

Geraldo Alckmin insinuated, weeks ago, that the money allocated to scientific research in the State of São Paulo is wasted on irrelevant or even useless studies. No one doubts that public money should be spent carefully, and that its use can always be improved. The problem is knowing what useful science is.

Fifteen pages published this week in the world’s most prestigious scientific journal can be read as a response to the governor’s criticism. Mainly because their authors were, for years, regarded as great producers of “useless” science. But let us turn to the story that culminated in the publication.

More than 20 years ago, a friend returned from France with a fixed idea. He wanted to study the molecular biology of viruses. He argued that new viruses would emerge out of nowhere to haunt humanity. HIV and Ebola were harbingers of what awaited us. His science had always been creative and of high quality. And it was for that reason, not out of fear of the apocalypse, that Fapesp began funding the young virologist. The group grew.

The science these virologists produced over the following decades can be classified as basic, or pure, with no apparent utility. The governor might well have deemed it “useless”. People who think this way believe the state’s role is to fund projects that yield knowledge of obvious and immediate utility, knowledge that solves the nation’s problems. Because this utilitarian, short-term science policy does not prevail at Fapesp, “useless” molecular virology thrived in the State of São Paulo. Between 2000 and 2007 these scientists formed a research network, set up laboratories, trained students and published scientific papers. Then each went his own way, studying different viruses, with different methods, in the most diverse units of USP.

In December, my colleague showed up at Fapesp with another fixed idea. He argued that an almost unknown virus might be linked to the cases of microcephaly popping up in the Northeast. It was Zika. While panic spread amid total misinformation, within a week the network of molecular virologists came together and decided to attack the problem. There were 45 scientists in 15 “useless” laboratories. The following week, Fapesp increased those laboratories’ funding. It was not long before an army of São Paulo molecular virologists landed at the scene of the tragedy, armed with everything “useless” their laboratories held. They isolated the virus from patients and, while one “useless” laboratory cultivated it, another “useless” one sequenced its genome. This group of basic scientists quickly became “useful”. They demonstrated that the virus attacks cells of the nervous system, that it crosses the placenta and infects the nervous system of the fetus, and that it stunts the fetus’s growth.

Within a few months, the new variant of the Zika virus had been identified and isolated, its mechanism of action clarified, and an experimental model of the disease developed. These discoveries will serve as the basis for developing a vaccine in the coming years. It is these “useful” discoveries, described in work carried out by “useless” scientists, that have now been published by the journal Nature.

Pressed by the Second World War, “useless” scientists in the US and England developed radar, the atomic bomb and the computer. Pressed by microcephaly, our virologists are helping to solve the problem. Just as it was impossible to foresee between the wars that funding linguists, theoretical physicists, mathematicians and other “useless” scientists would aid the war effort, it was impossible to foresee that funding young virologists would, years later, solve the Zika enigma ahead of the all-powerful American science establishment.

This is one of the reasons every self-respecting country funds this so-called “useless” science. This reservoir of scientists, laboratories and knowledge not only deepens our understanding of nature and helps educate our young people; it can also be mobilised in an emergency. It is because Fapesp funded “useless” science for years that we now have the capacity to respond quickly to a national medical emergency. From my point of view, the mere existence of this scientific work is the scientific community’s answer to the criticism aired by our governor.

MORE INFORMATION: The Brazilian Zika virus strain causes birth defects in experimental models. Nature, DOI: 10.1038/nature18296, 2016


Is there a limit to technological advances? (OESP)

16 May 2016 | 03:00

The idea that the stagnation of the world economy is due to the end of the “golden century” of scientific and technological innovation is becoming popular among politicians and governments. This “golden century” is usually defined as the period from 1870 to 1970, in which the foundations of the technological era we live in were laid.

Indeed, that period saw great advances in our knowledge, ranging from Darwin’s theory of evolution to the discovery of the laws of electromagnetism, which led to large-scale electricity generation and to telecommunications, including radio and television, with the resulting benefits for people’s well-being. Other advances, in medicine, such as vaccines and antibiotics, extended the average human lifespan. The discovery and use of oil and natural gas also fall within this period.

Many argue that in no other hundred-year period – across the 10,000 years of human history – was so much progress achieved. This view of history, however, can be and has been questioned. In the preceding century, from 1770 to 1870, there was also great progress, driven by the development of coal-fired engines, which made locomotives possible and launched the Industrial Revolution.

Even so, the nostalgic believe that the “golden period” of innovation has run its course and, as a result, governments today adopt purely economic measures to revive “progress” – subsidies for specific sectors, tax cuts and social policies to reduce inequality, among others – while neglecting support for science and technology.

Some of these policies might help, but they do not touch the heart of the problem, which is keeping alive the advance of science and technology – the thing that solved problems in the past and may help solve problems in the future.

To analyse the question properly, remember that it is not the number of new discoveries that guarantees their relevance. The advance of technology somewhat resembles what sometimes happens in the natural selection of living beings: some species are so well adapted to their environment that they stop “evolving”. That is the case of the beetles that existed at the height of ancient Egypt, 5,000 years ago, and are still there today; or of “fossil” fish species that have evolved little over millions of years.

Other examples are products of modern technology, such as the magnificent DC-3 aircraft, produced more than 50 years ago and still responsible for a significant share of world air traffic.

Even in more sophisticated areas, such as computing, this seems to be happening. The basis of progress in this field was the “miniaturisation” of the electronic chips that house the transistors. In 1971, the chips produced by Intel (the industry leader) had 2,300 transistors on a 12-square-millimetre die. Today’s chips are only slightly larger but carry 5 billion transistors. This is what made possible personal computers, mobile phones and countless other products. And it is why fixed telephony is being abandoned and communication via Skype is practically free, having revolutionised the world of communications.
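Taken at face value, the two data points in that paragraph imply the familiar Moore’s-law cadence. A quick back-of-the-envelope check (a sketch using only the figures quoted above, and treating 2016, the article’s date, as “today”) makes this explicit:

```python
import math

transistors_1971 = 2_300
transistors_now = 5_000_000_000
years = 2016 - 1971  # span between the article's two data points

# How many doublings take you from 2,300 to 5 billion transistors?
doublings = math.log2(transistors_now / transistors_1971)
doubling_period = years / doublings

print(f"{doublings:.1f} doublings in {years} years")
print(f"one doubling every {doubling_period:.1f} years")
```

The result, roughly 21 doublings over 45 years, is one doubling about every two years – Moore’s original observation.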

There are now indications that this miniaturisation has reached its limits, which causes a certain gloom among the “high priests” of the sector. That is a mistaken view. The level of success has been such that further progress in this direction is genuinely unnecessary – which is what happened to countless living beings in the past.

What seems to be the solution to the problems of long-run economic growth is technological advance in other areas that have not received the attention they need: new materials, artificial intelligence, industrial robots, genetic engineering, disease prevention and, above all, understanding the human brain, the most sophisticated product of the evolution of life on Earth.

Understanding how a combination of atoms and molecules can give rise to an organ as creative as the brain – capable of consciousness and of the creativity to compose symphonies like Beethoven’s, and at the same time of promoting the extermination of millions of human beings – will probably be the most extraordinary advance Homo sapiens can achieve.

Advances in these areas could create a wave of innovation and material progress greater in quantity and quality than what the “golden century” produced. Moreover, we face today a new, global problem: environmental degradation, resulting in part from the very success of 20th-century technology. The task of reducing emissions of the gases that cause global warming (the result of burning fossil fuels) will by itself be herculean.

Before that, and on a much more pedestrian level, the advances being made in the efficiency with which we use natural resources are extraordinary, and they have not received the credit and recognition they deserve.

To give just one example, in 1950 Americans spent, on average, 30% of their income on food. By 2013 that share had fallen to 10%. Spending on energy has also fallen, thanks to more efficient cars and improvements in other uses such as lighting and heating – which, incidentally, helps explain why the price of a barrel of oil fell from US$150 to less than US$30. There is simply too much oil in the world, just as there is idle capacity in steel and cement.

One example of a country following this path is Japan, whose economy is not growing much but whose population enjoys a high standard of living and continues to benefit gradually from the advances of modern technology.

*José Goldemberg is professor emeritus at the University of São Paulo (USP) and president of the Fundação de Amparo à Pesquisa do Estado de São Paulo (Fapesp)

Nudge: The gentle science of good governance (New Scientist)

25 June 2013

Magazine issue 2922

NOT long before David Cameron became UK prime minister, he famously prescribed some holiday reading for his colleagues: a book modestly entitled Nudge.

Cameron wasn’t the only world leader to find it compelling. US president Barack Obama soon appointed one of its authors, Cass Sunstein, a social scientist at the University of Chicago, to a powerful position in the White House. And thus the nudge bandwagon began rolling. It has been picking up speed ever since (see “Nudge power: Big government’s little pushes“).

So what’s the big idea? We don’t always do what’s best for ourselves, thanks to cognitive biases and errors that make us deviate from rational self-interest. The premise of Nudge is that subtly offsetting or exploiting these biases can help people to make better choices.

If you live in the US or UK, you’re likely to have been nudged towards a certain decision at some point. You probably didn’t notice. That’s deliberate: nudging is widely assumed to work best when people aren’t aware of it. But that stealth breeds suspicion: people recoil from the idea that they are being stealthily manipulated.

There are other grounds for suspicion. It sounds glib: a neat term for a slippery concept. You could argue that it is a way for governments to avoid taking decisive action. Or you might be concerned that it lets them push us towards a convenient choice, regardless of what we really want.

These don’t really hold up. Our distaste for being nudged is understandable, but is arguably just another cognitive bias, given that our behaviour is constantly being discreetly influenced by others. What’s more, interventions only qualify as nudges if they don’t create concrete incentives in any particular direction. So the choice ultimately remains a free one.

Nudging is a less blunt instrument than regulation or tax. It should supplement rather than supplant these, and nudgers must be held accountable. But broadly speaking, anyone who believes in evidence-based policy should try to overcome their distaste and welcome governance based on behavioural insights and controlled trials, rather than carrot-and-stick wishful thinking. Perhaps we just need a nudge in the right direction.

Geoengineering the Earth’s climate sends policy debate down a curious rabbit hole (The Guardian)

Many of the world’s major scientific establishments are discussing the concept of modifying the Earth’s climate to offset global warming

Monday 4 August 2014


Many leading scientific institutions are now looking at proposed ways to engineer the planet’s climate to offset the impacts of global warming. Photograph: NASA/REUTERS

There’s a bit in Alice’s Adventures in Wonderland where things get “curiouser and curiouser” as the heroine tries to reach a garden at the end of a rat-hole sized corridor that she’s just way too big for.

She drinks a potion and eats a cake with no real clue what the consequences might be. She grows to nine feet tall, shrinks to ten inches high and cries literal floods of frustrated tears.

I spent a couple of days at a symposium in Sydney last week that looked at the moral and ethical issues around the concept of geoengineering the Earth’s climate as a “response” to global warming.

No metaphor is ever quite perfect (climate impacts are no ‘wonderland’), but Alice’s curious experiences down the rabbit hole seem to fit the idea of medicating the globe out of a possible catastrophe.

And yes, the fact that in some quarters geoengineering is now on the table shows how the debate over climate change policy is itself becoming “curiouser and curiouser” still.

It’s tempting too to dismiss ideas like pumping sulphate particles into the atmosphere or making clouds whiter as some sort of surrealist science fiction.

But beyond the curiosity lies actions being countenanced and discussed by some of the world’s leading scientific institutions.

What is geoengineering?

Geoengineering – also known as climate engineering or climate modification – comes in as many flavours as might have been on offer at the Mad Hatter’s Tea Party.

Professor Jim Falk, of the Melbourne Sustainable Society Institute at the University of Melbourne, has a list of more than 40 different techniques that have been suggested.

They generally take two approaches.

Carbon Dioxide Removal (CDR) is pretty self-explanatory. Think tree planting, algae farming, increasing the carbon in soils, fertilising the oceans or capturing emissions from power stations. Anything that cuts the amount of CO2 in the atmosphere.

Solar Radiation Management (SRM) techniques are concepts to try and reduce the amount of solar energy reaching the earth. Think pumping sulphate particles into the atmosphere (this mimics major volcanic eruptions that have a cooling effect on the planet), trying to whiten clouds or more benign ideas like painting roofs white.

Geoengineering on the table

In 2008 an Australian Government–backed research group issued a report on the state of play of ocean fertilisation, recording that 12 experiments of various kinds had been carried out, with limited to zero evidence of “success”.

This priming of the “biological pump”, as it’s known, promotes the growth of organisms (phytoplankton) that store carbon and then sink to the bottom of the ocean.

The report raised the prospect that larger scale experiments could interfere with the oceanic food chain, create oxygen-depleted “dead zones” (no fish folks), impact on corals and plants and various other unknowns.

The Royal Society – the world’s oldest scientific institution – released a report in 2009, also reviewing various geoengineering technologies.

In 2011, Australian scientists gathered at a geoengineering symposium organised by the Australian Academy of Science and the Australian Academy of Technological Sciences and Engineering.

The London Protocol – a maritime convention relating to dumping at sea – was amended last year to try and regulate attempts at “ocean fertilisation” – where substances, usually iron, are dumped into the ocean to artificially raise the uptake of carbon dioxide.

The latest major report of the United Nations Intergovernmental Panel on Climate Change also addressed geoengineering in several chapters. The IPCC summarised it this way.

CDR methods have biogeochemical and technological limitations to their potential on a global scale. There is insufficient knowledge to quantify how much CO2 emissions could be partially offset by CDR on a century timescale. Modelling indicates that SRM methods, if realizable, have the potential to substantially offset a global temperature rise, but they would also modify the global water cycle, and would not reduce ocean acidification. If SRM were terminated for any reason, there is high confidence that global surface temperatures would rise very rapidly to values consistent with the greenhouse gas forcing. CDR and SRM methods carry side effects and long-term consequences on a global scale.

Towards the end of this year, the US National Academy of Sciences will be publishing a major report on the “technical feasibility” of some geoengineering techniques.

Fighting Fire With Fire

The symposium in Sydney was co-hosted by the University of New South Wales and the Sydney Environment Institute at the University of Sydney (for full disclosure here, they paid my travel costs and one night stay).

Dr Matthew Kearnes, one of the organisers of the workshop from UNSW, told me there was “nervousness among many people about even thinking or talking about geoengineering.” He said:

I would not want to dismiss that nervousness, but this is an agenda that’s now out there and it seems to be gathering steam and credibility in some elite establishments.

Internationally geoengineering tends to be framed pretty narrowly as just a case of technical feasibility, cost and efficacy. Could it be done? What would it cost? How quickly would it work?

We wanted to get away from the arguments about the pros and cons and instead think much more carefully about what this tells us about the climate change debate more generally.

The symposium covered a range of frankly exhausting philosophical, social and political considerations – each of them jumbo-sized cans full of worms ready to open.

Professor Stephen Gardiner, of the University of Washington, Seattle, pushed for the wider community to think about the ethical and moral consequences of geoengineering. He drew a parallel between the way, he said, that current fossil fuel combustion takes benefits now at the expense of impacts on future generations. Geoengineering risked making the same mistake.

Clive Hamilton’s book Earthmasters notes “in practice any realistic assessment of how the world works must conclude that geoengineering research is virtually certain to reduce incentives to pursue emission reductions”.

Odd advocates

Curiouser still is that some of the world’s think tanks that shout the loudest that human-caused climate change might not even be a thing, or at least not a thing worth worrying about, are happy to countenance geoengineering as a solution to the very problem they think is overblown.

For example, in January this year the Copenhagen Consensus Center, a US-based think tank founded by Danish political scientist Bjorn Lomborg, issued a submission to an Australian Senate inquiry looking at overseas aid and development.

Lomborg’s center has for many years argued that cutting greenhouse gas emissions is too expensive and that action on climate change should have a low-priority compared to other issues around the world.

Lomborg himself says human-caused climate change will not turn into an economic negative until near the end of this century.

Yet Lomborg’s submission to the Australian Senate suggested that every dollar spent on “investigat[ing] the feasibility of planetary cooling through geoengineering technologies” could yield “$1000 of benefits”, although this, Lomborg wrote, was a “rough estimate”.

But these investigations, Lomborg submitted, “would serve to better understand risks, costs, and benefits, but also act as an important potential insurance against global warming”.

Engineering another excuse

Several academics I’ve spoken with have voiced fears that the idea of unproven and potentially disastrous geoengineering technologies being an option to shield societies from the impacts of climate change could be used to distract policy makers and the public from addressing the core of the climate change issue – that is, curbing emissions in the first place.

But if the idea of some future nation, group of nations, or even corporation embarking on a major project to modify the Earth’s climate systems leaves you feeling like you’ve fallen down a surreal rabbit hole, then perhaps we should also ask ourselves this.

Since the year 1750, the world has added something in the region of 1,339,000,000,000 tonnes of carbon dioxide (that’s 1.34 trillion tonnes) to the atmosphere from fossil fuel and cement production.

Raising the level of CO2 in the atmosphere by 40 per cent could be seen as accidental geoengineering.

Time to crawl out of the rabbit hole?

The rise of data and the death of politics (The Guardian)

Tech pioneers in the US are advocating a new data-based approach to governance – ‘algorithmic regulation’. But if technology provides the answers to society’s problems, what happens to governments?

The Observer, Sunday 20 July 2014


Government by social network? US president Barack Obama with Facebook founder Mark Zuckerberg. Photograph: Mandel Ngan/AFP/Getty Images

On 24 August 1965 Gloria Placente, a 34-year-old resident of Queens, New York, was driving to Orchard Beach in the Bronx. Clad in shorts and sunglasses, the housewife was looking forward to quiet time at the beach. But the moment she crossed the Willis Avenue bridge in her Chevrolet Corvair, Placente was surrounded by a dozen patrolmen. There were also 125 reporters, eager to witness the launch of the New York police department’s Operation Corral – an acronym for Computer Oriented Retrieval of Auto Larcenists.

Fifteen months earlier, Placente had driven through a red light and neglected to answer the summons, an offence that Corral was going to punish with a heavy dose of techno-Kafkaesque. It worked as follows: a police car stationed at one end of the bridge radioed the licence plates of oncoming cars to a teletypist miles away, who fed them to a Univac 490 computer, an expensive $500,000 toy ($3.5m in today’s dollars) on loan from the Sperry Rand Corporation. The computer checked the numbers against a database of 110,000 cars that were either stolen or belonged to known offenders. In case of a match the teletypist would alert a second patrol car at the bridge’s other exit. It took, on average, just seven seconds.
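Stripped of the teletype and the Univac, Corral’s core operation was a simple set-membership test, which is why a seven-second turnaround was feasible even in 1965. A sketch (the plate numbers here are invented for illustration):

```python
# Hypothetical "hot list" standing in for Corral's database of
# 110,000 stolen or offender-linked cars.
hot_list = {"1B-2345", "7X-9921", "4K-0007"}

def check_plate(plate: str) -> bool:
    """Return True if the patrol car at the far end of the bridge
    should be alerted to stop this vehicle."""
    return plate in hot_list

print(check_plate("7X-9921"))  # True: radio the second patrol car
print(check_plate("3C-1111"))  # False: the car passes unremarked
```

The expensive part in 1965 was not the lookup itself but moving the plate number to and from the machine; today the same test runs in microseconds on the camera itself.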

Compared with the impressive police gear of today – automatic number plate recognition, CCTV cameras, GPS trackers – Operation Corral looks quaint. And the possibilities for control will only expand. European officials have considered requiring all cars entering the European market to feature a built-in mechanism that allows the police to stop vehicles remotely. Speaking earlier this year, Jim Farley, a senior Ford executive, acknowledged that “we know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.” That last bit didn’t sound very reassuring and Farley retracted his remarks.

As both cars and roads get “smart,” they promise nearly perfect, real-time law enforcement. Instead of waiting for drivers to break the law, authorities can simply prevent the crime. Thus, a 50-mile stretch of the A14 between Felixstowe and Rugby is to be equipped with numerous sensors that would monitor traffic by sending signals to and from mobile phones in moving vehicles. The telecoms watchdog Ofcom envisions that such smart roads connected to a centrally controlled traffic system could automatically impose variable speed limits to smooth the flow of traffic but also direct the cars “along diverted routes to avoid the congestion and even [manage] their speed”.

Other gadgets – from smartphones to smart glasses – promise even more security and safety. In April, Apple patented technology that deploys sensors inside the smartphone to analyse if the car is moving and if the person using the phone is driving; if both conditions are met, it simply blocks the phone’s texting feature. Intel and Ford are working on Project Mobil – a face recognition system that, should it fail to recognise the face of the driver, would not only prevent the car being started but also send the picture to the car’s owner (bad news for teenagers).

The car is emblematic of transformations in many other domains, from smart environments for “ambient assisted living” where carpets and walls detect that someone has fallen, to various masterplans for the smart city, where municipal services dispatch resources only to those areas that need them. Thanks to sensors and internet connectivity, the most banal everyday objects have acquired tremendous power to regulate behaviour. Even public toilets are ripe for sensor-based optimisation: the Safeguard Germ Alarm, a smart soap dispenser developed by Procter & Gamble and used in some public WCs in the Philippines, has sensors monitoring the doors of each stall. Once you leave the stall, the alarm starts ringing – and can only be stopped by a push of the soap-dispensing button.

In this context, Google’s latest plan to push its Android operating system on to smart watches, smart cars, smart thermostats and, one suspects, smart everything, looks rather ominous. In the near future, Google will be the middleman standing between you and your fridge, you and your car, you and your rubbish bin, allowing the National Security Agency to satisfy its data addiction in bulk and via a single window.

This “smartification” of everyday life follows a familiar pattern: there’s primary data – a list of what’s in your smart fridge and your bin – and metadata – a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses – one recent model promises to track respiration and heart rates and how much you move during the night – and smart utensils that provide nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be – to use the buzzwords of the day – “evidence-based” and “results-oriented,” technology is here to help.

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O’Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term “web 2.0”) has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O’Reilly makes an intriguing case for the virtues of algorithmic regulation – a case that deserves close scrutiny both for what it promises policymakers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can’t write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it’s time to find another rule for finding a good rule – and so on. An algorithm can do this, but it’s the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it’s not just spam: your bank uses similar methods to spot credit-card fraud.
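The feedback loop described here can be sketched as a tiny naive Bayes filter that retrains on every user click. This is an illustration of the principle, not any real provider’s pipeline; the class and its messages are invented for the example:

```python
from collections import defaultdict
import math

class FeedbackSpamFilter:
    """Minimal naive Bayes filter driven by user feedback: every
    'mark as spam' / 'not spam' click becomes a labelled training
    example that updates the counts the next classification uses."""

    def __init__(self):
        self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.totals = {"spam": 0, "ham": 0}

    def feedback(self, message: str, label: str) -> None:
        # A user's click is the real-time feedback the essay describes.
        for word in message.lower().split():
            self.word_counts[label][word] += 1
        self.totals[label] += 1

    def score(self, message: str) -> float:
        # Log-odds that the message is spam, with add-one smoothing;
        # positive means "leans spam", negative means "leans ham".
        log_odds = math.log((self.totals["spam"] + 1) / (self.totals["ham"] + 1))
        for word in message.lower().split():
            p_spam = self.word_counts["spam"][word] + 1
            p_ham = self.word_counts["ham"][word] + 1
            log_odds += math.log(p_spam / p_ham)
        return log_odds

filt = FeedbackSpamFilter()
filt.feedback("win free money now", "spam")
filt.feedback("meeting notes attached", "ham")
print(filt.score("free money"))    # positive: leans spam
print(filt.score("meeting notes")) # negative: leans ham
```

The point O’Reilly draws on is visible even at this toy scale: the rules are never written down anywhere; they are continuously re-derived from the stream of user feedback.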

In his essay, O’Reilly draws broader philosophical lessons from such technologies, arguing that they work because they rely on “a deep understanding of the desired outcome” (spam is bad!) and periodically check if the algorithms are actually working as expected (are too many legitimate emails ending up marked as spam?).

O’Reilly presents such technologies as novel and unique – we are living through a digital revolution after all – but the principle behind “algorithmic regulation” would be familiar to the founders of cybernetics – a discipline that, even in its name (it means “the science of governance”) hints at its great regulatory ambitions. This principle, which allows the system to maintain its stability by constantly learning and adapting itself to the changing circumstances, is what the British psychiatrist Ross Ashby, one of the founding fathers of cybernetics, called “ultrastability”.

To illustrate it, Ashby designed the homeostat. This clever device consisted of four interconnected RAF bomb control units – mysterious looking black boxes with lots of knobs and switches – that were sensitive to voltage fluctuations. If one unit stopped working properly – say, because of an unexpected external disturbance – the other three would rewire and regroup themselves, compensating for its malfunction and keeping the system’s overall output stable.

Ashby’s homeostat achieved “ultrastability” by always monitoring its internal state and cleverly redeploying its spare resources.

Like the spam filter, it didn’t have to specify all the possible disturbances – only the conditions for how and when it must be updated and redesigned. This is no trivial departure from how the usual technical systems, with their rigid, if-then rules, operate: suddenly, there’s no need to develop procedures for governing every contingency, for – or so one hopes – algorithms and real-time, immediate feedback can do a better job than inflexible rules out of touch with reality.
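Ashby’s ultrastability loop can be caricatured in a few lines of Python. This is a toy illustration, not a model of the 1948 hardware: the four units, the safe range of [-1, 1] and the random-rewiring rule are all invented for the sketch.

```python
import random

def homeostat(steps=5_000, seed=1):
    """Toy ultrastability: four coupled units whose outputs feed back
    into one another. Whenever a unit's output leaves its safe range,
    that unit 'rewires' itself with fresh random weights -- Ashby's
    second-order adaptation -- until the system as a whole settles."""
    rng = random.Random(seed)
    n = 4
    weights = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    state = [rng.uniform(-1, 1) for _ in range(n)]
    rewirings = 0
    for _ in range(steps):
        # Each unit's next output is a weighted sum of all outputs.
        state = [sum(w * s for w, s in zip(weights[i], state))
                 for i in range(n)]
        for i in range(n):
            if abs(state[i]) > 1.0:  # essential variable out of range
                weights[i] = [rng.uniform(-1, 1) for _ in range(n)]
                state[i] = max(-1.0, min(1.0, state[i]))
                rewirings += 1
    return state, rewirings

final_state, rewirings = homeostat()
# Typically the random search finds a contracting set of weights,
# the rewiring stops, and all outputs end inside [-1, 1].
print(rewirings, [round(s, 3) for s in final_state])
```

Note what the program does not contain: any list of the disturbances it can survive. Like the homeostat, it only specifies when to rewire, which is exactly the departure from rigid if-then rules described above.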

Algorithmic regulation could certainly make the administration of existing laws more efficient. If it can fight credit-card fraud, why not tax fraud? Italian bureaucrats have experimented with the redditometro, or income meter, a tool for comparing people’s spending patterns – recorded thanks to an arcane Italian law – with their declared income, so that authorities know when you spend more than you earn. Spain has expressed interest in a similar tool.
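The core of such an income-meter check is almost trivially simple, which is part of the point. A sketch with invented records and an invented tolerance threshold (the field names are illustrative, not the actual redditometro schema):

```python
# Hypothetical taxpayer records: declared income vs. spending
# reconstructed from purchase data.
taxpayers = [
    {"id": "A", "declared_income": 30_000, "observed_spending": 28_000},
    {"id": "B", "declared_income": 20_000, "observed_spending": 55_000},
    {"id": "C", "declared_income": 80_000, "observed_spending": 60_000},
]

def flag_anomalies(records, tolerance=1.2):
    """Flag anyone whose recorded spending exceeds declared income
    by more than the tolerance factor (20% headroom here)."""
    return [r["id"] for r in records
            if r["observed_spending"] > tolerance * r["declared_income"]]

print(flag_anomalies(taxpayers))  # ['B']
```

As the next paragraph argues, the hard question is not whether such a filter can be written, but whom it actually catches.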

Such systems, however, are toothless against the real culprits of tax evasion – the super-rich families who profit from various offshoring schemes or simply write outrageous tax exemptions into the law. Algorithmic regulation is perfect for enforcing the austerity agenda while leaving those responsible for the fiscal crisis off the hook. To understand whether such systems are working as expected, we need to modify O’Reilly’s question: for whom are they working? If it’s just the tax-evading plutocrats, the global financial institutions interested in balanced national budgets and the companies developing income-tracking software, then it’s hardly a democratic success.

With his belief that algorithmic regulation is based on “a deep understanding of the desired outcome”, O’Reilly cunningly disconnects the means of doing politics from its ends. But the how of politics is as important as the what of politics – in fact, the former often shapes the latter. Everybody agrees that education, health, and security are all “desired outcomes”, but how do we achieve them? In the past, when we faced the stark political choice of delivering them through the market or the state, the lines of the ideological debate were clear. Today, when the presumed choice is between the digital and the analog or between the dynamic feedback and the static law, that ideological clarity is gone – as if the very choice of how to achieve those “desired outcomes” was apolitical and didn’t force us to choose between different and often incompatible visions of communal living.

By assuming that the utopian world of infinite feedback loops is so efficient that it transcends politics, the proponents of algorithmic regulation fall into the same trap as the technocrats of the past. Yes, these systems are terrifyingly efficient – in the same way that Singapore is terrifyingly efficient (O’Reilly, unsurprisingly, praises Singapore for its embrace of algorithmic regulation). And while Singapore’s leaders might believe that they, too, have transcended politics, it doesn’t mean that their regime cannot be assessed outside the linguistic swamp of efficiency and innovation – by using political, not economic benchmarks.

As Silicon Valley keeps corrupting our language with its endless glorification of disruption and efficiency – concepts at odds with the vocabulary of democracy – our ability to question the “how” of politics is weakened. Silicon Valley’s default answer to the how of politics is what I call solutionism: problems are to be dealt with via apps, sensors, and feedback loops – all provided by startups. Earlier this year Google’s Eric Schmidt even promised that startups would provide the solution to the problem of economic inequality: the latter, it seems, can also be “disrupted”. And where the innovators and the disruptors lead, the bureaucrats follow.

The intelligence services embraced solutionism before other government agencies. Thus, they reduced the topic of terrorism from a subject that had some connection to history and foreign policy to an informational problem of identifying emerging terrorist threats via constant surveillance. They urged citizens to accept that instability is part of the game, that its root causes are neither traceable nor reparable, that the threat can only be pre-empted by out-innovating and out-surveilling the enemy with better communications.

Speaking in Athens last November, the Italian philosopher Giorgio Agamben discussed an epochal transformation in the idea of government, “whereby the traditional hierarchical relation between causes and effects is inverted, so that, instead of governing the causes – a difficult and expensive undertaking – governments simply try to govern the effects”.

Governments’ current favourite psychologist, Daniel Kahneman. Photograph: Richard Saker for the Observer

For Agamben, this shift is emblematic of modernity. It also explains why the liberalisation of the economy can co-exist with the growing proliferation of control – by means of soap dispensers and remotely managed cars – into everyday life. “If government aims for the effects and not the causes, it will be obliged to extend and multiply control. Causes demand to be known, while effects can only be checked and controlled.” Algorithmic regulation is an enactment of this political programme in technological form.

The true politics of algorithmic regulation become visible once its logic is applied to the social nets of the welfare state. There are no calls to dismantle them, but citizens are nonetheless encouraged to take responsibility for their own health. Consider how Fred Wilson, an influential US venture capitalist, frames the subject. “Health… is the opposite side of healthcare,” he said at a conference in Paris last December. “It’s what keeps you out of the healthcare system in the first place.” Thus, we are invited to start using self-tracking apps and data-sharing platforms and monitor our vital indicators, symptoms and discrepancies on our own.

This goes nicely with recent policy proposals to save troubled public services by encouraging healthier lifestyles. Consider a 2013 report by Westminster council and the Local Government Information Unit, a thinktank, calling for the linking of housing and council benefits to claimants’ visits to the gym – with the help of smartcards. They might not be needed: many smartphones are already tracking how many steps we take every day (Google Now, the company’s virtual assistant, keeps score of such data automatically and periodically presents it to users, nudging them to walk more).

The numerous possibilities that tracking devices offer to health and insurance industries are not lost on O’Reilly. “You know the way that advertising turned out to be the native business model for the internet?” he wondered at a recent conference. “I think that insurance is going to be the native business model for the internet of things.” Things do seem to be heading that way: in June, Microsoft struck a deal with American Family Insurance, the eighth-largest home insurer in the US, in which both companies will fund startups that want to put sensors into smart homes and smart cars for the purposes of “proactive protection”.

An insurance company would gladly subsidise the costs of installing yet another sensor in your house – as long as it can automatically alert the fire department or make front porch lights flash in case your smoke detector goes off. For now, accepting such tracking systems is framed as an extra benefit that can save us some money. But when do we reach a point where not using them is seen as a deviation – or, worse, an act of concealment – that ought to be punished with higher premiums?

Or consider a May 2014 report from 2020health, another thinktank, proposing to extend tax rebates to Britons who give up smoking, stay slim or drink less. “We propose ‘payment by results’, a financial reward for people who become active partners in their health, whereby if you, for example, keep your blood sugar levels down, quit smoking, keep weight off, [or] take on more self-care, there will be a tax rebate or an end-of-year bonus,” they state. Smart gadgets are the natural allies of such schemes: they document the results and can even help achieve them – by constantly nagging us to do what’s expected.

The unstated assumption of most such reports is that the unhealthy are not only a burden to society but that they deserve to be punished (fiscally for now) for failing to be responsible. For what else could possibly explain their health problems but their personal failings? It’s certainly not the power of food companies or class-based differences or various political and economic injustices. One can wear a dozen powerful sensors, own a smart mattress and even do a close daily reading of one’s poop – as some self-tracking aficionados are wont to do – but those injustices would still be nowhere to be seen, for they are not the kind of stuff that can be measured with a sensor. The devil doesn’t wear data. Social injustices are much harder to track than the everyday lives of the individuals whose lives they affect.

In shifting the focus of regulation from reining in institutional and corporate malfeasance to perpetual electronic guidance of individuals, algorithmic regulation offers us a good-old technocratic utopia of politics without politics. Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analog era – to be solved through data collection – and not as inevitable results of economic or ideological conflicts.

However, a politics without politics does not mean a politics without control or administration. As O’Reilly writes in his essay: “New technologies make it possible to reduce the amount of regulation while actually increasing the amount of oversight and production of desirable outcomes.” Thus, it’s a mistake to think that Silicon Valley wants to rid us of government institutions. Its dream state is not the small government of libertarians – a small state, after all, needs neither fancy gadgets nor massive servers to process the data – but the data-obsessed and data-obese state of behavioural economists.

The nudging state is enamoured of feedback technology, for its key founding principle is that while we behave irrationally, our irrationality can be corrected – if only the environment acts upon us, nudging us towards the right option. Unsurprisingly, one of the three lonely references at the end of O’Reilly’s essay is to a 2012 speech entitled “Regulation: Looking Backward, Looking Forward” by Cass Sunstein, the prominent American legal scholar who is the chief theorist of the nudging state.

And while the nudgers have already captured the state by making behavioural psychology the favourite idiom of government bureaucracy – Daniel Kahneman is in, Machiavelli is out – the algorithmic regulation lobby advances in more clandestine ways. They create innocuous non-profit organisations like Code for America which then co-opt the state – under the guise of encouraging talented hackers to tackle civic problems.

Airbnb: part of the reputation-driven economy.

Such initiatives aim to reprogramme the state and make it feedback-friendly, crowding out other means of doing politics. For all those tracking apps, algorithms and sensors to work, databases need interoperability – which is what such pseudo-humanitarian organisations, with their ardent belief in open data, demand. And when the government is too slow to move at Silicon Valley’s speed, they simply move inside the government. Thus, Jennifer Pahlka, the founder of Code for America and a protege of O’Reilly, became the deputy chief technology officer of the US government – while pursuing a one-year “innovation fellowship” from the White House.

Cash-strapped governments welcome such colonisation by technologists – especially if it helps to identify and clean up datasets that can be profitably sold to companies who need such data for advertising purposes. Recent clashes over the sale of student and health data in the UK are just a precursor of battles to come: after all state assets have been privatised, data is the next target. For O’Reilly, open data is “a key enabler of the measurement revolution”.

This “measurement revolution” seeks to quantify the efficiency of various social programmes, as if the rationale behind the social nets that some of them provide was to achieve perfection of delivery. The actual rationale, of course, was to enable a fulfilling life by suppressing certain anxieties, so that citizens can pursue their life projects relatively undisturbed. This vision did spawn a vast bureaucratic apparatus and the critics of the welfare state from the left – most prominently Michel Foucault – were right to question its disciplining inclinations. Nonetheless, neither perfection nor efficiency were the “desired outcome” of this system. Thus, to compare the welfare state with the algorithmic state on those grounds is misleading.

But we can compare their respective visions for human fulfilment – and the role they assign to markets and the state. Silicon Valley’s offer is clear: thanks to ubiquitous feedback loops, we can all become entrepreneurs and take care of our own affairs! As Brian Chesky, the chief executive of Airbnb, told the Atlantic last year, “What happens when everybody is a brand? When everybody has a reputation? Every person can become an entrepreneur.”

Under this vision, we will all code (for America!) in the morning, drive Uber cars in the afternoon, and rent out our kitchens as restaurants – courtesy of Airbnb – in the evening. As O’Reilly writes of Uber and similar companies, “these services ask every passenger to rate their driver (and drivers to rate their passenger). Drivers who provide poor service are eliminated. Reputation does a better job of ensuring a superb customer experience than any amount of government regulation.”

The state behind the “sharing economy” does not wither away; it might be needed to ensure that the reputation accumulated on Uber, Airbnb and other platforms of the “sharing economy” is fully liquid and transferable, creating a world where our every social interaction is recorded and assessed, erasing whatever differences exist between social domains. Someone, somewhere will eventually rate you as a passenger, a house guest, a student, a patient, a customer. Whether this ranking infrastructure will be decentralised, provided by a giant like Google or rest with the state is not yet clear but the overarching objective is: to make reputation into a feedback-friendly social net that could protect the truly responsible citizens from the vicissitudes of deregulation.

Admiring the reputation models of Uber and Airbnb, O’Reilly wants governments to be “adopting them where there are no demonstrable ill effects”. But what counts as an “ill effect” and how to demonstrate it is a key question that belongs to the how of politics that algorithmic regulation wants to suppress. It’s easy to demonstrate “ill effects” if the goal of regulation is efficiency but what if it is something else? Surely, there are some benefits – fewer visits to the psychoanalyst, perhaps – in not having your every social interaction ranked?

The imperative to evaluate and demonstrate “results” and “effects” already presupposes that the goal of policy is the optimisation of efficiency. However, as long as democracy is irreducible to a formula, its composite values will always lose this battle: they are much harder to quantify.

For Silicon Valley, though, the reputation-obsessed algorithmic state of the sharing economy is the new welfare state. If you are honest and hardworking, your online reputation would reflect this, producing a highly personalised social net. It is “ultrastable” in Ashby’s sense: while the welfare state assumes the existence of specific social evils it tries to fight, the algorithmic state makes no such assumptions. The future threats can remain fully unknowable and fully addressable – on the individual level.

Silicon Valley, of course, is not alone in touting such ultrastable individual solutions. Nassim Taleb, in his best-selling 2012 book Antifragile, makes a similar, if more philosophical, plea for maximising our individual resourcefulness and resilience: don’t get one job but many, don’t take on debt, count on your own expertise. It’s all about resilience, risk-taking and, as Taleb puts it, “having skin in the game”. As Julian Reid and Brad Evans write in their new book, Resilient Life: The Art of Living Dangerously, this growing cult of resilience masks a tacit acknowledgement that no collective project could even aspire to tame the proliferating threats to human existence – we can only hope to equip ourselves to tackle them individually. “When policy-makers engage in the discourse of resilience,” write Reid and Evans, “they do so in terms which aim explicitly at preventing humans from conceiving of danger as a phenomenon from which they might seek freedom and even, in contrast, as that to which they must now expose themselves.”

What, then, is the progressive alternative? “The enemy of my enemy is my friend” doesn’t work here: just because Silicon Valley is attacking the welfare state doesn’t mean that progressives should defend it to the very last bullet (or tweet). First, even leftist governments have limited space for fiscal manoeuvres, as the kind of discretionary spending required to modernise the welfare state would never be approved by the global financial markets. And it’s the ratings agencies and bond markets – not the voters – who are in charge today.

Second, the leftist critique of the welfare state has become only more relevant today when the exact borderlines between welfare and security are so blurry. When Google’s Android powers so much of our everyday life, the government’s temptation to govern us through remotely controlled cars and alarm-operated soap dispensers will be all too great. This will expand government’s hold over areas of life previously free from regulation.

With so much data, the government’s favourite argument in fighting terror – if only the citizens knew as much as we do, they too would impose all these legal exceptions – easily extends to other domains, from health to climate change. Consider a recent academic paper that used Google search data to study obesity patterns in the US, finding significant correlation between search keywords and body mass index levels. “Results suggest great promise of the idea of obesity monitoring through real-time Google Trends data”, note the authors, which would be “particularly attractive for government health institutions and private businesses such as insurance companies.”
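A study of this kind ultimately rests on a correlation coefficient between two series. The sketch below computes the statistic on made-up numbers – the data are invented for illustration, and the paper’s actual dataset and methodology are of course more involved:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    std_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (std_x * std_y)

search_volume = [10, 20, 30, 40, 50]       # hypothetical weekly search counts
mean_bmi = [22.0, 24.1, 25.9, 28.2, 30.0]  # hypothetical regional BMI averages
r = pearson(search_volume, mean_bmi)       # close to 1: strong positive correlation
```

A high `r` on data like this is what allows the authors to speak of “monitoring” a population in real time – correlation alone, without any causal story about why people search.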

If Google senses a flu epidemic somewhere, it’s hard to challenge its hunch – we simply lack the infrastructure to process so much data at this scale. Google can be proven wrong after the fact – as has recently been the case with its flu trends data, which was shown to overestimate the number of infections, possibly because of its failure to account for the intense media coverage of flu – but so is the case with most terrorist alerts. It’s the immediate, real-time nature of computer systems that makes them perfect allies of an infinitely expanding and pre-emption‑obsessed state.

Perhaps, the case of Gloria Placente and her failed trip to the beach was not just a historical oddity but an early omen of how real-time computing, combined with ubiquitous communication technologies, would transform the state. One of the few people to have heeded that omen was a little-known American advertising executive called Robert MacBride, who pushed the logic behind Operation Corral to its ultimate conclusions in his unjustly neglected 1967 book, The Automated State.

At the time, America was debating the merits of establishing a national data centre to aggregate various national statistics and make them available to government agencies. MacBride attacked his contemporaries’ inability to see how the state would exploit the metadata accrued as everything was being computerised. Instead of “a large scale, up-to-date Austro-Hungarian empire”, modern computer systems would produce “a bureaucracy of almost celestial capacity” that can “discern and define relationships in a manner which no human bureaucracy could ever hope to do”.

“Whether one bowls on a Sunday or visits a library instead is [of] no consequence since no one checks those things,” he wrote. Not so when computer systems can aggregate data from different domains and spot correlations. “Our individual behaviour in buying and selling an automobile, a house, or a security, in paying our debts and acquiring new ones, and in earning money and being paid, will be noted meticulously and studied exhaustively,” warned MacBride. Thus, a citizen will soon discover that “his choice of magazine subscriptions… can be found to indicate accurately the probability of his maintaining his property or his interest in the education of his children.” This sounds eerily similar to the recent case of a hapless father who found that his daughter was pregnant from a coupon that Target, a retailer, sent to their house. Target’s hunch was based on its analysis of products – for example, unscented lotion – usually bought by other pregnant women.

For MacBride the conclusion was obvious. “Political rights won’t be violated but will resemble those of a small stockholder in a giant enterprise,” he wrote. “The mark of sophistication and savoir-faire in this future will be the grace and flexibility with which one accepts one’s role and makes the most of what it offers.” In other words, since we are all entrepreneurs first – and citizens second – we might as well make the most of it.

What, then, is to be done? Technophobia is no solution. Progressives need technologies that would stick with the spirit, if not the institutional form, of the welfare state, preserving its commitment to creating ideal conditions for human flourishing. Even some ultrastability is welcome. Stability was a laudable goal of the welfare state before it had encountered a trap: in specifying the exact protections that the state was to offer against the excesses of capitalism, it could not easily deflect new, previously unspecified forms of exploitation.

How do we build welfarism that is both decentralised and ultrastable? A form of guaranteed basic income – whereby some welfare services are replaced by direct cash transfers to citizens – fits the two criteria.

Creating the right conditions for the emergence of political communities around causes and issues they deem relevant would be another good step. Full compliance with the principle of ultrastability dictates that such issues cannot be anticipated or dictated from above – by political parties or trade unions – and must be left unspecified.

What can be specified is the kind of communications infrastructure needed to abet this cause: it should be free to use, hard to track, and open to new, subversive uses. Silicon Valley’s existing infrastructure is great for fulfilling the needs of the state, not of self-organising citizens. It can, of course, be redeployed for activist causes – and it often is – but there’s no reason to accept the status quo as either ideal or inevitable.

Why, after all, appropriate what should belong to the people in the first place? While many of the creators of the internet bemoan how low their creature has fallen, their anger is misdirected. The fault is not with that amorphous entity but, first of all, with the absence of robust technology policy on the left – a policy that can counter the pro-innovation, pro-disruption, pro-privatisation agenda of Silicon Valley. In its absence, all these emerging political communities will operate with their wings clipped. Whether the next Occupy Wall Street would be able to occupy anything in a truly smart city remains to be seen: most likely, they would be out-censored and out-droned.

To his credit, MacBride understood all of this in 1967. “Given the resources of modern technology and planning techniques,” he warned, “it is really no great trick to transform even a country like ours into a smoothly running corporation where every detail of life is a mechanical function to be taken care of.” MacBride’s fear is O’Reilly’s master plan: the government, he writes, ought to be modelled on the “lean startup” approach of Silicon Valley, which is “using data to constantly revise and tune its approach to the market”. It’s this very approach that Facebook has recently deployed to maximise user engagement on the site: if showing users more happy stories does the trick, so be it.

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, roughly at the same time as The Automated State, put it best: “Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator.”

Water management in Brazil is critical, researchers say (Fapesp)

The assessment was made by participants in a seminar on water resources and agriculture, held at FAPESP as part of the activities of the 2013 Fundação Bunge Prize (Wikipedia)

09/10/2013

By Elton Alisson

Agência FAPESP – The management of water resources in Brazil is a critical problem, owing to the lack of mechanisms, technologies and, above all, of sufficient human resources to manage the country’s river basins adequately. The assessment was made by researchers taking part in the “Seminar on Water Resources and Agriculture”, held on 2 October at FAPESP.

The event was part of the 58th Fundação Bunge Prize and the 34th Fundação Bunge Youth Prize, which this year covered the areas of Water Resources and Agriculture and of Literary Criticism. In Water Resources and Agriculture, the prizes were awarded, respectively, to professors Klaus Reichardt, of the Center for Nuclear Energy in Agriculture (CENA) of the University of São Paulo (USP), and Samuel Beskow, of the Federal University of Pelotas (UFPel).

“Brazil has water-management problems because there are no mechanisms, instruments, technologies and, above all, no human resources sufficiently trained, with an interdisciplinary background, to confront and solve the country’s water-management problems,” said José Galizia Tundisi, a researcher at the International Institute of Ecology (IIE) invited to take part in the event.

“We need to generate methods, concepts and mechanisms applicable to the country’s conditions,” said the researcher, who currently directs the worldwide training programme for water-resource managers of the Global Network of Science Academies (IAP) – an institution representing more than one hundred academies of science around the world.

According to Tundisi, river basins were adopted as the priority units for managing water use by the National Water Resources Policy, enacted in 1997. All of the country’s river basins, however, lack the instruments needed for adequate management, the researcher pointed out.

“It is very hard to find a river basin committee [a collegiate body made up of civil-society representatives and responsible for managing the water resources of a given basin] that is fully equipped, in terms of techniques and programmes, to improve the performance of water-use management,” he said.

Hydrological modelling

According to Tundisi, among the instruments that can ease management and decision-making for Brazilian river basins are computational models that simulate basin behaviour, such as the one developed by Beskow, a professor in the Department of Water Engineering at UFPel and winner of this year’s Fundação Bunge Youth Prize in the area of Water Resources and Agriculture.

Named Lavras Simulation of Hydrology (LASH), the hydrological model was developed by Beskow during his doctorate at the Federal University of Lavras (Ufla), in Minas Gerais, with a research period at Purdue University, in the United States.

“There are several hydrological models developed in different parts of the world – especially in the United States and Europe – that are extremely valuable tools for river-basin management and decision-making,” Beskow said.

“These hydrological models are useful for designing hydraulic structures – bridges or reservoirs – as well as for making real-time flood forecasts and for measuring the impact of actions such as deforestation or changes in land use in the areas surrounding river basins,” he said.

According to the researcher, the first version of LASH was completed in 2009 and applied in research on rainfall-runoff modelling to assess the potential for electricity generation in small river basins, such as that of the Ribeirão Jaguará, in Minas Gerais, which covers 32 square kilometres.

Encouraged by those results, in 2011 the researcher began developing the second version of the simulation model, which he intends to make available to managers of river basins of different sizes.

“The model now has a database through which users can import and store data on rainfall, temperature, humidity and land use, among other parameters, generated at the different stations of a given basin’s monitoring network – data that make water-resources management possible,” he said.

One of the main motivations for developing hydrological simulation models in Brazil, according to the researcher, is the lack of fluviometric data (measurements of river water levels, velocity and discharge) for the country’s river basins.

The number of fluviometric stations registered in the Hydrological Information System (HidroWeb), operated by the National Water Agency (ANA), is low, and many of them are out of operation, Beskow said.

“There are little more than one hundred fluviometric stations in Rio Grande do Sul registered in that system, giving us time series of at most ten years,” the researcher said. “That number of stations is far too low to manage the water resources of a state like Rio Grande do Sul.”

Rational water use

Beskow and Klaus Reichardt – who is also a professor at the Luiz de Queiroz College of Agriculture (Esalq) – stressed the need to develop technologies for using water ever more rationally in agriculture, since the sector consumes most of the readily available fresh water in the world today.

Of the water that covers some 70% of the Earth, 97.5% is salt water and only 2.5% is fresh. Of that tiny fraction of fresh water, moreover, 69% is locked up in glaciers and permanent snow, 29.8% in aquifers and 0.9% in reservoirs. Of the 0.3% that is readily available, 65% is used by agriculture, 22% by industry and 7% for human consumption, while 6% is lost, Reichardt noted.
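Those percentages compound quickly. A back-of-envelope calculation, using the figures as stated in the talk, shows how small the contested resource really is:

```python
fresh = 0.025                         # 2.5% of the Earth's water is fresh
readily_available = fresh * 0.003     # only 0.3% of that fresh water is readily available
agriculture_share = readily_available * 0.65   # 65% of it goes to agriculture
# Agriculture therefore draws on roughly 0.005% of all the water on Earth.
```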

“In Brazil, we have the Amazon and the Guarani aquifer, which may yet be exploited,” said the researcher, whose projects have received FAPESP support.

Reichardt won the prize for his contribution to soil physics, studying and developing ways to calculate the movement of water in sandy and clayey soils, among others, which vary widely. “This was applied to several soil types with saturated hydraulic conductivity as a function of moisture, for example,” he said.

In recent years the researcher has been working, in collaboration with colleagues at the Brazilian Agricultural Research Corporation (Embrapa), on computed tomography for measuring soil water. “With this technique we have managed to uncover very interesting phenomena that occur in the soil,” Reichardt said.

The cost of inaction

The event was attended by Eduardo Moacyr Krieger and Carlos Henrique de Brito Cruz, respectively FAPESP’s vice-president and scientific director; Jacques Marcovitch, president of the Fundação Bunge; Ardaillon Simões, president of the Pernambuco State Foundation for Science and Technology Support (Facepe); and José Antônio Frizzone, a professor at Esalq, among other officials.

In his remarks, Krieger noted that the Fundação Bunge and FAPESP have many characteristics in common. “By rewarding the best researchers in particular fields each year, the Fundação Bunge shows its concern for scientific merit and research quality,” Krieger said.

“FAPESP, in a way, does the same by ‘rewarding’ researchers through fellowships, grants and other forms of support, taking into account the quality of the research carried out.”

Brito Cruz stressed that the prize awarded by the Fundação Bunge helps create in Brazil the possibility for researchers to stand out in Brazilian society through their ability and intellectual achievements.

“This is essential for building a country that is master of its own destiny, capable of creating its own future and of facing new challenges of any kind,” said Brito Cruz. “A country can only advance if it has people with the intellectual capacity to understand problems and devise solutions to them.”

Marcovitch, in turn, argued that the problem of water management in Brazil can be approached in two ways. The first starts from the premise that the country, lying in its splendid cradle, has abundant natural resources and therefore need not worry about the problem. The second warns of the consequences of inaction in the face of the need for adequate management of the country’s water resources – as Tundisi has been doing – in order to spur researchers such as Beskow and Reichardt to find answers.

“[We researchers] have a responsibility to raise society’s awareness of the risks and the cost of inaction on the management of the country’s water resources,” he said.

Brain Scans Predict Which Criminals Are Most Likely to Reoffend (Wired)

BY GREG MILLER

03.26.13 – 3:40 PM

Photo: Erika Kyte/Getty Images

Brain scans of convicted felons can predict which ones are most likely to get arrested after they get out of prison, scientists have found in a study of 96 male offenders.

“It’s the first time brain scans have been used to predict recidivism,” said neuroscientist Kent Kiehl of the Mind Research Network in Albuquerque, New Mexico, who led the new study. Even so, Kiehl and others caution that the method is nowhere near ready to be used in real-life decisions about sentencing or parole.

Generally speaking, brain scans or other neuromarkers could be useful in the criminal justice system if the benefits in terms of better accuracy outweigh the likely higher costs of the technology compared to conventional pencil-and-paper risk assessments, says Stephen Morse, a legal scholar specializing in criminal law and neuroscience at the University of Pennsylvania. The key questions to ask, Morse says, are: “How much predictive accuracy does the marker add beyond usually less expensive behavioral measures? How subject is it to counter-measures if a subject wishes to ‘defeat’ a scan?”

Those are still open questions with regard to the new method, which Kiehl and colleagues, including postdoctoral fellow Eyal Aharoni, describe in a paper to be published this week in the Proceedings of the National Academy of Sciences.

The test targets impulsivity. In a mobile fMRI scanner the researchers trucked in to two state prisons, they scanned inmates’ brains as they did a simple impulse control task. Inmates were instructed to press a button as quickly as possible whenever they saw the letter X pop up on a screen inside the scanner, but not to press it if they saw the letter K. The task is rigged so that X pops up 84 percent of the time, which predisposes people to hit the button and makes it harder to suppress the impulse to press the button on the rare trials when a K pops up.
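The task design above can be sketched in a few lines. In the minimal simulation below, the 84% go-rate comes from the article; the trial count and the "slip" probabilities of the responder model are illustrative assumptions, not figures from the study:

```python
import random

# Sketch of the go/no-go schedule described above: the go cue "X" appears on
# 84% of trials and the no-go cue "K" on the rest, making the button press
# prepotent. The responder model (a fixed probability of failing to inhibit
# the press on a "K" trial) is an illustrative assumption.

def make_trials(n, p_go=0.84, seed=42):
    """Generate a pseudo-random sequence of go ("X") / no-go ("K") cues."""
    rng = random.Random(seed)
    return ["X" if rng.random() < p_go else "K" for _ in range(n)]

def commission_errors(trials, p_slip):
    """Expected number of presses on no-go trials for a responder who
    fails to inhibit the prepotent press with probability p_slip."""
    n_nogo = sum(1 for t in trials if t == "K")
    return round(p_slip * n_nogo)

trials = make_trials(500)
poor_control = commission_errors(trials, p_slip=0.5)  # weak impulse control
good_control = commission_errors(trials, p_slip=0.1)  # strong impulse control
```

In the study itself, it was the brain activity accompanying these rare no-go trials, not the error count alone, that carried the predictive signal.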

Based on previous studies, the researchers focused on the anterior cingulate cortex, one of several brain regions thought to be important for impulse control. Inmates with relatively low activity in the anterior cingulate made more errors on the task, suggesting a correlation with poor impulse control.

They were also more likely to get arrested after they were released. Inmates with relatively low anterior cingulate activity were roughly twice as likely as inmates with high anterior cingulate activity to be rearrested for a felony offense within 4 years of their release, even after controlling for other behavioral and psychological risk factors.

“This is an exciting new finding,” said Essi Viding, a professor of developmental psychopathology at University College London. “Interestingly this brain activity measure appears to be a more robust predictor, in particular of non-violent offending, than psychopathy or drug use scores, which we know to be associated with a risk of reoffending.” However, Viding notes that Kiehl’s team hasn’t yet tried to compare their fMRI test head to head against pencil-and-paper tests specifically designed to assess the risk of recidivism. “It would be interesting to see how the anterior cingulate cortex activity measure compares against these measures,” she said.

“It’s a great study because it brings neuroimaging into the realm of prediction,” said clinical psychologist Dustin Pardini of the University of Pittsburgh. The study’s design is an improvement over previous neuroimaging studies that compared groups of offenders with groups of non-offenders, he says. All the same, he’s skeptical that brain scans could be used to predict the behavior of a given individual. “In general we’re horrible at predicting human behavior, and I don’t see this as being any different, at least not in the near future.”

Even if the findings hold up in a larger study, there would be limitations, Pardini adds. “In a practical sense, there are just too many ways an offender could get around having an accurate representation of his brain activity taken,” he said. For example, if an offender moves his head while inside the scanner, that would render the scan unreadable. Even more subtle strategies, such as thinking about something unrelated to the task, or making mistakes on purpose, could also thwart the test.

Kiehl isn’t convinced either that this type of fMRI test will ever prove useful for assessing the risk to society posed by individual criminals. But his group is collecting more data — lots more — as part of a much larger study in the New Mexico state prisons. “We’ve scanned 3,000 inmates,” he said. “This is just the first 100.”

Kiehl hopes this work will point to new strategies for reducing criminal behavior. If low activity in the anterior cingulate does in fact turn out to be a reliable predictor of recidivism, perhaps therapies that boost activity in this region would improve impulse control and prevent future crimes, Kiehl says. He admits it’s speculative, but his group is already thinking up experiments to test the idea. “Cognitive exercises is where we’ll start,” he said. “But I wouldn’t rule out pharmaceuticals.”

Brazil ‘competes’ for a trophy for stalling negotiations at COP 11 (O Estado de São Paulo)

JC e-mail 4605, October 17, 2012

Brazil is nominated for the second time, during the Convention on Biological Diversity, for an award organized by an international network of NGOs.

For the second consecutive edition of the Convention on Biological Diversity (CBD), Brazil today figures among the nominees for the Dodo Trophy, which “rewards” the countries that have made the least progress in the negotiations to prevent biodiversity loss. Canada, China, Paraguay and Great Britain are the other nominees chosen by the CBD Alliance, an international network of NGOs taking part in the convention.

The dodo was chosen to give the prize its name because it has been extinct for about four centuries; the species lived on the island of Mauritius, off the east coast of Africa. At the climate conventions the equivalent is the Fossil of the Day award, which Brazil was “granted” in Durban almost a year ago.

Among the reasons for Brazil's presence on the list is the government's lack of concern with biodiversity in the negotiation of mechanisms for Reducing Emissions from Deforestation and Forest Degradation (REDD+), a system of financial compensation for activities that reduce carbon emissions.

At the 11th Conference of the Parties (COP-11) to the CBD in Hyderabad, India, Brazil wants to keep biodiversity safeguards out of the texts, pressing for clear distinctions between CBD agreements and those established under the Framework Convention on Climate Change (UNFCCC).

The Brazilian government has aligned itself with other dissatisfied countries, such as Colombia and Argentina, to criticize the text under discussion at the conference in India. In a statement, the bloc said the document is outdated and fails to take into account the resolutions reached at the climate conferences in Cancún and Durban.

“Many of the recommendations we are seeing at COP-11 are either redundant or raise barriers to the implementation of this important tool (REDD+),” the countries said. Brazil was also nominated for the trophy because, according to the NGO network, the government lacks a good relationship with local communities and indigenous tribes living in areas of ecological and biological importance.

New nomination – Two years ago the country was nominated for a different reason: at the meeting in the Japanese city of Nagoya, Brazilian representatives blatantly promoted biofuels and were criticized for trying to play down their possible impacts on biodiversity and local populations.

The 2010 winners, however, were Canada and the European Union. Canada was nominated again this year, likewise accused of trying to avoid the discussion of biofuels.

According to the NGOs, China has been discouraging the development of marine areas in neighboring countries, while Paraguay has blocked any progress on socioeconomic issues in biosafety matters. Great Britain, for its part, is said to be working to head off discussions of synthetic biology and geoengineering.


“Belo Monte is a monster of developmentalism” (O Globo)

JC e-mail 4604, October 16, 2012

An anthropologist criticizes the construction of a hydroelectric dam on the Xingu, arguing that it will cause more harm than benefit and may, in the long run, even affect indigenous traditions.

While indigenous people from several ethnic groups, along with fishermen, riverside dwellers and small farmers, occupied the construction site of the Belo Monte hydroelectric plant in southern Pará to protest against the dam, anthropologist Carmen Junqueira was taking stock of her research on the region's peoples in the auditorium of PUC in São Paulo, during the colloquium Transformações da Biopolítica, on October 10.

It was shortly after the creation of the Xingu Indigenous Park by then-president Jânio Quadros, under pressure from the Villas-Boas brothers, that she first set foot in the region and found a passion that entered her studies, her private life and her home: the indigenous peoples of the Upper Xingu, especially the Kamaiurá, to this day an important reference point of indigenous culture. Since 1965 she has visited them regularly and even hosted them at her house in São Paulo, following the changes brought about mainly by the economic development that ended the villages' isolation.

In almost 50 years studying the ethnic group, she has analyzed its ever-closer contact with white culture and regards some of the new features of village life as inherent to that process. But the anthropologist's analytical tone changes when the subject is Belo Monte. Within half a second she answers: “I am against it.” Carmen describes the construction of the dam, led by the Norte Energia consortium, responsible for building and operating Belo Monte, as part of a developmentalist government project that tramples on the historical and cultural value of local populations.

How do you see the occupation taking place right now at the Norte Energia construction sites, in southwestern Pará?

The occupiers are defending their own survival. Most of us are unaware of the knowledge held by indigenous peoples, riverside dwellers and the many others who care for nature. They represent a break with monotonous consumerist subservience, offering diversity and originality. We do not realize it, but they are defending us as well.

Because of progress, which has been the watchword in the country?

They are defending us from a developmentalist fury. I am totally against large hydroelectric dams. I know we have to generate energy, but the impact of these monstrous projects is very damaging. I believe in another model, a more local one, with small plants and energy from ocean waves and the sun. I know the Tucuruí dam (on the Tocantins River, in Pará), and when I was there I could not even fit the monster into a photograph. And everyone already knows the story of Balbina (in the state of Amazonas): a project whose great impacts are incompatible with its benefits.

You have been to Altamira, near where the Belo Monte works are under way. When was that, and what was the scene there?

I was there as soon as the commotion began; I saw the project up close and talked to people. What is happening there is development at any cost. It will affect many indigenous peoples, such as the Kayapó and the Juruna, as well as rural populations. I do not know what it will be like when it is finished, but the impacts on the populations will be immense. And the energy generated will go to industry; ordinary people will get almost nothing out of it.

And the Kamaiurá, the main focus of your studies since the 1960s, will they be affected?

The Kamaiurá live a little farther down in Pará. They will not be affected directly, but today whatever happens in the Xingu region affects everyone; they are no longer isolated. There will be secondary consequences. The flora changes, the fauna changes, and that affects the populations, who are driven out and take no part in the process. When people say, “oh, but only a small part of the territory will be used,” that is a feeble argument. The territory is theirs, and after everything we have done to the indigenous peoples we owe them at least a moral debt.

What are the main changes you have recorded in the behavior of the Kamaiurá people since your first trips to the area?

Like every indigenous people, they are very fond of honey. Suddenly sugar appeared as a very cheap product. That began back in the days of the Villas-Boas brothers, who nevertheless did not like the Indians acquiring all of our habits. After the 1990s the Kamaiurá began buying a lot of sugar, in five-kilo bags. Today there is already a young woman with diabetes in the village. What do I mean by this? The arrival of capitalism changed behavior in the villages. It changes the palate. Capitalism colonizes even the appetite of the Indians, who now consume what we consume. Today the gift lists they give me for my next visit include hair-removal products, hair-care items and other things I file under “novelties,” ranging from dogs to electronic gadgets. Young people now have access to social networks, which brings even more change. They have a radio program, and the other day they interviewed me again. For the first time, one of them handed over to the other by saying, “Over to you, Bené.” Isn't that straight out of football broadcasting? Today they charge for lodging, they charge image rights. They have come to understand the monetary economy, although for them barter is still what matters most.

Ecopolitics, the focus of today's colloquium, sets out to analyze management practices that include mechanisms for controlling populations within participatory democracy. What are the impacts of this control on indigenous peoples?

They are going through many changes and in theory have more power to participate, but that power is not real. As for changes in everyday life, they regard them as natural. They only go on the defensive when tradition may be affected. I conducted interviews in the village to find out what they call tradition: it is rites and myths, what they value most. It is still not money. While for us tradition has to do with the routines of work, family and children, for them it is art. That is what they value most.

And can all of this be affected by these large projects?

This cultural loss should enter into the calculus of these great works. But it does not; nobody puts a value on it. Large projects change the flora and fauna; the fish regime may well be altered. The survival of the Xingu peoples could be affected. The greatest danger is that, with their food supply made precarious, they begin to turn to ecotourism. It would be disastrous if they started performing their ceremonies just for show. In time those ceremonies could lose their unifying character and their memory, becoming mere spectacle.

(O Globo)