Tag archive: Technological mediation

Problem: Your brain (Medium)

I will be talking mainly about development for the web.

Ilya Dorman, Feb 15, 2015

Our puny brain can handle a very limited amount of logic at a time. While programmers proclaim logic as their domain, they are only sometimes, and only slightly, better at managing complexity than the rest of us mortals. The more logic our app has, the harder it is to change it or introduce new people to it.

The most common mistake programmers make is assuming they write code for a machine to read. While technically that is true, this mindset leads to the hell that is other people’s code.

I have worked in several start-up companies, some of them even considered “lean.” In each, it took me between a few weeks and a few months to fully understand the code base, and I have about 6 years of experience with JavaScript. This does not seem reasonable to me at all.

If the code is not easy to read, its structure is already a monument—you can change small things, but major changes—the kind every start-up undergoes on an almost monthly basis—are as fun as a root canal. Once the code reaches a state where, for a proficient programmer, it is harder to read than this article, doom and suffering are upon you.

Why does the code become unreadable? Let’s compare code to plain text: the longer a sentence is, the easier it is for our mind to forget its beginning, and once we reach the end, we have forgotten how it started and lose the meaning of the whole sentence. Did you have to read the previous sentence twice because it was too long to grasp in one go? Exactly! The same happens with code. Worse, actually—the logic of code can be far more complex than any sentence from a book or a blog post, and each programmer has their own logic, which can be total gibberish to another. Not to mention that we also need to remember the logic. Sometimes we come back to it the same day and sometimes after two months. Nobody remembers anything about their code after not looking at it for two months.

To make code readable to other humans we rely on three things:

1. Conventions

Conventions are good, but they are very limited: enforce them too little and the programmer becomes coupled to the code—no one will ever understand what they meant once they are gone. Enforce too much and you will have hour-long debates about every space and colon (true story). The “habitable zone” is very narrow and easy to miss.

2. Comments

They are probably the most helpful, if done right. Unfortunately, many programmers write their comments in the same spirit they write their code—very idiosyncratically. I do not belong to the school that claims good code needs no comments, but even beautifully commented code can still be extremely complicated.

3. “Other people know this programming language as well as I do, so they must understand my writings.”

Well… This is JavaScript:

This is JAVASCRIPT!

4. Tests

Tests are a devil in disguise. “How do we make sure our code is good and readable? We write more code!” I know many of you might quit this post right here, but bear with me for a few more lines: regardless of their benefit, tests are another layer of logic. They are more code to be read and understood. Tests try to solve this exact problem: your code is too complicated to calculate its result in your brain, so you say, “well, this is what should happen in the end.” And when it doesn’t, you go digging for the problem. Your code should be simple enough that you can read a function or a line and understand what the result of running it should be.
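For instance (an illustrative sketch, not code from the original post), a function whose result you can predict from a single read needs far less test scaffolding to be understood:

```javascript
// Readable at a glance: the expected result is obvious from the code itself.
function totalPrice(items) {
  return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}

totalPrice([{ price: 10, quantity: 2 }, { price: 5, quantity: 1 }]); // 25
```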

Your life as a programmer could be so much easier!

Solution: Radical Minimalism

I will break down this approach into practical points, but the main idea is: use LESS logic.

  • Cut 80% of your product’s features

Yes! Just like that. Simplicity, first of all, comes from the product. Make it easy for people to understand and use. Make it do one thing well, and only then add more (if there is still a need).

  • Use nothing but what you absolutely must

Do not include a single line of code (especially from libraries) unless you are 100% sure you will use it and that it is the simplest, most straightforward solution available. Need a simple chat app and picked Angular.js because the two-way binding is nice? You deserve those hours and days of debugging and debating about services vs. providers.

Side note: The JavaScript browser API is event-driven; it is made to respond when stuff (usually user input) happens. This means that events change data. Many newer frameworks (Angular, Meteor) reverse this direction and make data changes trigger events. If your app is simple, you might live happily with the new mysterious layer, but if not—you get a whole new layer of complexity to understand, and your life will get exponentially more miserable. Unless your app constantly manages large amounts of data, avoid those frameworks.
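To illustrate the direction the author prefers (a hypothetical sketch, not code from the post; the element ids are made up), here is plain event-driven code where a user event updates the data and then the view, with no framework layer in between:

```javascript
// Event-driven style: the user event is the starting point,
// and the handler updates the data and the view explicitly.
const messages = [];
const input = document.querySelector('#message');
const list = document.querySelector('#messages');

document.querySelector('#send').addEventListener('click', () => {
  messages.push(input.value);       // the event changes the data...
  const item = document.createElement('li');
  item.textContent = input.value;   // ...and then the view, directly
  list.appendChild(item);
  input.value = '';
});
```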

  • Use simplest logic possible

Say you need to show different HTML on different occasions. You can use client-side routing with controllers and data passed to each controller, which renders the HTML from a template. Or you can just use static HTML pages with normal browser navigation and update the HTML manually. Use the second.
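A sketch of that second option (hypothetical markup and ids): ordinary links between static pages, with the few dynamic bits updated by hand:

```javascript
// No router, no controllers, no templates: navigation is plain <a href="about.html"> links,
// and the only dynamic part of this page is toggled directly.
document.querySelector('#show-details').addEventListener('click', () => {
  document.querySelector('#details').hidden = false; // the content is already in the HTML
});
```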

  • Make short Javascript files

Limit the length of your JS files to a single editor page, and make each file do one thing. Can’t cram all your glorious logic into small modules? Good—that means you should have less of it, so that other humans will understand your code in a reasonable time.
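A sketch of what “one file, one thing” can look like (hypothetical file and function names):

```javascript
// format-date.js — the entire file does exactly one thing.
export function formatDate(date) {
  return date.toISOString().slice(0, 10); // YYYY-MM-DD
}
```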

  • Avoid pre-compilers and task-runners like AIDS

The more layers there are between what you write and what you see, the more logic your mind needs to hold. You might think grunt or gulp help you simplify stuff, but then you have 30 tasks and need to remember what each one does to your code, how to use them, how to update them, and how to teach them to any new coder. Not to mention compiling.

Side note #1: CSS pre-compilers are OK because they contain very little logic, yet they help a lot with readable structure compared to plain CSS. I have barely used HTML pre-compilers, so you’ll have to decide for yourself.

Side note #2: Task-runners could save you time, so if you do use them, do it wisely keeping the minimalistic mindset.

  • Use Javascript everywhere

This one is quite specific, and I am not absolutely sure about it, but having the same language on the client and the server can simplify data management between them.

  • Write more human code

Give your non-trivial variables (and functions) descriptive names. Keep lines short, but only if doing so does not compromise readability.
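A small before-and-after sketch (illustrative, not from the post):

```javascript
// Hard on the next reader: what are d, t and x?
function calc(d, t, x) {
  return d / t > x;
}

// The names carry the logic, so the line explains itself.
function isSpeeding(distanceKm, hours, speedLimitKmh) {
  return distanceKm / hours > speedLimitKmh;
}
```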

Treat your code like poetry and take it to the edge of the bare minimum.

Panel Urges Research on Geoengineering as a Tool Against Climate Change (New York Times)

Piles at a CCI Energy Solutions coal handling plant in Shelbiana, Ky. Geoengineering proposals might counteract the effects of climate change that are the result of burning fossil fuels, such as coal. Credit: Luke Sharrett/Getty Images

With the planet facing potentially severe impacts from global warming in coming decades, a government-sponsored scientific panel on Tuesday called for more research on geoengineering — technologies to deliberately intervene in nature to counter climate change.

The panel said the research could include small-scale outdoor experiments, which many scientists say are necessary to better understand whether and how geoengineering would work.

Some environmental groups and others say that such projects could have unintended damaging effects, and could set society on an unstoppable path to full-scale deployment of the technologies.

But the National Academy of Sciences panel said that with proper governance, which it said needed to be developed, and other safeguards, such experiments should pose no significant risk.

In two widely anticipated reports, the panel — which was supported by NASA and other federal agencies, including what the reports described as the “U.S. intelligence community” — noted that drastically reducing emissions of carbon dioxide and other greenhouse gases was by far the best way to mitigate the effects of a warming planet.

A device being developed by a company called Global Thermostat is made to capture carbon dioxide from the air. This may be one solution to counteract climate change. Credit: Henry Fountain/The New York Times

But the panel, in making the case for more research into geoengineering, said, “It may be prudent to examine additional options for limiting the risks from climate change.”

“The committee felt that the need for information at this point outweighs the need for shoving this topic under the rug,” Marcia K. McNutt, chairwoman of the panel and the editor in chief of the journal Science, said at a news conference in Washington.

Geoengineering options generally fall into two categories: capturing and storing some of the carbon dioxide that has already been emitted so that the atmosphere traps less heat, or reflecting more sunlight away from the earth so there is less heat to start with. The panel issued separate reports on each.

The panel said that while the first option, called carbon dioxide removal, was relatively low risk, it was expensive, and that even if it was pursued on a planetwide scale, it would take many decades to have a significant impact on the climate. But the group said research was needed to develop efficient and effective methods to both remove the gas and store it so it remains out of the atmosphere indefinitely.

The second option, called solar radiation management, is far more controversial. Most discussions of the concept focus on the idea of dispersing sulfates or other chemicals high in the atmosphere, where they would reflect sunlight, in some ways mimicking the effect of a large volcanic eruption.

The process would be relatively inexpensive and should quickly lower temperatures, but it would have to be repeated indefinitely and would do nothing about another carbon dioxide-related problem: the acidification of oceans.

This approach might also have unintended effects on weather patterns around the world — bringing drought to once-fertile regions, for example. Or it might be used unilaterally as a weapon by governments or even extremely wealthy individuals.

Opponents of geoengineering have long argued that even conducting research on the subject presents a moral hazard that could distract society from the necessary task of reducing the emissions that are causing warming in the first place.

“A geoengineering ‘technofix’ would take us in the wrong direction,” Lisa Archer, food and technology program director of the environmental group Friends of the Earth, said in a statement. “Real climate justice requires dealing with root causes of climate change, not launching risky, unproven and unjust schemes.”

But the panel said that society had “reached a point where the severity of the potential risks from climate change appears to outweigh the potential risks from the moral hazard” of conducting research.

Ken Caldeira, a geoengineering researcher at the Carnegie Institution for Science and a member of the committee, said that while the panel felt that it was premature to deploy any sunlight-reflecting technologies today, “it’s worth knowing more about them,” including any problems that might make them unworkable.

“If there’s a real showstopper, we should know about it now,” Dr. Caldeira said, rather than discovering it later when society might be facing a climate emergency and desperate for a solution.

Dr. Caldeira is part of a small community of scientists who have researched solar radiation management concepts. Almost all of the research has been done on computers, simulating the effects of the technique on the climate. One attempt in Britain in 2011 to conduct an outdoor test of some of the engineering concepts provoked a public outcry. The experiment was eventually canceled.

David Keith, a researcher at Harvard University who reviewed the reports before they were released, said in an interview, “I think it’s terrific that they made a stronger call than I expected for research, including field research.” Along with other researchers, Dr. Keith has proposed a field experiment to test the effect of sulfate chemicals on atmospheric ozone.

Unlike some European countries, the United States has never had a separate geoengineering research program. Dr. Caldeira said establishing a separate program was unlikely, especially given the dysfunction in Congress. But he said that because many geoengineering research proposals might also help in general understanding of the climate, agencies that fund climate research might start to look favorably upon them.

Dr. Keith agreed, adding that he hoped the new reports would “break the logjam” and “give program managers the confidence they need to begin funding.”

At the news conference, Waleed Abdalati, a member of the panel and a professor at the University of Colorado, said that geoengineering research would have to be subject to governance that took into account not just the science, “but the human ramifications, as well.”

Dr. Abdalati said that, in general, the governance needed to precede the research. “A framework that addresses what kinds of activities would require governance is a necessary first step,” he said.

Raymond Pierrehumbert, a geophysicist at the University of Chicago and a member of the panel, said in an interview that while he thought that a research program that allowed outdoor experiments was potentially dangerous, “the report allows for enough flexibility in the process to follow that it could be decided that we shouldn’t have a program that goes beyond modeling.”

Above all, he said, “it’s really necessary to have some kind of discussion among broader stakeholders, including the public, to set guidelines for an allowable zone for experimentation.”

The Risks of Climate Engineering (New York Times)

Credit: Sarah Jacoby 

THE Republican Party has long resisted action on climate change, but now that much of the electorate wants something done, it needs to find a way out of the hole it has dug for itself. A committee appointed by the National Research Council may just have handed the party a ladder.

In a two-volume report, the council is recommending that the federal government fund a research program into geoengineering as a response to a warming globe. The study could be a watershed moment because reports from the council, an arm of the National Academies that provides advice on science and technology, are often an impetus for new scientific research programs.

Sometimes known as “Plan B,” geoengineering covers a variety of technologies aimed at deliberate, large-scale intervention in the climate system to counter global warming.

Despairing at global foot-dragging, some climate scientists now believe that a turn to Plan B is inevitable. They see it as inscribed in the logic of the situation. The council’s study begins with the assertion that the “likelihood of eventually considering last-ditch efforts” to address climate destabilization grows every year.

The report is balanced in its assessment of the science. Yet by bringing geoengineering from the fringes of the climate debate into the mainstream, it legitimizes a dangerous approach.

Beneath the identifiable risks is not only a gut reaction to the hubris of it all — the idea that humans could set out to regulate the Earth system, perhaps in perpetuity — but also a reaction to what it says about where we are today. As the committee’s chairwoman, Marcia McNutt, told The Associated Press: The public should read this report “and say, ‘This is downright scary.’ And they should say, ‘If this is our Hail Mary, what a scary, scary place we are in.’ ”

Even scarier is the fact that, while most geoengineering boosters see these technologies as a means of buying time for the world to get its act together, others promote them as a substitute for cutting emissions. In 2008, Newt Gingrich, the former House speaker, later Republican presidential candidate and an early backer of geoengineering, said: “Instead of penalizing ordinary Americans, we would have an option to address global warming by rewarding scientific invention,” adding: “Bring on the American ingenuity.”

The report, considerably more cautious, describes geoengineering as one element of a “portfolio of responses” to climate change and examines the prospects of two approaches — removing carbon dioxide from the atmosphere, and enveloping the planet in a layer of sulfate particles to reduce the amount of solar radiation reaching the Earth’s surface.

At the same time, the council makes clear that there is “no substitute for dramatic reductions in the emissions” of greenhouse gases to slow global warming and acidifying oceans.

The lowest-risk strategies for removing carbon dioxide are “currently limited by cost and at present cannot achieve the desired result of removing climatically important amounts,” the report said. On the second approach, the council said that at present it was “opposed to climate-altering deployment” of technologies to reflect radiation back into space.

Still, the council called for research programs to fill the gaps in our knowledge on both approaches, evoking a belief that we can understand enough about how the Earth system operates in order to take control of it.

Expressing interest in geoengineering has been taboo for politicians worried about climate change for fear they would be accused of shirking their responsibility to cut carbon emissions. Yet in some congressional offices, interest in geoengineering is strong. And Congress isn’t the only place where there is interest. Russia in 2013 unsuccessfully sought to insert a pro-geoengineering statement into the latest report of the Intergovernmental Panel on Climate Change.

Early work on geoengineering has given rise to one of the strangest paradoxes in American politics: enthusiasm for geoengineering from some who have attacked the idea of human-caused global warming. The Heartland Institute, infamous for its billboard comparing those who support climate science to the Unabomber, Theodore J. Kaczynski, featured an article in one of its newsletters from 2007 describing geoengineering as a “practical, cost-effective global warming strategy.”

Some scholars associated with conservative think tanks like the Hoover Institution and the Hudson Institute have written optimistically about geoengineering.

Oil companies, too, have dipped their toes into the geoengineering waters with Shell, for instance, having funded research into a scheme to put lime into seawater so it absorbs more carbon dioxide.

With half of Republican voters favoring government action to tackle global warming, any Republican administration would be tempted by the technofix to beat all technofixes.

For some, instead of global warming’s being proof of human failure, engineering the climate would represent the triumph of human ingenuity. While climate change threatens to destabilize the system, geoengineering promises to protect it. If there is such a thing as a right-wing technology, geoengineering is it.

President Obama has been working assiduously to persuade the world that the United States is at last serious about Plan A — winding back its greenhouse gas emissions. The suspicions of much of the world would be reignited if the United States were the first major power to invest heavily in Plan B.

Scientists urge global ‘wake-up call’ to deal with climate change (The Guardian)

Climate change has advanced so rapidly that work must start on unproven technologies now, admits US National Academy of Science

Series of mature thunderstorms located near the Parana River in southern Brazil.

‘The likelihood of eventually considering last-ditch efforts to address damage from climate change grows with every year of inaction on emissions control,’ says US National Academy of Science report. Photograph: ISS/NASA

Climate change has advanced so rapidly that the time has come to look at options for a planetary-scale intervention, the National Academy of Science said on Tuesday.

The scientists were categorical that geoengineering should not be deployed now, and was too risky to ever be considered an alternative to cutting the greenhouse gas emissions that cause climate change.

But it was better to start research on such unproven technologies now – to learn more about their risks – than to be stampeded into climate-shifting experiments in an emergency, the scientists said.

With that, a once-fringe topic in climate science moved towards the mainstream – despite the repeated warnings from the committee that cutting carbon pollution remained the best hope for dealing with climate change.

“That scientists are even considering technological interventions should be a wake-up call that we need to do more now to reduce emissions, which is the most effective, least risky way to combat climate change,” Marcia McNutt, the committee chair and former director of the US Geological Survey, said.

Asked whether she foresaw a time when scientists would eventually turn to some of the proposals studied by the committee, she said: “Gosh, I hope not.”

The two-volume report, produced over 18 months by a team of 16 scientists, was far more guarded than a similar British exercise five years ago which called for an immediate injection of funds to begin research on climate-altering interventions.

The scientists were so sceptical about geo-engineering that they dispensed with the term, opting for “climate intervention”. Engineering implied a measure of control the technologies do not have, the scientists said.

But the twin US reports – Climate Intervention: Carbon Dioxide Removal and Reliable Sequestration and Climate Intervention: Reflecting Sunlight to Cool the Earth – could boost research efforts at a limited scale.

The White House and committee leaders in Congress were briefed on the report’s findings this week.

Bill Gates, among others, argues the technology, which is still confined to computer models, has enormous potential and he has funded research at Harvard. The report said scientific research agencies should begin carrying out co-ordinated research.

But geo-engineering remains extremely risky and relying on a planetary hack – instead of cutting carbon dioxide emissions – is “irresponsible and irrational”, the report said.

The scientists looked at two broad planetary-scale technological fixes for climate change: sucking carbon dioxide emissions out of the atmosphere, or carbon dioxide removal, and increasing the amount of sunlight reflected away from the earth and back into space, or albedo modification.

Albedo modification, injecting sulphur dioxide to increase the amount of reflective particles in the atmosphere and increase the amount of sunlight reflected back into space, is seen as a far riskier proposition.

Tinkering with reflectivity would merely mask the symptoms of climate change, the report said. It would do nothing to reduce the greenhouse gas emissions that cause climate change.

The world would have to commit to continuing a course of albedo modification for centuries on end – or watch climate change come roaring back.

“It’s hard to unthrow that switch once you embark on an albedo modification approach. If you walk back from it, you stop masking the effects of climate change and you unleash the accumulated effects rather abruptly,” Waleed Abdalati, a former Nasa chief scientist who was on the panel, said.

More ominously, albedo modification could alter the climate in new and additional ways from which there would be no return. “It doesn’t go back, it goes different,” he said.

The results of such technologies are still far too unpredictable on a global scale, McNutt said. She also feared they could trigger conflicts. The results of such climate interventions will vary enormously around the globe, she said.

“Kansas may be happy with the answer, but Congo may not be happy at all because of changes in rainfall. It may be quite a bit worse for the Arctic, and it’s not going to address at all ocean acidification,” she said. “There are all sorts of reasons why one might not view an albedo-modified world as an improvement.”

The report also warned that offering the promise of a quick fix to climate change through planet hacking could discourage efforts to cut the greenhouse gas emissions that cause climate change.

“The message is that reducing carbon dioxide emissions is by far the preferable way of addressing the problem,” said Raymond Pierrehumbert, a University of Chicago climate scientist, who served on the committee writing the report. “Dimming the sun by increasing the earth’s reflectivity shouldn’t be viewed as a cheap substitute for reducing carbon dioxide emissions. It is a very poor and distant third, fourth, or even fifth choice. It is way down on the list of things you want to do.”

But geoengineering has now landed on the list.

Climate change was advancing so rapidly that a climate emergency – such as widespread crop failure – might propel governments into trying such large-scale interventions.

“The likelihood of eventually considering last-ditch efforts to address damage from climate change grows with every year of inaction on emissions control,” the report said.

If that was the case, it was far better to be prepared for the eventualities by carrying out research now.

The report gave a cautious go-ahead to technologies to suck carbon dioxide out of the air, finding them generally low-risk – although they were prohibitively expensive.

The report discounted the idea of seeding the ocean with iron filings to create plankton blooms that absorb carbon dioxide.

But it suggested carbon-sucking technologies could be considered as part of a portfolio of responses to fight climate change.


Carbon-sucking technologies, such as these ‘artificial forests’, could in future be considered to fight climate change – but reducing carbon dioxide emissions now is by far the preferable way of addressing the problem. Photograph: Guardian

It would involve capturing carbon dioxide from the atmosphere and pumping it underground at high pressure – similar to technology that is only now being tested at a small number of coal plants.

Sucking carbon dioxide out of the air is much more challenging than capturing it from a power plant – which is already prohibitively expensive, the report said. But it still had a place.

“I think there is a good case that eventually this might have to be part of the arsenal of weapons we use against climate change,” said Michael Oppenheimer, a climate scientist at Princeton University, who was not involved with the report.

Drawing a line between the two technologies – carbon dioxide removal and albedo modification – was seen as one of the important outcomes of Tuesday’s report.

The risks and potential benefits of the two are diametrically opposed, said Ken Caldeira, an atmospheric scientist at Carnegie Institution’s Department of Global Ecology and a geoengineering pioneer, who was on the committee.

“The primary concern about carbon dioxide removal is how much does it cost,” he said. “There are no sort of novel, global existential dilemmas that are raised. The main aim of the research is to make it more affordable, and to make sure it is environmentally acceptable.”

In the case of albedo reflection, however, the issue is risk. “A lot of those ideas are relatively cheap,” he said. “The question isn’t about direct cost. The question is, What bad stuff is going to happen?”

There are fears such interventions could lead to unintended consequences that are even worse than climate change – widespread crop failure and famine, clashes between countries over who controls the skies.

But Caldeira, who was on the committee, argued that it made sense to study those consequences now. “If there are real show stoppers and it is not going to work, it would be good to know that in advance and take it off the table, so people don’t do something rash in an emergency situation,” he said.

Spraying sulphur dioxide into the atmosphere could lower temperatures – at least according to computer models and real-life experiences following major volcanic eruptions.

But the cooling would be temporary and it would do nothing to right ocean chemistry, which was thrown off kilter by absorbing those emissions.

“My view of albedo modification is that it is like taking pain killers when you need surgery for cancer,” said Pierrehumbert. “It’s ignoring the problem. The problem is still growing though and it is going to come back and get you.”

Quantum computers could revolutionize information theory (Fapesp)

January 30, 2015

By Diego Freire

Agência FAPESP – The prospect of quantum computers, with processing power far beyond today’s machines, has been driving progress in one of the most versatile areas of science, with applications across the most diverse fields of knowledge: information theory. To discuss this and other prospects, the Institute of Mathematics, Statistics and Scientific Computing (Imecc) of the University of Campinas (Unicamp) held the SPCoding School from January 19 to 30.

The event was held under FAPESP’s São Paulo School of Advanced Science (ESPCA) program, which funds short courses on advanced topics in science and technology in the State of São Paulo.

The basic unit of information processed by the computers in widespread use today is the bit, the smallest unit of data that can be stored or transmitted. Quantum computers, by contrast, work with qubits, which follow the rules of quantum mechanics, the branch of physics that deals with dimensions at or below the atomic scale. Because of this, such machines can carry out a far greater number of calculations simultaneously.
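As a standard way to picture this (textbook formalism, not something stated in the article): a single qubit is a superposition of the two classical values, and a register of n qubits carries 2^n amplitudes at once, which is where the parallelism comes from:

```latex
% One qubit: a normalized superposition of |0> and |1>
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

% n qubits: one amplitude for each of the 2^n basis states
|\Psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle, \qquad \sum_x |c_x|^2 = 1
```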

“This quantum understanding of information adds a whole new layer of complexity to its encoding. But at the same time that complex analyses, which would take decades, centuries or even thousands of years on ordinary computers, could be carried out in minutes by quantum computers, this technology would also threaten the secrecy of information that has not been properly protected against this kind of novelty,” Sueli Irene Rodrigues Costa, a professor at Imecc, told Agência FAPESP.

The greatest threat quantum computers pose to current cryptography lies in their ability to break the codes used to protect important information, such as credit card data. To avoid this kind of risk, cryptographic systems must also be designed with the capabilities of quantum computing in mind.

“Information and coding theory need to stay one step ahead of the commercial use of quantum computing,” said Rodrigues Costa, who coordinates the Thematic Project “Security and reliability of information: theory and practice,” supported by FAPESP.

“This is post-quantum cryptography. As was shown at the end of the 1990s, today’s cryptographic procedures will not survive quantum computers, because they are not secure enough. And this urgency to develop solutions ready for the capabilities of quantum computing is also pushing information theory to advance in ever more directions,” she said.

Some of these solutions were addressed during the SPCoding School program, many of them aimed at more efficient systems for classical computing, such as the use of error-correcting codes and of lattices for cryptography. For Rodrigues Costa, the advance of information theory alongside the development of quantum computing will bring revolutions to several fields of knowledge.

“Just as information theory has multiple applications today, quantum coding would also lift several areas of science to new levels by enabling even more precise computational simulations of the physical world, handling an exponentially larger number of variables than classical computers,” said Rodrigues Costa.

Information theory deals with the quantification of information and involves fields such as mathematics, electrical engineering and computer science. Its pioneer was the American Claude Shannon (1916-2001), who was the first to treat communication as a mathematical problem.

Revolutions under way

While it prepares for quantum computers, information theory is already driving major changes in how information is encoded and transmitted. Amin Shokrollahi, of the École Polytechnique Fédérale de Lausanne, in Switzerland, presented new coding techniques at the SPCoding School for tackling problems such as noise in information and the high energy consumption of data processing, including chip-to-chip communication inside devices.

Shokrollahi is known in the field for having invented the Raptor codes and co-invented the Tornado codes, which are used in mobile transmission standards, with implementations in wireless systems, satellites and IPTV, the method of delivering television signals over the Internet Protocol (IP).

“The growth in the volume of digital data and the need for ever faster communication increase susceptibility to several kinds of noise as well as energy consumption. New solutions are needed in this scenario,” he said.

Shokrollahi also presented innovations developed at the Swiss company Kandou Bus, where he is director of research. “We use special algorithms to encode the signals, which are all transferred simultaneously until a decoder recovers the original signals. All of this is done while preventing neighboring wires from interfering with one another, which produces a significantly lower noise level. The systems also reduce chip size, increase transmission speed and cut energy consumption,” he explained.

According to Rodrigues Costa, similar solutions are being developed for many technologies in widespread use in society.

“Cell phones, for example, have gained a great deal of processing power and versatility, but one of the most frequent complaints among users is that the battery does not last. One strategy is to find ways of encoding more efficiently in order to save energy,” she said.

Biological applications

It is not only problems of a technological nature that can be addressed or solved through information theory. Vinay Vaishampayan, a professor at the City University of New York, in the United States, chaired the SPCoding School panel “Information Theory, Coding Theory and the Real World,” which dealt with a range of applications of codes in society – among them, biological ones.

“There is not just one information theory, and its approaches, computational and probabilistic, can be applied to practically every field of knowledge. On the panel we discussed the many research possibilities open to anyone interested in studying these interfaces between codes and the real world,” he told Agência FAPESP.

Vaishampayan singled out biology as an area of great potential in this scenario. “Neuroscience raises important questions that can be answered with the help of information theory. We still do not understand in depth how neurons communicate with one another or how the brain works as a whole, and neural networks are a very rich field of study from a mathematical point of view as well, as is molecular biology,” he said.

That is because, according to Max Costa, a professor at Unicamp’s School of Electrical and Computer Engineering and one of the speakers, living beings are also made of information.

“We are encoded through the DNA of our cells. Uncovering the secret of that code, the mechanism behind the mappings that are made and recorded in this context, is a problem of enormous interest for a deeper understanding of the process of life,” he said.

For Marcelo Firer, a professor at Imecc and coordinator of the SPCoding School, the event opened up new research possibilities for students and researchers from many fields.

“Participants shared opportunities for engagement around these and many other applications of information and coding theory. The offerings ranged from introductory courses, aimed at students with a solid mathematical background but not necessarily familiar with coding, to more advanced courses, as well as lectures and discussion panels,” said Firer, a member of the steering committee of FAPESP’s Computer Science and Engineering area.

Around 120 students from 70 universities and 25 countries took part in the event. Foreign speakers included researchers from the California Institute of Technology (Caltech), Maryland University and Princeton University, in the United States; the Chinese University of Hong Kong, in China; Nanyang Technological University, in Singapore; Technische Universiteit Eindhoven, in the Netherlands; the Universidade do Porto, in Portugal; and Tel Aviv University, in Israel.

More information at www.ime.unicamp.br/spcodingschool.

From the Concorde to Sci-Fi Climate Solutions (Truthout)

Thursday, 29 January 2015 00:00 By Almuth Ernsting, Truthout

The interior of the Concorde aircraft at the Scotland Museum of Flight. (Photo: Magnus Hagdorn)

Touting “sci-fi climate solutions” – untested technologies not really scalable to the dimensions of our climate change crisis – dangerously delays the day when we actually reduce greenhouse gas emissions.

Last week, I took my son to Scotland’s Museum of Flight. Its proudest exhibit: a Concorde. To me, it looked stunningly futuristic. “How old,” remarked my son, looking at the confusing array of pre-digital controls in the cockpit. Watching the accompanying video – “Past Dreams of the Future” – it occurred to me that the story of the Concorde stands as a symbol for two of the biggest obstacles to addressing climate change.

The Concorde must rank among the most wasteful ways of guzzling fossil fuels ever invented. No other form of transport is as destructive to the climate as aviation – yet the Concorde burned almost five times as much fuel per person per mile as a standard aircraft. Moreover, by emitting pollutants straight into the lower stratosphere, the Concorde contributed to ozone depletion. At the time of the Concorde’s first test flight in 1969, little was known about climate change and the ozone hole had not yet been discovered. Yet by the time the Concorde was grounded – for purely economic reasons – in 2003, concerns about its impact on the ozone layer had been voiced for 32 years and the Intergovernmental Panel on Climate Change’s (IPCC) first report had been published for 13 years.

The Concorde’s history illustrates how the elites will stop at nothing when pursuing their interests or desires. No damage to the atmosphere and no level of noise-induced misery to those living under Concorde flight paths were treated as bad enough to warrant depriving the richest of a glamorous toy.

If this first “climate change lesson” from the Concorde seems depressing, the second will be even less comfortable for many.

Back in 1969, the UK’s technology minister marveled at Concorde’s promises: “It’ll change the shape of the world; it’ll shrink the globe by half . . . It replaces in one step the entire progress made in aviation since the Wright Brothers in 1903.”

Few would have believed at that time that, from 2003, no commercial flight would reach even half the speed that had been achieved back in the 1970s.

The Concorde remained as fast – yet as inefficient and uneconomical – as it had been from its commercial inauguration in 1976 – despite vast amounts of public and industry investment. The term “Concorde fallacy” entered British dictionaries: “The idea that you should continue to spend money on a project, product, etc. in order not to waste the money or effort you have already put into it, which may lead to bad decisions.”

The lessons for those who believe in overcoming climate change through technological progress are sobering: It’s not written in the stars that every technology dreamed up can be realized, nor that, with enough time and money, every technical problem will be overcome, or that, over time, every new technology will become better, more efficient and more affordable.

Yet precisely such faith in technological progress informs mainstream responses to climate change, including the response by the IPCC. At a conference last autumn, I listened to a lead author of the IPCC’s latest assessment report. His presentation began with a depressing summary of the escalating climate crisis and the massive rise in energy use and carbon emissions, clearly correlated with economic growth. His conclusion was highly optimistic: Provided we make the right choices, technological progress offers a future with zero-carbon energy for all, with ever greater prosperity and no need for economic growth to end. This, he illustrated with some drawings of what we might expect by 2050: super-grids connecting abundant nuclear and renewable energy sources across continents, new forms of mass transport (perhaps modeled on Japan’s magnetic levitation trains), new forms of aircraft (curiously reminiscent of the Concorde) and completely sustainable cars (which looked like robots on wheels). The last and most obscure drawing in his presentation was unfinished, to remind us that future technological progress is beyond our capacity to imagine; the speaker suggested it might be a printer printing itself in a new era of self-replicating machines.

These may represent the fantasies of just one of many lead authors of the IPCC’s recent report. But the IPCC’s 2014 mitigation report itself relies on a large range of techno-fixes, many of which are a long way from being technically, let alone commercially, viable. Climate justice campaigners have condemned the IPCC’s support for “false solutions” to climate change. But the term “false solutions” does not distinguish between techno-fixes that are real and scalable, albeit harmful and counterproductive on the one hand, and those that remain in the realm of science fiction, or threaten to turn into another “Concorde fallacy,” i.e. to keep guzzling public funds with no credible prospect of ever becoming truly viable. Let’s call the latter “sci-fi solutions.”

The most prominent, though by no means only, sci-fi solution espoused by the IPCC is BECCS – bioenergy with carbon capture and storage. According to their recent report, the vast majority of “pathways” or models for keeping temperature rise below 2 degrees Celsius rely on “negative emissions.” Although the report included words of caution, pointing out that such technologies are “uncertain” and “associated with challenges and risks,” the conclusion is quite clear: Either carbon capture and storage, including BECCS, is introduced on a very large scale, or the chances of keeping global warming within 2 degrees Celsius are minimal. In the meantime, the IPCC’s chair, Rajendra Pachauri, and the co-chair of the panel’s Working Group on Climate Change Mitigation, Ottmar Edenhofer, publicly advocate BECCS without any notes of caution about uncertainties – referring to it as a proven way of reducing carbon dioxide levels and thus global warming. Not surprisingly therefore, BECCS has even entered the UN climate change negotiations. The recent text, agreed at the Lima climate conference in December 2014 (“Lima Call for Action”), introduces the terms “net zero emissions” and “negative emissions,” i.e. the idea that we can reliably suck large amounts of carbon (those already emitted from burning fossil fuels) out of the atmosphere. Although BECCS is not explicitly mentioned in the Lima Call for Action, the wording implies support for it because it is treated as the key “negative emissions” technology by the IPCC.

If BECCS were to be applied at a large scale in the future, then we would have every reason to be alarmed. According to a scientific review, attempting to capture 1 billion tons of carbon through BECCS (far less than many of the “pathways” considered by the IPCC presume) would require 218 to 990 million hectares of switchgrass plantations (or similar-scale plantations of other feedstocks, including trees), 1.6 to 7.4 trillion cubic meters of water a year, and 75 percent more than all the nitrogen fertilizers used worldwide (which currently stands at 1 billion tons according to the “conservative” estimates in many studies). By comparison, just 30 million hectares of land worldwide have been converted to grow feedstock for liquid biofuels so far. Yet biofuels have already become the main cause of accelerated growth in demand for vegetable oils and cereals, triggering huge volatility and rises in the price of food worldwide. And by pushing up palm oil prices, biofuels have driven faster deforestation across Southeast Asia and increasingly in Africa. As a result of the ethanol boom, more than 6 million hectares of US land have been planted with corn, causing prairies and wetlands to be plowed up. This destruction of ecosystems, coupled with the greenhouse gas intensive use of fertilizers, means that biofuels overall are almost certainly worse for the climate than the fossil fuels they are meant to replace. There are no reasons to believe that the impacts of BECCS would be any more benign. And they would be on a much larger scale.

Capturing carbon takes a lot of energy, hence CCS requires around one-third more fuel to be burned to generate the same amount of energy. And sequestering captured carbon is a highly uncertain business. So far, there have been three large-scale carbon sequestration experiments. The longest-standing of these, the Sleipner field carbon sequestration trial in the North Sea, has been cited as proof that carbon dioxide can be sequestered reliably under the seabed. Yet in 2013, unexpected scars and fractures were found in the reservoir and a lead researcher concluded: “We are saying it is very likely something will come out in the end.” Another one of the supposedly “successful,” if much shorter, trials also raised “interesting questions,” according to the researchers: Carbon dioxide migrated further upward in the reservoir than predicted, most likely because injecting the carbon dioxide caused fractures in the cap rock.

There are thus good reasons to be alarmed about the prospect of large-scale bioenergy with CCS. Yet BECCS isn’t for real.

While the IPCC and world leaders conclude that we really need to use carbon capture and storage, including biomass, here’s what is actually happening: The Norwegian government, once proud of being a global pioneer of CCS, has pulled the plug on the country’s first full-scale CCS project after a scathing report from a public auditor. The Swedish state-owned energy company Vattenfall has shut down its CCS demonstration plant in Germany, the only plant worldwide testing a particular and supposedly promising carbon capture technology. The government of Alberta has dropped its previously enthusiastic support for CCS because it no longer sees it as economically viable.

True, 2014 has seen the opening of the world’s largest CCS power station, after SaskPower retrofitted one unit of their Boundary Dam coal power station in Saskatchewan to capture carbon dioxide. But Boundary Dam hardly confirms the techno-optimist’s hopes. The 100-megawatt unit cost approximately $1.4 billion to build – more than twice the cost of a much larger (non-CCS) 400-megawatt gas power station built by SaskPower in 2009. It became viable thanks only to public subsidies and to a contract with the oil company Cenovus, which agreed to buy the carbon dioxide for the next decade in order to inject it into an oil well to facilitate extraction of more hard-to-reach oil – a process called enhanced oil recovery (EOR). The supposed “carbon dioxide savings” predictably ignore all of the carbon dioxide emissions from burning that oil. But even with such a nearby oil field suitable for EOR, SaskPower had to make the plant far smaller than originally planned so as to avoid capturing more carbon dioxide than they could sell.

If CCS with fossil fuels is reminiscent of the Concorde fallacy, large-scale BECCS is entirely in the realm of science fiction. The supposedly most “promising” technology has never been tested in a biomass power plant and has so far proven uneconomical with coal. Add to that the fact that biomass power plants need more feedstock and are less efficient and more expensive to run than coal power plants, and a massive-scale BECCS program becomes even more implausible. And then add to that the question of scale: Sequestering 1 billion tons of carbon a year would produce a volume of highly pressurized liquid carbon dioxide larger than the global volume of oil extracted annually. It would require governments and/or companies stumping up the money to build an infrastructure larger than that of the entire global oil industry – without any proven benefit.

This doesn’t mean that we won’t see any little BECCS projects in niche circumstances. One of these already exists: ADM is capturing carbon dioxide from ethanol fermentation in one of its refineries for use in CCS research. Capturing carbon dioxide from ethanol fermentation is relatively simple and cheap. If there happens to be some half-depleted nearby oil field suitable for enhanced oil recovery, some ethanol “CCS” projects could pop up here and there. But this has little to do with a “billion ton negative emissions” vision.

BECCS thus appears as one, albeit a particularly prominent, example of baseless techno-optimism leading to dangerous policy choices. Dangerous, that is, because hype about sci-fi solutions becomes a cover for the failure to curb fossil fuel burning and ecosystem destruction today.

Data monitoring and analysis – The crisis in São Paulo’s water sources (Probabit)

Situation as of Jan 25, 2015

4.2 millimeters of rain on Jan 24, 2015 over São Paulo’s reservoirs (weighted average).

305 billion liters (13.60%) of water in storage. In 24 hours, the volume rose by 4.4 billion liters (0.19%).

134 days until all the stored water runs out, with rainfall of 996 mm/year and the system’s current efficiency maintained.

66% is the reduction in consumption needed to balance the system under current conditions, with 33% losses in distribution.


Understanding the crisis

How to read this chart?

The points on the chart show 4,040 one-year intervals of accumulated rainfall and the corresponding change in total water stock (from January 1 of 2003/2004 up to today). The pattern shows that more rain moves the stock up and less rain moves it down, as one would expect.

This chart and the others on this page always refer to the total water storage capacity in São Paulo (2.24 trillion liters), that is, the sum of the reservoirs of the Cantareira, Alto Tietê, Guarapiranga, Cotia, Rio Grande and Rio Claro systems. Want to explore the data?
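A minimal sketch of how those points can be computed, assuming a daily series of records with rainfall in mm and stock in % (the field and function names here are illustrative, not from the site):

```javascript
// Build the (accumulated rain, stock change) pairs plotted on the chart.
// `days` is an array of daily records { rain_mm, stock_pct }, sorted by date.
function yearlyIntervals(days, windowDays = 365) {
  const points = [];
  for (let i = windowDays; i < days.length; i++) {
    const window = days.slice(i - windowDays + 1, i + 1);                   // the year ending at day i
    const rain = window.reduce((sum, d) => sum + d.rain_mm, 0);             // mm accumulated over one year
    const stockChange = days[i].stock_pct - days[i - windowDays].stock_pct; // percentage points over the same year
    points.push({ rain, stockChange });
  }
  return points;
}
```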

The region of accumulated rainfall between 1,400 mm and 1,600 mm per year concentrates most of the points observed from 2003 onward. That is the usual rainfall pattern the system was designed for. In that region, the system operates without large deviations from its equilibrium: at most 15% up or down in one year. Because it uses the variation over one year as its reference, this way of looking at the data removes the seasonal oscillation of rainfall and highlights climate variations of larger amplitude. See the year-by-year patterns.

A second layer of information in the same chart is the risk zones. The red zone is bounded by the current water stock in %. All points inside that area (with the frequency indicated on the right) therefore represent situations which, if repeated, will lead the system to collapse in less than one year. The yellow zone shows the incidence of cases which, if repeated, will lead to a shrinking stock. The system will only truly recover if new points appear above the yellow band.

To put the present moment in context and give a sense of trend, points connected in blue highlight the reading added today (accumulated rainfall and the variation between today and the same day last year) and the readings from 30, 60 and 90 days ago (in progressively lighter shades).


Discussion based on a simple model

Fitting a linear model to the observed cases shows a reasonable correlation between accumulated rainfall and the change in the water stock, as expected.
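A minimal sketch of such a fit, using ordinary least squares on the pairs produced above (names are illustrative; the site does not publish its exact fitting code):

```javascript
// Ordinary least-squares fit of stock change (y) on accumulated rain (x).
function linearFit(points) {
  const n = points.length;
  const meanX = points.reduce((s, p) => s + p.rain, 0) / n;
  const meanY = points.reduce((s, p) => s + p.stockChange, 0) / n;
  let sxy = 0, sxx = 0;
  for (const p of points) {
    sxy += (p.rain - meanX) * (p.stockChange - meanY);
    sxx += (p.rain - meanX) ** 2;
  }
  const slope = sxy / sxx;
  const intercept = meanY - slope * meanX;
  // Rainfall at which the fitted stock change is zero: the model's equilibrium point.
  const equilibriumRain = -intercept / slope;
  return { slope, intercept, equilibriumRain };
}
```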

At the same time, the wide dispersion in the system’s behavior is clear, especially in the rainfall range between 1,400 mm and 1,500 mm. Above 1,600 mm there are two clearly separated paths; the lower one corresponds to the period between 2009 and 2010, when the reservoirs were full and the excess rain could not be stored.

Besides more or less efficient management of the available water, combined variations in consumption, in losses and in the effectiveness of water capture may contribute to the observed fluctuations. There are, however, no data with which to examine the effect of each of these variables separately.

Simulation 1: Effect of increasing the water stock

In this simulation, the additional reserve of the Billings reservoir, with a volume of 998 billion liters (already excluding the “potable” arm of the Rio Grande reservoir), was hypothetically added to the supply system.

Increasing the available stock does not change the equilibrium point, but it does change the slope of the line that represents the relationship between rainfall and stock variation. The difference in slope between the blue line (simulated) and the red line (actual) shows the effect of enlarging the stock.

If Billings were not, today, a giant sewage dump, we might be out of the critical situation. It is worth stressing, however, that simply increasing the stock cannot stave off scarcity indefinitely if rainfall persists below the equilibrium point.

Simulation 2: Effect of improving efficiency

The only way to keep the stock stable when rainfall becomes scarcer is to change the system’s “efficiency curve”. In other words, it is necessary to consume less and adapt to less water entering the system.

The blue line in the adjacent chart indicates the axis around which the points would need to fluctuate for the system to balance on an annual supply of 1,200 mm of rain.

Efficiency can be improved by reducing consumption, reducing losses and improving water capture technology (for example, by restoring the riparian forests and springs around the water sources).

If the situation seen from 2013 to 2015 persists, with rainfall around 1,000 mm, it will be necessary to reach an efficiency curve far beyond anything achieved so far, above even the best cases ever observed.

With the “design” equilibrium at around 1,500 mm, the accounting goes roughly like this: Sabesp loses 500 mm (33% of the distributed water) and the population consumes 1,000 mm. To reach equilibrium quickly at 1,000 mm, consumption would have to be 500 mm, since the losses cannot be eliminated quickly and occur before consumption.

If one third of the distributed water were not systematically lost, there would be no crisis. The 500 mm of rain wasted every year by the precariousness of the distribution system are not missed when 1,500 mm falls, but at 1,000 mm every liter thrown away on one side is a liter that has to be saved on the other.
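Restated as a simple balance, using the article’s own round numbers (a sketch, with losses treated as fixed):

```latex
\text{rainfall (inflow)} \approx \text{losses} + \text{consumption}

1500~\text{mm} = 500~\text{mm} + 1000~\text{mm} \quad \text{(design equilibrium)}

1000~\text{mm} = 500~\text{mm} + 500~\text{mm} \quad \text{(losses unchanged, so consumption must halve)}
```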

Simulation 3: Current efficiency and the savings required

To estimate the current efficiency, the last 120 observations of the system’s behavior are used.

The current efficiency curve makes it possible to estimate the system’s present equilibrium point (the highlighted red point).

The blue point indicates the latest observation of accumulated annual rainfall. The difference between the two measures the size of the imbalance.

Just to stop the system from losing water, the withdrawal flow needs to be cut by 49%. Since that flow includes all losses, if the cut depends on consumption alone, the saving must be 66% if losses are 33%, or 56% if losses are 17%.
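A minimal sketch of the mechanism behind that arithmetic, assuming losses stay fixed while only consumption is cut (how the loss share is defined is not spelled out on the page, so the function illustrates the reasoning rather than reproducing the published figures exactly):

```javascript
// How much consumption must fall if losses cannot be reduced and
// total withdrawal has to drop by `withdrawalCut` (e.g. 0.49).
// `lossShare` is the share of the current withdrawal lost in distribution (e.g. 0.33).
function requiredConsumptionCut(withdrawalCut, lossShare) {
  const consumptionShare = 1 - lossShare;                       // share that reaches consumers today
  const newConsumptionShare = (1 - withdrawalCut) - lossShare;  // losses unchanged, consumption absorbs the whole cut
  return 1 - newConsumptionShare / consumptionShare;            // required fractional reduction in consumption
}
```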

It seems incredible that the system’s efficiency should be so low in the middle of such a serious crisis. Is the attempt to curb consumption actually increasing consumption? Do smaller, shallower volumes evaporate more? Have people still not grasped the scale of the disaster?


Prognóstico

Supondo que novos estoques de água não serão incorporados no curto prazo, o prognóstico sobre se e quando a água vai acabar depende da quantidade de chuva e da eficiência do sistema.

O gráfico mostra quantos dias restam de água em função do acumulado de chuva, considerando duas curvas de eficiência: a média e a corrente (estimada a partir dos últimos 120 dias).

O ponto em destaque considera a observação mais recente de chuva acumulada no ano e mostra quantos dias restam de água se persistirem as condições atuais de chuva e de eficiência.
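
Um esboço mínimo, com nomes e números hipotéticos (não é o modelo do estudo), de como uma projeção desse tipo pode ser calculada:

```python
# Esboço hipotético: dias restantes de água dado o estoque atual e a
# diferença entre o que entra (função da chuva e da eficiência) e o que sai.
def dias_restantes(estoque_hm3, entrada_diaria_hm3, retirada_diaria_hm3):
    saldo = entrada_diaria_hm3 - retirada_diaria_hm3
    if saldo >= 0:
        return float("inf")  # estoque estável ou em recuperação
    return estoque_hm3 / -saldo

# Exemplo com números inventados, apenas para ilustrar a conta:
print(round(dias_restantes(estoque_hm3=300.0,
                           entrada_diaria_hm3=1.5,
                           retirada_diaria_hm3=3.0)))  # ~200 dias
```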

O prognóstico é uma referência que varia de acordo com as novas observações e não tem probabilidade definida. Trata-se de uma projeção para melhor visualizar as condições necessárias para escapar do colapso.

Porém, lembrando que a média histórica de chuvas em São Paulo é de 1.441 mm ao ano, uma curva que cruze esse limite significa um sistema com mais de 50% de chances de colapsar em menos de um ano. Somos capazes de evitar o desastre?


Os dados

O ponto de partida são os dados divulgados diariamente pela Sabesp. A série de dados original atualizada está disponível aqui.

Porém, há duas importantes limitações nesses dados que podem distorcer a interpretação da realidade: 1) a Sabesp usa somente porcentagens para se referir a reservatórios com volumes totais muito diferentes; 2) a entrada de novos volumes não altera a base de cálculo sobre a qual essa porcentagem é medida.

Por isso, foi necessário corrigir as porcentagens da série de dados original em relação ao volume total atual, uma vez que os volumes que não eram acessíveis se tornaram acessíveis e, convenhamos, sempre estiveram lá nas represas. A série corrigida pode ser obtida aqui. Ela contém uma coluna adicional com os dados dos volumes reais (em bilhões de litros: hm³).

Além disso, decidimos tratar os dados de forma consolidada, como se toda a água estivesse em um único grande reservatório. A série de dados usada para gerar os gráficos desta página contém apenas a soma ponderada do estoque (%) e da chuva (mm) diários e também está disponível.
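
Um esboço de como a correção de base e a consolidação podem ser feitas (reservatórios, porcentagens e volumes abaixo são hipotéticos; a metodologia e as séries reais são as descritas acima):

```python
# Esboço hipotético da correção de base e da consolidação dos reservatórios.
# Volumes em hm3 (1 hm3 = 1 bilhão de litros); números apenas ilustrativos.
reservatorios = {
    # nome: (porcentagem divulgada, volume total antigo, volume total atual)
    "Cantareira": (10.0, 980.0, 1200.0),  # base aumentou com a entrada do volume morto
    "Alto Tiete": (20.0, 570.0, 570.0),
}

estoque_total_hm3 = 0.0
capacidade_total_hm3 = 0.0
for nome, (pct, vol_antigo, vol_atual) in reservatorios.items():
    # corrige a porcentagem divulgada para a base de cálculo atual
    pct_corrigida = pct * vol_antigo / vol_atual
    estoque_total_hm3 += pct_corrigida / 100.0 * vol_atual
    capacidade_total_hm3 += vol_atual

# estoque consolidado, como se toda a água estivesse em um único reservatório
print(round(100.0 * estoque_total_hm3 / capacidade_total_hm3, 1))  # 12.0 (%)
```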

As correções realizadas eliminam os picos causados pelas entradas dos volumes mortos e permitem ver com mais clareza o padrão de queda do estoque em 2014.


Padrões ano a ano


Média e quartis do estoque durante o ano


Sobre este estudo

Preocupado com a escassez de água, comecei a estudar o problema ao final de 2014. Busquei uma abordagem concisa e consistente de apresentar os dados, dando destaque para as três variáveis que realmente importam: a chuva, o estoque total e a eficiência do sistema. O site entrou no ar em 16 de janeiro de 2015. Todos os dias, os modelos e os gráficos são refeitos com as novas informações.

Espero que esta página ajude a informar a real dimensão da crise da água em São Paulo e estimule mais ações para o seu enfrentamento.

Mauro Zackiewicz

maurozac@gmail.com

scientia probabit | laboratório de dados essenciais

O que esperar da ciência em 2015 (Zero Hora)

Apostamos em cinco coisas que tendem a aparecer neste ano

19/01/2015 | 06h01

Foto: SpaceX/Youtube

Em 2014, a ciência conseguiu pousar em um cometa, descobriu que estava errada sobre a evolução genética das aves, revelou os maiores fósseis da história. Miguel Nicolelis apresentou seu exoesqueleto na Copa do Mundo, o satélite brasileiro CBERS-4, em parceria com a China, foi ao espaço com sucesso, um brasileiro trouxe a principal medalha da matemática para casa.

Mas e em 2015, o que veremos? Apostamos em cinco coisas que poderão aparecer neste ano.

Foguetes reusáveis


Se queremos colonizar Marte, não adianta passagem só de ida. Esses foguetes, capazes de ir e voltar, são a promessa para transformar o futuro das viagens espaciais. Veremos se a empresa SpaceX, que já está nessa, consegue.

Robôs em casa


Os japoneses da Softbank começam a vender, em fevereiro, um robô humanoide chamado Pepper. Ele usa inteligência artificial para reconhecer o humor do dono e fala quatro línguas. Apesar de ser mais um ajudante do que um cara que faz, logo logo aprenderá novas funções.

Universo invisível


O Grande Colisor de Hádrons vai voltar a funcionar em março e terá o dobro da potência para colidir partículas. Uma das possibilidades é que ele ajude a descobrir novas superpartículas que, talvez, componham a matéria escura. Seria o primeiro novo estado da matéria descoberto em um século.

Cura para o ebola


Depois da crise de 2014, pode ser que as vacinas para o ebola comecem a funcionar e salvem muitas vidas na África. Vale o mesmo para a aids. O HIV está cercado, esperamos que a ciência finalmente o vença neste ano.

Discussões climáticas


2014 foi um dos anos mais quentes da história e, do jeito que a coisa vai, 2015 seguirá a mesma trilha. Em dezembro, em Paris, o mundo vai discutir um acordo para tentar reduzir as emissões de gases de efeito estufa. São medidas para serem implementadas a partir de 2020. Que sejam sensatos nossos líderes.

How Mathematicians Used A Pump-Action Shotgun to Estimate Pi (The Physics arXiv Blog)

The Physics arXiv Blog

If you’ve ever wondered how to estimate pi using a Mossberg 500 pump-action shotgun, a sheet of aluminium foil and some clever mathematics, look no further

Imagine the following scenario. The end of civilisation has occurred, zombies have taken over the Earth and all access to modern technology has ended. The few survivors suddenly need to know the value of π and, being a mathematician, they turn to you. What do you do?

If ever you find yourself in this situation, you’ll be glad of the work of Vincent Dumoulin and Félix Thouin at the Université de Montréal in Canada. These guys have worked out how to calculate an approximate value of π using the distribution of pellets from a Mossberg 500 pump-action shotgun, which they assume would be widely available in the event of a zombie apocalypse.

The principle is straightforward. Imagine a square with sides of length 1 and which contains an arc drawn between two opposite corners to form a quarter circle. The area of the square is 1 while the area of the quarter circle is π/4.

Next, sprinkle sand or rice over the square so that it is covered with a random distribution of grains. Then count the number of grains inside the quarter circle and the total number that cover the entire square.

The ratio of these two numbers is an estimate of the ratio between the area of the quarter circle and the square, in other words π/4.

So multiplying this ratio by 4 gives you π, or at least an estimate of it. And that’s it.
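
For readers who want to see the method in code, here is a minimal sketch of the same quarter-circle Monte Carlo estimate using pseudo-random points instead of grains of sand (the variable names and the seed are ours, purely illustrative):

```python
# Minimal Monte Carlo estimate of pi: scatter random points in the unit
# square and count how many fall inside the quarter circle of radius 1.
import random

def estimate_pi(n_points=30_000, seed=42):
    random.seed(seed)
    inside = 0
    for _ in range(n_points):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_points

print(estimate_pi())  # typically within a few tenths of a per cent of pi
```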

This technique is known as a Monte Carlo approximation (after the casino where the uncle of the physicist who developed it used to gamble). And it is hugely useful in all kinds of simulations.

Of course, the accuracy of the technique depends on the distribution of the grains on the square. If they are truly random, then a mere 30,000 grains can give you an estimate of π which is within 0.07 per cent of the actual value.

Dumoulin and Thouin’s idea is to use the distribution of shotgun pellets rather than sand or rice (which would presumably be in short supply in the post-apocalyptic world). So these guys set up an experiment consisting of a 28-inch barrel Mossberg 500 pump-action shotgun aimed at a sheet of aluminium foil some 20 metres away.

They loaded the gun with cartridges composed of 3 dram equivalent of powder and 32 grams of #8 lead pellets. When fired from the gun, these pellets have an average muzzle velocity of around 366 metres per second.

Dumoulin and Thouin then fired 200 shots at the aluminium foil, peppering it with 30,857 holes. Finally, they used the position of these holes in the same way as the grains of sand or rice in the earlier example, to calculate the value of π.

They immediately have a problem, however. The distribution of pellets is influenced by all kinds of factors, such as the height of the gun, the distance to the target, wind direction and so on. So this distribution is not random.

To get around this, they are able to fall back on a technique known as importance sampling. This is a trick that allows mathematicians to estimate the properties of one type of distribution while using samples generated by a different distribution.

Of their 30,000 pellet holes, they chose 10,000 at random to perform this estimation trick. They then use the remaining 20,000 pellet holes to get an estimate of π, safe in the knowledge that importance sampling allows the calculation to proceed as if the distribution of pellets had been random.
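
The details of Dumoulin and Thouin’s estimator are in their paper; as a toy illustration of the general importance-sampling trick (not their method), points drawn from a non-uniform distribution can still be used for the quarter-circle estimate if each point is reweighted by the ratio between the target (uniform) density and the density that actually generated it.

```python
# Toy importance-sampling version of the quarter-circle estimate: points come
# from a non-uniform proposal and are reweighted by uniform / proposal density.
import random

def sample_linear():
    """Draw from the density f(t) = 0.5 + t on [0, 1] via inverse-CDF sampling."""
    u = random.random()
    return (-1.0 + (1.0 + 8.0 * u) ** 0.5) / 2.0

def estimate_pi_importance(n_points=100_000, seed=0):
    random.seed(seed)
    total_w = inside_w = 0.0
    for _ in range(n_points):
        x, y = sample_linear(), sample_linear()
        w = 1.0 / ((0.5 + x) * (0.5 + y))  # uniform density (=1) / proposal density
        total_w += w
        if x * x + y * y <= 1.0:
            inside_w += w
    return 4.0 * inside_w / total_w  # self-normalised importance-sampling estimate

print(estimate_pi_importance())  # lands close to 3.14
```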

The result? Their value of π is 3.131, which is just 0.33 per cent off the true value. “We feel confident that ballistic Monte Carlo methods constitute reliable ways of computing mathematical constants should a tremendous civilization collapse occur,” they conclude.

Quite! Other methods are also available.

Ref: arxiv.org/abs/1404.1499 : A Ballistic Monte Carlo Approximation of π

Butterflies, Ants and the Internet of Things (Wired)

[Isn’t it scary that there are bright people who are that innocent? Or perhaps this is just a propaganda piece. – RT]

BY GEOFF WEBB, NETIQ

12.10.14  |  12:41 PM


Buckminster Fuller once wrote, “there is nothing in the caterpillar that tells you it’s going to be a butterfly.”  It’s true that often our capacity to look at things and truly understand their final form is very limited.  Nor can we necessarily predict what happens when many small changes combine – when small pebbles roll down a hillside and turn into a landslide that dams a river and floods a plain.

This is the situation we face now as we try to understand the final form and impact of the Internet of Things (IoT). Countless small, technological pebbles have begun to roll down the hillside from initial implementation to full realization.  In this case, the “pebbles” are the billions of sensors, actuators, and smart technologies that are rapidly forming the Internet of Things. And like the caterpillar in Fuller’s quote, the final shape of the IoT may look very different from our first guesses.

In whatever the world looks like as the IoT begins to bear full fruit, the experience of our lives will be markedly different.  The world around us will not only be aware of our presence, it will know who we are, and it will react to us, often before we are even aware of it.  The day-to-day process of living will change because almost every piece of technology we touch (and many we do not) will begin to tailor their behavior to our specific needs and desires.  Our car will talk to our house.

Walking into a store will be very different, as the displays around us could modify their behavior based on our preferences and buying habits.  The office of the future will be far more adaptive, less rigid, more connected – the building will know who we are and will be ready for us when we arrive.  Everything, from the way products are built and packaged and the way our buildings and cities are managed, to the simple process of travelling around, interacting with each other, will change and change dramatically. And it’s happening now.

We’re already seeing mainstream manufacturers building IoT awareness into their products, such as Whirlpool building Internet-aware washing machines, and specialized IoT consumer tech such as LIFX light bulbs which can be managed from a smartphone and will respond to events in your house. Even toys are becoming more and more connected as our children go online at even younger ages.  And while many of the consumer purchases may already be somehow “IoT” aware, we are still barely scratching the surface of the full potential of a fully connected world. The ultimate impact of the IoT will run far deeper, into the very fabric of our lives and the way we interact with the world around us.

One example is the German port of Hamburg. The Hamburg port Authority is building what they refer to as a smartPort. Literally embedding millions of sensors in everything from container handling systems to street lights – to provide data and management capabilities to move cargo through the port more efficiently, avoid traffic snarl-ups, and even predict environmental impacts through sensors that respond to noise and air pollution.

Securing all those devices and sensors will require a new way of thinking about technology and the interactions of “things,” people, and data. What we must do, then, is to adopt an approach that scales to manage the staggering numbers of these sensors and devices, while still enabling us to identify when they are under attack or being misused.

This is essentially the same problem we already face when dealing with human beings – how do I know when someone is doing something they shouldn’t? Specifically how can I identify a bad person in a crowd of law-abiding citizens?

The best answer is what I like to call, the “Vegas Solution.” Rather than adopting a model that screens every person as they enter a casino, the security folks out in Nevada watch for behavior that indicates someone is up to no good, and then respond accordingly. It’s low impact for everyone else, but works with ruthless efficiency (as anyone who has ever tried counting cards in a casino will tell you.)

This approach focuses on known behaviors and looks for anomalies. It is, at its most basic, the practical application of “identity.” If I understand the identity of the people I am watching, and as a result, their behavior, I can tell when someone is acting badly.

Now scale this up to the vast number of devices and sensors out there in the nascent IoT. If I understand the “identity” of all those washing machines, smart cars, traffic light sensors, industrial robots, and so on, I can determine what they should be doing, see when that behavior changes (even in subtle ways such as how they communicate with each other) and respond quickly when I detect something potentially bad.
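
As a toy sketch of that idea (the device names, fields, and thresholds below are invented for illustration and are not tied to any particular product or standard): keep a per-identity profile of expected behaviour and flag readings that deviate from it.

```python
# Toy sketch: per-device behavioural profile with a simple anomaly check.
# All device names, fields and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    device_id: str
    expected_peers: frozenset      # identities this device normally talks to
    max_messages_per_hour: int

def is_anomalous(profile: DeviceProfile, peer: str, messages_last_hour: int) -> bool:
    """Flag traffic to an unknown peer or an unusual message rate."""
    return (peer not in profile.expected_peers
            or messages_last_hour > profile.max_messages_per_hour)

washer = DeviceProfile("washing-machine-42",
                       frozenset({"home-hub", "vendor-cloud"}),
                       max_messages_per_hour=60)

print(is_anomalous(washer, "home-hub", 12))        # False: normal behaviour
print(is_anomalous(washer, "unknown-host", 12))    # True: unexpected peer
print(is_anomalous(washer, "vendor-cloud", 5000))  # True: unusual traffic volume
```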

The approach is sound; in fact, it’s probably the only approach that will scale to meet the complexity of all those billions upon billions of “things” that make up the IoT. The challenge is that a concept of identity must be applied to far more “things” than we have ever managed before. If there is an “Internet of Everything,” there will need to be an “Identity of Everything” to go with it. Those identities will tell us what each device is, when it was created, how it should behave, what it is capable of, and so on. There are already proposed standards for this kind of thing, such as the UK’s HyperCat standard, which lets one device figure out what another device it can talk to actually does and therefore what kind of information it might want to share.

Where things get really interesting, however, is when we start to watch the interactions of all these identities – and especially the interactions of the “thing” identities and our own. How we humans, as Internet users, interact with all the devices around us, compared with how the “things” interact with each other, will provide even more insight into our lives, wants, and behaviors. Watching how I interact with my car, and the car with the road, and so on, will help manage city traffic far more efficiently than broad-brush traffic studies. Likewise, as the wearable technology I have on my person (or in my person) interacts with the sensors around me, so my experience of almost everything, from shopping to public services, can be tailored and managed more efficiently. This, ultimately, is the promise of the IoT: a world that is responsive, intelligent and tailored for every situation.

As we continue to add more and more sensors and smart devices, the potential power of the IoT grows.  Many small, slightly smart things have a habit of combining to perform amazing feats. Taking another example from nature, leaf-cutter ants (tiny in the extreme) nevertheless combine to form the second most complex social structures on earth (after humans) and can build staggeringly large homes.

When we combine the billions of smart devices into the final IoT, we should expect to be surprised by the final form all those interactions take, and by the complexity of the thing we create.  Those things can and will work together, and how they behave will be defined by the identities we give them today.

Geoff Webb is Director of Solution Strategy at NetIQ.

USP lança projeto “Chuva Online” (IAG)

Com mini radares meteorológicos, Instituto de Astronomia, Geofísica e Ciências Atmosféricas (IAG) testa tecnologia de baixo custo para pequenas e grandes cidades

A apenas alguns dias do início do verão, a USP está lançando o projeto Chuva Online, que conta com dois mini radares meteorológicos instalados em prédios da Universidade de São Paulo. O projeto é encabeçado pelo Instituto de Astronomia, Geofísica e Ciências Atmosféricas (IAG) da USP, sob a coordenação do professor Carlos Morales.

A cerimônia de lançamento do projeto acontece dia 16 de dezembro, às 10:00, na Escola de Artes, Ciências e Humanidades  (EACH) da USP, onde um dos mini radares foi instalado na caixa d’água da Escola. O outro equipamento foi instalado no topo da torre do Pelletron, no Instituto de Física (IF), na Cidade Universitária.

Um dos objetivos do projeto é testar uma nova tecnologia de monitoramento meteorológico capaz de monitorar a chuva com alta resolução espacial e temporal, muito útil para cidades de pequeno e médio porte. Os mini radares foram configurados para terem um alcance de 21 quilômetros com uma resolução de 90 metros e varreduras a cada 5 minutos.

“É uma tecnologia simples que poderá ser adotada por várias cidades  e por empresas que precisam saber onde está chovendo e se existe probabilidade de ocorrer alagamentos em ruas e bairros, por exemplo”, explica o professor Carlos Morales (IAG). Cada equipamento tem custo de cerca de 350 mil reais, enquanto um radar meteorológico convencional pode custar até 5 milhões de reais. Outra vantagem é que o equipamento, com peso de 100 kg, é bastante portátil e pode ser alimentado pela rede elétrica comum.

Juntos, os dois mini radares coletarão informações meteorológicas da Região Metropolitana de São Paulo. Os dados estarão disponíveis em tempo real e online, no portal do projeto que será apresentado durante a inauguração. Na EACH, dois monitores de alta definição exibirão as informações obtidas pelo radar, enquanto no IAG esses dados serão mostrados em um videowall.

O Chuva Online é um dos projetos que compõem o Sistema Integrado de Gestão da Infraestrutura Urbana (SIGINURB) da Prefeitura do Campus da Capital da USP (PUSP-C). Coordenado pelo professor Sidnei Martini (Escola Politécnica da USP), o SIGINURB busca aperfeiçoar a operação da infraestrutura urbana. Com o Chuva Online, a Prefeitura do Campus da Capital testará tecnologias que subsidiam o gerenciamento de pequenas cidades.

Ambos os projetos interagem com ações do Centro de Estudos e Pesquisas em Desastres da USP (CEPED/USP). Com a aprovação do projeto PRÓ-ALERTA do CEPED USP pela Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), coordenado pelos professores Carlos Morales e Hugo Yoshizaki, a rede do Chuva Online também será utilizada na capacitação de especialistas do Centro Nacional de Monitoramento e Alertas de Desastres Naturais (Cemaden) e da Defesa Civil do Estado de São Paulo. Com esses radares e essa tecnologia, os cursos de graduação e pós-graduação da USP passam a contar com ferramental importante para capacitar alunos na área de meteorologia por radar, além de viabilizar o desenvolvimento de aplicativos e fazer previsão de tempo de curtíssimo prazo.

O mini radar no IF/USP foi instalado por meio de projeto do IAG com a PUSP-C. Na EACH, foi feita uma parceria do IAG com a empresa Climatempo e a Fundespa. Essa rede de mini radares também passará a receber dados de um terceiro radar meteorológico, a ser instalado no Parque da Água Funda, onde o IAG mantém sua Estação Meteorológica. Esse terceiro radar será operado pela Fundação Centro Tecnológico de Hidráulica (FCTH), com apoio do governo francês, e está previsto para ser instalado em fevereiro de 2015.

Durante o evento de inauguração será apresentado ao público o portal Chuva Online e suas funcionalidades em mapas geo-referenciados com alta resolução, além de detalhes sobre os projetos Chuva Online, SIGINURB e CEPED da USP e da Climatempo.

Para mais informações, os interessados podem entrar em contato com o professor Carlos Morales no e-mail: carlos.morales@iag.usp.br e telefone (11) 3091-4713.

(IAG)

Jovens ‘biohackers’ instalam chips na mão para abrir a porta de casa (Folha de S.Paulo)

LETÍCIA MORI

DE SÃO PAULO

07/12/2014 02h00

Paulo Cesar Saito, 27, não usa mais chave para entrar em seu apartamento, em Pinheiros. Desde o mês passado, a porta “reconhece” quando ele chega. Basta espalmar a mão na frente da fechadura e ela se abre.

A mágica está no chip que ele próprio (com a ajuda de uma amiga que estuda medicina) implantou na mão. Pouco maior que um grão de arroz, o chip tem tecnologia de reconhecimento por radiofrequência. Quando está próximo, uma base na porta desencadeia uma ação pré-programada. No caso, abrir a fechadura.

Instalar modificações tecnológicas no próprio corpo é uma das atividades de um movimento que surgiu em 2008 nos EUA e é chamado mundo afora de biohacking: se envolver com experimentos em biologia fora de grandes laboratórios.

São basicamente os mesmos nerds que desenvolvem geringonças eletrônicas na garagem e se aprofundam no conhecimento de sistemas de informática. Só que agora eles se aventuram no campo da biotecnologia.

Os grupos de DIYBio (do-it-yourself biology, ou “biologia faça-você-mesmo”) importam conceitos do movimento hacker: acesso à informação, divulgação do conhecimento e soluções simples e baratas para melhorar a vida. E são abertos para cientistas amadores —jovens na graduação ou pessoas não necessariamente formadas em biologia.

Saito, por exemplo, começou a cursar física e meteorologia na USP, mas agora se dedica somente à sua start-up na área de tecnologia. O seu envolvimento com o biohacking se resume a modificações corporais –ele também vai instalar um pequeno ímã no dedo. “Como trabalho com equipamentos eletrônicos, tomo muitos choques. O ímã faz você sentir campos magnéticos, evitando o choque”, diz.

Já seu sócio, Erico Perrella, 23, graduando em química ambiental na USP, é um dos principais entusiastas da DIYBio em São Paulo. Ele também tem uma microcicatriz do chip que instalou junto com o amigo. O aparelhinho tem 12 mm de comprimento e uma cobertura biocompatível para que não seja rejeitado pelo corpo. A proteção impede que o chip se mova de lugar e, por não grudar no tecido interno, é de fácil remoção. Perrella também é um dos organizadores de um grupo de DIYBio que se encontra toda segunda-feira.

O movimento está começando na capital paulista, mas mundialmente já chama a atenção —há laboratórios em cerca de 50 cidades, a maioria nos EUA e na Europa. O grupo de Perrella trabalha para montar em São Paulo o primeiro “wetlab” de DIYBio: um espaço estéril, com equipamentos específicos para materiais biológicos.

Eles se reúnem no Garoa Hacker Clube, espaço para entusiastas em tecnologia. O local, no entanto, tem infraestrutura voltada para projetos com hardware, eletrônica etc. “Para um ‘wetlab’ é preciso uma área limpa, que parece mais uma cozinha do que uma oficina”, diz o estudante de química Otto Werner Heringer, 24, integrante do grupo. “O Garoa já tem uma área assim, nossa ideia é levar e deixar mais equipamentos [no local]”.

Aproveitar espaços “geeks” é comum no movimento. O Open Wetlab de Amsterdam, por exemplo, começou como parte da Waag Society, um instituto sem fins lucrativos que promove arte, ciência e tecnologia.

Certos experimentos exigem equipamentos complexos, que podem custar milhares de dólares. “A solução é montar algumas coisas e consertar equipamentos velhos que a universidade iria jogar fora”, explica Heringer.

Grande parte dos biohackers se dedica mais à montagem dos equipamentos do que a experimentos. Heringer já fez uma centrífuga usando uma peça impressa em 3D encaixada em uma furadeira. Agora está montando um contador de células. Ajudado por amigos, Perrella criou biorreatores com material reciclado de uma mineradora.

Para esses jovens entusiasmados, são grandes as vantagens de fazer ciência fora da academia ou da indústria.

Longe do controle minucioso da universidade, é possível desenvolver projetos sem a aprovação de diversos comitês e conselhos. “O ambiente [acadêmico] é muito engessado. Você fica desestimulado”, diz Heringer.

O trabalho dos amadores acaba até contribuindo para a ciência “formal”. Heringer está criando com amigos uma pipetadora automática no InovaLab da Escola Politécnica da USP baseada em um projeto de DIYBio e financiada por um fundo de ex-alunos. “A gente nunca conseguiria financiamento pelos meios normais da USP. E, se conseguisse, ia demorar!”, diz ele.

SEGURANÇA

O amplo acesso gera preocupações: laboratórios amadores não poderiam criar organismos nocivos? Defensores dizem que, para quem pratica DIYBio, interessa manter tudo dentro dos padrões de segurança –se algo der errado, o controle vai aumentar e tornar a vida mais difícil.

Não existe no Brasil uma regulação para laboratórios amadores. Nos EUA, o FBI monitora o movimento e há restrições ao uso de alguns materiais, porém não há regulação específica.

O cientista francês Thomas Landrain, que estuda o movimento, argumenta em sua pesquisa que os espaços ainda não têm sofisticação suficiente para gerar problemas.

Mas, apesar da limitação técnica, os laboratórios permitem inúmeras possibilidades. “Quem se dedica tem uma crença profunda no potencial transformador dessas novas tecnologias”, explica Perrella, que tem um projeto de mineração com uso de bactérias. Há grupos que focam a saúde, criando sensores de contaminação em alimentos ou “mapas biológicos” que podem monitorar a evolução de doenças.

É possível trabalhar com DNA Barcode, método que identifica a qual espécie pertence um tecido. “Daria para checar qual é a carne da esfirra do Habib’s”, diz Perrella, citando um experimento com análise de carne que já está sendo feito no OpenWetlab, em Amsterdam. Dá até para descobrir qual é o vizinho que não recolhe o cocô do cachorro. Foi o que fez o alemão Sascha Karberg, comparando pelo de cães da vizinhança com o “presente” à sua porta. O método usado em projetos como esse pode ser encontrado por outros “biohackers”. O risco é aumentar as brigas entre vizinhos.

Geoengineering Gone Wild: Newsweek Touts Turning Humans Into Hobbits To Save Climate (Climate Progress)

POSTED ON DECEMBER 5, 2014 AT 9:37 AM


A Newsweek cover story touts genetically engineering humans to be smaller, with better night vision (like, say, hobbits) to save the Earth. Matamata, New Zealand, or “Hobbiton,” site created for filming Hollywood blockbusters The Hobbit and Lord of the Rings. CREDIT: SHUTTERSTOCK

Newsweek has an entire cover story devoted to raising the question, “Can Geoengineering Save the Earth?” After reading it, though, you may not realize the answer is a resounding “no.” In part that’s because Newsweek manages to avoid quoting even one of the countless general critics of geoengineering in its 2700-word (!) piece.

Geoengineering is not a well-defined term, but at its broadest, it is the large-scale manipulation of the Earth and its biosphere to counteract the effects of human-caused global warming. Global warming itself is geo-engineering — originally unintentional, but now, after decades of scientific warnings, not so much.

I have likened geoengineering to a dangerous, never tested, course of chemotherapy prescribed to treat a condition curable through diet and exercise — or, in this case, greenhouse gas emissions reduction. If your actual doctor were to prescribe such a treatment, you would get another doctor.

The media likes geoengineering stories because they are clickbait involving all sorts of eye-popping science fiction (non)solutions to climate change that don’t actually require anything of their readers (or humanity) except infinite credulousness. And so Newsweek informs us that adorable ants might solve the problem or maybe phytoplankton can if given Popeye-like superstrength with a diet of iron or, as we’ll see, maybe we humans can, if we allow ourselves to be turned into hobbit-like creatures. The only thing they left out was time-travel.

The author does talk to an unusually sober expert supporter of geoengineering, climatologist Ken Caldeira. Caldeira knows that of all the proposed geoengineering strategies, only one makes even the tiniest bit of sense — and he knows even that one doesn’t make much sense. That would be the idea of spewing vast amounts of tiny particulates (sulfate aerosols) into the atmosphere to block sunlight, mimicking the global temperature drops that follow volcanic eruptions. But they note the caveat: “that said, Caldeira doesn’t believe any method of geoengineering is really a good solution to fighting climate change — we can’t test them on a large scale, and implementing them blindly could be dangerous.”

Actually, it’s worse than that. As Caldeira told me in 2009, “If we keep emitting greenhouse gases with the intent of offsetting the global warming with ever increasing loadings of particles in the stratosphere, we will be heading to a planet with extremely high greenhouse gases and a thick stratospheric haze that we would need to maintain more-or-less indefinitely. This seems to be a dystopic world out of a science fiction story.”

And the scientific literature has repeatedly explained that the aerosol-cooling strategy — or indeed any large-scale effort to manipulate sunlight — is very dangerous. Just last month, the UK Guardian reported that the aerosol strategy “risks ‘terrifying’ consequences including droughts and conflicts,” according to recent studies.

“Billions of people would suffer worse floods and droughts if technology was used to block warming sunlight, the research found.”

And remember, this dystopic world where billions suffer is the best geoengineering strategy out there. And it still does nothing to stop the catastrophic acidification of the ocean.

There simply is no rational or moral substitute for aggressive greenhouse gas cuts. But Newsweek quickly dispenses with that supposedly “seismic shift in what has become a global value system” so it can move on to its absurdist “reimagining of what it means to be human”:

In a paper released in 2012, S. Matthew Liao, a philosopher and ethicist at New York University, and some colleagues proposed a series of human-engineering projects that could make our very existence less damaging to the Earth. Among the proposals were a patch you can put on your skin that would make you averse to the flavor of meat (cattle farms are a notorious producer of the greenhouse gas methane), genetic engineering in utero to make humans grow shorter (smaller people means fewer resources used), technological reengineering of our eyeballs to make us better at seeing at night (better night vision means lower energy consumption)….

Yes, let’s turn humans into hobbits (who are “about 3 feet tall” and “their night vision is excellent“). Anyone can see that could easily be done for billions of people in the timeframe needed to matter. Who could imagine any political or practical objection?

Now you may be thinking that Newsweek can’t possibly be serious devoting ink to such nonsense. But if not, how did the last two paragraphs of the article make it to print:

Geoengineering, Liao argues, doesn’t address the root cause. Remaking the planet simply attempts to counteract the damage that’s been done, but it does nothing to stop the burden humans put on the planet. “Human engineering is more of an upstream solution,” says Liao. “You get right to the source. If we’re smaller on average, then we can have a smaller footprint on the planet. You’re looking at the source of the problem.”

It might be uncomfortable for humans to imagine intentionally getting smaller over generations or changing their physiology to become averse to meat, but why should seeding the sky with aerosols be any more acceptable? In the end, these are all actions we would enact only in worst-case scenarios. And when we’re facing the possible devastation of all mankind, perhaps a little humanity-wide night vision won’t seem so dramatic.

Memo to Newsweek: We are already facing the devastation of all mankind. And science has already provided the means of our “rescue,” the means of reducing “the burden humans put on the planet” — the myriad carbon-free energy technologies that reduce greenhouse gas emissions. Perhaps LED lighting would make a slightly more practical strategy than reengineering our eyeballs, though perhaps not one dramatic enough to inspire one of your cover stories.

As Caldeira himself has said elsewhere of geoengineering, “I think that 99% of our effort to avoid climate change should be put on emissions reduction, and 1% of our effort should be looking into these options.” So perhaps Newsweek will consider 99 articles on the real solutions before returning to the magical thinking of Middle Earth.

Cidade submarina projetada no Japão pode abrigar 5 mil moradores (Portal do Meio Ambiente)

PUBLICADO  21 NOVEMBRO 2014

Projeto arquitetônico de cidade submarina: alternativa para 2030 (Foto: AFP)

Uma empresa de construção japonesa diz que, no futuro, os seres humanos podem viver em grandes complexos habitacionais submarinos.

Pelo projeto, cerca de 5 mil pessoas poderiam viver e trabalhar em modernas versões da cidade perdida da Atlântida.

As construções teriam hotéis, espaços residenciais e conjuntos comerciais, informou o site Business Insider.

O grande globo que flutua na superfície do mar, mas pode ser submerso em mau tempo, seria o centro de uma estrutura espiral gigantesca que mergulha a profundidades de até 4 mil metros.

A espiral formaria um caminho de 15 quilômetros, de um edifício até o fundo do oceano, o que poderia servir como uma fábrica para aproveitar recursos como metais e terras raras.

Os visionários da construtora Shimizu dizem que seria possível usar micro-organismos para converter dióxido de carbono capturado na superfície em metano.

Projeto arquitetônico de cidade submarina: alternativa para 2030 (Foto: AFP)

Energia. O conceito foi desenvolvido em conjunto com várias organizações, incluindo a Universidade de Tóquio e a agência japonesa de ciência e tecnologia.

A grande diferença de temperaturas da água entre o topo e o fundo do mar poderia ser usada para gerar energia.

A construtora Shimizu diz que a cidade submarina custaria cerca de três trilhões de ienes (ou US$ 25 bilhões), e toda a tecnologia poderia estar disponível em 2030.

A empresa já projetou uma metrópole flutuante e um anel de energia solar ao redor da lua.

Fonte: Estadão.

Em site, indígenas ensinam sua história e derrubam preconceitos (Estadão)

Índio Educa publica material didático multimídia sobre histórias, tradições e lutas de povos do Brasil

Sempre que o índio xucuru Casé Angatu deixa Ilhéus, na Bahia, para oferecer em São Paulo um curso sobre culturas indígenas, ele ouve de algum participante: “Vocês comem pessoas?”. De tão acostumado a ser lembrado pelos estereótipos, Casé ri, disfarça e aproveita a oportunidade para apresentar ao grupo, na frente do Pátio do Colégio, o projeto Índio Educa. No site, indígenas de todo o Brasil produzem material didático multimídia sobre suas histórias, tradições e lutas.

Veja o texto na íntegra em: http://educacao.estadao.com.br/noticias/geral,em-site-indigenas-ensinam-sua-historia,1601271

(Estado S.Paulo)

Manifestações neozapatistas (Fapesp)

24.11.2014 

Para além das reivindicações contra os gastos públicos na organização da Copa do Mundo e por melhorias no transporte, na saúde e educação, as manifestações de junho de 2013 no Brasil ressaltaram uma expressão simbólica das articulações do chamado “net-ativismo”, expressão-chave de um estudo financiado pela FAPESP. No vídeo produzido pela equipe de Pesquisa FAPESP, o sociólogo Massimo Di Felice, do Centro de Pesquisa Atopos da Escola de Comunicações e Artes da Universidade de São Paulo (ECA-USP) e coordenador do estudo, fala sobre a qualidade e o lugar das ações net-ativistas e como as redes digitais e os novos dispositivos móveis de conectividade estão mudando práticas de participação social no Brasil e no mundo.

High-tech mirror beams heat away from buildings into space (Science Daily)

Date:

November 26, 2014

Source:

Stanford School of Engineering

Summary:

Engineers have invented a material designed to help cool buildings. The material reflects incoming sunlight, and it sends heat from inside the structure directly into space as infrared radiation.


Stanford engineers have invented a material designed to help cool buildings. The material reflects incoming sunlight and sends heat from inside the structure directly into space as infrared radiation – represented by reddish rays. Credit: Illustration: Nicolle R. Fuller, Sayo-Art LLC

Stanford engineers have invented a revolutionary coating material that can help cool buildings, even on sunny days, by radiating heat away from the buildings and sending it directly into space.

A new ultrathin multilayered material can cool buildings without air conditioning by radiating warmth from inside the buildings into space while also reflecting sunlight to reduce incoming heat.

A team led by electrical engineering Professor Shanhui Fan and research associate Aaswath Raman reported this energy-saving breakthrough in the journal Nature.

The heart of the invention is an ultrathin, multilayered material that deals with light, both invisible and visible, in a new way.

Invisible light in the form of infrared radiation is one of the ways that all objects and living things throw off heat. When we stand in front of a closed oven without touching it, the heat we feel is infrared light. This invisible, heat-bearing light is what the Stanford invention shunts away from buildings and sends into space.

Of course, sunshine also warms buildings. The new material, in addition to dealing with infrared light, is also a stunningly efficient mirror that reflects virtually all of the incoming sunlight that strikes it.

The result is what the Stanford team calls photonic radiative cooling — a one-two punch that offloads infrared heat from within a building while also reflecting the sunlight that would otherwise warm it up. The result is cooler buildings that require less air conditioning.

“This is very novel and an extraordinarily simple idea,” said Eli Yablonovitch, a professor of engineering at the University of California, Berkeley, and a pioneer of photonics who directs the Center for Energy Efficient Electronics Science. “As a result of professor Fan’s work, we can now [use radiative cooling], not only at night but counter-intuitively in the daytime as well.”

The researchers say they designed the material to be cost-effective for large-scale deployment on building rooftops. Though it’s still a young technology, they believe it could one day reduce demand for electricity. As much as 15 percent of the energy used in buildings in the United States is spent powering air conditioning systems.

In practice the researchers think the coating might be sprayed on a more solid material to make it suitable for withstanding the elements.

“This team has shown how to passively cool structures by simply radiating heat into the cold darkness of space,” said Nobel Prize-winning physicist Burton Richter, professor emeritus at Stanford and former director of the research facility now called the SLAC National Accelerator Laboratory.

A warming world needs cooling technologies that don’t require power, according to Raman, lead author of the Nature paper. “Across the developing world, photonic radiative cooling makes off-grid cooling a possibility in rural regions, in addition to meeting skyrocketing demand for air conditioning in urban areas,” he said.

Using a window into space

The real breakthrough is how the Stanford material radiates heat away from buildings.

As science students know, heat can be transferred in three ways: conduction, convection and radiation. Conduction transfers heat by touch. That’s why you don’t touch a hot oven pan without wearing a mitt. Convection transfers heat by movement of fluids or air. It’s the warm rush of air when the oven is opened. Radiation transfers heat in the form of infrared light that emanates outward from objects, sight unseen.

The first part of the coating’s one-two punch radiates heat-bearing infrared light directly into space. The ultrathin coating was carefully constructed to send this infrared light away from buildings at the precise frequency that allows it to pass through the atmosphere without warming the air, a key feature given the dangers of global warming.

“Think about it like having a window into space,” Fan said.

Aiming the mirror

But transmitting heat into space is not enough on its own.

This multilayered coating also acts as a highly efficient mirror, preventing 97 percent of sunlight from striking the building and heating it up.

“We’ve created something that’s a radiator that also happens to be an excellent mirror,” Raman said.

Together, the radiation and reflection make the photonic radiative cooler nearly 9 degrees Fahrenheit cooler than the surrounding air during the day.
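
As a rough, illustrative energy balance (generic textbook numbers, not measurements from the Stanford experiment; it ignores atmospheric back-radiation outside the infrared window, so it overstates the real effect): the surface sheds heat as thermal radiation while absorbing only the small fraction of sunlight it fails to reflect.

```python
# Rough illustrative energy balance for a daytime radiative cooler.
# Assumed generic values, not measurements from the Stanford experiment.
SIGMA = 5.67e-8            # Stefan-Boltzmann constant, W / (m^2 K^4)

emissivity = 0.9           # assumed infrared emissivity of the surface
t_surface = 300.0          # surface temperature, K (~27 C)
t_sky = 270.0              # assumed effective sky temperature, K
solar_irradiance = 1000.0  # W/m^2 on a clear day
reflectance = 0.97         # fraction of sunlight reflected (figure from the article)

radiated = emissivity * SIGMA * (t_surface**4 - t_sky**4)
absorbed_solar = (1.0 - reflectance) * solar_irradiance

print(round(radiated))                   # ~142 W/m^2 emitted in this idealized picture
print(round(absorbed_solar))             # ~30 W/m^2 absorbed from sunlight
print(round(radiated - absorbed_solar))  # net cooling; the measured value is smaller
```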

The multilayered material is just 1.8 microns thick, thinner than the thinnest aluminum foil.

It is made of seven layers of silicon dioxide and hafnium oxide on top of a thin layer of silver. These layers are not a uniform thickness, but are instead engineered to create a new material. Its internal structure is tuned to radiate infrared rays at a frequency that lets them pass into space without warming the air near the building.

“This photonic approach gives us the ability to finely tune both solar reflection and infrared thermal radiation,” said Linxiao Zhu, doctoral candidate in applied physics and a co-author of the paper.

“I am personally very excited about their results,” said Marin Soljacic, a physics professor at the Massachusetts Institute of Technology. “This is a great example of the power of nanophotonics.”

From prototype to building panel

Making photonic radiative cooling practical requires solving at least two technical problems.

The first is how to conduct the heat inside the building to this exterior coating. Once it gets there, the coating can direct the heat into space, but engineers must first figure out how to efficiently deliver the building heat to the coating.

The second problem is production. Right now the Stanford team’s prototype is the size of a personal pizza. Cooling buildings will require large panels. The researchers say large-area fabrication facilities can make their panels at the scales needed.

The cosmic fridge

More broadly, the team sees this project as a first step toward using the cold of space as a resource. In the same way that sunlight provides a renewable source of solar energy, the cold universe supplies a nearly unlimited expanse to dump heat.

“Every object that produces heat has to dump that heat into a heat sink,” Fan said. “What we’ve done is to create a way that should allow us to use the coldness of the universe as a heat sink during the day.”

In addition to Fan, Raman and Zhu, this paper has two additional co-authors: Marc Abou Anoma, a master’s student in mechanical engineering who has graduated; and Eden Rephaeli, a doctoral student in applied physics who has graduated.

This research was supported by the Advanced Research Project Agency-Energy (ARPA-E) of the U.S. Department of Energy.

Story Source:

The above story is based on materials provided by Stanford School of Engineering. The original article was written by Chris Cesare. Note: Materials may be edited for content and length.

Journal Reference:

  1. Aaswath P. Raman, Marc Abou Anoma, Linxiao Zhu, Eden Rephaeli, Shanhui Fan. Passive radiative cooling below ambient air temperature under direct sunlight. Nature, 2014; 515 (7528): 540 DOI: 10.1038/nature13883

Manipulação do clima pode causar efeitos indesejados (N.Y.Times/FSP)

Ilvy Njiokiktjien/The New York Times
Olivina, um mineral esverdeado que ajudaria a remover o dióxido de carbono da atmosfera

HENRY FOUNTAIN
DO “NEW YORK TIMES”

18/11/2014 02h01

Para Olaf Schuiling, a solução para o aquecimento global está sob nossos pés.

Schuiling, geoquímico aposentado, acredita que a salvação climática está na olivina, mineral de tonalidade verde abundante no mundo inteiro. Quando exposta aos elementos, ela extrai lentamente o gás carbônico da atmosfera.

A olivina faz isso naturalmente há bilhões de anos, mas Schuiling quer acelerar o processo espalhando-a em campos e praias e usando-a em diques, trilhas e até playgrounds. Basta polvilhar a quantidade certa de rocha moída, diz ele, e ela acabará removendo gás carbônico suficiente para retardar a elevação das temperaturas globais.
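
Um esboço, apenas como ordem de grandeza, da estequiometria por trás da ideia (a forsterita, Mg2SiO4, é o membro da olivina rico em magnésio; no intemperismo completo até bicarbonato, cada mol do mineral pode fixar até 4 mols de CO2, um teto teórico que a prática dificilmente alcança):

```python
# Ordem de grandeza da captura teórica de CO2 pelo intemperismo da forsterita:
# Mg2SiO4 + 4 CO2 + 4 H2O -> 2 Mg2+ + 4 HCO3- + H4SiO4 (teto teórico).
massa_molar_forsterita = 2 * 24.31 + 28.09 + 4 * 16.00  # g/mol, ~140,7
massa_molar_co2 = 12.01 + 2 * 16.00                     # g/mol, ~44,0

co2_por_kg_de_olivina = 4 * massa_molar_co2 / massa_molar_forsterita
print(round(co2_por_kg_de_olivina, 2))  # ~1,25 kg de CO2 por kg de mineral, no máximo
```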

“Vamos deixar a Terra nos ajudar a salvá-la”, disse Schuiling, 82, em seu gabinete na Universidade de Utrecht.

Ideias para combater as mudanças climáticas, como essas propostas de geoengenharia, já foram consideradas meramente fantasiosas.

Todavia, os efeitos das mudanças climáticas podem se tornar tão graves que talvez tais soluções passem a ser consideradas seriamente.

A ideia de Schuiling é uma das várias que visam reduzir os níveis de gás carbônico, o principal gás responsável pelo efeito estufa, de forma que a atmosfera retenha menos calor.

Outras abordagens, potencialmente mais rápidas e viáveis, porém mais arriscadas, criariam o equivalente a um guarda-sol ao redor do planeta, dispersando gotículas reflexivas na estratosfera ou borrifando água do mar para formar mais nuvens acima dos oceanos. A menor incidência de luz solar na superfície da Terra reduziria a retenção de calor, resultando em uma rápida queda das temperaturas.

Ninguém tem certeza de que alguma técnica de geoengenharia funcionaria, e muitas abordagens nesse campo parecem pouco práticas. A abordagem de Schuiling, por exemplo, levaria décadas para ter sequer um pequeno impacto, e os próprios processos de mineração, moagem e transporte dos bilhões de toneladas de olivina necessários produziriam enormes emissões de carbono.

Jasper Juinen/The New York Times
Crianças brincam em playground na Holanda revestido com olivina; minério esverdeado retira lentamente o gás carbônico presente na atmosfera

Muitas pessoas consideram a ideia da geoengenharia um recurso desesperado em relação à mudança climática, o qual desviaria a atenção mundial da meta de eliminar as emissões que estão na raiz do problema.

O clima é um sistema altamente complexo, portanto, manipular temperaturas também pode ter consequências, como mudanças na precipitação pluviométrica, tanto catastróficas como benéficas para uma região à custa de outra. Críticos também apontam que a geoengenharia poderia ser usada unilateralmente por um país, criando outra fonte de tensões geopolíticas.

Especialistas, porém, argumentam que a situação atual está se tornando calamitosa. “Em breve poderá nos restar apenas a opção entre geoengenharia e sofrimento”, opinou Andy Parker, do Instituto de Estudos Avançados sobre Sustentabilidade, em Potsdam, Alemanha.

Em 1991, uma erupção vulcânica nas Filipinas expeliu a maior nuvem de gás anidrido sulforoso já registrada na alta atmosfera. O gás formou gotículas de ácido sulfúrico, que refletiam os raios solares de volta para o Espaço. Durante três anos, a média das temperaturas globais teve uma queda de cerca de 0,5 grau Celsius. Uma técnica de geoengenharia imitaria essa ação borrifando gotículas de ácido sulfúrico na estratosfera.

David Keith, pesquisador na Universidade Harvard, disse que essa técnica de geoengenharia, chamada de gestão da radiação solar (SRM na sigla em inglês), só deve ser utilizada lenta e cuidadosamente, para que possa ser interrompida caso prejudique padrões climáticos ou gere outros problemas.

Certos críticos da geoengenharia duvidam que qualquer impacto possa ser equilibrado. Pessoas em países subdesenvolvidos são afetadas por mudanças climáticas em grande parte causadas pelas ações de países industrializados. Então, por que elas confiariam que espalhar gotículas no céu as ajudaria?

“Ninguém gosta de ser o rato no laboratório alheio”, disse Pablo Suarez, do Centro do Clima da Cruz Vermelha/Crescente Vermelho.

Ideias para retirar gás carbônico do ar causam menos alarme. Embora tenham questões espinhosas – a olivina, por exemplo, contém pequenas quantidades de metais que poderiam contaminar o meio ambiente –, elas funcionariam de maneira bem mais lenta e indireta, afetando o clima ao longo de décadas ao alterar a atmosfera.

Como o doutor Schuiling divulga há anos sua ideia na Holanda, o país se tornou adepto da olivina. Estando ciente disso, qualquer um pode notar a presença da rocha moída em trilhas, jardins e áreas lúdicas.

Eddy Wijnker, ex-engenheiro acústico, criou a empresa greenSand na pequena cidade de Maasland. Ela vende areia de olivina para uso doméstico ou comercial. A empresa também vende “certificados de areia verde” que financiam a colocação da areia ao longo de rodovias.

A obstinação de Schuiling também incitou pesquisas. No Instituto Real de Pesquisa Marítima da Holanda em Yerseke, o ecologista Francesc Montserrat está pesquisando a possibilidade de espalhar olivina no leito do mar. Na Bélgica, pesquisadores na Universidade de Antuérpia estudam os efeitos da olivina em culturas agrícolas como cevada e trigo.

Boa parte dos profissionais de geoengenharia aponta a necessidade de haver mais pesquisas e o fato de as simulações em computador serem limitadas.

Poucas verbas no mundo são destinadas a pesquisas de geoengenharia. No entanto, até a sugestão de realizar experimentos em campo pode causar clamor popular. “As pessoas gostam de linhas bem demarcadas, e uma bem óbvia é que não há problema em testar coisas em um computador ou em uma bancada de laboratório”, comentou Matthew Watson, da Universidade de Bristol, no Reino Unido. “Mas elas reagem mal assim que você começa a entrar no mundo real.”

Watson conhece bem essas delimitações. Ele liderou um projeto financiado pelo governo britânico, que incluía um teste relativamente inócuo de uma tecnologia. Em 2011, os pesquisadores pretendiam soltar um balão a cerca de um quilômetro de altitude e tentar bombear um pouco de água por uma mangueira até ele. A proposta desencadeou protestos no Reino Unido, foi adiada por meio ano e, finalmente, cancelada.

Hoje há poucas perspectivas de apoio governamental a qualquer tipo de teste de geoengenharia nos EUA, onde muitos políticos negam sequer que as mudanças climáticas sejam uma realidade.

“O senso comum é que a direita não quer falar sobre isso porque reconhece o problema”, disse Rafe Pomerance, que trabalhou com questões ambientais no Departamento de Estado. “E a esquerda está preocupada com o impacto das emissões.”

Portanto, seria bom discutir o assunto abertamente, afirmou Pomerance. “Isso ainda vai levar algum tempo, mas é inevitável”, acrescentou.

Worlding Anthropologies of Technosciences? (Blog.castac.org)

October 28th, 2014, by

The past 4S meeting in Buenos Aires made visible the expansion of STS to various regions of the globe. Those of us who happened to be at the 4S meeting at University of Tokyo four years ago will remember the excitement of having the opportunity to work side-by-side with STS scholars from East and Southeast Asia. The same opportunity for worlding STS was opened again this past summer in Buenos Aires.

In order to help increase diversity of perspectives, Sharon Traweek and I organized a 4S panel on the relationships between STS and anthropology with a focus on the past, present, and future of the exchange among national traditions. The idea came out of our conversations about the intersections between science studies and the US anthropology of the late 1980’s with the work of CASTAC pioneers such as Diana Forsythe, Gary Downey, Joseph Dumit, David Hakken, David Hess, and Sharon Traweek, among several others who helped to establish the technosciences as legitimate domains of anthropological inquiry. It was not an easy battle, as Chris Furlow’s post on the history of CASTAC reminded us, but the results are undeniably all around us today. Panels on anthropology of science and technology can always be found at professional meetings. Publications on science and technology have space in various journals and the attention of university publishers these days.

For our panel this year we had the opening remarks of Gary Downey who, after reading our proposal aloud, emphasized the importance of advancing a cultural critique of science and technology through a situated, grounded stance. Quoting Marcus and Fischer’s “Anthropology as Cultural Critique” (1986) he emphasized that anthropology of science and technology could not dispense with the reflection upon the place, the situation, and the positioning of the anthropologist. Downey described his own positioning as an anthropologist and critical participant in engineering. Two decades ago Downey challenged the project of “anthropology as cultural critique” to speak widely to audiences outside anthropology and to practice anthropology as cultural critique, as suggested by the title of his early AAA paper, “Outside the Hotel”.

Yet “Anthropology as Cultural Critique” represented, he pointed out, one of the earliest reflexive calls in US anthropology for us to rethink canonical fieldwork orientations and our approach to the craft of ethnography with its representational politics. Downey and many others who invented new spaces to advance critical agendas in the context of science and technology did so by adding to the identity of the anthropologist other identities and responsibilities, such as that of former mechanical engineer, laboratory physicist, theologian, and experimenter of alternative forms of sociality, etc. These overlapping and intersecting identities opened up a whole field of possibilities for renewed modes of inquiry which, after “Anthropology as Cultural Critique”, consisted, as Downey suggested, in the juxtaposition of knowledge, forms of expertise, positionalities, and commitments. This is where we operate as STS scholars: at intersecting research areas, bridging “fault lines” (as Traweek’s felicitous expression puts it), and doing anthropology with and not without anthropologists.

The order of presentations for our panel was defined in a way to elicit contrasts and parallels between different modes of inquiry, grounded in different national anthropological traditions. The first session had Marko Monteiro (UNICAMP), Renzo Taddei (UNIFESP), Luis Felipe R. Murillo (UCLA), and Aalok Khandekar (Maastricht University) as presenters and Michael M. J. Fischer (MIT) as commentator. Marko Monteiro, an anthropologist working for an interdisciplinary program in science and technology policy in Brazil addressed questions of scientific modeling and State policy regarding the issue of deforestation in the Amazon. His paper presented the challenges of conducting multi-sited ethnography alongside multinational science collaborations, and described how scientific modeling for the Amazalert project was designed to accommodate natural and sociocultural differences with the goal of informing public policy. In the context of his ethnographic work, Monteiro soon found himself in a double position as a panelist expert and as an anthropologist interested in how different groups of scientists and policy makers negotiate the incorporation of “social life” through a “politics of associations.”

Similarly to Monteiro’s positioning, Khandekar benefited in his ethnographic work from being an active participant, serving as the organizer of expert panels involving STS scholars and scientists to design nanotechnology-based development programs in India. Drawing on Fischer’s notion of “third space”, Khandekar addressed how India could productively be framed as such: a fertile ground for conceptual work where cross-disciplinary efforts have articulated the humanities and the technosciences under the rubric of innovation. Serving as a knowledge broker for an international collaboration on nanotechnology involving India, Kenya, South Africa, and the Netherlands, Khandekar had first-hand experience in promoting “third spaces” as postcolonial places for cross-disciplinary exchange through storytelling.

Shifting the conversation to the context of computing and political action, Luis Felipe R. Murillo’s paper described the controversy surrounding the proposal of a “feminist programming language” and discussed the ways in which it provides access to the contemporary technopolitical dynamics of computing. The feminist programming language parody served as an entry point for analyzing how language ideologies render symbolic boundaries visible, highlighting the aspects of socialization in computing that reproduce shared notions of which technical solutions are possible, logical, and desirable. With respect to socioeconomic and political divisions, he suggested that feminist approaches, in their intersectionality, became highly controversial for publicly addressing systemic inequalities that run across the context of computing and characterize a South imbricated in the North of “big computing” (an apparatus that encompasses computer science, information technology industries, infrastructures, and cultures with their reinvented peripheries within the global North and South).

Renzo Taddei recast the debate regarding belief in magic, drawing on a long-standing thread of anthropological research on logical reasoning and cultural specificity. Taddei opened his contribution with the assertion that to conduct ethnography on witchcraft while assuming that it does not exist is fundamentally ethnocentric. This observation was meant to take us to the core of his concerns regarding the climate sciences vis-à-vis traditional forms of forecasting from the Sertão, a semi-arid and extremely impoverished area of the Brazilian Northeast. He then proceeded to discuss magical manipulation of the atmosphere from native and Afro-Brazilian perspectives.

For the second day of our panel, we had papers by Kim Fortun (RPI), Mike Fortun (RPI), and Sharon Traweek (UCLA), with commentary by Claudia Fonseca (UFRGS), whose long-term contributions to the study of adoption, popular culture, science, and human rights in Brazil have been highly influential. In her paper, Kim Fortun addressed the double bind of expertise: the in-between of competence and hubris, of structural risk and the unpredictability of the very infrastructures experts are called upon to take responsibility for. Fortun’s call was for a mode of interaction and engagement among science and humanities scholars oriented toward friendship and hospitality as well as commitment to our technoscientific futures under the aegis of late industrialism. “Ethnographic insight”, according to Fortun, “can loop back into the world” through creative pedagogies attentive to the fact that science practitioners and STS scholars mobilize different analytic lenses while speaking through, and negotiating with, distinct discursive registers in the context of international collaborations. Our assumptions about what is conceptually shared should not anticipate what is to be seen or forged in the context of our international exchange, since what is foregrounded in discourse always implicates one form or another of erasure. The image Fortun suggested we think with is not that of a network but that of a kaleidoscope, in which the complexity of disasters can be seen across multiple dimensions and scales, in their imbrication at every turn.

In his presentation, Michael Fortun questioned the so-called “ontological turn” to recast the “hauntological” dimensions of our research practices vis-à-vis those of our colleagues in the biosciences, that is, to account for the imponderables of scientific and anthropological languages and practices through the lens of a poststructural understanding of the historical functioning of language. In his study of asthma, Fortun attends to multiple perspectives and experiences of asthma across national, socioeconomic, scientific, and technical scales. In the context of his project “The Asthma Files”, he suggests, alongside Kim Fortun, hospitality and friendship as frames for engaging, rather than disciplining, the contingency of ethnographic encounters and projects. For future collaborations, two directions were suggested: 1) investigating and experimenting with modes of care, and 2) designing collaborative digital platforms for experimental ethnography. The former concerns scientists’ care for their instruments, methods, theories, intellectual reproduction, infrastructures, and problems in their particular research fields, while the latter poses the question of care among ourselves and the construction of digital platforms to facilitate and foster collaboration in anthropology.

The panel closed with Sharon Traweek’s paper on the multi-scalar complexity of contemporary scientific collaborations, based on her current research on data practices and gender imbalance in astronomy. Drawing from concepts of meshwork and excess proposed by researchers with distinct intellectual projects such as Jennifer McWeeny, Arturo Escobar, Susan Paulson, and Tim Ingold, Traweek discussed billion-dollar science projects which involve multiple research communities clustered around a few recent research devices and facilities, such as the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile and the International Thermonuclear Experimental Reactor (ITER) in France. In the space of ongoing transformations of big science toward partially-global science, women and ethnic minorities are building meshworks as overlapping networks in their attempts to build careers in astronomy. Traweek proposed a revision of the notion of “enrollment” to account for the ways in which mega projects in science are sustained through decades of planning, development, construction, and operation at excessive scales which require more than support and consensus. Mega projects in the technosciences are, in Traweek’s terms, “over-determined collages that get built and used” by international teams with “glocal” structures of governance and funding.

In his concluding remarks Michael M. J. Fischer addressed the relationship between anthropology and STS through three organizing axes: time, topic, and audiences. As a question of time, a quarter century has passed in the shared history of STS and anthropology, during which probing questions have been asked of the technosciences regarding their apparatuses, codes, languages, life cycles of machines, educational curricula, and personal and technical trajectories, a line of inquiry well represented in one of the foundational texts of our field, Traweek’s “Beamtimes and Lifetimes” (1988). Traweek helped establish a distinctive anthropological style of “working alongside scientists and engineers through juxtaposition, not against them.” With respect to the relationships between anthropology and STS, Fischer raised the question of pedagogies as, at once, a prominent form of engagement in the technosciences and an anthropological mode of engagement with the technosciences. The common thread connecting all the panel contributions was the potential for new pedagogies to emerge with the contribution of world anthropologies of sciences and technologies. That is, in the space where scientists, engineers, and the public are socialized, a space of convention as well as invention and knowledge-making, all the presenters addressed the question of how to advance an anthropology of science and technology through forms of participation that work, as Fischer suggests, as productive critique.

Along similar lines, Claudia Fonseca offered closing remarks about her own trajectory and the persistence of national anthropological traditions informing our cross-dialogues and border crossings. Known in Brazil as an “anthropologist with an accent”, an anthropologist born in the US, trained in France, and based in Brazil for most of her academic life, she could not help but emphasize the style and forms of engagement that are specific to Brazilian anthropology, with its tradition of conducting ethnography at home. The panel served, in sum, for the participants to find a common thread connecting a rather disparate set of papers and to advance a form of dialogue across national traditions and modes of engagement that is attentive to local political histories and (national) anthropological trajectories. As Michael Fortun suggested, we are just collectively conjuring, with much more empiria than magic, a new beginning in the experimental tradition for world anthropologies of sciences and technologies.

Latour on digital methods (Installing [social] order)

Capture

In a fascinating, apparently not-peer-reviewed non-article available free online here, Tommaso Venturini and Bruno Latour discuss the potential of “digital methods” for the contemporary social sciences.

The paper summarizes, quite nicely, the split of sociological methods between statistical aggregates studied with quantitative methods (capturing supposedly macro-phenomena) and irreducibly basic interactions studied with qualitative methods (capturing supposedly micro-phenomena). The problem is that neither approach helps the sociologist capture emergent phenomena, that is, capture controversies and events as they happen, rather than estimating them after they have emerged (quantitative macro structures) or capturing them divorced from non-local influences (qualitative micro phenomena).

The solution, they claim, is to adopt digital methods in the social sciences. The paper is not exactly a methodological outline of how to put such methods into practice, but it does offer something of a justification for them, which sounds like this:

Thanks to digital traceability, researchers no longer need to choose between precision and scope in their observations: it is now possible to follow a multitude of interactions and, simultaneously, to distinguish the specific contribution that each one makes to the construction of social phenomena. Born in an era of scarcity, the social sciences are entering an age of abundance. In the face of the richness of these new data, nothing justifies keeping old distinctions. Endowed with a quantity of data comparable to the natural sciences, the social sciences can finally correct their lazy eyes and simultaneously maintain the focus and scope of their observations.

Direct brain interface between humans (Science Daily)

Date: November 5, 2014

Source: University of Washington


In this photo, UW students Darby Losey, left, and Jose Ceballos are positioned in two different buildings on campus as they would be during a brain-to-brain interface demonstration. The sender, left, thinks about firing a cannon at various points throughout a computer game. That signal is sent over the Web directly to the brain of the receiver, right, whose hand hits a touchpad to fire the cannon. (Credit: Mary Levin, University of Washington)

Sometimes, words just complicate things. What if our brains could communicate directly with each other, bypassing the need for language?

University of Washington researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

At the time of the first experiment in August 2013, the UW team was the first to demonstrate two human brains communicating in this way. The researchers then tested their brain-to-brain interface in a more comprehensive study, published Nov. 5 in the journal PLOS ONE.

“The new study brings our brain-to-brain interfacing paradigm from an initial demonstration to something that is closer to a deliverable technology,” said co-author Andrea Stocco, a research assistant professor of psychology and a researcher at UW’s Institute for Learning & Brain Sciences. “Now we have replicated our methods and know that they can work reliably with walk-in participants.”

Collaborator Rajesh Rao, a UW associate professor of computer science and engineering, is the lead author on this work.

The research team combined two kinds of noninvasive instruments and fine-tuned software to connect two human brains in real time. The process is fairly straightforward. One participant is hooked to an electroencephalography machine that reads brain activity and sends electrical pulses via the Web to the second participant, who is wearing a swim cap with a transcranial magnetic stimulation coil placed near the part of the brain that controls hand movements.

Using this setup, one person can send a command to move the hand of the other by simply thinking about that hand movement.
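
To make the flow of that setup concrete, here is a minimal, purely illustrative sketch in Python. It is not the UW team’s software: the EEG detection is reduced to a toy threshold on a simulated signal, the TMS stimulator is replaced by a print statement, and every name, port, and threshold (receiver, sender, MU_BAND_THRESHOLD, port 9009) is invented for the example. It only shows the shape of the pipeline the article describes: detect an imagined movement on the sender’s side, ship a single “fire” command over the network, and trigger an action on the receiver’s side.

```python
# Hypothetical illustration only -- not the UW researchers' code.
import json
import random
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9009   # assumed local endpoint for the demo
MU_BAND_THRESHOLD = 0.7          # invented threshold for "imagined hand movement"


def receiver():
    """Stand-in for the TMS side: a 'pulse' here is just a printed message."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            event = json.loads(conn.recv(1024).decode())
            if event.get("command") == "fire":
                print("TMS pulse delivered: receiver's hand presses the touchpad")


def sender():
    """Stand-in for the EEG side: poll a fake signal until it crosses the threshold."""
    while True:
        mu_band_power = random.random()        # pretend EEG feature, not real data
        if mu_band_power > MU_BAND_THRESHOLD:  # "imagined hand movement" detected
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
                cli.connect((HOST, PORT))
                cli.sendall(json.dumps({"command": "fire"}).encode())
            return
        time.sleep(0.05)                       # polling interval


if __name__ == "__main__":
    t = threading.Thread(target=receiver)
    t.start()
    time.sleep(0.2)   # give the receiver time to start listening
    sender()
    t.join()
```

The real experiment, of course, involves real-time EEG decoding, calibration, and hardware control that this toy sketch deliberately leaves out; the point is only the one-way, event-driven structure of the link.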

The UW study involved three pairs of participants. Each pair included a sender and a receiver with different roles and constraints. They sat in separate buildings on campus about a half mile apart and were unable to interact with each other in any way — except for the link between their brains.

Each sender was in front of a computer game in which he or she had to defend a city by firing a cannon and intercepting rockets launched by a pirate ship. But because the senders could not physically interact with the game, the only way they could defend the city was by thinking about moving their hand to fire the cannon.

Across campus, each receiver sat wearing headphones in a dark room — with no ability to see the computer game — with the right hand positioned over the only touchpad that could actually fire the cannon. If the brain-to-brain interface was successful, the receiver’s hand would twitch, pressing the touchpad and firing the cannon that was displayed on the sender’s computer screen across campus.

Researchers found that accuracy varied among the pairs, ranging from 25 to 83 percent. Misses mostly were due to a sender failing to accurately execute the thought to send the “fire” command. The researchers also were able to quantify the exact amount of information that was transferred between the two brains.

Another research team from the company Starlab in Barcelona, Spain, recently published results in the same journal showing direct communication between two human brains, but that study only tested one sender brain instead of different pairs of study participants and was conducted offline instead of in real time over the Web.

Now, with a new $1 million grant from the W.M. Keck Foundation, the UW research team is taking the work a step further in an attempt to decode and transmit more complex brain processes.

With the new funding, the research team will expand the types of information that can be transferred from brain to brain, including more complex visual and psychological phenomena such as concepts, thoughts and rules.

They’re also exploring how to influence brain waves that correspond with alertness or sleepiness. Eventually, for example, the brain of a sleepy airplane pilot dozing off at the controls could stimulate the copilot’s brain to become more alert.

The project could also eventually lead to “brain tutoring,” in which knowledge is transferred directly from the brain of a teacher to a student.

“Imagine someone who’s a brilliant scientist but not a brilliant teacher. Complex knowledge is hard to explain — we’re limited by language,” said co-author Chantel Prat, a faculty member at the Institute for Learning & Brain Sciences and a UW assistant professor of psychology.

Other UW co-authors are Joseph Wu of computer science and engineering; Devapratim Sarma and Tiffany Youngquist of bioengineering; and Matthew Bryan, formerly of the UW.

The research published in PLOS ONE was initially funded by the U.S. Army Research Office and the UW, with additional support from the Keck Foundation.


Journal Reference:

  1. Rajesh P. N. Rao, Andrea Stocco, Matthew Bryan, Devapratim Sarma, Tiffany M. Youngquist, Joseph Wu, Chantel S. Prat. A Direct Brain-to-Brain Interface in Humans. PLoS ONE 9(11): e111332, 2014. DOI: 10.1371/journal.pone.0111332