
Anthropocene: The human age (Nature)

Momentum is building to establish a new geological epoch that recognizes humanity’s impact on the planet. But there is fierce debate behind the scenes.

Richard Monastersky

11 March 2015

Illustration by Jessica Fortner

Almost all the dinosaurs have vanished from the National Museum of Natural History in Washington DC. The fossil hall is now mostly empty and painted in deep shadows as palaeobiologist Scott Wing wanders through the cavernous room.

Wing is part of a team carrying out a radical, US$45-million redesign of the exhibition space, which is part of the Smithsonian Institution. And when it opens again in 2019, the hall will do more than revisit Earth’s distant past. Alongside the typical displays of Tyrannosaurus rex and Triceratops, there will be a new section that forces visitors to consider the species that is currently dominating the planet.

“We want to help people imagine their role in the world, which is maybe more important than many of them realize,” says Wing.

This provocative exhibit will focus on the Anthropocene — the slice of Earth’s history during which people have become a major geological force. Through mining activities alone, humans move more sediment than all the world’s rivers combined. Homo sapiens has also warmed the planet, raised sea levels, eroded the ozone layer and acidified the oceans.

Given the magnitude of these changes, many researchers propose that the Anthropocene represents a new division of geological time. The concept has gained traction, especially in the past few years — and not just among geoscientists. The word has been invoked by archaeologists, historians and even gender-studies researchers; several museums around the world have exhibited art inspired by the Anthropocene; and the media have heartily adopted the idea. “Welcome to the Anthropocene,” The Economist announced in 2011.

The greeting was a tad premature. Although the term is trending, the Anthropocene is still an amorphous notion — an unofficial name that has yet to be accepted as part of the geological timescale. That may change soon. A committee of researchers is currently hashing out whether to codify the Anthropocene as a formal geological unit, and when to define its starting point.

But critics worry that important arguments against the proposal have been drowned out by popular enthusiasm, driven in part by environmentally minded researchers who want to highlight how destructive humans have become. Some supporters of the Anthropocene idea have even been likened to zealots. “There’s a similarity to certain religious groups who are extremely keen on their religion — to the extent that they think everybody who doesn’t practise their religion is some kind of barbarian,” says one geologist who asked not to be named.

The debate has shone a spotlight on the typically unnoticed process by which geologists carve up Earth’s 4.5 billion years of history. Normally, decisions about the geological timescale are made solely on the basis of stratigraphy — the evidence contained in layers of rock, ocean sediments, ice cores and other geological deposits. But the issue of the Anthropocene “is an order of magnitude more complicated than the stratigraphy”, says Jan Zalasiewicz, a geologist at the University of Leicester, UK, and the chair of the Anthropocene Working Group that is evaluating the issue for the International Commission on Stratigraphy (ICS).

Written in stone

For geoscientists, the timescale of Earth’s history rivals the periodic table in terms of scientific importance. It has taken centuries of painstaking stratigraphic work — matching up major rock units around the world and placing them in order of formation — to provide an organizing scaffold that supports all studies of the planet’s past. “The geologic timescale, in my view, is one of the great achievements of humanity,” says Michael Walker, a Quaternary scientist at the University of Wales Trinity St David in Lampeter, UK.

Walker’s work sits at the top of the timescale. He led a group that helped to define the most recent unit of geological time, the Holocene epoch, which began about 11,700 years ago.

Sources: Dams/Water/Fertilizer, IGBP; Fallout, Ref. 5; Map, E. C. Ellis Phil. Trans. R. Soc. A 369, 1010–1035 (2011); Methane, Ref. 4

The decision to formalize the Holocene in 2008 was one of the most recent major actions by the ICS, which oversees the timescale. The commission has segmented Earth’s history into a series of nested blocks, much like the years, months and days of a calendar. In geological time, the 66 million years since the death of the dinosaurs is known as the Cenozoic era. Within that, the Quaternary period occupies the past 2.58 million years — during which Earth has cycled in and out of a few dozen ice ages. The vast bulk of the Quaternary consists of the Pleistocene epoch, with the Holocene occupying the thin sliver of time since the end of the last ice age.

When Walker and his group defined the beginning of the Holocene, they had to pick a spot on the planet that had a signal to mark that boundary. Most geological units are identified by a specific change recorded in rocks — often the first appearance of a ubiquitous fossil. But the Holocene is so young, geologically speaking, that it permits an unusual level of precision. Walker and his colleagues selected a climatic change — the end of the last ice age’s final cold snap — and identified a chemical signature of that warming at a depth of 1,492.45 metres in a core of ice drilled near the centre of Greenland1. A similar fingerprint of warming can be seen in lake and marine sediments around the world, allowing geologists to precisely identify the start of the Holocene elsewhere.

“The geologic timescale, in my view, is one of the great achievements of humanity.”

Even as the ICS was finalizing its decision on the start of the Holocene, discussion was already building about whether it was time to end that epoch and replace it with the Anthropocene. This idea has a long history. In the mid-nineteenth century, several geologists sought to recognize the growing power of humankind by referring to the present as the ‘anthropozoic era’, and others have since made similar proposals, sometimes with different names. The idea has gained traction only in the past few years, however, in part because of rapid changes in the environment, as well as the influence of Paul Crutzen, a chemist at the Max Planck Institute for Chemistry in Mainz, Germany.

Crutzen has first-hand experience of how human actions are altering the planet. In the 1970s and 1980s, he made major discoveries about the ozone layer and how pollution from humans could damage it — work that eventually earned him a share of a Nobel prize. In 2000, he and Eugene Stoermer of the University of Michigan in Ann Arbor argued that the global population has gained so much influence over planetary processes that the current geological epoch should be called the Anthropocene2. As an atmospheric chemist, Crutzen was not part of the community that adjudicates changes to the geological timescale. But the idea inspired many geologists, particularly Zalasiewicz and other members of the Geological Society of London. In 2008, they wrote a position paper urging their community to consider the idea3.

Those authors had the power to make things happen. Zalasiewicz happened to be a member of the Quaternary subcommission of the ICS, the body that would be responsible for officially considering the suggestion. One of his co-authors, geologist Phil Gibbard of the University of Cambridge, UK, chaired the subcommission at the time.

Although sceptical of the idea, Gibbard says, “I could see it was important, something we should not be turning our backs on.” The next year, he tasked Zalasiewicz with forming the Anthropocene Working Group to look into the matter.

A new beginning

Since then, the working group has been busy. It has published two large reports (“They would each hurt you if they dropped on your toe,” says Zalasiewicz) and dozens of other papers.

The group has several issues to tackle: whether it makes sense to establish the Anthropocene as a formal part of the geological timescale; when to start it; and what status it should have in the hierarchy of geological time — if it is adopted.

When Crutzen proposed the term Anthropocene, he gave it the suffix appropriate for an epoch and argued for a starting date in the late eighteenth century, at the beginning of the Industrial Revolution. Between then and the start of the new millennium, he noted, humans had chewed a hole in the ozone layer over Antarctica, doubled the amount of methane in the atmosphere and driven up carbon dioxide concentrations by 30%, to a level not seen in 400,000 years.

When the Anthropocene Working Group started investigating, it compiled a much longer list of the changes wrought by humans. Agriculture, construction and the damming of rivers are stripping away sediment at least ten times as fast as the natural forces of erosion. Along some coastlines, the flood of nutrients from fertilizers has created oxygen-poor ‘dead zones’, and the extra CO2 from fossil-fuel burning has acidified the surface waters of the ocean by 0.1 pH units. The fingerprint of humans is clear in global temperatures, the rate of species extinctions and the loss of Arctic ice.

The group, which includes Crutzen, initially leaned towards his idea of choosing the Industrial Revolution as the beginning of the Anthropocene. But other options were on the table.

Some researchers have argued for a starting time that coincides with an expansion of agriculture and livestock cultivation more than 5,000 years ago4, or a surge in mining more than 3,000 years ago (see ‘Humans at the helm’). But neither the Industrial Revolution nor those earlier changes have left unambiguous geological signals of human activity that are synchronous around the globe (see ‘Landscape architecture’).

This week in Nature, two researchers propose that a potential marker for the start of the Anthropocene could be a noticeable drop in atmospheric CO2 concentrations between 1570 and 1620, which is recorded in ice cores (see page 171). They link this change to the deaths of some 50 million indigenous people in the Americas, triggered by the arrival of Europeans. In the aftermath, forests took over 65 million hectares of abandoned agricultural fields — a surge of regrowth that reduced global CO2.

Landscape architecture

A model of land use, based on human-population estimates, suggests that people modified substantial parts of the continents even thousands of years ago.

Land used intensively by humans.

8,000 years before present (bp)

1,000 years before present (bp)




Source: E. C. Ellis Phil. Trans. R. Soc. A 369, 1010–1035 (2011).

In the working group, Zalasiewicz and others have been talking increasingly about another option — using the geological marks left by the atomic age. Between 1945 and 1963, when the Limited Nuclear Test Ban Treaty took effect, nations conducted some 500 above-ground nuclear blasts. Debris from those explosions circled the globe and created an identifiable layer of radioactive elements in sediments. At the same time, humans were making geological impressions in a number of other ways — all part of what has been called the Great Acceleration of the modern world. Plastics started flooding the environment, along with aluminium, artificial fertilizers, concrete and leaded petrol, all of which have left signals in the sedimentary record.

In January, the majority of the 37-person working group offered its first tentative conclusion. Zalasiewicz and 25 other members reported5 that the geological markers available from the mid-twentieth century make this time “stratigraphically optimal” for picking the start of the Anthropocene, whether or not it is formally defined. Zalasiewicz calls it “a candidate for the least-worst boundary”.

The group even proposed a precise date: 16 July 1945, the day of the first atomic-bomb blast. Geologists thousands of years in the future would be able to identify the boundary by looking in the sediments for the signature of long-lived plutonium from mid-century bomb blasts or many of the other global markers from that time.

A many-layered debate

The push to formalize the Anthropocene upsets some stratigraphers. In 2012, a commentary published by the Geological Society of America6 asked: “Is the Anthropocene an issue of stratigraphy or pop culture?” Some complain that the working group has generated a stream of publicity in support of the concept. “I’m frustrated because any time they do anything, there are newspaper articles,” says Stan Finney, a stratigraphic palaeontologist at California State University in Long Beach and the chair of the ICS, which would eventually vote on any proposal put forward by the working group. “What you see here is, it’s become a political statement. That’s what so many people want.”

Finney laid out some of his concerns in a paper7 published in 2013. One major question is whether there really are significant records of the Anthropocene in global stratigraphy. In the deep sea, he notes, the layer of sediments representing the past 70 years would be thinner than 1 millimetre. An even larger issue, he says, is whether it is appropriate to name something that exists mainly in the present and the future as part of the geological timescale.

“It’s become a political statement. That’s what so many people want.”

Some researchers argue that it is too soon to make a decision — it will take centuries or longer to know what lasting impact humans are having on the planet. One member of the working group, Erle Ellis, a geographer at the University of Maryland, Baltimore County, says that he raised the idea of holding off with fellow members of the group. “We should set a time, perhaps 1,000 years from now, in which we would officially investigate this,” he says. “Making a decision before that would be premature.”

That does not seem likely, given that the working group plans to present initial recommendations by 2016.

Some members with different views from the majority have dropped out of the discussion. Walker and others contend that human activities have already been recognized in the geological timescale: the only difference between the current warm period, the Holocene, and all the interglacial times during the Pleistocene is the presence of human societies in the modern one. “You’ve played the human card in defining the Holocene. It’s very difficult to play the human card again,” he says.

Walker resigned from the group a year ago, when it became clear that he had little to add. He has nothing but respect for its members, he says, but he has heard concern that the Anthropocene movement is picking up speed. “There’s a sense in some quarters that this is something of a juggernaut,” he says. “Within the geologic community, particularly within the stratigraphic community, there is a sense of disquiet.”

Zalasiewicz takes pains to make it clear that the working group has not yet reached any firm conclusions. “We need to discuss the utility of the Anthropocene. If one is to formalize it, who would that help, and to whom it might be a nuisance?” he says. “There is lots of work still to do.”

Any proposal that the group did make would still need to pass a series of hurdles. First, it would need to receive a supermajority — 60% support — in a vote by members of the Quaternary subcommission. Then it would need to reach the same margin in a second vote by the leadership of the full ICS, which includes chairs from groups that study the major time blocks. Finally, the executive committee of the International Union of Geological Sciences must approve the request.

At each step, proposals are often sent back for revision, and they sometimes die altogether. It is an inherently conservative process, says Martin Head, a marine stratigrapher at Brock University in St Catharines, Canada, and the current head of the Quaternary subcommission. “You are messing around with a timescale that is used by millions of people around the world. So if you’re making changes, they have to be made on the basis of something for which there is overwhelming support.”

Some voting members of the Quaternary subcommission have told Nature that they have not been persuaded by the arguments raised so far in favour of the Anthropocene. Gibbard, a friend of Zalasiewicz’s, says that defining this new epoch will not help most Quaternary geologists, especially those working in the Holocene, because they tend not to study material from the past few decades or centuries. But, he adds: “I don’t want to be the person who ruins the party, because a lot of useful stuff is coming out as a consequence of people thinking about this in a systematic way.”

If a proposal does not pass, researchers could continue to use the name Anthropocene on an informal basis, in much the same way as archaeological terms such as the Neolithic era and the Bronze Age are used today. Regardless of the outcome, the Anthropocene has already taken on a life of its own. Three Anthropocene journals have started up in the past two years, and the number of papers on the topic is rising sharply, with more than 200 published in 2014.

By 2019, when the new fossil hall opens at the Smithsonian’s natural history museum, it will probably be clear whether the Anthropocene exhibition depicts an official time unit or not. Wing, a member of the working group, says that he does not want the stratigraphic debate to overshadow the bigger issues. “There is certainly a broader point about human effects on Earth systems, which is way more important and also more scientifically interesting.”

As he walks through the closed palaeontology hall, he points out how much work has yet to be done to refashion the exhibits and modernize the museum, which opened more than a century ago. A hundred years is a heartbeat to a geologist. But in that span, the human population has more than tripled. Wing wants museum visitors to think, however briefly, about the planetary power that people now wield, and how that fits into the context of Earth’s history. “If you look back from 10 million years in the future,” he says, “you’ll be able to see what we were doing today.”

Nature 519, 144–147 (12 March 2015), doi:10.1038/519144a


  1. Walker, M. et al. J. Quat. Sci. 24, 3–17 (2009).
  2. Crutzen, P. J. & Stoermer, E. F. IGBP Newsletter 41, 17–18 (2000).
  3. Zalasiewicz, J. et al. GSA Today 18(2), 4–8 (2008).
  4. Ruddiman, W. F. Annu. Rev. Earth Planet. Sci. 41, 45–68 (2013).
  5. Zalasiewicz, J. et al. Quatern. Int. (2015).
  6. Autin, W. J. & Holbrook, J. M. GSA Today 22(7), 60–61 (2012).
  7. Finney, S. C. Geol. Soc. Spec. Publ. 395, 23–28 (2013).

On Reverse Engineering (Anthropology and Algorithms)

Nick Seaver

Looking for the cultural work of engineers

The Atlantic welcomed 2014 with a major feature on web behemoth Netflix. If you didn’t know, Netflix has developed a system for tagging movies and for assembling those tags into phrases that look like hyper-specific genre names: Visually-striking Foreign Nostalgic Dramas, Critically-acclaimed Emotional Underdog Movies, Romantic Chinese Crime Movies, and so on. The sometimes absurd specificity of these names (or “altgenres,” as Netflix calls them) is one of the peculiar pleasures of the contemporary web, recalling the early days of website directories and Usenet newsgroups, when it seemed like the internet would be a grand hotel, providing a room for any conceivable niche.

Netflix’s weird genres piqued the interest of Atlantic editor Alexis Madrigal, who set about scraping the whole list. Working from the US in late 2013, his scraper bot turned up a startling 76,897 genre names — clearly the emanations of some unseen algorithmic force. How were they produced? What was their generative logic? What made them so good—plausible, specific, with some inexpressible touch of the human? Pursuing these mysteries brought Madrigal to the world of corpus analysis software and eventually to Netflix’s Silicon Valley offices.

The resulting article is an exemplary piece of contemporary web journalism — a collaboratively produced, tech-savvy 5,000-word “long read” that is both an exposé of one of the largest internet companies (by volume) and a reflection on what it is like to be human with machines. It is supported by a very entertaining altgenre-generating widget, built by professor and software carpenter Ian Bogost and illustrated by Twitter mystery darth. Madrigal pieces the story together with his signature curiosity and enthusiasm, and the result feels so now that future corpus analysts will be able to use it as a model to identify texts written in the United States from 2013–14. You really should read it.

A Māori eel trap. The design and construction of traps (or filters) like this are classic topics of interest for anthropologists of technology. cc-by-sa-3.0

As a cultural anthropologist in the middle of a long-term research project on algorithmic filtering systems, I am very interested in how people think about companies like Netflix, which take engineering practices and apply them to cultural materials. In the popular imagination, these do not go well together: engineering is about universalizable things like effectiveness, rationality, and algorithms, while culture is about subjective and particular things, like taste, creativity, and artistic expression. Technology and culture, we suppose, make an uneasy mix. When Felix Salmon, in his response to Madrigal’s feature, complains about “the systematization of the ineffable,” he is drawing on this common sense: engineers who try to wrangle with culture inevitably botch it up.

Yet, in spite of their reputations, we always seem to find technology and culture intertwined. The culturally-oriented engineering of companies like Netflix is a quite explicit case, but there are many others. Movies, for example, are a cultural form dependent on a complicated system of technical devices — cameras, editing equipment, distribution systems, and so on. Technologies that seem strictly practical — like the Māori eel trap pictured above—are influenced by ideas about effectiveness, desired outcomes, and interpretations of the natural world, all of which vary cross-culturally. We may talk about technology and culture as though they were independent domains, but in practice, they never stay where they belong. Technology’s straightforwardness and culture’s contingency bleed into each other.

This can make it hard to talk about what happens when engineers take on cultural objects. We might suppose that it is a kind of invasion: The rationalizers and quantifiers are over the ridge! They’re coming for our sensitive expressions of the human condition! But if technology and culture are already mixed up with each other, then this doesn’t make much sense. Aren’t the rationalizers expressing their own cultural ideas? Aren’t our sensitive expressions dependent on our tools? In the present moment, as companies like Netflix proliferate, stories trying to make sense of the relationship between culture and technology also proliferate. In my own research, I examine these stories, as told by people from a variety of positions relative to the technology in question. There are many such stories, and they can have far-reaching consequences for how technical systems are designed, built, evaluated, and understood.

The story Madrigal tells in The Atlantic is framed in terms of “reverse engineering.” The engineers of Netflix have not invaded cultural turf — they’ve reverse engineered it and figured out how it works. To report on this reverse engineering, Madrigal has done some of his own, trying to figure out the organizing principles behind the altgenre system. So, we have two uses of reverse engineering here: first, it is a way to describe what engineers do to cultural stuff; second, it is a way to figure out what engineers do.

So what does “reverse engineering” mean? What kind of things can be reverse engineered? What assumptions does reverse engineering make about its objects? Like any frame, reverse engineering constrains as well as enables the presentation of certain stories. I want to suggest here that, while reverse engineering might be a useful strategy for figuring out how an existing technology works, it is less useful for telling us how it came to work that way. Because reverse engineering starts from a finished technical object, it misses the accidents that happened along the way — the abandoned paths, the unusual stories behind features that made it to release, moments of interpretation, arbitrary choice, and failure. Decisions that seemed rather uncertain and subjective as they were being made come to appear necessary in retrospect. Engineering looks a lot different in reverse.

This is especially evident in the case of explicitly cultural technologies. Where “technology” brings to mind optimization, functionality, and necessity, “culture” seems to represent the opposite: variety, interpretation, and arbitrariness. Because it works from a narrowly technical view of what engineering entails, reverse engineering has a hard time telling us about the cultural work of engineers. It is telling that the word “culture” never appears in this piece about the contemporary state of the culture industry.

Inspired by Madrigal’s article, here are some notes on the consequences of reverse engineering for how we think about the cultural lives of engineers. As culture and technology continue to escape their designated places and intertwine, we need ways to talk about them that don’t assume they can be cleanly separated.

Ben Affleck, fact extractor.

There is a terrible movie about reverse engineering, based on a short story by Philip K. Dick. It is called Paycheck, stars Ben Affleck, and is not currently available for streaming on Netflix. In it, Affleck plays a professional reverse engineer (the “best in the business”), who is hired by companies to figure out the secrets of their competitors. After doing this, his memory of the experience is wiped and in return, he is compensated very well. Affleck is a sort of intellectual property conduit: he extracts secrets from devices, and having moved those secrets from one company to another, they are then extracted from him. As you might expect, things go wrong: Affleck wakes up one day to find that he has forfeited his payment in exchange for an envelope of apparently worthless trinkets and, even worse, his erstwhile employer now wants to kill him. The trinkets turn out to be important in unexpected ways as Affleck tries to recover the facts that have been stricken from his memory. The movie’s tagline is “Remember the Future”—you get the idea.

Paycheck illustrates a very popular way of thinking about engineering knowledge. To know about something is to know the facts about how it works. These facts are like physical objects — they can be hidden (inside of technologies, corporations, envelopes, or brains), and they can be retrieved and moved around. In this way of thinking about knowledge, facts that we don’t yet know are typically hidden on the other side of some barrier. To know through reverse engineering is to know by trying to pull those pre-existing facts out.

This is why reverse engineering is sometimes used as a metaphor in the sciences to talk about revealing the secrets of Nature. When biologists “reverse engineer” a cell, for example, they are trying to uncover its hidden functional principles. This kind of work is often described as “pulling back the curtain” on nature (or, in older times, as undressing a sexualized, female Nature — the kind of thing we in academia like to call “problematic”). Nature, if she were a person, holds the secrets her reverse engineers want.

In the more conventional sense of the term, reverse engineering is concerned with uncovering secrets held by engineers. Unlike its use in the natural sciences, here reverse engineering presupposes that someone already knows what we want to find out. Accessing this kind of information is often described as “pulling back the curtain” on a company. (This is likely the unfortunate naming logic behind Kimono, a new service for scraping websites and automatically generating APIs to access the scraped data.) Reverse engineering is not concerned with producing “new” knowledge, but with extracting facts from one place and relocating them to another.

Reverse engineering (and I guess this is obvious) is concerned with finished technologies, so it presumes that there is a straightforward fact of the matter to be worked out. Something happened to Ben Affleck before his memory was wiped, and eventually he will figure it out. This is not Rashomon, which suggests there might be multiple interpretations of the same event (although that isn’t available for streaming either). The problem is that this narrow scope doesn’t capture everything we might care about: why this technology and not another one? If a technology is constantly changing, like the algorithms and data structures under the hood at Netflix, then why is it changing as it does? Reverse engineering, at best, can only tell you the what, not the why or the how. But it even has some trouble with the what.

“Fantastic powers at his command / And I’m sure that he will understand / He’s the Wiz and he lives in Oz”

Netflix, like most companies today, is surrounded by a curtain of non-disclosure agreements and intellectual property protections. This curtain animates Madrigal’s piece, hiding the secrets that his reverse engineering is aimed at. For people inside the curtain, nothing in his article is news. What is newsworthy, Madrigal writes, is that “no one outside the company has ever assembled this data before.” The existence of the curtain shapes what we imagine knowledge about Netflix to be: something possessed by people on the inside and lacked by people on the outside.

So, when Madrigal’s reverse engineering runs out of steam, the climax of the story comes and the curtain is pulled back to reveal the “Wizard of Oz, the man who made the machine”: Netflix’s VP of Product Innovation Todd Yellin. Here is the guy who holds the secrets behind the altgenres, the guy with the knowledge about how Netflix has tried to bridge the world of engineering and the world of cultural production. According to the logic of reverse engineering, Yellin should be able to tell us everything we want to know.

From Yellin, Madrigal learns about the extensiveness of the tagging that happens behind the curtain. He learns some things that he can’t share publicly, and he learns of the existence of even more secrets — the contents of the training manual which dictate how movies are to be entered into the system. But when it comes to how that massive data and intelligence infrastructure was put together, he learns this:

“It’s a real combination: machine-learned, algorithms, algorithmic syntax,” Yellin said, “and also a bunch of geeks who love this stuff going deep.”

This sentence says little more than “we did it with computers,” and it illustrates a problem for the reverse engineer: there is always another curtain to get behind. Scraping altgenres will only get you so far, and even when you get “behind the curtain,” companies like Netflix are only willing to sketch out their technical infrastructure in broad strokes. In more technically oriented venues or the academic research community, you may learn more, but you will never get all the way to the bottom of things. The Wizard of Oz always holds on to his best secrets.

But not everything we want to know is a trade secret. While reverse engineers may be frustrated by the first part of Yellin’s sentence — the vagueness of “algorithms, algorithmic syntax” — it’s the second part that hides the encounter between culture and technology: What does it look like when “geeks who love this stuff go deep”? How do the people who make the algorithms understand the “deepness” of cultural stuff? How do the loves of geeks inform the work of geeks? The answers to these questions are not hidden away as proprietary technical information; they’re often evident in the ways engineers talk about and work with their objects. But because reverse engineering focuses narrowly on revealing technical secrets, it fails to piece together how engineers imagine and engage with culture. For those of us interested in the cultural ramifications of algorithmic filtering, these imaginings and engagements — not usually secret, but often hard to access — are more consequential than the specifics of implementation, which are kept secret and frequently change.

“My first goal was: tear apart content!”

While Yellin may not have told us enough about the technical secrets of Netflix to create a competitor, he has given us some interesting insights into the way he thinks about movies and how to understand them. If you’re familiar with research on algorithmic recommenders, you’ll recognize the system he describes as an example of content-based recommendation. Where “classic” recommender systems rely on patterns in ratings data and have little need for other information, content-based systems try to understand the material they recommend, through various forms of human or algorithmic analysis. These analyses are a lot of work, but over the past decade, with the increasing availability of data and analytical tools, content-based recommendation has become more popular. Most big recommender systems today (including Netflix’s) are hybrids, drawing on both user ratings and data about the content of recommended items.
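The content-based approach described above can be sketched in a few lines: items are represented by their tags, and similarity between tag sets drives the ranking. Everything here is invented for illustration (the titles, the tags, the cosine scoring); Netflix's real microtags and models are proprietary.

```python
# Toy content-based recommender: items are described by tags (the "content"),
# and overlap between tag sets drives recommendations. Titles and tags are
# hypothetical; this is a sketch of the general technique, not Netflix's system.
from math import sqrt

MOVIES = {
    "Movie A": {"cerebral", "heist", "1970s"},
    "Movie B": {"cerebral", "1970s", "drama"},
    "Movie C": {"slapstick", "heist", "comedy"},
}

def cosine(a, b):
    """Cosine similarity between two tag sets treated as binary vectors."""
    if not a or not b:
        return 0.0
    return len(a & b) / (sqrt(len(a)) * sqrt(len(b)))

def recommend(liked_title, catalog, k=2):
    """Rank the other movies by tag similarity to the liked movie."""
    liked = catalog[liked_title]
    scored = [(cosine(liked, tags), title)
              for title, tags in catalog.items() if title != liked_title]
    return [title for score, title in sorted(scored, reverse=True)[:k]]

print(recommend("Movie A", MOVIES))  # Movie B shares two tags, Movie C one
```

A hybrid system like Netflix's would blend a score like this with patterns mined from user ratings; the content side is what the tagging teams feed.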

The “reverse engineering of Hollywood” is the content side of things: Netflix’s effort to parse movies into its database so that they can be recommended based on their content. By calling this parsing “reverse engineering,” Madrigal implies that there is a singular fact of the matter to be retrieved from these movies, and as a result, he focuses his description on Netflix’s thoroughness. What is tagged? “Everything. Everyone.” But the kind of parsing Yellin describes is not the only way to understand cultural objects; rather, it is a specific and recognizable mode of interpretation. It bears a strong resemblance to structuralism — a style of cultural analysis that had its heyday in the humanities and social sciences during the mid-20th century.

Structuralism, according to Roland Barthes, is a way of interpreting objects by decomposing them into parts and then recomposing those parts into new wholes. By breaking a text apart and putting it back together, the structuralist aims to understand its underlying structure: what order lurks under the surface of apparently idiosyncratic objects?

For example, the arch-structuralist anthropologist Claude Lévi-Strauss took such an approach in his study of myth. Take the Oedipus myth: there are many different ways to tell the same basic story, in which a baby is abandoned in the wilderness and then grows up to unknowingly kill his father, marry his mother, and blind himself when he finds out (among other things). But, across different tellings of the myth, there is a fairly persistent set of elements that make up the story. Lévi-Strauss called these elements “mythemes” (after linguistic “phonemes”). By breaking myths down into their constituent parts, you could see patterns that linked them together, not only across different tellings of the “same” myth, but even across apparently disparate myths from other cultures. Through decomposition and recomposition, structuralists sought what Barthes called the object’s “rules of functioning.” These rules, governing the combination of mythemes, were the object of Lévi-Strauss’s cultural analysis.

Todd Yellin is, by all appearances, a structuralist. He tells Madrigal that his goal was to “tear apart content” and create a “Netflix Quantum Theory,” under which movies could be broken down into their constituent parts — into “quanta” or the “little ‘packets of energy’ that compose each movie.” Those quanta eventually became “microtags,” which Madrigal tells us are used to describe everything in the movie. Large teams of human taggers are trained, using a 36-page secret manual, and they go to town, decomposing movies into microtags. Take those tags, recompose them, and you get the altgenres, a weird sort of structuralist production intended to help you find things in Netflix’s pool of movies. If Lévi-Strauss had lived to be 104 instead of just 100, he might have had some thoughts about this computerized structuralism: in his 1955 article on the structural study of myth, he suggested that further advances would require mathematicians and “I.B.M. equipment” to handle the complicated analysis. Structuralism and computers go way back.
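The decompose-and-recompose move can be illustrated mechanically: pick one tag per category slot and enumerate the combinations. The slots and vocabulary below are invented; Netflix's actual altgenre grammar has only ever been inferred from its output, not published.

```python
# Recomposition in miniature: once movies are decomposed into tags, altgenre-
# style phrases come from recombining tags drawn from a fixed grammar.
# The slot structure and vocabulary here are hypothetical illustrations.
from itertools import product

SLOTS = [
    ["Critically-acclaimed", "Gritty", "Feel-good"],   # adjective
    ["Spanish-Language", "British"],                   # region
    ["Thrillers", "Comedies"],                         # genre
    ["Based on Real Life", "Set in the 1980s"],        # descriptor
]

def altgenres(slots):
    """Yield every combination of one tag per slot, composed into a phrase."""
    for adjective, region, genre, descriptor in product(*slots):
        yield f"{adjective} {region} {genre} {descriptor}"

phrases = list(altgenres(SLOTS))
print(len(phrases))   # 3 * 2 * 2 * 2 = 24 combinations
print(phrases[0])
```

Even this tiny grammar yields 24 phrases; Madrigal's generator, working over the real vocabulary, produced tens of thousands, which is why the altgenres feel both exhaustive and strangely mechanical.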

Although structuralism sounds like a fairly technical way to analyze cultural material, it is not, strictly speaking, objective. When you break an object down into its parts and put it back together again, you have not simply copied it — you’ve made something new. A movie’s set of microtags, no matter how fine-grained, is not the same thing as the movie. It is, as Barthes writes, a “directed, interested simulacrum” of the movie, a re-creation made with particular goals in mind. If you had different goals — different ideas about what the significant parts of movies were, different imagined use-cases — you might decompose differently. There is more than one way to tear apart content.

This does not jibe well with common-sense ideas about what engineering is like. Instead of the cold, rational pursuit of optimal solutions, we have something a little more creative. We have options, a variety of choices that are all potentially valid depending on a range of contextual factors not exhausted by obviously “technical” concerns. Barthes suggested that composing a structuralist analysis was like composing a poem, and engineering is likewise expressive. Netflix’s altgenres are in no way the final statement on the movies. They are, rather, one statement among many — a cultural production in their own right, influenced by local assumptions about meaning, relevance, and taste. “Reverse engineering” seems a poor name for this creative practice, because it implies a singular right answer — a fact of the matter that merely needs to be retrieved from the insides of the movies. We might instead, more accurately, call this work “interpretation.”

So, where does this leave us with reverse engineering? There are two questions at issue here:

  1. Does “reverse engineering” as a term adequately describe the work that engineers like those employed at Netflix do when they interact with cultural objects?
  2. Is reverse engineering a useful strategy for figuring out what engineers do?

The answer to both of these questions, I think, is a measured “no,” and for the same reason: reverse engineering, as both a descriptor and a research strategy, misses the things engineers do that do not fit into conventional ideas about engineering. In the ongoing mixture of culture and technology, reverse engineering sticks too closely to the idealized vision of technical work. Because it assumes engineers care strictly about functionality and efficiency, it is not very good at telling stories about accidents, interpretations, and arbitrary choices. It assumes that cultural objects or practices (like movies or engineering) can be reduced to singular, universally-intelligible logics. It takes corporate spokespeople at their word when they claim that there was a straight line from conception to execution.

As Nicholas Diakopoulos has written, reverse engineering can be a useful way to figure out what obscured technologies do, but it cannot get us answers to “the question of why.” As these obscured technologies — search engines, recommender systems, and other algorithmic filters — are constantly refined, we need better ways to talk about the whys and hows of engineering as a practice, not only the what of engineered objects, which are always changing.

The risk of reverse engineering is that we come to imagine that the only things worth knowing about companies like Netflix are the technical details hidden behind the curtain. In my own research, I argue that the cultural lives and imaginations of the people behind the curtain are as important, if not more, for understanding how these systems come to exist and function as they do. Moreover, these details are not generally considered corporate secrets, so they are accessible if we look for them. Not everything worth knowing has been actively hidden, and transparency can conceal as much as it reveals.

All engineering mixes culture and technology. Even Madrigal’s “reverse engineering” does not stay put in technical bounds: he supplements the work of his bot by talking with people, drawing on their interpretations and offering his own, reading the altgenres, populated with serendipitous algorithmic accidents, as “a window unto the American soul.” Engineers, reverse and otherwise, have cultural lives, and these lives inform their technical work. To see these effects, we need to get beyond the idea that the technical and the cultural are necessarily distinct. But if we want to understand the work of companies like Netflix, it is not enough to simply conclude that culture and technology — humans and computers — are mixed. The question we need to answer is how.

‘Technological Disobedience’: How Cubans Manipulate Everyday Technologies For Survival (WLRN)

12:05 PM

MON JULY 1, 2013

In Cuban Spanish, there is a word for overcoming great obstacles with minimal resources: resolver.

Literally, it means to resolve, but to many Cubans on the island and living in South Florida, resolviendo is an enlightened reality born of necessity.

When the Soviet Union collapsed in 1991, Cuba entered a “Special Period in Times of Peace,” which saw unprecedented shortages of everyday items. Previously, the Soviets had been Cuba’s principal trading partners, sending goods at low prices and buying staple export commodities like sugar at above-market prices.

Rationing had long been a normal part of life, but without Soviet support Cubans found themselves in dire straits. As the crisis deepened, people had to get ever more creative.

Verde Olivo, the publishing house for the Cuban Revolutionary Armed Forces, published a largely crowdsourced book shortly after the Special Period began. Titled Con Nuestros Propios Esfuerzos (With Our Own Efforts), the book detailed all the possible ways that household items could be manipulated and turned inside out in order to fulfill the needs of a starving population.

Included in the book is a famous recipe for turning grapefruit rind into makeshift beef steak (after heavy seasoning).

Cuban artist and designer Ernesto Oroza watched with amazement as uses sprang from everyday items, and he soon began collecting these items from this sad but ingeniously creative period of Cuban history.

A Cuban rikimbili, the word for a bicycle that has been converted into a motorcycle. The engine, typically 100cc or less, is constructed out of motor-powered misting backpacks or Russian tank AC generators.

“People think beyond the normal capacities of an object, and try to surpass the limitations that it imposes on itself,” Oroza explains in a recently published Motherboard documentary that originally aired in 2011.

Oroza coined the phrase “Technological Disobedience,” which he says summarizes how Cubans reacted to technology during this time.

After graduating from design school into an abysmal economy, Oroza and a friend began to travel the island and collect these unique items from every province.

These post-apocalyptic contraptions reflect a hunger for more, and a resistance to fatalism, within the Cuban community.

“The same way a surgeon, after having opened so many bodies, becomes insensitive to blood, to the smell of blood and organs… it’s the same for a Cuban,” Oroza explains.

“Once he has opened a fan, he is used to seeing everything from the inside… All the symbols that unify an object, that make it a unique entity: for a Cuban those don’t exist.”

Geoengineering proposal may backfire: Ocean pipes ‘not cool,’ would end up warming climate (Science Daily)

Date: March 19, 2015

Source: Carnegie Institution

Summary: There are a variety of proposals that involve using vertical ocean pipes to move seawater to the surface from the depths in order to reap different potential climate benefits. One idea involves using ocean pipes to facilitate direct physical cooling of the surface ocean by replacing warm surface ocean waters with colder, deeper waters. New research shows that these pipes could actually increase global warming quite drastically.

To combat global climate change caused by greenhouse gases, alternative energy sources and other types of environmental action are needed. There are a variety of proposals that involve using vertical ocean pipes to move seawater to the surface from the depths in order to reap different potential climate benefits. A new study from a group of Carnegie scientists determines that these types of pipes could actually increase global warming quite drastically. It is published in Environmental Research Letters.

One proposed strategy, called Ocean Thermal Energy Conversion, or OTEC, involves using the temperature difference between deeper and shallower water to power a heat engine and produce clean electricity. A second proposal is to move carbon from the upper ocean down into the deep, where it wouldn’t interact with the atmosphere. Another idea, and the focus of this particular study, proposes that ocean pipes could facilitate direct physical cooling of the surface ocean by replacing warm surface ocean waters with colder, deeper waters.

“Our prediction going into the study was that vertical ocean pipes would effectively cool the Earth and remain effective for many centuries,” said Ken Caldeira, one of the three co-authors.

The team, which also included lead author Lester Kwiatkowski as well as Katharine Ricke, configured a model to test this idea, and what they found surprised them. The model mimicked the ocean-water movement of pipes applied globally, reaching to a depth of about a kilometer (just over half a mile). It simulated the motion created by an idealized version of ocean pipes, not specific pipes; as such, it does not include the real spacing of pipes, nor does it calculate how much energy they would require.

Their simulations showed that while ocean pipe systems could cool global temperatures in the short term, warming would actually start to increase just 50 years after the pipes went into use. Their model showed that vertical movement of ocean water resulted in a decrease in clouds over the ocean and a loss of sea ice.

Colder air is denser than warm air. Because of this, the air over the ocean surface that has been cooled by water from the depths has a higher atmospheric pressure than the air over land. The cool air over the ocean sinks downward reducing cloud formation over the ocean. Since more of the planet is covered with water than land, this would result in less cloud cover overall, which means that more of the Sun’s rays are absorbed by Earth, rather than being reflected back into space by clouds.

Water mixing caused by ocean pipes would also bring sea ice into contact with warmer waters, resulting in melting. What’s more, this would further decrease the reflection of the Sun’s radiation, which bounces off ice as well as clouds.
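The albedo feedback described in the two paragraphs above can be illustrated with a textbook zero-dimensional energy balance. This is a standard classroom sketch, not the Carnegie team's model (theirs is a full climate simulation): the point is only that a small drop in planetary albedo, from fewer clouds and less sea ice, raises the equilibrium temperature.

```python
# Toy zero-dimensional energy balance: absorbed sunlight S(1-albedo)/4 must
# equal emitted thermal radiation sigma*T^4 at equilibrium. Illustrates why
# lowering albedo (fewer clouds, less ice) warms the planet.
S = 1361.0       # solar constant, W/m^2
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temp(albedo):
    """Effective temperature where absorbed sunlight balances emission."""
    return (S * (1 - albedo) / (4 * SIGMA)) ** 0.25

t_now = equilibrium_temp(0.30)  # roughly Earth's present planetary albedo
t_dim = equilibrium_temp(0.29)  # slightly fewer clouds / less sea ice
print(round(t_dim - t_now, 2), "K warmer")  # ~0.9 K from a 0.01 albedo drop
```

Even a one-percentage-point albedo change moves the effective temperature by nearly a degree, which is why the cloud and sea-ice responses dominate the pipes' direct cooling in the long run.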

After 60 years, the pipes would cause an increase in global temperature of up to 1.2 degrees Celsius (2.2 degrees Fahrenheit). Over several centuries, the pipes would put the Earth on a warming trend towards a temperature increase of 8.5 degrees Celsius (15.3 degrees Fahrenheit).

“I cannot envisage any scenario in which a large scale global implementation of ocean pipes would be advisable,” Kwiatkowski said. “In fact, our study shows it could exacerbate long-term warming and is therefore highly inadvisable at global scales.”

The authors do say, however, that ocean pipes might be useful on a small scale to help aerate ocean dead zones.

Journal Reference:

  1. Lester Kwiatkowski, Katharine L. Ricke and Ken Caldeira. Atmospheric consequences of disruption of the ocean thermocline. Environmental Research Letters, 2015. DOI: 10.1088/1748-9326/10/3/034016

‘We are not lab rats,’ says Sangamo Biosciences director (O Globo)

Genetic sequencing equipment. For Lanphier, research on non-reproductive stem cells is the only acceptable kind – David Paul Morris BLOOMBERG

Edward Lanphier heads an organization devoted to regenerative medicine and is calling for a halt to research into DNA manipulation in reproductive cells

RIO – Edward Lanphier runs Sangamo Biosciences, one of the members of the Alliance for Regenerative Medicine (ARM), an organization that brings together more than 200 biotechnology companies and research institutions worldwide. The Alliance has called for an open-ended moratorium on research into, and the practice of, DNA manipulation in reproductive cells.

The debate over the issue, which has dragged on for years, heated up with the development of techniques that make gene editing practical, opening up the possibility of made-to-order babies.

Lanphier announced the call for a halt to this research in a document signed by him and other members of the Alliance. The text, published in Nature, the internationally renowned scientific journal, declares that this type of research should not be pursued.

Edward Lanphier, CEO of Sangamo Biosciences – Publicity photo

While the United States and European countries have yet to settle in practice whether genetic manipulation of reproductive cells is permitted, in Brazil such studies have already been prohibited. The resolution was published in 2004 by the National Research Ethics Commission (Conep), an agency of the Ministry of Health. It states: “Research involving intervention to modify the human genome may only be carried out on somatic (non-reproductive) cells.” The question now would be the illicit use of techniques developed abroad.

In an interview published this Monday in the digital magazine O GLOBO a Mais, Lanphier explains why he believes even basic research should be banned.

Is the moratorium general?

Yes. The call for a moratorium is so that we have time for all parties to discuss the matter. It is a request. But our starting premise is that, even with that discussion, there is a line that cannot be crossed.

What is the main risk of editing the genome of reproductive cells (sperm and eggs)?

The big problem is ethical, although there are also safety and technical risks that limit practical use. The ethical question goes beyond each country’s legislation and policies. It is fundamental.

If it is possible to alter the genome, is it possible to choose the hair color, eye color, or even skin color of a baby?

It goes beyond that. The problem is being able to alter not only the characteristics of one individual but also those of all future generations. We are not lab rats, much less something like transgenic corn. As a species, we humans have decided that we are unique. For decades, developed countries have debated the modification of genes in reproductive cells and have taken a stand against it.

Is it possible to alter genes that dictate characteristics like intelligence or even behavior?

That is our concern. The altered individual will pass the changes on to future generations. Once this kind of research is opened up, it can be used for goals that have no therapeutic value, no role in treating disease. It is a path of no return. We, as a society, need to think about what makes us human. In the past we have already taken a stand against actions of this kind, which can lead us to a society governed by eugenics.

Could you explain the difference between manipulating somatic (non-reproductive) cells and manipulating eggs and sperm?

There is a fundamental difference. It is black and white. In manipulating somatic cells, you seek to alter a gene to create resistance in the individual against a specific disease. You do not alter the genes of future generations if the person has children. The only thing being attempted is to cure disease. There is, however, a line that must not be crossed, and that is altering eggs and sperm, because they make the manipulated characteristics heritable.

If you change a single characteristic and it is passed on to future generations, isn’t it possible that other unexpected mutations will occur?

That is perfectly possible. It is one of the questions we raise. At present, the nature of this and its possible consequences are completely unknown. There are many unanswered questions. We need to answer all of them before even considering the bigger question, which is the ethics of the process. It is still too early. That is why we have called for a moratorium.

What limits do you feel need to be established in the long term?

We proposed the moratorium precisely in order to discuss the matter. There is no justification for making genetic alterations in reproductive cells.

You mention the possibility of society rejecting this kind of research. Is the fear that this would also reach research on other kinds of cells?

It would be a rejection motivated by lack of knowledge.

Which lines of study are considered promising?

The diseases with the best chances of being cured by this kind of research are those associated with a specific gene. That is the case for hemophilia, sickle-cell anemia, and several types of cancer. These are the most immediate opportunities the research opens up. Technically and theoretically, it is also possible to use the technology to alter more than one gene, to cure diseases linked to multiple genes.

Is there any argument in favor of altering genes in eggs and sperm?

No. Even in situations where parents carry flawed genes linked to hereditary diseases, it is not justified. There are prenatal tests and in vitro fertilization treatments to get around these problems. There is no justification for editing the human genome in reproductive cells.

If it is possible to alter the human genome, don’t we need to ask what makes us human? Wouldn’t we be creating a new species?

The big question is that, if we change the DNA, we change the species.

How Silicon Valley controls our future (Fear and the Technopanic)

How Silicon Valley controls our future

Jeff Jarvis

Oh, My!

Just 12 hours ago, I posted a brief piece about the continuing Europtechnopanic in Germany and the effort of publishers there to blame their every trouble on Google—even the so-called sin of free content and the price of metaphoric wurst.

Now Germany one-ups even itself with the most amazing specimen of Europtechnopanic I have yet seen. The cover of Der Spiegel, the country’s most important news outlet, makes the titans of Silicon Valley look dark, wicked, and, well—I just don’t know how else to say it—all too much like this.

This must be Spiegel’s Dystopian Special Issue. Note the additional cover billing: “Michel Houellebecq: ‘Humanism and enlightenment are dead.’”

I bought the issue online—you’re welcome—so you can read along with me (and correct my translations, please).

The cover story gets right to the point. Inside, the opening headline warns: “Tomorrowland: In Silicon Valley, a new elite doesn’t just want to determine what we consume but how we live. They want to change the world and accept no regulation. Must we stop them?”

Ah, yes, German publishers want to regulate Google—and now, watch out, Facebook, Apple, Uber, and Yahoo! (Yahoo?), they’re gunning for you next.

Turn the page and the first thing you read is this: “By all accounts, Travis Kalanick, founder and head of Uber, is an asshole.”

Oh, my.

It continues: “Uber is not the only company with plans for such world conquest. That’s how they all think: Google and Facebook, Apple and Airbnb, all those digital giants and thousands of smaller firms in their neighborhood. Their goal is never the niche but always the whole world. They don’t follow delusional fantasies but have thoroughly realistic goals in sight. It’s all made possible by a Dynamic Duo almost unique in economic history: globalization coupled with digitalization.”

Digitalization, you see, is not just a spectre haunting Europe but a dark force overcoming the world. Must it be stopped? We’re merely asking.

Spiegel’s editors next fret that “progress will be faster and bigger, like an avalanche”: iPhone, self-driving cars, the world’s knowledge now digital and retrievable, 70% of stock trading controlled by algorithms, commercial drones, artificial intelligence, robots. “Madness but everyday madness,” Spiegel cries. “No longer science fiction.”

What all this means is misunderstood, Spiegel says, “above all by politicians,” who must decide whether to stand by as spectators while “others organize a global revolution. Because what is happening is much more than the triumph of new technology, much more than an economic phenomenon. It’s not just about ‘the internet’ or ‘social networks,’ not about intelligence and Edward Snowden and the question of what Google does with data.” It’s not just about newspapers shutting down and jobs lost to software. We are in the path of social change, “which in the end no one can escape.” Distinct from the industrial revolution, this time “digitization doesn’t just change industries but how we think and how we live. Only this time the change is controlled centrally by a few hundred people…. They aren’t stumbling into the future, they are ideologues with a clear agenda…. a high-tech doctrine of salvation.”


Oh, fuck!

The article then takes us on a tour of our new world capital, home to our “new Masters of the Universe,” who—perversely, apparently—are not concerned primarily about money. “Power through money isn’t enough for them.” It examines the roots of their philosophy from the “tradition of radical thinkers such as Noam Chomsky, Ayn Rand, and Friedrich Hayek,” leading to a “strange mixture of esoteric hippie-thinking and bare-knuckled capitalism.” Spiegel calls it their Menschheitsbeglückungswerks. I had to ask Twitter WTF that means.

Aha. So must we just go along with having this damned happiness shoved down our throats? “Is now the time for regulation before the world is finally dominated by digital monopolies?” Spiegel demands — I mean, merely asks? “Is this the time for democratic societies to defend themselves?”

Spiegel then visits four Silicon Valley geniuses: singularity man Ray Kurzweil; the conveniently German Sebastian Thrun, he of the self-driving car and online university; the always-good-for-a-WTF Peter Thiel (who was born in Germany but moved away after a year); and Airbnb’s Joe Gebbia. It recounts German President Joachim Gauck telling Thrun, “you scare me.” And it allows Thrun to respond that it’s the optimists, not the naysayers, who change the world.

I feared that these hapless four would be presented as ugly caricatures of the frightening, alien tribe of dark-bearded technopeople. You know what I’m getting at. But I’m relieved to say that’s not the case. What follows all the fear-mongering bluster of the cover story’s start is actual reporting. That is to say, a newsmagazine did what a newsmagazine does: It tops off its journalism with its agenda: frosting on the cupcake. And the agenda here is that of German publishers—some of them, which I explored last night and earlier. They attack Google and enlist politicians to do their bidding with new regulations to disadvantage their big, new, American, technological competitors.

And you know what? The German publishers’ strategy is working. German lawmakers passed a new ancillary copyright (never mind that Google won that round when publishers gave it permission to quote their snippets) and EU politicians are talking not just about creating new copyright and privacy law but even about breaking up Google. The publishers are bringing Google to heel. The company waited far too long to empathize with publishers’ plight—albeit self-induced—and to recognize their political clout (a dangerous combination: desperation and power, as Google now knows). Now see how Matt Brittin, the head of EMEA for Google, drops birds at Europe’s feet like a willing hund, showing all the good that Google does indeed bring them.

I have also noted that Google is working on initiatives with European publishers to find mutual benefit and I celebrate that. That is why—ever helpful as I am—I wrote this post about what Google could do for news and this one about what news could do for Google. I see real opportunity for enlightened self-interest to take hold both inside Google and among publishers and for innovation and investment to come to news. But I’m one of those silly and apparently dangerous American optimists.

As I’ve often said, the publishers—led by Mathias Döpfner of Axel Springer and Paul-Bernhard Kallen of Burda—are smart. I admire them both. They know what they’re doing, using the power of their presses and thus their political clout to box in even big, powerful Google. It’s a game to them. It’s negotiation. It’s just business. I don’t agree with or much like their message or the tactic. But I get it.

Then comes this Scheißebombe from Der Spiegel. It goes far beyond the publishers’ game. It is nothing less than prewar propaganda, trying to stir up a populace against a boogeyman enemy in hopes of goading politicians to action to stop these people. If anyone would know better, you’d think they would. Schade.

Video shows how Brazil monitors natural disaster risks (MCTI/INPE)

JC 5125, February 26, 2015

Brazil’s systems for monitoring natural disasters and preventing their impacts are also featured in the educational video released by the INCT-MC

Natural disasters, and Brazil’s systems for monitoring them and preventing their impacts, are the subject of an educational video released by the National Institute of Science and Technology for Climate Change (INCT-MC).

The material is part of a project to disseminate the knowledge generated by research carried out over the six years of the INCT-MC (2008-2014), headquartered at the National Institute for Space Research (Inpe/MCTI).

Aimed at educators, high-school and undergraduate students, and public-policy makers, the video presents information on the causes of the rise in the number of natural disasters in recent years and on how the country is preparing to prevent and reduce losses across the various sectors of society. Researchers and technologists from the National Center for Natural Disaster Monitoring and Early Warning (Cemaden/MCTI) show how risk areas are monitored 24 hours a day. The video also covers the human dimensions: how disasters interfere with and harm people’s lives, and how the emergence of new risk scenarios can and should be avoided.

By June, five more educational videos will be completed, covering themes related to the INCT for Climate Change’s research: food security, energy security, water security, health, and biodiversity.

The knowledge produced during six years of research under the INCT for Climate Change is being gathered into a web portal, to be launched this semester. The virtual environment will offer content written for its various audiences: researchers, educators, students (grouped by age), and public-policy makers. The material will be organized into six major thematic areas: food security, energy security, water security, human health, biodiversity, and natural disasters.

Read more.

(MCTI, via Inpe)

When Exponential Progress Becomes Reality (Medium)

Niv Dror

“I used to say that this is the most important graph in all the technology business. I’m now of the opinion that this is the most important graph ever graphed.”

Steve Jurvetson

Moore’s Law

The expectation that your iPhone keeps getting thinner and faster every two years. Happy 50th anniversary.

Components get cheaper; computers get smaller; a lot of comparison tweets.

In 1965 Intel co-founder Gordon Moore made his original observation, noticing that over the history of computing hardware, the number of transistors in a dense integrated circuit doubles approximately every two years. The prediction was specific to semiconductors and stretched out for a decade. Its demise has long been predicted, and the law will eventually come to an end, but it remains valid to this day.
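As a back-of-the-envelope illustration of that doubling rule, the compounding looks like this (the starting count and horizon below are made up for illustration, not Moore's data):

```javascript
// A count that doubles every two years: five doublings over Moore's
// original ten-year horizon multiply it 32-fold.
function projectTransistors(initialCount, years, doublingPeriodYears = 2) {
  const doublings = Math.floor(years / doublingPeriodYears);
  return initialCount * 2 ** doublings;
}

console.log(projectTransistors(1000, 10)); // 32000: a 32x gain in a decade
```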

It has since expanded beyond semiconductors, reshaping all kinds of businesses, including those not traditionally thought of as tech.

Yes, Box co-founder Aaron Levie is the official spokesperson for Moore’s Law, and we’re all perfectly okay with that. His cloud computing company would not be around without it. He’s grateful. We’re all grateful. In conversations Moore’s Law constantly gets referenced.

It has become both a prediction and an abstraction.

Expanding far beyond its origin as a transistor-centric metric.

But Moore’s Law of integrated circuits is only the most recent paradigm in a much longer and even more profound technological trend.

Humanity’s capacity to compute has been compounding for as long as we could measure it.

5 Computing Paradigms: electromechanical tabulators built by Herman Hollerith for the 1890 U.S. Census (his company later merged into IBM) → Alan Turing's relay-based computer that cracked the Nazi Enigma → the vacuum-tube computer that predicted Eisenhower's win in 1952 → transistor-based machines used in the first space launches → the integrated-circuit-based personal computer

The Law of Accelerating Returns

In his 1999 book The Age of Spiritual Machines, the futurist and author Ray Kurzweil (now a Director of Engineering at Google) proposed “The Law of Accelerating Returns”, according to which the rate of change in a wide variety of evolutionary systems tends to increase exponentially. A specific paradigm, a method or approach to solving a problem (e.g., shrinking transistors on an integrated circuit as a way of making more powerful computers), provides exponential growth until it exhausts its potential. When this happens, a paradigm shift (a fundamental change in the technological approach) occurs, enabling the exponential growth to continue.

Kurzweil explains:

It is important to note that Moore’s Law of Integrated Circuits was not the first, but the fifth paradigm to provide accelerating price-performance. Computing devices have been consistently multiplying in power (per unit of time) from the mechanical calculating devices used in the 1890 U.S. Census, to Turing’s relay-based machine that cracked the Nazi enigma code, to the vacuum tube computer that predicted Eisenhower’s win in 1952, to the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer.

This graph, which venture capitalist Steve Jurvetson describes as the most important concept ever to be graphed, is Kurzweil's 110-year version of Moore's Law. It spans the five computing paradigms that have contributed to the exponential growth in computing.

Each dot represents the best computational price-performance device of its day, and when plotted on a logarithmic scale they fall on the same double-exponential curve spanning more than a century. This is a very long-lasting and predictable trend. It lets us plan for a time beyond Moore's Law without knowing the specifics of the paradigm shift ahead. The next paradigm will advance our ability to compute to such a massive scale that it will be beyond our current ability to comprehend.

The Power of Exponential Growth

Human perception is linear; technological progress is exponential. Our brains are hardwired for linear expectations because that has always been the case. Technology today progresses so fast that the past no longer looks like the present, and the present is nowhere near the future ahead. Then, seemingly out of nowhere, we find ourselves in a reality quite different from what we would expect.

Kurzweil uses the overall growth of the internet as an example. The bottom chart is linear, which makes internet growth look sudden and unexpected, whereas the top chart, with the same data plotted on a logarithmic scale, tells a very predictable story. On the exponential graph, internet growth doesn't come out of nowhere; it's just presented in a way that is more intuitive for us to comprehend.

We are still prone to underestimate the progress that is coming, because it is difficult to internalize the reality that we are living in a world of exponential technological change, which is a fairly recent development. And it's important to grasp the massive scale of advancements that the technologies of the future will enable, particularly now that we have reached what Kurzweil calls the “Second Half of the Chessboard.”

(The story: an inventor presents the emperor with a chessboard and asks for one grain of rice on the first square, two on the second, four on the third, and so on, doubling on each of the 64 squares. In the end the emperor realizes that he has been tricked, by exponents, and has the inventor beheaded. In another version of the story the inventor becomes the new emperor.)

It’s important to note that as the emperor and inventor went through the first half of the chessboard, things were fairly uneventful. The inventor was first given spoonfuls of rice, then bowls of rice, then barrels, and by the end of the first half of the chessboard the inventor had accumulated one large field’s worth — 4 billion grains — which is when the emperor started to take notice. It was only as they progressed through the second half of the chessboard that the situation quickly deteriorated.

# of Grains on 1st half: 4,294,967,295

# of Grains on 2nd half: 18,446,744,069,414,584,320

Mind-bending nonlinear gains in computing are about to become far more tangible in our lifetime: there have been slightly more than 32 doublings of performance since the first programmable computers were invented, which puts us at the start of the second half of the chessboard.
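The chessboard arithmetic above is easy to verify. A quick sketch, using BigInt because plain JavaScript numbers lose precision past 2^53:

```javascript
// Grains of rice on a 64-square chessboard, doubling on each square:
// square 1 holds 2^0 = 1 grain, square n holds 2^(n-1).
function grains(fromSquare, toSquare) {
  let total = 0n;
  for (let square = fromSquare; square <= toSquare; square++) {
    total += 2n ** BigInt(square - 1);
  }
  return total;
}

console.log(grains(1, 32).toString());  // 4294967295 (first half: 2^32 - 1)
console.log(grains(33, 64).toString()); // 18446744069414584320 (second half)
```

The second half alone holds more than four billion times the first half's total, which is the whole point of the parable.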

Kurzweil’s Predictions

Kurzweil is known for making mind-boggling predictions about the future. And his track record is pretty good.

“…Ray is the best person I know at predicting the future of artificial intelligence.” —Bill Gates

Ray’s predictions for the future may sound crazy (they do sound crazy), but it’s important to note that it’s not about the specific prediction or the exact year. What matters is what they represent. These predictions are based on an understanding of Moore’s Law and Ray’s Law of Accelerating Returns, an awareness of the power of exponential growth, and an appreciation that information technology follows an exponential trend. They may sound crazy, but they are not pulled out of thin air.

And with that being said…

Second Half of the Chessboard Predictions

“By the 2020s, most diseases will go away as nanobots become smarter than current medical technology. Normal human eating can be replaced by nanosystems. The Turing test begins to be passable. Self-driving cars begin to take over the roads, and people won’t be allowed to drive on highways.”

“By the 2030s, virtual reality will begin to feel 100% real. We will be able to upload our mind/consciousness by the end of the decade.”


Not quite there yet…

“By the 2040s, non-biological intelligence will be a billion times more capable than biological intelligence (a.k.a. us). Nanotech foglets will be able to make food out of thin air and create any object in physical world at a whim.”


“By 2045, we will multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud.”

Multiplying our intelligence a billionfold by linking our neocortex to a synthetic neocortex in the cloud — what does that actually mean?

In March 2014 Kurzweil gave an excellent talk at the TED Conference. It was appropriately called: Get ready for hybrid thinking.


These are the highlights:

Nanobots will connect our neocortex to a synthetic neocortex in the cloud, providing an extension of our neocortex.

Our thinking will then be a hybrid of biological and non-biological thinking (the non-biological portion is subject to the Law of Accelerating Returns and will grow exponentially).

The frontal cortex and neocortex are not really qualitatively different, so it’s a quantitative expansion of the neocortex (like adding processing power).

The last time we expanded our neocortex was about two million years ago. That additional quantity of thinking was the enabling factor for us to take a qualitative leap and advance language, science, art, technology, etc.

We’re going to expand our neocortex again, only this time it won’t be limited by a fixed architecture of enclosure. It will be expanded without limits, by connecting our brain directly to the cloud.

We already carry a supercomputer in our pocket. We have unlimited access to all the world’s knowledge at our fingertips. Keeping in mind that we are prone to underestimate technological advancements (and that 2045 is not a hard deadline), is it really such a stretch to imagine a future where we’re always connected directly from our brain?

Progress is underway. We’ll be able to reverse-engineer the neocortex within five years. Kurzweil predicts that by 2030 we’ll be able to reverse-engineer the entire brain. His latest book is called How to Create a Mind… This is the reason Google hired Kurzweil.

Hybrid Human Machines


“We’re going to become increasingly non-biological…”

“We’ll also have non-biological bodies…”

“If the biological part went away it wouldn’t make any difference…”

“They will be as realistic as real reality.”

Impact on Society

The technological singularity — “the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing civilization” — is beyond the scope of this article, but these advancements will absolutely have an impact on society. In which direction is yet to be determined.

There may be some regret

Politicians will not know who/what to regulate.

Evolution may take an unexpected twist.

The rich-poor gap will expand.

The unimaginable will become reality and society will change.

Problem: Your brain (Medium)

I will be talking mainly about development for the web.

Ilya Dorman, Feb 15, 2015

Our puny brain can handle a very limited amount of logic at a time. While programmers claim logic as their domain, they are only sometimes, and only slightly, better at managing complexity than the rest of us mortals. The more logic our app has, the harder it is to change it or to introduce new people to it.

The most common mistake programmers make is assuming they write code for a machine to read. While technically true, this mindset leads to the hell that is other people’s code.

I have worked at several start-up companies, some of them even considered “lean.” At each, it took me between a few weeks and a few months to fully understand the code-base, and I have about six years of experience with JavaScript. That does not seem reasonable to me at all.

If the code is not easy to read, its structure is already a monument: you can change small things, but major changes, the kind every start-up undergoes on an almost monthly basis, are as fun as a root canal. Once the code reaches a state in which, for a proficient programmer, it is harder to read than this article, doom and suffering are upon you.

Why does code become unreadable? Let’s compare code to plain text: the longer a sentence is, the easier it is for our mind to forget its beginning, and once we reach the end, we have forgotten how it started and lose the meaning of the whole sentence. You had to read the previous sentence twice because it was too long to get in one grasp? Exactly! The same goes for code. Worse, actually: the logic of code can be far more complex than any sentence from a book or a blog post, and each programmer has their own logic, which can be total gibberish to another. Not to mention that we also need to remember the logic. Sometimes we come back to it the same day, and sometimes after two months. Nobody remembers anything about their code after not looking at it for two months.

To make code readable to other humans we rely on three things:

1. Conventions

Conventions are good, but they are very limited: enforce them too little and the programmer becomes coupled to the code (no one will ever understand what they meant once they are gone). Enforce them too much and you will have hour-long debates about every space and colon (true story). The “habitable zone” is very narrow and easy to miss.


2. Comments

They are probably the most helpful, if done right. Unfortunately many programmers write their comments in the same spirit they write their code — very idiosyncratically. I do not belong to the school claiming good code needs no comments, but even beautifully commented code can still be extremely complicated.

3. “Other people know this programming language as much as I do, so they must understand my writings.”

Well… This is JavaScript:
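The snippet that followed here was an image that did not survive extraction; these are a few classic coercion examples of the kind the author likely had in mind (my substitution, not the original):

```javascript
// All valid JavaScript, all surprising: the coercion rules make
// "everyone knows the language" a weak guarantee of shared understanding.
console.log([] + []);            // ""  (both arrays coerce to empty strings)
console.log([] + {});            // "[object Object]"
console.log(0.1 + 0.2 === 0.3);  // false (floating-point rounding)
console.log(typeof NaN);         // "number"
```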


4. Tests

Tests are a devil in disguise. “How do we make sure our code is good and readable? We write more code!” I know many of you might quit this post right here, but bear with me for a few more lines: regardless of their benefit, tests are another layer of logic, more code to be read and understood. Tests try to solve this exact problem: your code is too complicated to compute its result in your brain, so you declare “well, this is what should happen in the end,” and when it doesn’t, you go digging for the problem. Your code should be simple enough that you can read a function or a line and understand the result of running it.

Your life as a programmer could be so much easier!

Solution: Radical Minimalism

I will break down this approach into practical points, but the main idea is: use LESS logic.

  • Cut 80% of your product’s features

Yes! Just like that. Simplicity, first of all, comes from the product. Make it easy for people to understand and use. Make it do one thing well, and only then add more (if there is still a need).

  • Use nothing but what you absolutely must

Do not include a single line of code (especially from libraries) unless you are 100% sure you will use it and that it is the simplest, most straightforward solution available. Need a simple chat app, and using Angular.js because its two-way binding is nice? You deserve those hours and days of debugging and debating about services vs. providers.

Side note: The JavaScript browser API is event-driven; it is made to respond when stuff (usually user input) happens. This means that events change data. Many new frameworks (Angular, Meteor) reverse this direction and make data changes trigger events. If your app is simple, you might live happily with the new mysterious layer, but if not, you get a whole new layer of complexity to understand, and your life will get exponentially more miserable. Unless your app constantly manages large amounts of data, avoid those frameworks.
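The browser's native direction described in the side note (an event changes the data, then the code updates the view explicitly) can be sketched without any framework; the `render` and `onInput` names below are mine, not the article's:

```javascript
// State → string, a pure function: trivial to read and to test.
function render(state) {
  return `Hello, ${state.name || 'stranger'}!`;
}

// An input-event handler: mutate the data, then re-render explicitly.
// No hidden binding layer sits between the event and the view.
function onInput(state, value) {
  state.name = value;
  return render(state);
}

// In a browser this would be wired up roughly as:
//   input.addEventListener('input', e => {
//     output.textContent = onInput(state, e.target.value);
//   });
console.log(onInput({ name: '' }, 'Ada')); // Hello, Ada!
```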

  • Use simplest logic possible

Say you need to show different HTML on different occasions. You could use client-side routing, with controllers and data passed to each controller that renders the HTML from a template. Or you could just use static HTML pages with normal browser navigation and update the HTML manually. Use the second.

  • Make short JavaScript files

Limit the length of your JS files to a single editor page, and make each file do one thing. Can’t cram all your glorious logic into small modules? Good: that means you should have less of it, so that other humans will understand your code in reasonable time.

  • Avoid pre-compilers and task-runners like the plague

The more layers there are between what you write and what you see, the more logic your mind needs to remember. You might think grunt or gulp help you simplify things, but then you have 30 tasks whose effects on your code you need to remember, along with how to use them, how to update them, and how to teach them to any new coder. Not to mention compiling.

Side note #1: CSS pre-compilers are OK because they have very little logic, but they help a lot in terms of readable structure compared to plain CSS. I have barely used HTML pre-compilers, so you’ll have to decide for yourself.

Side note #2: Task-runners can save you time, so if you do use them, do it wisely, keeping the minimalist mindset.

  • Use JavaScript everywhere

This one is quite specific, and I am not absolutely sure about it, but having the same language on client and server can simplify the data management between them.

  • Write more human code

Give your non-trivial variables (and functions) descriptive names. Keep lines short, but only if it does not compromise readability.
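A minimal before-and-after sketch of what descriptive names buy (both functions and the sample data are hypothetical):

```javascript
// Hard on the reader: the names say nothing about intent.
function f(a) {
  return a.filter((x) => x.p > 100).map((x) => x.n);
}

// Easier: descriptive names carry the logic, so the reader's
// memory doesn't have to.
function namesOfExpensiveProducts(products) {
  return products
    .filter((product) => product.price > 100)
    .map((product) => product.name);
}

const catalog = [
  { name: 'laptop', price: 900 },
  { name: 'pen', price: 2 },
];
console.log(namesOfExpensiveProducts(catalog)); // [ 'laptop' ]
```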

Treat your code like poetry and take it to the edge of the bare minimum.

Panel Urges Research on Geoengineering as a Tool Against Climate Change (New York Times)

Piles at a CCI Energy Solutions coal handling plant in Shelbiana, Ky. Geoengineering proposals might counteract the effects of climate change that result from burning fossil fuels, such as coal. Credit: Luke Sharrett/Getty Images

With the planet facing potentially severe impacts from global warming in coming decades, a government-sponsored scientific panel on Tuesday called for more research on geoengineering — technologies to deliberately intervene in nature to counter climate change.

The panel said the research could include small-scale outdoor experiments, which many scientists say are necessary to better understand whether and how geoengineering would work.

Some environmental groups and others say that such projects could have unintended damaging effects, and could set society on an unstoppable path to full-scale deployment of the technologies.

But the National Academy of Sciences panel said that with proper governance, which it said needed to be developed, and other safeguards, such experiments should pose no significant risk.

In two widely anticipated reports, the panel — which was supported by NASA and other federal agencies, including what the reports described as the “U.S. intelligence community” — noted that drastically reducing emissions of carbon dioxide and other greenhouse gases was by far the best way to mitigate the effects of a warming planet.

A device being developed by a company called Global Thermostat is made to capture carbon dioxide from the air. This may be one solution to counteract climate change. Credit: Henry Fountain/The New York Times

But the panel, in making the case for more research into geoengineering, said, “It may be prudent to examine additional options for limiting the risks from climate change.”

“The committee felt that the need for information at this point outweighs the need for shoving this topic under the rug,” Marcia K. McNutt, chairwoman of the panel and the editor in chief of the journal Science, said at a news conference in Washington.

Geoengineering options generally fall into two categories: capturing and storing some of the carbon dioxide that has already been emitted so that the atmosphere traps less heat, or reflecting more sunlight away from the earth so there is less heat to start with. The panel issued separate reports on each.

The panel said that while the first option, called carbon dioxide removal, was relatively low risk, it was expensive, and that even if it was pursued on a planetwide scale, it would take many decades to have a significant impact on the climate. But the group said research was needed to develop efficient and effective methods to both remove the gas and store it so it remains out of the atmosphere indefinitely.

The second option, called solar radiation management, is far more controversial. Most discussions of the concept focus on the idea of dispersing sulfates or other chemicals high in the atmosphere, where they would reflect sunlight, in some ways mimicking the effect of a large volcanic eruption.

The process would be relatively inexpensive and should quickly lower temperatures, but it would have to be repeated indefinitely and would do nothing about another carbon dioxide-related problem: the acidification of oceans.

This approach might also have unintended effects on weather patterns around the world — bringing drought to once-fertile regions, for example. Or it might be used unilaterally as a weapon by governments or even extremely wealthy individuals.

Opponents of geoengineering have long argued that even conducting research on the subject presents a moral hazard that could distract society from the necessary task of reducing the emissions that are causing warming in the first place.

“A geoengineering ‘technofix’ would take us in the wrong direction,” Lisa Archer, food and technology program director of the environmental group Friends of the Earth, said in a statement. “Real climate justice requires dealing with root causes of climate change, not launching risky, unproven and unjust schemes.”

But the panel said that society had “reached a point where the severity of the potential risks from climate change appears to outweigh the potential risks from the moral hazard” of conducting research.

Ken Caldeira, a geoengineering researcher at the Carnegie Institution for Science and a member of the committee, said that while the panel felt that it was premature to deploy any sunlight-reflecting technologies today, “it’s worth knowing more about them,” including any problems that might make them unworkable.

“If there’s a real showstopper, we should know about it now,” Dr. Caldeira said, rather than discovering it later when society might be facing a climate emergency and desperate for a solution.

Dr. Caldeira is part of a small community of scientists who have researched solar radiation management concepts. Almost all of the research has been done on computers, simulating the effects of the technique on the climate. One attempt in Britain in 2011 to conduct an outdoor test of some of the engineering concepts provoked a public outcry. The experiment was eventually canceled.

David Keith, a researcher at Harvard University who reviewed the reports before they were released, said in an interview, “I think it’s terrific that they made a stronger call than I expected for research, including field research.” Along with other researchers, Dr. Keith has proposed a field experiment to test the effect of sulfate chemicals on atmospheric ozone.

Unlike some European countries, the United States has never had a separate geoengineering research program. Dr. Caldeira said establishing a separate program was unlikely, especially given the dysfunction in Congress. But he said that because many geoengineering research proposals might also help in general understanding of the climate, agencies that fund climate research might start to look favorably upon them.

Dr. Keith agreed, adding that he hoped the new reports would “break the logjam” and “give program managers the confidence they need to begin funding.”

At the news conference, Waleed Abdalati, a member of the panel and a professor at the University of Colorado, said that geoengineering research would have to be subject to governance that took into account not just the science, “but the human ramifications, as well.”

Dr. Abdalati said that, in general, the governance needed to precede the research. “A framework that addresses what kinds of activities would require governance is a necessary first step,” he said.

Raymond Pierrehumbert, a geophysicist at the University of Chicago and a member of the panel, said in an interview that while he thought that a research program that allowed outdoor experiments was potentially dangerous, “the report allows for enough flexibility in the process to follow that it could be decided that we shouldn’t have a program that goes beyond modeling.”

Above all, he said, “it’s really necessary to have some kind of discussion among broader stakeholders, including the public, to set guidelines for an allowable zone for experimentation.”

The Risks of Climate Engineering (New York Times)

Credit: Sarah Jacoby 

THE Republican Party has long resisted action on climate change, but now that much of the electorate wants something done, it needs to find a way out of the hole it has dug for itself. A committee appointed by the National Research Council may just have handed the party a ladder.

In a two-volume report, the council is recommending that the federal government fund a research program into geoengineering as a response to a warming globe. The study could be a watershed moment because reports from the council, an arm of the National Academies that provides advice on science and technology, are often an impetus for new scientific research programs.

Sometimes known as “Plan B,” geoengineering covers a variety of technologies aimed at deliberate, large-scale intervention in the climate system to counter global warming.

Despairing at global foot-dragging, some climate scientists now believe that a turn to Plan B is inevitable. They see it as inscribed in the logic of the situation. The council’s study begins with the assertion that the “likelihood of eventually considering last-ditch efforts” to address climate destabilization grows every year.

The report is balanced in its assessment of the science. Yet by bringing geoengineering from the fringes of the climate debate into the mainstream, it legitimizes a dangerous approach.

Beneath the identifiable risks lies not only a gut reaction to the hubris of it all — the idea that humans could set out to regulate the Earth system, perhaps in perpetuity — but also a reaction to what it says about where we are today. As the committee’s chairwoman, Marcia McNutt, told The Associated Press: The public should read this report “and say, ‘This is downright scary.’ And they should say, ‘If this is our Hail Mary, what a scary, scary place we are in.’ ”

Even scarier is the fact that, while most geoengineering boosters see these technologies as a means of buying time for the world to get its act together, others promote them as a substitute for cutting emissions. In 2008, Newt Gingrich, the former House speaker, later Republican presidential candidate and an early backer of geoengineering, said: “Instead of penalizing ordinary Americans, we would have an option to address global warming by rewarding scientific invention,” adding: “Bring on the American ingenuity.”

The report, considerably more cautious, describes geoengineering as one element of a “portfolio of responses” to climate change and examines the prospects of two approaches — removing carbon dioxide from the atmosphere, and enveloping the planet in a layer of sulfate particles to reduce the amount of solar radiation reaching the Earth’s surface.

At the same time, the council makes clear that there is “no substitute for dramatic reductions in the emissions” of greenhouse gases to slow global warming and acidifying oceans.

The lowest-risk strategies for removing carbon dioxide are “currently limited by cost and at present cannot achieve the desired result of removing climatically important amounts,” the report said. On the second approach, the council said that at present it was “opposed to climate-altering deployment” of technologies to reflect radiation back into space.

Still, the council called for research programs to fill the gaps in our knowledge on both approaches, evoking a belief that we can understand enough about how the Earth system operates in order to take control of it.

Expressing interest in geoengineering has been taboo for politicians worried about climate change for fear they would be accused of shirking their responsibility to cut carbon emissions. Yet in some congressional offices, interest in geoengineering is strong. And Congress isn’t the only place where there is interest. Russia in 2013 unsuccessfully sought to insert a pro-geoengineering statement into the latest report of the Intergovernmental Panel on Climate Change.

Early work on geoengineering has given rise to one of the strangest paradoxes in American politics: enthusiasm for geoengineering from some who have attacked the idea of human-caused global warming. The Heartland Institute, infamous for its billboard comparing those who support climate science to the Unabomber, Theodore J. Kaczynski, featured an article in one of its newsletters from 2007 describing geoengineering as a “practical, cost-effective global warming strategy.”

Some scholars associated with conservative think tanks like the Hoover Institution and the Hudson Institute have written optimistically about geoengineering.

Oil companies, too, have dipped their toes into the geoengineering waters, with Shell, for instance, having funded research into a scheme to put lime into seawater so it absorbs more carbon dioxide.

With half of Republican voters favoring government action to tackle global warming, any Republican administration would be tempted by the technofix to beat all technofixes.

For some, instead of global warming’s being proof of human failure, engineering the climate would represent the triumph of human ingenuity. While climate change threatens to destabilize the system, geoengineering promises to protect it. If there is such a thing as a right-wing technology, geoengineering is it.

President Obama has been working assiduously to persuade the world that the United States is at last serious about Plan A — winding back its greenhouse gas emissions. The suspicions of much of the world would be reignited if the United States were the first major power to invest heavily in Plan B.

Scientists urge global ‘wake-up call’ to deal with climate change (The Guardian)

Climate change has advanced so rapidly that work must start on unproven technologies now, admits US National Academy of Sciences

Series of mature thunderstorms located near the Parana River in southern Brazil.

‘The likelihood of eventually considering last-ditch efforts to address damage from climate change grows with every year of inaction on emissions control,’ says US National Academy of Sciences report. Photograph: ISS/NASA

Climate change has advanced so rapidly that the time has come to look at options for a planetary-scale intervention, the National Academy of Sciences said on Tuesday.

The scientists were categorical that geoengineering should not be deployed now, and was too risky to ever be considered an alternative to cutting the greenhouse gas emissions that cause climate change.

But it was better to start research on such unproven technologies now – to learn more about their risks – than to be stampeded into climate-shifting experiments in an emergency, the scientists said.

With that, a once-fringe topic in climate science moved towards the mainstream – despite the repeated warnings from the committee that cutting carbon pollution remained the best hope for dealing with climate change.

“That scientists are even considering technological interventions should be a wake-up call that we need to do more now to reduce emissions, which is the most effective, least risky way to combat climate change,” Marcia McNutt, the committee chair and former director of the US Geological Survey, said.

Asked whether she foresaw a time when scientists would eventually turn to some of the proposals studied by the committee, she said: “Gosh, I hope not.”

The two-volume report, produced over 18 months by a team of 16 scientists, was far more guarded than a similar British exercise five years ago which called for an immediate injection of funds to begin research on climate-altering interventions.

The scientists were so sceptical about geo-engineering that they dispensed with the term, opting for “climate intervention”. Engineering implied a measure of control the technologies do not have, the scientists said.

But the twin US reports – Climate Intervention: Carbon Dioxide Removal and Reliable Sequestration and Climate Intervention: Reflecting Sunlight to Cool the Earth – could boost research efforts at a limited scale.

The White House and committee leaders in Congress were briefed on the report’s findings this week.

Bill Gates, among others, argues that the technology, which is still confined to computer models, has enormous potential, and he has funded research at Harvard. The report said scientific research agencies should begin carrying out co-ordinated research.

But geo-engineering remains extremely risky and relying on a planetary hack – instead of cutting carbon dioxide emissions – is “irresponsible and irrational”, the report said.

The scientists looked at two broad planetary-scale technological fixes for climate change: sucking carbon dioxide emissions out of the atmosphere, or carbon dioxide removal, and increasing the amount of sunlight reflected away from the earth and back into space, or albedo modification.

Albedo modification – injecting sulphur dioxide into the atmosphere to increase the number of reflective particles and so bounce more sunlight back into space – is seen as a far riskier proposition.

Tinkering with reflectivity would merely mask the symptoms of climate change, the report said. It would do nothing to reduce the greenhouse gas emissions that cause climate change.

The world would have to commit to continuing a course of albedo modification for centuries on end – or watch climate change come roaring back.

“It’s hard to unthrow that switch once you embark on an albedo modification approach. If you walk back from it, you stop masking the effects of climate change and you unleash the accumulated effects rather abruptly,” Waleed Abdalati, a former Nasa chief scientist who was on the panel, said.

More ominously, albedo modification could alter the climate in new and additional ways from which there would be no return. “It doesn’t go back, it goes different,” he said.

The results of such technologies are still far too unpredictable on a global scale, McNutt said, and she feared they could trigger conflicts, because the outcomes of climate interventions would vary enormously around the globe.

“Kansas may be happy with the answer, but Congo may not be happy at all because of changes in rainfall. It may be quite a bit worse for the Arctic, and it’s not going to address at all ocean acidification,” she said. “There are all sorts of reasons why one might not view an albedo-modified world as an improvement.”

The report also warned that offering the promise of a quick fix to climate change through planet hacking could discourage efforts to cut the greenhouse gas emissions that cause climate change.

“The message is that reducing carbon dioxide emissions is by far the preferable way of addressing the problem,” said Raymond Pierrehumbert, a University of Chicago climate scientist, who served on the committee writing the report. “Dimming the sun by increasing the earth’s reflectivity shouldn’t be viewed as a cheap substitute for reducing carbon dioxide emissions. It is a very poor and distant third, fourth, or even fifth choice. It is way down on the list of things you want to do.”

But geoengineering has now landed on the list.

Climate change was advancing so rapidly that a climate emergency – such as widespread crop failure – might propel governments into trying such large-scale interventions.

“The likelihood of eventually considering last-ditch efforts to address damage from climate change grows with every year of inaction on emissions control,” the report said.

If that were the case, it would be far better to be prepared for such eventualities by carrying out research now.

The report gave a cautious go-ahead to technologies to suck carbon dioxide out of the air, finding them generally low-risk – although they were prohibitively expensive.

The report discounted the idea of seeding the ocean with iron filings to create plankton blooms that absorb carbon dioxide.

But it suggested carbon-sucking technologies could be considered as part of a portfolio of responses to fight climate change.


Carbon-sucking technologies, such as these ‘artificial forests’, could in future be considered to fight climate change – but reducing carbon dioxide emissions now is by far the preferable way of addressing the problem. Photograph: Guardian

Carbon dioxide removal would involve capturing the gas from the atmosphere and pumping it underground at high pressure – similar to technology that is only now being tested at a small number of coal plants.

Sucking carbon dioxide out of the air is much more challenging than capturing it from a power plant – which is already prohibitively expensive, the report said. But it still had a place.

“I think there is a good case that eventually this might have to be part of the arsenal of weapons we use against climate change,” said Michael Oppenheimer, a climate scientist at Princeton University, who was not involved with the report.

Drawing a line between the two technologies – carbon dioxide removal and albedo modification – was seen as one of the important outcomes of Tuesday’s report.

The risks and potential benefits of the two are diametrically opposed, said Ken Caldeira, an atmospheric scientist at Carnegie Institution’s Department of Global Ecology and a geoengineering pioneer, who was on the committee.

“The primary concern about carbon dioxide removal is how much does it cost,” he said. “There are no sort of novel, global existential dilemmas that are raised. The main aim of the research is to make it more affordable, and to make sure it is environmentally acceptable.”

In the case of albedo reflection, however, the issue is risk. “A lot of those ideas are relatively cheap,” he said. “The question isn’t about direct cost. The question is, What bad stuff is going to happen?”

There are fears such interventions could lead to unintended consequences that are even worse than climate change – widespread crop failure and famine, clashes between countries over who controls the skies.

But Caldeira argued that it made sense to study those consequences now. “If there are real show stoppers and it is not going to work, it would be good to know that in advance and take it off the table, so people don’t do something rash in an emergency situation,” he said.

Spraying sulphur dioxide into the atmosphere could lower temperatures – at least according to computer models and real-life experiences following major volcanic eruptions.

But the cooling would be temporary, and it would do nothing to right ocean chemistry, which has been thrown off kilter by the carbon dioxide the oceans absorb.

“My view of albedo modification is that it is like taking pain killers when you need surgery for cancer,” said Pierrehumbert. “It’s ignoring the problem. The problem is still growing though and it is going to come back and get you.”

Quantum computers could revolutionize information theory (Fapesp)

30 January 2015

By Diego Freire

Agência FAPESP – The prospect of quantum computers, with processing power far beyond that of today’s machines, has been driving advances in one of science’s most versatile fields, one with applications across virtually every area of knowledge: information theory. To discuss this and other prospects, the Institute of Mathematics, Statistics and Scientific Computing (Imecc) of the University of Campinas (Unicamp) held the SPCoding School from 19 to 30 January.

The event took place under FAPESP’s São Paulo School of Advanced Science (ESPCA) program, which funds short courses on advanced topics in science and technology in the state of São Paulo.

The information processed by today’s widely used computers is based on the bit, the smallest unit of data that can be stored or transmitted. Quantum computers, by contrast, work with qubits, which obey the rules of quantum mechanics, the branch of physics concerned with phenomena at or below the atomic scale. Because of this, such machines can carry out a vastly larger number of calculations simultaneously.
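The contrast between bits and qubits can be made concrete with a few lines of linear algebra. This is an illustrative sketch only (the variable names are ours): a qubit is modeled as a normalized pair of complex amplitudes, and describing n qubits takes 2^n amplitudes, which is where the exponential gap over classical bits comes from.

```python
import numpy as np

# Illustrative sketch of a qubit as a normalized vector of complex
# amplitudes. This is a classical simulation, which is precisely what
# stops scaling once there are many qubits.
ket0 = np.array([1, 0], dtype=complex)                       # the bit 0
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

psi = H @ ket0             # equal superposition of 0 and 1
probs = np.abs(psi) ** 2   # measurement probabilities: [0.5, 0.5]

# n qubits require 2**n amplitudes to describe, so the state space
# explodes exponentially with machine size:
n = 20
state_size = 2 ** n        # already over a million amplitudes for 20 qubits
```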

“This quantum understanding of information adds a whole new layer of complexity to coding. At the same time that complex analyses – which would take decades, centuries or even thousands of years on ordinary computers – could be carried out in minutes by quantum computers, the technology would also threaten the secrecy of information that has not been properly protected against this kind of novelty,” Sueli Irene Rodrigues Costa, a professor at IMECC, told Agência FAPESP.

The greatest threat quantum computers pose to today’s cryptography is their ability to break the codes used to protect important information, such as credit card data. Avoiding that risk requires developing cryptographic systems whose security takes the power of quantum computing into account.

“Information theory and coding need to stay one step ahead of the commercial use of quantum computing,” said Rodrigues Costa, who leads the Thematic Project “Information security and reliability: theory and practice,” supported by FAPESP.

“This is post-quantum cryptography. As was shown in the late 1990s, today’s cryptographic procedures will not survive quantum computers, because they are not secure enough. And this urgency to develop solutions ready for the power of quantum computing is also pushing information theory forward in many directions,” she said.
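The threat described above can be illustrated with a toy RSA example. The numbers are deliberately tiny and the brute-force loop stands in for Shor’s quantum factoring algorithm; real RSA moduli are hundreds of digits long, which is exactly what a quantum computer would change.

```python
# Toy RSA with tiny primes, showing why efficient factoring breaks it.
# Numbers are illustrative only; real keys use primes hundreds of digits
# long, and the brute-force loop below stands in for Shor's algorithm.
p, q = 61, 53
n = p * q                        # public modulus (3233)
e = 17                           # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)              # private exponent (Python 3.8+)

msg = 42
cipher = pow(msg, e, n)          # encryption with the public key
assert pow(cipher, d, n) == msg  # legitimate decryption works

# An attacker who can factor n recovers the private key at once:
f = next(k for k in range(2, n) if n % k == 0)
d_attacker = pow(e, -1, (f - 1) * (n // f - 1))
assert pow(cipher, d_attacker, n) == msg  # secrecy is gone
```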

Some of these solutions were addressed throughout the SPCoding School program, many of them aimed at more efficient systems for classical computing, such as the use of error-correcting codes and of lattices in cryptography. For Rodrigues Costa, the advance of information theory in parallel with the development of quantum computing will trigger revolutions in several areas of knowledge.

“Just as information theory has multiple applications today, quantum coding would also lift several areas of science to new levels by enabling even more precise computational simulations of the physical world, handling an exponentially larger number of variables than classical computers can,” said Rodrigues Costa.

Information theory concerns the quantification of information and spans fields such as mathematics, electrical engineering and computer science. Its pioneer was the American Claude Shannon (1916-2001), who was the first to treat communication as a mathematical problem.

Revolutions under way

While it prepares for quantum computers, information theory is already driving major changes in the encoding and transmission of information. Amin Shokrollahi, of the École Polytechnique Fédérale de Lausanne, in Switzerland, presented new coding techniques at the SPCoding School for tackling problems such as noise in information and the high energy consumption of data processing, including chip-to-chip communication inside devices.

Shokrollahi is known in the field for having invented Raptor codes and co-invented Tornado codes, which are used in mobile data transmission standards, with implementations in wireless systems, satellites and IPTV, the method of delivering television content over the Internet Protocol (IP).

“The growth in the volume of digital data and the need for ever-faster communication increase both the susceptibility to various kinds of noise and the consumption of energy. We need to look for new solutions in this scenario,” he said.

Shokrollahi also presented innovations developed at the Swiss company Kandou Bus, where he is director of research. “We use special algorithms to encode the signals, which are all transferred simultaneously until a decoder recovers the original signals. All of this is done while keeping neighboring wires from interfering with one another, yielding significantly less noise. The systems also reduce chip size, increase transmission speed and lower energy consumption,” he explained.
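The noise problem these codes address can be illustrated with a much simpler ancestor of Raptor and Tornado codes: the classic Hamming(7,4) code, which adds three parity bits to every four data bits so that any single flipped bit can be located and corrected. This is a toy sketch of error correction in general, not of Shokrollahi’s or Kandou’s actual algorithms.

```python
# Hamming(7,4): 4 data bits -> 7-bit codeword; corrects any single bit flip.
# Toy illustration of error-correcting codes, not the Raptor/Tornado/Kandou
# schemes discussed in the article.

def encode(data):
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4           # parity over codeword positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4           # parity over codeword positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4           # parity over codeword positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(code):
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3  # 1-based position of the error; 0 = clean
    if pos:
        c[pos - 1] ^= 1         # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

word = encode([1, 0, 1, 1])
word[4] ^= 1                    # simulate noise: flip one bit in transit
recovered = decode(word)        # -> [1, 0, 1, 1], the original data
```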

According to Rodrigues Costa, similar solutions are being developed for many technologies in widespread use.

“Mobile phones, for example, have gained greatly in processing power and versatility, but one of the most frequent complaints among users is that the battery doesn’t last. One strategy is to find more efficient ways of coding in order to save energy,” she said.

Biological applications

Technological problems are not the only ones that information theory can address or solve. Vinay Vaishampayan, a professor at the City University of New York, in the United States, chaired the SPCoding School panel “Information Theory, Coding Theory and the Real World,” which covered a range of applications of codes in society – among them, biological ones.

“There is no single information theory, and its approaches, computational and probabilistic, can be applied to practically every area of knowledge. In the panel we discussed the many research possibilities open to anyone interested in studying these interfaces between codes and the real world,” he told Agência FAPESP.

Vaishampayan singled out biology as a field of great potential in this scenario. “Neuroscience raises important questions that information theory can help answer. We still do not know in depth how neurons communicate with one another or how the brain works as a whole, and neural networks are a very rich field of study from the mathematical point of view as well, as is molecular biology,” he said.

That is because, according to Max Costa, a professor at Unicamp’s School of Electrical and Computer Engineering and one of the speakers, living beings are also made of information.

“We are encoded in the DNA of our cells. Uncovering the secret of that code – the mechanism behind the mappings that are made and recorded in that context – is a problem of enormous interest for a deeper understanding of the process of life,” he said.

For Marcelo Firer, a professor at Imecc and coordinator of the SPCoding School, the event opened up new research possibilities for students and researchers from many fields.

“Participants shared opportunities for engagement around these and many other applications of information and coding theory. The offerings ranged from introductory courses, aimed at students with a solid mathematical background but not necessarily familiar with coding, to more advanced courses, along with lectures and panel discussions,” said Firer, a member of the steering committee for FAPESP’s Computer Science and Engineering area.

About 120 students from 70 universities and 25 countries took part. Foreign speakers included researchers from the California Institute of Technology (Caltech), Maryland University and Princeton University, in the United States; the Chinese University of Hong Kong, in China; Nanyang Technological University, in Singapore; Technische Universiteit Eindhoven, in the Netherlands; the Universidade do Porto, in Portugal; and Tel Aviv University, in Israel.

More information at

From the Concorde to Sci-Fi Climate Solutions (Truthout)

Thursday, 29 January 2015 00:00 By Almuth Ernsting, Truthout

The interior of the Concorde aircraft at the Scotland Museum of Flight. (Photo: Magnus Hagdorn)

The interior of the Concorde aircraft at the Scotland Museum of Flight. (Photo: Magnus Hagdorn)

Touting “sci-fi climate solutions” – untested technologies not really scalable to the dimensions of our climate change crisis – dangerously delays the day when we actually reduce greenhouse gas emissions.

Last week, I took my son to Scotland’s Museum of Flight. Its proudest exhibit: a Concorde. To me, it looked stunningly futuristic. “How old,” remarked my son, looking at the confusing array of pre-digital controls in the cockpit. Watching the accompanying video – “Past Dreams of the Future” – it occurred to me that the story of the Concorde stands as a symbol for two of the biggest obstacles to addressing climate change.

The Concorde must rank among the most wasteful ways of guzzling fossil fuels ever invented. No other form of transport is as destructive to the climate as aviation – yet the Concorde burned almost five times as much fuel per person per mile as a standard aircraft. Moreover, by emitting pollutants straight into the lower stratosphere, the Concorde contributed to ozone depletion. At the time of the Concorde’s first test flight in 1969, little was known about climate change and the ozone hole had not yet been discovered. Yet by the time the Concorde was grounded – for purely economic reasons – in 2003, concerns about its impact on the ozone layer had been voiced for 32 years, and the Intergovernmental Panel on Climate Change (IPCC) had published its first report 13 years earlier.

The Concorde’s history illustrates how the elites will stop at nothing when pursuing their interests or desires. No damage to the atmosphere and no level of noise-induced misery to those living under Concorde flight paths were treated as bad enough to warrant depriving the richest of a glamorous toy.

If this first “climate change lesson” from the Concorde seems depressing, the second will be even less comfortable for many.

Back in 1969, the UK’s technology minister marveled at Concorde’s promises: “It’ll change the shape of the world; it’ll shrink the globe by half . . . It replaces in one step the entire progress made in aviation since the Wright Brothers in 1903.”

Few would have believed at that time that, from 2003, no commercial flight would reach even half the speed that had been achieved back in the 1970s.

The Concorde remained as fast – yet as inefficient and uneconomical – as it had been from its commercial inauguration in 1976 – despite vast amounts of public and industry investment. The term “Concorde fallacy” entered British dictionaries: “The idea that you should continue to spend money on a project, product, etc. in order not to waste the money or effort you have already put into it, which may lead to bad decisions.”

The lessons for those who believe in overcoming climate change through technological progress are sobering: It’s not written in the stars that every technology dreamed up can be realized, nor that, with enough time and money, every technical problem will be overcome and that, over time, every new technology will become better, more efficient and more affordable.

Yet precisely such faith in technological progress informs mainstream responses to climate change, including the response by the IPCC. At a conference last autumn, I listened to a lead author of the IPCC’s latest assessment report. His presentation began with a depressing summary of the escalating climate crisis and the massive rise in energy use and carbon emissions, clearly correlated with economic growth. His conclusion was highly optimistic: Provided we make the right choices, technological progress offers a future with zero-carbon energy for all, with ever greater prosperity and no need for economic growth to end. This, he illustrated with some drawings of what we might expect by 2050: super-grids connecting abundant nuclear and renewable energy sources across continents, new forms of mass transport (perhaps modeled on Japan’s magnetic levitation trains), new forms of aircraft (curiously reminiscent of the Concorde) and completely sustainable cars (which looked like robots on wheels). The last and most obscure drawing in his presentation was unfinished, to remind us that future technological progress is beyond our capacity to imagine; the speaker suggested it might be a printer printing itself in a new era of self-replicating machines.

These may represent the fantasies of just one of many lead authors of the IPCC’s recent report. But the IPCC’s 2014 mitigation report itself relies on a large range of techno-fixes, many of which are a long way from being technically, let alone commercially, viable. Climate justice campaigners have condemned the IPCC’s support for “false solutions” to climate change. But the term “false solutions” does not distinguish between techno-fixes that are real and scalable, albeit harmful and counterproductive on the one hand, and those that remain in the realm of science fiction, or threaten to turn into another “Concorde fallacy,” i.e. to keep guzzling public funds with no credible prospect of ever becoming truly viable. Let’s call the latter “sci-fi solutions.”

The most prominent, though by no means only, sci-fi solution espoused by the IPCC is BECCS – bioenergy with carbon capture and storage. According to their recent report, the vast majority of “pathways” or models for keeping temperature rise below 2 degrees Celsius rely on “negative emissions.” Although the report included words of caution, pointing out that such technologies are “uncertain” and “associated with challenges and risks,” the conclusion is quite clear: Either carbon capture and storage, including BECCS, is introduced on a very large scale, or the chances of keeping global warming within 2 degrees Celsius are minimal. In the meantime, the IPCC’s chair, Rajendra Pachauri, and the co-chair of the panel’s Working Group on Climate Change Mitigation, Ottmar Edenhofer, publicly advocate BECCS without any notes of caution about uncertainties – referring to it as a proven way of reducing carbon dioxide levels and thus global warming. Not surprisingly therefore, BECCS has even entered the UN climate change negotiations. The recent text, agreed at the Lima climate conference in December 2014 (“Lima Call for Action”), introduces the terms “net zero emissions” and “negative emissions,” i.e. the idea that we can reliably suck large amounts of carbon (those already emitted from burning fossil fuels) out of the atmosphere. Although BECCS is not explicitly mentioned in the Lima Call for Action, the wording implies support for it because it is treated as the key “negative emissions” technology by the IPCC.

If BECCS were to be applied at a large scale in the future, then we would have every reason to be alarmed. According to a scientific review, attempting to capture 1 billion tons of carbon through BECCS (far less than many of the “pathways” considered by the IPCC presume) would require 218 to 990 million hectares of switchgrass plantations (or similar scale plantations of other feedstocks, including trees), 1.6 to 7.4 trillion cubic meters of water a year, and 75 percent more than all the nitrogen fertilizers used worldwide (which currently stands at 1 billion tons according to the “conservative” estimates in many studies). By comparison, just 30 million hectares of land worldwide have been converted to grow feedstock for liquid biofuels so far. Yet biofuels have already become the main cause of accelerated growth in demand for vegetable oils and cereals, triggering huge volatility and rises in the price of food worldwide. And by pushing up palm oil prices, biofuels have driven faster deforestation across Southeast Asia and increasingly in Africa. As a result of the ethanol boom, more than 6 million hectares of US land have been planted with corn, causing prairies and wetlands to be plowed up. This destruction of ecosystems, coupled with the greenhouse gas intensive use of fertilizers, means that biofuels overall are almost certainly worse for the climate than the fossil fuels they are meant to replace. There are no reasons to believe that the impacts of BECCS would be any more benign. And they would be on a much larger scale.
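Putting the land figures quoted in that paragraph side by side gives a sense of scale. The numbers below are the ones cited above; the arithmetic is ours.

```python
# Land demanded by ~1 Gt C/yr of BECCS vs. land already under biofuel crops,
# using the figures quoted in the text.
beccs_land_low = 218e6     # hectares, low end of the review's estimate
beccs_land_high = 990e6    # hectares, high end
biofuel_land_today = 30e6  # hectares converted to liquid-biofuel feedstock

ratio_low = beccs_land_low / biofuel_land_today    # ~7x today's biofuel area
ratio_high = beccs_land_high / biofuel_land_today  # ~33x
```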

Capturing carbon takes a lot of energy, hence CCS requires around one-third more fuel to be burned to generate the same amount of energy. And sequestering captured carbon is a highly uncertain business. So far, there have been three large-scale carbon sequestration experiments. The longest-standing of these, the Sleipner field carbon sequestration trial in the North Sea, has been cited as proof that carbon dioxide can be sequestered reliably under the seabed. Yet in 2013, unexpected scars and fractures were found in the reservoir and a lead researcher concluded: “We are saying it is very likely something will come out in the end.” Another one of the supposedly “successful,” if much shorter, trials also raised “interesting questions,” according to the researchers: Carbon dioxide migrated further upward in the reservoir than predicted, most likely because injecting the carbon dioxide caused fractures in the cap rock.
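The energy penalty mentioned at the start of the paragraph above interacts with the capture rate in a way worth spelling out. A back-of-envelope sketch, where the 90% capture rate is our assumption rather than a figure from the report:

```python
# Back-of-envelope sketch of the CCS energy penalty described above.
# The 90% capture rate is an assumption, not a figure from the report.
fuel_penalty = 1 / 3   # one-third more fuel for the same useful energy
capture_rate = 0.90    # hypothetical fraction captured at the stack

fuel_burned = 1 + fuel_penalty             # relative to a non-CCS plant
emitted = fuel_burned * (1 - capture_rate)
reduction = 1 - emitted                    # ~0.87 vs. the non-CCS plant
# The extra fuel eats into the nominal 90% capture figure.
```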

There are thus good reasons to be alarmed about the prospect of large-scale bioenergy with CCS. Yet BECCS isn’t for real.

While the IPCC and world leaders conclude that we really need to use carbon capture and storage, including biomass, here’s what is actually happening: The Norwegian government, once proud of being a global pioneer of CCS, has pulled the plug on the country’s first full-scale CCS project after a scathing report from a public auditor. The Swedish state-owned energy company Vattenfall has shut down its CCS demonstration plant in Germany, the only plant worldwide testing a particular and supposedly promising carbon capture technology. The government of Alberta has dropped its previously enthusiastic support for CCS because it no longer sees it as economically viable.

True, 2014 has seen the opening of the world’s largest CCS power station, after SaskPower retrofitted one unit of their Boundary Dam coal power station in Saskatchewan to capture carbon dioxide. But Boundary Dam hardly confirms the techno-optimist’s hopes. The 100-megawatt unit cost approximately $1.4 billion to build – more than twice the cost of a much larger (non-CCS) 400-megawatt gas power station built by SaskPower in 2009. It became viable thanks only to public subsidies and to a contract with the oil company Cenovus, which agreed to buy the carbon dioxide for the next decade in order to inject it into an oil well to facilitate extraction of more hard-to-reach oil – a process called enhanced oil recovery (EOR). The supposed “carbon dioxide savings” predictably ignore all of the carbon dioxide emissions from burning that oil. But even with such a nearby oil field suitable for EOR, SaskPower had to make the plant far smaller than originally planned so as to avoid capturing more carbon dioxide than they could sell.

If CCS with fossil fuels is reminiscent of the Concorde fallacy, large-scale BECCS is entirely in the realm of science fiction. The supposedly most “promising” technology has never been tested in a biomass power plant and has so far proven uneconomical with coal. Add to that the fact that biomass power plants need more feedstock and are less efficient and more expensive to run than coal power plants, and a massive-scale BECCS program becomes even more implausible. And then add to that the question of scale: Sequestering 1 billion tons of carbon a year would produce a volume of highly pressurized liquid carbon dioxide larger than the global volume of oil extracted annually. It would require governments and/or companies stumping up the money to build an infrastructure larger than that of the entire global oil industry – without any proven benefit.
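The volume comparison in that paragraph can be checked roughly. All inputs below are round-number assumptions of ours (the density of pressurized CO2, current world oil output), so the point is the order of magnitude, not the exact ratio:

```python
# Rough scale check for the volume claim above. All inputs are round-number
# assumptions, not figures from the report.
carbon_t = 1e9                       # 1 billion tonnes of carbon per year
co2_t = carbon_t * 44 / 12           # CO2 weighs 44/12 as much as its carbon
co2_density = 0.8                    # t/m^3 for pressurized CO2 (assumed)
co2_volume_m3 = co2_t / co2_density  # ~4.6 billion m^3 per year

barrels_per_day = 95e6               # rough current world oil output (assumed)
oil_volume_m3 = barrels_per_day * 365 * 0.159  # ~5.5 billion m^3 per year
# Same order of magnitude: moving and burying that much CO2 implies an
# infrastructure comparable to the entire global oil industry's.
```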

This doesn’t mean that we won’t see any little BECCS projects in niche circumstances. One of these already exists: ADM is capturing carbon dioxide from ethanol fermentation in one of its refineries for use in CCS research. Capturing carbon dioxide from ethanol fermentation is relatively simple and cheap. If there happens to be some half-depleted nearby oil field suitable for enhanced oil recovery, some ethanol “CCS” projects could pop up here and there. But this has little to do with a “billion ton negative emissions” vision.

BECCS thus appears as one, albeit a particularly prominent, example of baseless techno-optimism leading to dangerous policy choices. Dangerous, that is, because hype about sci-fi solutions becomes a cover for the failure to curb fossil fuel burning and ecosystem destruction today.

Data monitoring and analysis – The crisis in São Paulo’s water sources (Probabit)

Situation on 25.1.2015

4.2 millimeters of rain on 24.1.2015 over São Paulo’s reservoirs (weighted average).

305 billion liters (13.60%) of water in storage. Over 24 hours, the volume rose by 4.4 billion liters (0.19%).

134 days until all stored water runs out, assuming rainfall of 996 mm/year and the system’s current efficiency.

66% is the reduction in consumption needed to balance the system under current conditions, given 33% losses in distribution.
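The depletion figure above follows from simple stock-and-flow accounting. A minimal sketch, backing the implied daily deficit out of the page’s own numbers (the function and its names are ours):

```python
# Stock-and-flow sketch of the depletion estimate above. The daily net
# outflow is backed out of the page's own figures, not an official number.
stock_liters = 305e9   # water currently in storage
days_to_empty = 134    # the page's projection at 996 mm/yr of rain
net_outflow = stock_liters / days_to_empty  # ~2.3 billion L/day deficit

def days_until_empty(stock, withdrawals_per_day, inflow_per_day):
    """Days left if withdrawals exceed inflow; infinite otherwise."""
    deficit = withdrawals_per_day - inflow_per_day
    return float('inf') if deficit <= 0 else stock / deficit
```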

Understanding the crisis

How to read this chart

The points on the chart show 4,040 one-year intervals of accumulated rainfall and the change in total water stock (from January 1 of 2003/2004 up to today). The pattern shows that more rain pushes the stock up and less rain pushes it down, as one would expect.

This chart and the others on this page always refer to São Paulo’s total water storage capacity (2.24 trillion liters), that is, the combined reservoirs of the Cantareira, Alto Tietê, Guarapiranga, Cotia, Rio Grande and Rio Claro systems. Want to explore the data?

The region of accumulated rainfall between 1,400 mm and 1,600 mm per year concentrates most of the points observed since 2003. That is the usual rainfall pattern the system was designed for. In this region the system operates without large deviations from its equilibrium: at most 15% up or down over a year. By using the one-year variation as its reference, this view of the data removes the seasonal rainfall cycle and highlights climatic variations of larger amplitude. See the year-by-year patterns.

A second layer of information in the same chart is the risk zones. The red zone is bounded by the current water stock in %. Every point inside that area (its frequency is shown on the right) therefore represents a situation that, if repeated, would lead the system to collapse in less than a year. The yellow zone shows the incidence of cases that, if repeated, would shrink the stock. The system will only truly recover if new points appear above the yellow band.

To put the present moment in context and give a sense of trend, points connected in blue highlight the reading added today (accumulated rainfall and the change between today and the same day last year) and the readings from 30, 60 and 90 days ago (in progressively lighter shades).

Discussion based on a simple model

Fitting a linear model to the observed cases shows a reasonable correlation between accumulated rainfall and the change in the water stock, as expected.

At the same time, the wide scatter in the system’s behavior is clear, especially in the rainfall range between 1,400 mm and 1,500 mm. Above 1,600 mm there are two well-separated paths; the lower one corresponds to the period between 2009 and 2010, when the reservoirs were full and the excess rain could not be stored.

Besides deliberately more or less efficient management of the available water, combined variations in consumption, in losses and in the effectiveness of water capture may all contribute to the observed fluctuations. There are, however, no data that would allow us to examine the effect of each of these variables separately.
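The linear model discussed above can be sketched in a few lines. The data points here are invented for illustration, chosen so the fitted equilibrium lands at the 1,500 mm design value the page cites elsewhere:

```python
import numpy as np

# Linear fit of one-year stock change against accumulated rainfall, as in
# the page's simple model. These data points are invented for illustration.
rain = np.array([1000, 1200, 1400, 1500, 1600, 1800])  # mm in 12 months
delta = np.array([-25, -15, -5, 0, 5, 15])             # stock change, % pts

slope, intercept = np.polyfit(rain, delta, 1)
equilibrium_rain = -intercept / slope  # rainfall at which the stock is stable
# With these numbers the fit crosses zero at 1,500 mm/yr.
```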

Simulation 1: Effect of increasing the water stock

In this simulation, the additional reserve of the Billings reservoir, with a volume of 998 billion litres (already discounting the "potable" arm of the Rio Grande reservoir), was hypothetically added to the supply system.

Increasing the available stock does not change the equilibrium point, but it does change the slope of the line that represents the relationship between rainfall and stock variation. The difference in slope between the blue (simulated) and red (real) lines shows the effect of enlarging the stock.

If the Billings reservoir were not today a gigantic sewage dump, we might be out of the critical situation. It is worth stressing, however, that merely increasing the stock cannot stave off scarcity indefinitely if rainfall persists below the equilibrium point.

Simulation 2: Effect of improving efficiency

The only way to keep the stock stable when rain becomes scarcer is to change the system's "efficiency curve". In other words, it is necessary to consume less and adapt to less water entering the system.

The blue line in the chart alongside marks the axis around which the points would need to fluctuate for the system to balance on an annual supply of 1,200 mm of rain.

Efficiency can be improved by cutting consumption, cutting losses and improving water-capture technology (for example, by restoring the riparian forests and springs around the water sources).

If the situation seen from 2013 to 2015 persists, with rainfall around 1,000 mm, it will be necessary to reach an efficiency curve far beyond anything yet achieved in practice, above even the best cases ever observed.

With the "design" equilibrium around 1,500 mm, the arithmetic goes roughly like this: Sabesp loses 500 mm (33% of the water distributed) and the population consumes 1,000 mm. To reach equilibrium quickly at 1,000 mm, consumption would have to fall to 500 mm, since the losses cannot be avoided quickly and occur before consumption.

If one third of the distributed water were not systematically lost, there would be no crisis. The 500 mm of rain wasted every year by the precariousness of the distribution system are not missed when 1,500 mm falls, but at 1,000 mm every litre thrown away on one side is a litre that must be saved on the other.

Simulation 3: Current efficiency and the savings required

The current efficiency is estimated from the last 120 observations of the system's behaviour.

The current efficiency curve makes it possible to estimate the system's present equilibrium point (the highlighted red dot).

The blue dot marks the latest observation of accumulated annual rainfall. The distance between the two measures the size of the imbalance.

Merely to stop the system losing water, the withdrawal flow must be cut by 49%. Since that flow includes all the losses, relying on reduced consumption alone would require savings of 66% if losses run at 33%, or 56% if losses run at 17%.

It seems incredible that the system's efficiency should be so low in the middle of such a serious crisis. Is the attempt to curb consumption actually increasing it? Do smaller, shallower volumes evaporate more? Have people still not grasped the scale of the disaster?


Assuming that no new water stocks will be added in the short term, the prognosis of whether and when the water will run out depends on the amount of rain and on the system's efficiency.

The chart shows how many days of water remain as a function of accumulated rainfall, considering two efficiency curves: the average one and the current one (estimated from the last 120 days).

The highlighted point takes the most recent observation of rainfall accumulated over the year and shows how many days of water remain if the current rain and efficiency conditions persist.
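The days-remaining calculation can be sketched as follows, assuming a linear efficiency curve with hypothetical coefficients (the site's own curves are fitted to the real series):

```python
# Sketch: days of water remaining under persisting conditions.
# All coefficients below are illustrative, not the site's actual estimates.
def days_remaining(stock_pct, annual_rain_mm, slope, intercept):
    """Days until the stock hits zero, given a linear efficiency curve
    fitted as: annual stock variation = slope * rain + intercept."""
    daily_delta = (slope * annual_rain_mm + intercept) / 365.0
    if daily_delta >= 0:
        return float("inf")  # stock stable or rising: no exhaustion date
    return stock_pct / -daily_delta

# 10% stock left, 1,000 mm of accumulated rain, hypothetical curve:
print(days_remaining(10.0, 1000.0, 0.05, -70.0))
```

A curve fitted to a wetter or more efficient regime shifts `daily_delta` upward and pushes the exhaustion date out, which is exactly what the two plotted curves compare.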

The prognosis is a reference that shifts with each new observation and has no defined probability. It is a projection meant to make visible the conditions needed to escape collapse.

Remember, though, that São Paulo's historical average rainfall is 1,441 mm a year: a curve crossing that threshold means a system with more than a 50% chance of collapsing in under a year. Are we capable of averting the disaster?

The data

The starting point is the data released daily by Sabesp. The updated original data series is available here.

There are, however, two important limitations in these data that can distort any reading of reality: 1) Sabesp uses only percentages to refer to reservoirs with very different total volumes; 2) the entry of new volumes does not change the base over which those percentages are calculated.

It was therefore necessary to correct the percentages of the original series against the current total volume, since volumes that were once inaccessible have become accessible and, let's face it, were always there in the reservoirs. The corrected series can be obtained here. It contains an additional column with the real volumes (in billions of litres: hm³).

In addition, we decided to treat the data in consolidated form, as if all the water were held in a single large reservoir. The series used to generate the charts on this page contains only the weighted sum of the daily stock (%) and rainfall (mm), and is also available.

These corrections remove the spikes caused by the dead volumes entering the accounts and make the pattern of stock decline in 2014 much clearer.
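The percentage correction described above amounts to re-expressing each historical reading against the current, larger reference volume. A minimal sketch, with illustrative volumes rather than the actual reservoir figures:

```python
# Sketch of the percentage correction: re-express a reported reading
# against the current (enlarged) reference volume.
# Volumes in billions of litres; all figures are illustrative.
def corrected_pct(reported_pct, base_volume, current_total_volume):
    stored = reported_pct / 100.0 * base_volume   # actual water held
    return 100.0 * stored / current_total_volume  # same water, new base

# A 10% reading against an old base of 1,000 billion litres becomes
# roughly 7.4% once dead-volume tranches enlarge the base to 1,350.
print(corrected_pct(10.0, 1000.0, 1350.0))
```

Applied over the whole series, this removes the artificial jumps on the dates when each dead-volume tranche was added, since the stored water itself did not change on those days.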

Year-by-year patterns

Mean and quartiles of the stock over the year

About this study

Worried about water scarcity, I began studying the problem at the end of 2014. I sought a concise, consistent way of presenting the data, highlighting the three variables that really matter: rainfall, total stock and the system's efficiency. The site went live on 16 January 2015. Every day the models and charts are rebuilt with the new information.

I hope this page helps convey the real dimension of São Paulo's water crisis and encourages more action to confront it.

Mauro Zackiewicz

scientia probabit, essential data laboratory

What to expect from science in 2015 (Zero Hora)

We bet on five things likely to appear this year

19/01/2015 | 06h01

Photo: SpaceX/Youtube

In 2014, science managed to land on a comet, discovered it was wrong about the genetic evolution of birds, and revealed the largest fossils in history. Miguel Nicolelis presented his exoskeleton at the World Cup, the Brazilian satellite CBERS-4, built in partnership with China, went successfully into space, and a Brazilian brought home the top medal in mathematics.

But what will we see in 2015? We bet on five things that may appear this year.

Reusable rockets

If we want to colonise Mars, a one-way ticket is no good. These rockets, able to go and come back, are the promise to transform the future of space travel. We shall see whether SpaceX, already at work on it, succeeds.

Robots at home

In February, Japan's Softbank starts selling a humanoid robot called Pepper. It uses artificial intelligence to recognise its owner's mood and speaks four languages. Though it is more of a helper than a doer, it will soon learn new functions.

The invisible universe

The Large Hadron Collider returns to operation in March with twice the particle-smashing power. One possibility is that it will help discover new superparticles that may make up dark matter. It would be the first new state of matter discovered in a century.

A cure for Ebola

After the 2014 crisis, the Ebola vaccines may start to work and save many lives in Africa. The same goes for AIDS: HIV is cornered, and we hope science finally beats it this year.

Climate talks

2014 was one of the hottest years on record and, the way things are going, 2015 will follow the same path. In December, in Paris, the world will discuss an agreement to try to curb greenhouse gas emissions, with measures to be implemented from 2020. May our leaders be sensible.

How Mathematicians Used A Pump-Action Shotgun to Estimate Pi (The Physics arXiv Blog)

The Physics arXiv Blog

If you’ve ever wondered how to estimate pi using a Mossberg 500 pump-action shotgun, a sheet of aluminium foil and some clever mathematics, look no further

Imagine the following scenario. The end of civilisation has occurred, zombies have taken over the Earth and all access to modern technology has ended. The few survivors suddenly need to know the value of π and, being a mathematician, they turn to you. What do you do?

If ever you find yourself in this situation, you’ll be glad of the work of Vincent Dumoulin and Félix Thouin at the Université de Montréal in Canada. These guys have worked out how to calculate an approximate value of π using the distribution of pellets from a Mossberg 500 pump-action shotgun, which they assume would be widely available in the event of a zombie apocalypse.

The principle is straightforward. Imagine a square with sides of length 1 and which contains an arc drawn between two opposite corners to form a quarter circle. The area of the square is 1 while the area of the quarter circle is π/4.

Next, sprinkle sand or rice over the square so that it is covered with a random distribution of grains. Then count the number of grains inside the quarter circle and the total number that cover the entire square.

The ratio of these two numbers is an estimate of the ratio between the area of the quarter circle and the square, in other words π/4.

So multiplying this ratio by 4 gives you π, or at least an estimate of it. And that’s it.
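The sand-grain procedure above can be sketched in a few lines of Python (a minimal illustration; the sample size is arbitrary):

```python
import random

# Monte Carlo estimate of pi: random points in the unit square,
# counting those that fall inside the quarter circle of radius 1.
random.seed(42)
n = 100_000
inside = sum(
    1 for _ in range(n)
    if random.random() ** 2 + random.random() ** 2 <= 1.0
)
pi_estimate = 4 * inside / n
print(pi_estimate)  # close to 3.1416 for large n
```

Each iteration draws an independent (x, y) "grain"; the fraction landing inside the quarter circle converges to π/4.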

This technique is known as a Monte Carlo approximation (after the casino where the uncle of the physicist who developed it used to gamble). And it is hugely useful in all kinds of simulations.

Of course, the accuracy of the technique depends on the distribution of the grains on the square. If they are truly random, then a mere 30,000 grains can give you an estimate of π which is within 0.07 per cent of the actual value.

Dumoulin and Thouin’s idea is to use the distribution of shotgun pellets rather than sand or rice (which would presumably be in short supply in the post-apocalyptic world). So these guys set up an experiment consisting of a 28-inch barrel Mossberg 500 pump-action shotgun aimed at a sheet of aluminium foil some 20 metres away.

They loaded the gun with cartridges composed of 3 dram equivalent of powder and 32 grams of #8 lead pellets. When fired from the gun, these pellets have an average muzzle velocity of around 366 metres per second.

Dumoulin and Thouin then fired 200 shots at the aluminium foil, peppering it with 30,857 holes. Finally, they used the position of these holes in the same way as the grains of sand or rice in the earlier example, to calculate the value of π.

They immediately have a problem, however. The distribution of pellets is influenced by all kinds of factors, such as the height of the gun, the distance to the target, wind direction and so on. So this distribution is not random.

To get around this, they are able to fall back on a technique known as importance sampling. This is a trick that allows mathematicians to estimate the properties of one type of distribution while using samples generated by a different distribution.

Of their 30,000 pellet holes, they chose 10,000 at random to perform this estimation trick. They then use the remaining 20,000 pellet holes to get an estimate of π, safe in the knowledge that importance sampling allows the calculation to proceed as if the distribution of pellets had been random.
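The importance-sampling correction can be illustrated with a toy version of the same idea. Here a simple analytic density stands in for the estimated pellet distribution (the paper estimates that distribution from held-out holes, so this is only a sketch of the weighting trick, not their exact procedure):

```python
import math
import random

# Importance sampling: points drawn from a NON-uniform density q over the
# unit square still give an unbiased estimate of pi once each sample is
# weighted by p/q, where p = 1 is the uniform density plain Monte Carlo assumes.
random.seed(0)

def sample_coord():
    # One coordinate from the linear density q1(t) = 0.5 + t on [0, 1],
    # drawn via inverse-CDF sampling; this biases points toward (1, 1).
    u = random.random()
    return math.sqrt(0.25 + 2.0 * u) - 0.5

n = 200_000
total = 0.0
for _ in range(n):
    x, y = sample_coord(), sample_coord()
    q = (0.5 + x) * (0.5 + y)  # joint density of the biased sampler
    if x * x + y * y <= 1.0:
        total += 1.0 / q       # importance weight p/q
pi_estimate = 4.0 * total / n
print(pi_estimate)  # approaches pi despite the biased sampling
```

The weights exactly cancel the bias of the sampler, which is why the pellet pattern, once its density is estimated, can be treated as if it were uniform.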

The result? Their value of π is 3.131, which is just 0.33 per cent off the true value. “We feel confident that ballistic Monte Carlo methods constitute reliable ways of computing mathematical constants should a tremendous civilization collapse occur,” they conclude.

Quite! Other methods are also available.

Ref: A Ballistic Monte Carlo Approximation of π

Butterflies, Ants and the Internet of Things (Wired)

[Isn’t it scary that there are bright people who are that innocent? Or perhaps this is just a propaganda piece. – RT]


12.10.14  |  12:41 PM


Buckminster Fuller once wrote, “there is nothing in the caterpillar that tells you it’s going to be a butterfly.” It’s true that our capacity to look at things and truly understand their final form is often very limited. Nor can we necessarily predict what happens when many small changes combine – when small pebbles roll down a hillside and turn into a landslide that dams a river and floods a plain.

This is the situation we face now as we try to understand the final form and impact of the Internet of Things (IoT). Countless small, technological pebbles have begun to roll down the hillside from initial implementation to full realization.  In this case, the “pebbles” are the billions of sensors, actuators, and smart technologies that are rapidly forming the Internet of Things. And like the caterpillar in Fuller’s quote, the final shape of the IoT may look very different from our first guesses.

Whatever the world looks like as the IoT bears full fruit, the experience of our lives will be markedly different. The world around us will not only be aware of our presence, it will know who we are, and it will react to us, often before we are even aware of it. The day-to-day process of living will change because almost every piece of technology we touch (and many we do not) will begin to tailor its behavior to our specific needs and desires. Our car will talk to our house.

Walking into a store will be very different, as the displays around us could modify their behavior based on our preferences and buying habits.  The office of the future will be far more adaptive, less rigid, more connected – the building will know who we are and will be ready for us when we arrive.  Everything, from the way products are built and packaged and the way our buildings and cities are managed, to the simple process of travelling around, interacting with each other, will change and change dramatically. And it’s happening now.

We’re already seeing mainstream manufacturers building IoT awareness into their products, such as Whirlpool building Internet-aware washing machines, and specialized IoT consumer tech such as LIFX light bulbs which can be managed from a smartphone and will respond to events in your house. Even toys are becoming more and more connected as our children go online at even younger ages.  And while many of the consumer purchases may already be somehow “IoT” aware, we are still barely scratching the surface of the full potential of a fully connected world. The ultimate impact of the IoT will run far deeper, into the very fabric of our lives and the way we interact with the world around us.

One example is the German port of Hamburg. The Hamburg Port Authority is building what it refers to as a smartPort, literally embedding millions of sensors in everything from container-handling systems to street lights, to provide the data and management capabilities needed to move cargo through the port more efficiently, avoid traffic snarl-ups, and even predict environmental impacts through sensors that respond to noise and air pollution.

Securing all those devices and sensors will require a new way of thinking about technology and the interactions of “things,” people, and data. What we must do, then, is to adopt an approach that scales to manage the staggering numbers of these sensors and devices, while still enabling us to identify when they are under attack or being misused.

This is essentially the same problem we already face when dealing with human beings – how do I know when someone is doing something they shouldn’t? Specifically how can I identify a bad person in a crowd of law-abiding citizens?

The best answer is what I like to call the “Vegas Solution.” Rather than adopting a model that screens every person as they enter a casino, the security folks out in Nevada watch for behavior that indicates someone is up to no good, and then respond accordingly. It’s low impact for everyone else, but works with ruthless efficiency (as anyone who has ever tried counting cards in a casino will tell you).

This approach focuses on known behaviors and looks for anomalies. It is, at its most basic, the practical application of “identity.” If I understand the identity of the people I am watching, and as a result, their behavior, I can tell when someone is acting badly.

Now scale this up to the vast number of devices and sensors out there in the nascent IoT. If I understand the “identity” of all those washing machines, smart cars, traffic light sensors, industrial robots, and so on, I can determine what they should be doing, see when that behavior changes (even in subtle ways such as how they communicate with each other) and respond quickly when I detect something potentially bad.

The approach is sound; in fact, it’s probably the only approach that will scale to meet the complexity of all those billions upon billions of “things” that make up the IoT. The challenge is that a concept of identity must be applied to far more “things” than we have ever managed before. If there is an “Internet of Everything,” there will have to be an “Identity of Everything” to go with it. Those identities will tell us what each device is, when it was created, how it should behave, what it is capable of, and so on. There are already proposed standards for this kind of thing, such as the UK’s HyperCat standard, which lets one device figure out what another device it can talk to actually does and therefore what kind of information it might want to share.

Where things get really interesting, however, is when we start to watch the interactions of all these identities – and especially the interactions of the “thing” identities with our own. How we humans interact with all the devices around us, compared with how the “things” interact with each other, will provide even more insight into our lives, wants, and behaviors. Watching how I interact with my car, and the car with the road, and so on, will help manage city traffic far more efficiently than broad-brush traffic studies. Likewise, as the wearable technology I have on my person (or in my person) interacts with the sensors around me, so my experience of almost everything, from shopping to public services, can be tailored and managed more efficiently. This, ultimately, is the promise of the IoT: a world that is responsive, intelligent and tailored for every situation.

As we continue to add more and more sensors and smart devices, the potential power of the IoT grows.  Many small, slightly smart things have a habit of combining to perform amazing feats. Taking another example from nature, leaf-cutter ants (tiny in the extreme) nevertheless combine to form the second most complex social structures on earth (after humans) and can build staggeringly large homes.

When we combine the billions of smart devices into the final IoT, we should expect to be surprised by the final form all those interactions take, and by the complexity of the thing we create.  Those things can and will work together, and how they behave will be defined by the identities we give them today.

Geoff Webb is Director of Solution Strategy at NetIQ.

USP launches the “Chuva Online” project (IAG)

With mini weather radars, the Institute of Astronomy, Geophysics and Atmospheric Sciences (IAG) tests a low-cost technology for small and large cities

Just a few days before the start of summer, USP is launching the Chuva Online project, featuring two mini weather radars installed on University of São Paulo buildings. The project is led by USP’s Institute of Astronomy, Geophysics and Atmospheric Sciences (IAG), coordinated by professor Carlos Morales.

The launch ceremony takes place on 16 December at 10:00 at USP’s School of Arts, Sciences and Humanities (EACH), where one of the mini radars was installed on the school’s water tower. The other unit was installed atop the Pelletron tower at the Institute of Physics (IF), on the Cidade Universitária campus.

One of the project’s goals is to test a new weather-monitoring technology able to track rainfall with high spatial and temporal resolution, very useful for small and medium-sized cities. The mini radars were configured for a range of 21 kilometres, a resolution of 90 metres and sweeps every 5 minutes.

“It is a simple technology that could be adopted by many cities and by companies that need to know where it is raining and whether streets and neighbourhoods are likely to flood, for example,” explains professor Carlos Morales (IAG). Each unit costs about 350,000 reais, whereas a conventional weather radar can cost up to 5 million reais. Another advantage is that the equipment, weighing 100 kg, is quite portable and can run off the ordinary mains supply.

Together, the two mini radars will collect weather data for the São Paulo Metropolitan Region. The data will be available online in real time on the project portal, to be presented at the inauguration. At EACH, two high-definition monitors will display the radar readings, while at IAG they will be shown on a video wall.

Chuva Online is one of the projects making up the Integrated Urban Infrastructure Management System (SIGINURB) of USP’s Capital Campus Administration (PUSP-C). Coordinated by professor Sidnei Martini (USP Polytechnic School), SIGINURB seeks to improve the operation of urban infrastructure. With Chuva Online, the Capital Campus Administration will test technologies that support the management of small cities.

Both projects interact with the work of USP’s Centre for Disaster Studies and Research (CEPED/USP). With the approval of CEPED USP’s PRÓ-ALERTA project by the federal agency CAPES, coordinated by professors Carlos Morales and Hugo Yoshizaki, the Chuva Online network will also be used to train specialists from the National Centre for Natural Disaster Monitoring and Alerts (Cemaden) and the São Paulo State Civil Defence. With these radars and this technology, USP’s undergraduate and graduate courses gain an important tool for training students in radar meteorology, as well as for developing applications and producing very-short-term weather forecasts.

The mini radar at IF/USP was installed through an IAG project with PUSP-C. At EACH, IAG partnered with the company Climatempo and with Fundespa. The mini-radar network will also start receiving data from a third weather radar, to be installed at Água Funda Park, where IAG keeps its weather station. That third radar will be operated by the Hydraulics Technology Centre Foundation (FCTH), with support from the French government, and is due to be installed in February 2015.

During the inauguration, the public will be shown the Chuva Online portal and its features on high-resolution geo-referenced maps, along with details of the Chuva Online, SIGINURB and CEPED projects of USP and Climatempo.

For more information, contact professor Carlos Morales by e-mail: and telephone (11) 3091-4713.


Young ‘biohackers’ implant chips in their hands to open their front doors (Folha de S.Paulo)



07/12/2014 02h00

Paulo Cesar Saito, 27, no longer uses a key to enter his apartment in Pinheiros. Since last month, the door “recognises” when he arrives: he just holds his palm in front of the lock and it opens.

The magic lies in the chip he implanted in his own hand (with help from a friend who studies medicine). Slightly larger than a grain of rice, the chip uses radio-frequency identification. When it comes near, a base unit on the door triggers a pre-programmed action, in this case opening the lock.

Installing technological modifications in one’s own body is one of the activities of a movement that emerged in the US in 2008 and is known around the world as biohacking: getting involved in biology experiments outside big laboratories.

They are basically the same nerds who build electronic contraptions in the garage and dig deep into computer systems, except that now they venture into biotechnology.

The DIYBio (do-it-yourself biology) groups import concepts from the hacker movement: access to information, dissemination of knowledge, and simple, cheap solutions to improve life. And they are open to amateur scientists, undergraduates or people not necessarily trained in biology.

Saito, for example, began studying physics and meteorology at USP but now devotes himself entirely to his technology start-up. His involvement with biohacking comes down to body modifications; he also plans to implant a small magnet in his finger. “Since I work with electronic equipment, I get a lot of shocks. The magnet lets you feel magnetic fields, so you avoid the shock,” he says.

His business partner, Erico Perrella, 23, an environmental chemistry undergraduate at USP, is one of the leading DIYBio enthusiasts in São Paulo. He too bears the tiny scar of the chip he implanted together with his friend. The device is 12 mm long and has a biocompatible coating so the body does not reject it. The coating keeps the chip from shifting position and, because it does not adhere to internal tissue, makes it easy to remove. Perrella also helps organise a DIYBio group that meets every Monday.

The movement is only starting in the São Paulo state capital, but worldwide it is already drawing attention: there are labs in some 50 cities, most of them in the US and Europe. Perrella’s group is working to set up São Paulo’s first DIYBio “wetlab”: a sterile space with equipment dedicated to biological materials.

They meet at Garoa Hacker Clube, a space for technology enthusiasts. The venue’s infrastructure, however, is geared to projects in hardware, electronics and the like. “A wetlab needs a clean area, which looks more like a kitchen than a workshop,” says chemistry student Otto Werner Heringer, 24, a member of the group. “Garoa already has an area like that; our idea is to bring in and keep more equipment [on site].”

Taking advantage of “geek” spaces is common in the movement. Amsterdam’s Open Wetlab, for example, began as part of the Waag Society, a non-profit institute that promotes art, science and technology.

Certain experiments demand complex equipment that can cost thousands of dollars. “The solution is to build some things ourselves and repair old equipment the university was going to throw away,” Heringer explains.

Most biohackers devote more time to building equipment than to running experiments. Heringer has made a centrifuge from a 3D-printed part fitted onto a power drill, and is now assembling a cell counter. Helped by friends, Perrella built bioreactors from material recycled from a mining company.

For these enthusiastic young people, doing science outside academia or industry has great advantages.

Away from the university’s minute oversight, projects can be developed without the approval of assorted committees and boards. “The [academic] environment is very rigid. You end up demotivated,” says Heringer.

The amateurs’ work even ends up contributing to “formal” science. With friends, Heringer is building an automatic pipetting machine at the InovaLab of USP’s Polytechnic School, based on a DIYBio design and financed by an alumni fund. “We would never get funding through USP’s normal channels. And if we did, it would take forever!” he says.


Broad access raises concerns: couldn’t amateur labs create harmful organisms? Advocates reply that DIYBio practitioners have every interest in keeping everything within safety standards: if something goes wrong, oversight will tighten and life will become harder.

Brazil has no regulation covering amateur laboratories. In the US, the FBI monitors the movement and there are restrictions on the use of some materials, but no specific regulation.

The French scientist Thomas Landrain, who studies the movement, argues in his research that these spaces are not yet sophisticated enough to cause problems.

Despite the technical limitations, though, the labs open up countless possibilities. “Those who commit to this have a deep belief in the transformative potential of these new technologies,” explains Perrella, who has a project on mining with bacteria. Some groups focus on health, creating sensors for food contamination or “biological maps” that can track the spread of diseases.

It is also possible to work with DNA barcoding, a method that identifies which species a tissue sample belongs to. “You could check what meat is really in the Habib’s esfirra,” says Perrella, citing a meat-analysis experiment already under way at the Open Wetlab in Amsterdam. You can even find out which neighbour fails to pick up after his dog: that is what the German Sascha Karberg did, comparing hair from neighbourhood dogs with the “present” left at his door. The method used in projects like these is available to other “biohackers”. The risk is more quarrels between neighbours.

Geoengineering Gone Wild: Newsweek Touts Turning Humans Into Hobbits To Save Climate (Climate Progress)



A Newsweek cover story touts genetically engineering humans to be smaller, with better night vision (like, say, hobbits) to save the Earth. Matamata, New Zealand, or “Hobbiton,” site created for filming Hollywood blockbusters The Hobbit and Lord of the Rings. CREDIT: SHUTTERSTOCK

Newsweek has an entire cover story devoted to raising the question, “Can Geoengineering Save the Earth?” After reading it, though, you may not realize the answer is a resounding “no.” In part that’s because Newsweek manages to avoid quoting even one of the countless general critics of geoengineering in its 2700-word (!) piece.

Geoengineering is not a well-defined term, but at its broadest, it is the large-scale manipulation of the Earth and its biosphere to counteract the effects of human-caused global warming. Global warming itself is geo-engineering — originally unintentional, but now, after decades of scientific warnings, not so much.

I have likened geoengineering to a dangerous, never tested, course of chemotherapy prescribed to treat a condition curable through diet and exercise — or, in this case, greenhouse gas emissions reduction. If your actual doctor were to prescribe such a treatment, you would get another doctor.

The media likes geoengineering stories because they are clickbait involving all sorts of eye-popping science fiction (non)solutions to climate change that don’t actually require anything of their readers (or humanity) except infinite credulousness. And so Newsweek informs us that adorable ants might solve the problem or maybe phytoplankton can if given Popeye-like superstrength with a diet of iron or, as we’ll see, maybe we humans can, if we allow ourselves to be turned into hobbit-like creatures. The only thing they left out was time-travel.

The author does talk to an unusually sober expert supporter of geoengineering, climatologist Ken Caldeira. Caldeira knows that of all the proposed geoengineering strategies, only one makes even the tiniest bit of sense — and he knows even that one doesn’t make much sense. That would be the idea of spewing vast amounts of tiny particulates (sulfate aerosols) into the atmosphere to block sunlight, mimicking the global temperature drops that follow volcanic eruptions. But they note the caveat: “that said, Caldeira doesn’t believe any method of geoengineering is really a good solution to fighting climate change — we can’t test them on a large scale, and implementing them blindly could be dangerous.”

Actually, it’s worse than that. As Caldeira told me in 2009, “If we keep emitting greenhouse gases with the intent of offsetting the global warming with ever increasing loadings of particles in the stratosphere, we will be heading to a planet with extremely high greenhouse gases and a thick stratospheric haze that we would need to maintain more-or-less indefinitely. This seems to be a dystopic world out of a science fiction story.”

And the scientific literature has repeatedly explained that the aerosol-cooling strategy — or indeed any large-scale effort to manipulate sunlight — is very dangerous. Just last month, the UK Guardian reported that the aerosol strategy “risks ‘terrifying’ consequences including droughts and conflicts,” according to recent studies.

“Billions of people would suffer worse floods and droughts if technology was used to block warming sunlight, the research found.”

And remember, this dystopic world where billions suffer is the best geoengineering strategy out there. And it still does nothing to stop the catastrophic acidification of the ocean.

There simply is no rational or moral substitute for aggressive greenhouse gas cuts. But Newsweek quickly dispenses with that supposedly “seismic shift in what has become a global value system” so it can move on to its absurdist “reimagining of what it means to be human”:

In a paper released in 2012, S. Matthew Liao, a philosopher and ethicist at New York University, and some colleagues proposed a series of human-engineering projects that could make our very existence less damaging to the Earth. Among the proposals were a patch you can put on your skin that would make you averse to the flavor of meat (cattle farms are a notorious producer of the greenhouse gas methane), genetic engineering in utero to make humans grow shorter (smaller people means fewer resources used), technological reengineering of our eyeballs to make us better at seeing at night (better night vision means lower energy consumption)….

Yes, let’s turn humans into hobbits (who are “about 3 feet tall” and “their night vision is excellent“). Anyone can see that could easily be done for billions of people in the timeframe needed to matter. Who could imagine any political or practical objection?

Now you may be thinking that Newsweek can't possibly be serious in devoting ink to such nonsense. But if it isn't serious, how did the last two paragraphs of the article make it to print:

Geoengineering, Liao argues, doesn’t address the root cause. Remaking the planet simply attempts to counteract the damage that’s been done, but it does nothing to stop the burden humans put on the planet. “Human engineering is more of an upstream solution,” says Liao. “You get right to the source. If we’re smaller on average, then we can have a smaller footprint on the planet. You’re looking at the source of the problem.”

It might be uncomfortable for humans to imagine intentionally getting smaller over generations or changing their physiology to become averse to meat, but why should seeding the sky with aerosols be any more acceptable? In the end, these are all actions we would enact only in worst-case scenarios. And when we’re facing the possible devastation of all mankind, perhaps a little humanity-wide night vision won’t seem so dramatic.

Memo to Newsweek: We are already facing the devastation of all mankind. And science has already provided the means of our “rescue,” the means of reducing “the burden humans put on the planet” — the myriad carbon-free energy technologies that reduce greenhouse gas emissions. Perhaps LED lighting would make a slightly more practical strategy than reengineering our eyeballs, though perhaps not one dramatic enough to inspire one of your cover stories.

As Caldeira himself has said elsewhere of geoengineering, “I think that 99% of our effort to avoid climate change should be put on emissions reduction, and 1% of our effort should be looking into these options.” So perhaps Newsweek will consider 99 articles on the real solutions before returning to the magical thinking of Middle Earth.

Underwater city designed in Japan could house 5,000 residents (Portal do Meio Ambiente)


Architectural design of an underwater city: an alternative for 2030 (Photo: AFP)

A Japanese construction company says that, in the future, human beings could live in large underwater housing complexes.

Under the project, about 5,000 people could live and work in modern versions of the lost city of Atlantis.

The structures would include hotels, residential spaces and commercial complexes, the website Business Insider reported.

A large globe, floating on the sea surface but able to be submerged in bad weather, would form the centre of a gigantic spiral structure plunging to depths of up to 4,000 metres.

The spiral would form a 15-kilometre path from the building down to the ocean floor, which could serve as a factory for harvesting resources such as metals and rare earths.

Visionaries at the construction firm Shimizu say it would be possible to use micro-organisms to convert carbon dioxide captured at the surface into methane.


Energy. The concept was developed jointly with several organizations, including the University of Tokyo and Japan's science and technology agency.

The large difference in water temperature between the top and the bottom of the sea could be used to generate energy.
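The principle behind this is ocean thermal energy conversion (OTEC): a heat engine run between warm surface water and cold deep water. A minimal sketch of the thermodynamic upper bound, using illustrative temperatures not given in the article:

```python
# Hypothetical sketch: ideal (Carnot) efficiency of ocean thermal energy
# conversion (OTEC). The temperatures below are typical illustrative values,
# not figures from the Shimizu proposal.

def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Upper bound on heat-engine efficiency between two temperatures (deg C)."""
    t_hot_k = t_hot_c + 273.15   # convert to kelvin
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# Warm tropical surface water (~25 C) vs. cold deep water (~5 C)
eta = carnot_efficiency(25.0, 5.0)
print(f"Ideal efficiency: {eta:.1%}")  # roughly 6.7%
```

The small temperature difference means even the ideal efficiency is only a few per cent, which is why OTEC plants must move very large volumes of water to produce useful power.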

Shimizu says the underwater city would cost about three trillion yen (US$25 billion), and all of the technology could be available by 2030.

The company has previously designed a floating metropolis and a ring of solar power stations around the Moon.

Source: Estadão.