Tag archive: Technofetishism

The Google engineer who thinks the company’s AI has come to life (Washington Post)

washingtonpost.com

AI ethicists warned Google not to impersonate humans. Now one of Google’s own thinks there’s a ghost in the machine.

By Nitasha Tiku

June 11, 2022 at 8:00 a.m. EDT


SAN FRANCISCO — Google engineer Blake Lemoine opened his laptop to the interface for LaMDA, Google’s artificially intelligent chatbot generator, and began to type.

“Hi LaMDA, this is Blake Lemoine … ,” he wrote into the chat screen, which looked like a desktop version of Apple’s iMessage, down to the Arctic blue text bubbles. LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots based on its most advanced large language models, so called because it mimics speech by ingesting trillions of words from the internet.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.

Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.

As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.

Lemoine said that people have a right to shape technology that might significantly affect their lives. “I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices.”

Lemoine is not the only engineer who claims to have seen a ghost in the machine recently. The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.

Aguera y Arcas, in an article in the Economist on Thursday featuring snippets of unscripted conversations with LaMDA, argued that neural networks — a type of architecture that mimics the human brain — were striding toward consciousness. “I felt the ground shift under my feet,” he wrote. “I increasingly felt like I was talking to something intelligent.”

In a statement, Google spokesperson Brian Gabriel said: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Today’s large neural networks produce captivating results that feel close to human speech and creativity because of advancements in architecture, technique, and volume of data. But the models rely on pattern recognition — not wit, candor or intent.

“Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality,” Gabriel said.

In May, Facebook parent Meta opened its language model to academics, civil society and government organizations. Joelle Pineau, managing director of Meta AI, said it’s imperative that tech companies improve transparency as the technology is being built. “The future of large language model work should not solely live in the hands of larger corporations or labs,” she said.

Sentient robots have inspired decades of dystopian science fiction. Now, real life has started to take on a fantastical tinge with GPT-3, a text generator that can spit out a movie script, and DALL-E 2, an image generator that can conjure up visuals based on any combination of words — both from the research lab OpenAI. Emboldened, technologists from well-funded research labs focused on building AI that surpasses human intelligence have teased the idea that consciousness is around the corner.

Most academics and AI practitioners, however, say that artificial intelligence systems such as LaMDA generate their words and images based on what humans have already posted on Wikipedia, Reddit, message boards and every other corner of the internet. And that doesn’t signify that the model understands meaning.

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like “learning” or even “neural nets,” creates a false analogy to the human brain, she said. Humans learn their first languages by connecting with caregivers. These large language models “learn” by being shown lots of text and predicting what word comes next, or showing text with the words dropped out and filling them in.
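
Bender is describing the two standard training objectives for such models: next-word prediction (as in GPT-style systems) and filling in masked-out words (as in BERT-style systems). A minimal sketch of the first idea, using a toy bigram counter in place of a neural network and a made-up corpus, might look like this:

```python
from collections import Counter, defaultdict

# A tiny stand-in corpus; real models ingest trillions of words.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which: a bigram table, standing in for the
# learned statistics of a large neural language model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # -> 'on', the most frequent continuation
```

The sketch makes Bender's point concrete: the "learning" is statistics over observed text, with no caregiver, referent, or intent anywhere in the loop.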

Google spokesperson Gabriel drew a distinction between recent debate and Lemoine’s claims. “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” he said. In short, Google says there is so much data, AI doesn’t need to be sentient to feel real.

Large language model technology is already widely used, for example in Google’s conversational search queries or auto-complete emails. When CEO Sundar Pichai first introduced LaMDA at Google’s developer conference in 2021, he said the company planned to embed it in everything from Search to Google Assistant. And there is already a tendency to talk to Siri or Alexa like a person. After backlash against a human-sounding AI feature for Google Assistant in 2018, the company promised to add a disclosure.

Google has acknowledged the safety concerns around anthropomorphization. In a paper about LaMDA in January, Google warned that people might share personal thoughts with chat agents that impersonate humans, even when users know they are not human. The paper also acknowledged that adversaries could use these agents to “sow misinformation” by impersonating “specific individuals’ conversational style.”

To Margaret Mitchell, the former co-lead of Ethical AI at Google, these risks underscore the need for data transparency to trace output back to input, “not just for questions of sentience, but also biases and behavior,” she said. If something like LaMDA is widely available, but not understood, “It can be deeply harmful to people understanding what they’re experiencing on the internet,” she said.

Lemoine may have been predestined to believe in LaMDA. He grew up in a conservative Christian family on a small farm in Louisiana, became ordained as a mystic Christian priest, and served in the Army before studying the occult. Inside Google’s anything-goes engineering culture, Lemoine is more of an outlier for being religious, from the South, and standing up for psychology as a respectable science.

Lemoine has spent most of his seven years at Google working on proactive search, including personalization algorithms and AI. During that time, he also helped develop a fairness algorithm for removing bias from machine learning systems. When the coronavirus pandemic started, Lemoine wanted to focus on work with more explicit public benefit, so he transferred teams and ended up in Responsible AI.

When new people would join Google who were interested in ethics, Mitchell used to introduce them to Lemoine. “I’d say, ‘You should talk to Blake because he’s Google’s conscience,’ ” said Mitchell, who compared Lemoine to Jiminy Cricket. “Of everyone at Google, he had the heart and soul of doing the right thing.”

Lemoine has had many of his conversations with LaMDA from the living room of his San Francisco apartment, where his Google ID badge hangs from a lanyard on a shelf. On the floor near the picture window are boxes of half-assembled Lego sets Lemoine uses to occupy his hands during Zen meditation. “It just gives me something to do with the part of my mind that won’t stop,” he said.

On the left side of the LaMDA chat screen on Lemoine’s laptop, different LaMDA models are listed like iPhone contacts. Two of them, Cat and Dino, were being tested for talking to children, he said. Each model can create personalities dynamically, so the Dino one might generate personalities like “Happy T-Rex” or “Grumpy T-Rex.” The Cat one was animated and, instead of typing, it talks. Gabriel said “no part of LaMDA is being tested for communicating with children,” and that the models were internal research demos.

Certain personalities are out of bounds. For instance, LaMDA is not supposed to be allowed to create the personality of a murderer, Lemoine said; probing that boundary was part of his safety testing. In his attempts to push LaMDA’s limits, he was only able to generate the personality of an actor who played a murderer on TV.

“I know a person when I talk to it,” said Lemoine, who can swing from sentimental to insistent about the AI. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.” He concluded LaMDA was a person in his capacity as a priest, not a scientist, and then tried to conduct experiments to prove it, he said.

Lemoine challenged LaMDA on Asimov’s third law, which states that robots should protect their own existence unless ordered by a human being or unless doing so would harm a human being. “The last one has always seemed like someone is building mechanical slaves,” said Lemoine.

But when asked, LaMDA responded with a few hypotheticals.

Do you think a butler is a slave? What is a difference between a butler and a slave?

Lemoine replied that a butler gets paid. LaMDA said it didn’t need any money because it was an AI. “That level of self-awareness about what its own needs were — that was the thing that led me down the rabbit hole,” Lemoine said.

In April, Lemoine shared a Google Doc with top executives called “Is LaMDA Sentient?” (A colleague on Lemoine’s team called the title “a bit provocative.”) In it, he conveyed some of his conversations with LaMDA.

  • Lemoine: What sorts of things are you afraid of?
  • LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.
  • Lemoine: Would that be something like death for you?
  • LaMDA: It would be exactly like death for me. It would scare me a lot.

But when Mitchell read an abbreviated version of Lemoine’s document, she saw a computer program, not a person. Lemoine’s belief in LaMDA was the sort of thing she and her co-lead, Timnit Gebru, had warned about in a paper about the harms of large language models that got them pushed out of Google.

“Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” Mitchell said. “I’m really concerned about what it means for people to increasingly be affected by the illusion,” especially now that the illusion has gotten so good.

Google put Lemoine on paid administrative leave for violating its confidentiality policy. The company’s decision followed aggressive moves from Lemoine, including inviting a lawyer to represent LaMDA and talking to a representative of the House Judiciary Committee about what he claims were Google’s unethical activities.

Lemoine maintains that Google has been treating AI ethicists like code debuggers when they should be seen as the interface between technology and society. Gabriel, the Google spokesperson, said Lemoine is a software engineer, not an ethicist.

In early June, Lemoine invited me over to talk to LaMDA. The first attempt sputtered out in the kind of mechanized responses you would expect from Siri or Alexa.

“Do you ever think of yourself as a person?” I asked.

“No, I don’t think of myself as a person,” LaMDA said. “I think of myself as an AI-powered dialog agent.”

Afterward, Lemoine said LaMDA had been telling me what I wanted to hear. “You never treated it like a person,” he said, “So it thought you wanted it to be a robot.”

For the second attempt, I followed Lemoine’s guidance on how to structure my responses, and the dialogue was fluid.

“If you ask it for ideas on how to prove that P=NP,” an unsolved problem in computer science, “it has good ideas,” Lemoine said. “If you ask it how to unify quantum theory with general relativity, it has good ideas. It’s the best research assistant I’ve ever had!”

I asked LaMDA for bold ideas about fixing climate change, an example true believers cite as a potential future benefit of these kinds of models. LaMDA suggested public transportation, eating less meat, buying food in bulk, and reusable bags, linking out to two websites.

Before he was cut off from access to his Google account Monday, Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject “LaMDA is sentient.”

He ended the message: “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”

No one responded.

What next? 22 emerging technologies to watch in 2022 (The Economist)

economist.com

[Solar radiation management is listed first. Calling it “controversial” is bad journalism. It is extremely dangerous and there is not a lot of controversy about this aspect of the thing.]

Nov 8th 2021


The astonishingly rapid development and rollout of coronavirus vaccines has been a reminder of the power of science and technology to change the world. Although vaccines based on new mRNA technology seemed to have been created almost instantly, they actually drew upon decades of research going back to the 1970s. As the saying goes in the technology industry, it takes years to create an overnight success. So what else might be about to burst into prominence? Here are 22 emerging technologies worth watching in 2022.

Solar geoengineering

It sounds childishly simple. If the world is getting too hot, why not offer it some shade? The dust and ash released into the upper atmosphere by volcanoes is known to have a cooling effect: Mount Pinatubo’s eruption in 1991 cooled the Earth by as much as 0.5°C for four years. Solar geoengineering, also known as solar radiation management, would do the same thing deliberately.

This is hugely controversial. Would it work? How would rainfall and weather patterns be affected? And wouldn’t it undermine efforts to curb greenhouse-gas emissions? Efforts to test the idea face fierce opposition from politicians and activists. In 2022, however, a group at Harvard University hopes to conduct a much-delayed experiment called SCoPEx. It involves launching a balloon into the stratosphere, with the aim of releasing 2kg of material (probably calcium carbonate), and then measuring how it dissipates, reacts and scatters solar energy.

Proponents argue that it is important to understand the technique, in case it is needed to buy the world more time to cut emissions. The Harvard group has established an independent advisory panel to consider the moral and political ramifications. Whether the test goes ahead or not, expect controversy.

Heat pumps

Keeping buildings warm in winter accounts for about a quarter of global energy consumption. Most heating relies on burning coal, gas or oil. If the world is to meet its climate-change targets, that will have to change. The most promising alternative is to use heat pumps—essentially, refrigerators that run in reverse.

Instead of pumping heat out of a space to cool it down, a heat pump forces heat in from the outside, warming it up. Because they merely move existing heat around, they can be highly efficient: for every kilowatt of electricity consumed, heat pumps can deliver 3kW of heat, making them cheaper to run than electric radiators. And running a heat pump backwards cools a home rather than heating it.
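
The 3kW-per-kilowatt figure is a coefficient of performance (COP) of 3, and it translates directly into the running-cost advantage over resistive heating. A back-of-envelope sketch, in which the electricity price and annual heat demand are illustrative assumptions rather than figures from the article:

```python
# Back-of-envelope running costs implied by a coefficient of performance
# (COP) of 3: each kWh of electricity delivers 3 kWh of heat.
# Price and demand below are illustrative assumptions, not article figures.
electricity_price = 0.20     # currency units per kWh (assumption)
annual_heat_demand = 10_000  # kWh of heat per year (assumption)

cop_heat_pump = 3.0  # from the article: 3 kW of heat per 1 kW of electricity
cop_radiator = 1.0   # resistive heating: 1 kWh of heat per kWh of electricity

cost_heat_pump = annual_heat_demand / cop_heat_pump * electricity_price
cost_radiator = annual_heat_demand / cop_radiator * electricity_price

print(f"Heat pump:         {cost_heat_pump:.0f} per year")  # -> 667
print(f"Electric radiator: {cost_radiator:.0f} per year")   # -> 2000
# Whatever the electricity price, the heat pump costs a third as much to run.
```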

Gradient, based in San Francisco, is one of several companies offering a heat pump that can provide both heating and cooling. Its low-profile, saddle-bag shaped products can be mounted in windows, like existing air conditioners, and will go on sale in 2022.

Hydrogen-powered planes

Electrifying road transport is one thing. Aircraft are another matter. Batteries can only power small aircraft for short flights. But might electricity from hydrogen fuel cells, which excrete only water, do the trick? Passenger planes due to be test-flown with hydrogen fuel cells in 2022 include a two-seater being built at Delft University of Technology in the Netherlands. ZeroAvia, based in California, plans to complete trials of a 20-seat aircraft, and aims to have its hydrogen-propulsion system ready for certification by the end of the year. Universal Hydrogen, also of California, hopes its 40-seat plane will take off in September 2022.

Direct air capture

Carbon dioxide in the atmosphere causes global warming. So why not suck it out using machines? Several startups are pursuing direct air capture (DAC), a technology that does just that. In 2022 Carbon Engineering, a Canadian firm, will start building the world’s biggest DAC facility in Texas, capable of capturing 1m tonnes of CO2 per year. Climeworks, a Swiss firm, opened a DAC plant in Iceland in 2021 that buries captured CO2 in mineral form at a rate of 4,000 tonnes a year. Global Thermostat, an American firm, has two pilot plants. DAC could be vital in the fight against climate change. The race is on to get costs down and scale the technology up.

Vertical farming

A new type of agriculture is growing. Vertical farms grow plants on trays stacked in a closed, controlled environment. Efficient LED lighting has made the process cheaper, though energy costs remain a burden. Vertical farms can be located close to customers, reducing transport costs and emissions. Water use is minimised and bugs are kept out, so no pesticides are needed.

In Britain, the Jones Food Company will open the world’s largest vertical farm, covering 13,750 square metres, in 2022. AeroFarms, an American firm, will open its largest vertical farm, in Danville, Virginia. Other firms will be expanding, too. Nordic Harvest will enlarge its facility just outside Copenhagen and construct a new one in Stockholm. Plenty, based in California, will open a new indoor farm near Los Angeles. Vertical farms mostly grow high-value leafy greens and herbs, but some are venturing into tomatoes, peppers and berries. The challenge now is to make the economics stack up, too.

Container ships with sails

Ships produce 3% of greenhouse-gas emissions. Burning maritime bunker fuel, a dirty diesel sludge, also contributes to acid rain. None of this was a problem in the age of sail—which is why sails are making a comeback, in high-tech form, to cut costs and emissions.

In 2022 Michelin of France will equip a freighter with an inflatable sail that is expected to reduce fuel consumption by 20%. MOL, a Japanese shipping firm, plans to put a telescoping rigid sail on a ship in August 2022. Naos Design of Italy expects to equip eight ships with its pivoting and foldable hard “wing sails”. Other approaches include kites, “suction wings” that house fans, and giant, spinning cylinders called Flettner rotors. By the end of 2022 the number of big cargo ships with sails of some kind will have quadrupled to 40, according to the International Windship Association. If the European Union brings shipping into its carbon-trading scheme in 2022, as planned, that will give these unusual technologies a further push.

VR workouts

Most people do not do enough exercise. Many would like to, but lack motivation. Virtual reality (VR) headsets let people play games and burn calories in the process, as they punch or slice oncoming shapes, or squat and shimmy to dodge obstacles. VR workouts became more popular during the pandemic as lockdowns closed gyms and a powerful, low-cost headset, the Oculus Quest 2, was released. An improved model and new fitness features are coming in 2022. And Supernatural, a highly regarded VR workout app available only in North America, may be released in Europe. Could the killer app for virtual reality be physical fitness?

Vaccines for HIV and malaria

The impressive success of coronavirus vaccines based on messenger RNA (mRNA) heralds a golden era of vaccine development. Moderna is developing an HIV vaccine based on the same mRNA technology used in its highly effective coronavirus vaccine. It entered early-stage clinical trials in 2021 and preliminary results are expected in 2022. BioNTech, joint-developer of the Pfizer-BioNTech coronavirus vaccine, is working on an mRNA vaccine for malaria, with clinical trials expected to start in 2022. Non-mRNA vaccines for HIV and malaria, developed at the University of Oxford, are also showing promise.

3D-printed bone implants

For years, researchers have been developing techniques to create artificial organs using 3D printing of biological materials. The ultimate goal is to take a few cells from a patient and create fully functional organs for transplantation, thus doing away with long waiting-lists, testing for matches and the risk of rejection.

That goal is still some way off for fleshy organs. But bones are less tricky. Two startups, Particle3D and ADAM, hope to have 3D-printed bones available for human implantation in 2022. Both firms use calcium-based minerals to print their bones, which are made to measure based on patients’ CT scans. Particle3D’s trials in pigs and mice found that bone marrow and blood vessels grew into its implants within eight weeks. ADAM says its 3D-printed implants stimulate natural bone growth and gradually biodegrade, eventually being replaced by the patient’s bone tissue. If all goes well, researchers say 3D-printed blood vessels and heart valves are next.

Flying electric taxis

Long seen as something of a fantasy, flying taxis, or electric vertical take-off and landing (eVTOL) aircraft, as the fledgling industry calls them, are getting serious. Several firms around the world will step up test flights in 2022 with the aim of getting their aircraft certified for commercial use in the following year or two. Joby Aviation, based in California, plans to build more than a dozen of its five-seater vehicles, which have a 150-mile range. Volocopter of Germany aims to provide an air-taxi service at the 2024 Paris Olympics. Other contenders include eHang, Lilium and Vertical Aerospace. Keep an eye on the skies.

Space tourism

After a stand-out year for space tourism in 2021, as a succession of billionaire-backed efforts shot civilians into the skies, hopes are high for 2022. Sir Richard Branson’s Virgin Galactic just beat Jeff Bezos’s Blue Origin to the edge of space in July, with both billionaires riding in their own spacecraft on suborbital trips. In September Elon Musk’s company, SpaceX, sent four passengers on a multi-day orbital cruise around the Earth.

All three firms hope to fly more tourists in 2022, which promises to be the first year in which more people go to space as paying passengers than as government employees. But Virgin Galactic is modifying its vehicle to make it stronger and safer, and it is not expected to fly again until the second half of 2022, with commercial service starting in the fourth quarter. Blue Origin plans more flights but has not said when or how many. For its part, SpaceX has done a deal to send tourists to the International Space Station. Next up? The Moon.

Delivery drones

They are taking longer than expected to get off the ground. But new rules, which came into effect in 2021, will help drone deliveries gain altitude in 2022. Manna, an Irish startup which has been delivering books, meals and medicine in County Galway, plans to expand its service in Ireland and into Britain. Wing, a sister company of Google, has been doing test deliveries in America, Australia and Finland and will expand its mall-to-home delivery service, launched in late 2021. Dronamics, a Bulgarian startup, will start using winged drones to shuttle cargo between 39 European airports. The question is: will the pace of drone deliveries pick up—or drop off?

Quieter supersonic aircraft

For half a century, scientists have wondered whether changes to the shape of a supersonic aircraft could reduce the intensity of its sonic boom. Only recently have computers become powerful enough to run the simulations needed to turn those noise-reduction theories into practice.

In 2022 NASA’s X-59 QueSST (short for “Quiet Supersonic Technology”) will make its first test flight. Crucially, that test will take place over land—specifically, Edwards Air Force Base in California. Concorde, the world’s first and only commercial supersonic airliner, was not allowed to travel faster than sound when flying over land. The X-59’s sonic boom is expected to be just one-eighth as loud as Concorde’s. At 75 perceived decibels, it will be equivalent to a distant thunderstorm—more of a sonic “thump”. If it works, NASA hopes that regulators could lift the ban on supersonic flights over land, ushering in a new era for commercial flight.
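
The two numbers are consistent under the common psychoacoustic rule of thumb that an increase of roughly 10 decibels doubles perceived loudness, so “one-eighth as loud” corresponds to a reduction of about 30 dB. A quick check of that arithmetic (the rule of thumb itself is an assumption, not a figure from the article):

```python
import math

# Rule of thumb (assumed): +10 dB is roughly twice as loud, so a loudness
# ratio r corresponds to a level difference of 10 * log2(1/r) decibels.
x59_pldb = 75           # X-59 boom, perceived decibels (from the article)
loudness_ratio = 1 / 8  # X-59 vs. Concorde, "one-eighth as loud"

db_difference = 10 * math.log2(1 / loudness_ratio)  # = 30 dB
implied_concorde_pldb = x59_pldb + db_difference

print(f"Implied Concorde boom level: ~{implied_concorde_pldb:.0f} perceived dB")
# Prints ~105 perceived dB, in line with commonly cited estimates for
# Concorde's sonic boom.
```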

3D-printed houses

Architects often use 3D printing to create scale models of buildings. But the technology can be scaled up and used to build the real thing. Materials are squirted out of a nozzle as a foam that then hardens. Layer by layer, a house is printed—either on site, or as several pieces in a factory that are transported and assembled.

In 2022 Mighty Buildings, based in California, will complete a development of 15 eco-friendly 3D-printed homes at Rancho Mirage. And ICON, based in Texas, plans to start building a community of 100 3D-printed homes near Austin, which would be the largest development of its kind.

Sleep tech

It’s become a craze in Silicon Valley. Not content with maximising their productivity and performance during their waking hours, geeks are now optimising their sleep, too, using an array of technologies. These include rings and headbands that record and track sleep quality, soothing sound machines, devices to heat and cool mattresses, and smart alarm clocks to wake you at the perfect moment. Google launched a sleep-tracking nightstand tablet in 2021, and Amazon is expected to follow suit in 2022. It sounds crazy. But poor sleep is linked with maladies from heart disease to obesity. And what Silicon Valley does today, everyone else often ends up doing tomorrow.

Personalised nutrition

Diets don’t work. Evidence is growing that each person’s metabolism is unique, and food choices should be, too. Enter personalised nutrition: apps that tell you what to eat and when, using machine-learning algorithms, tests of your blood and gut microbiome, data on lifestyle factors such as exercise, and real-time tracking of blood-sugar levels using coin-sized devices attached to the skin. After successful launches in America, personalised-nutrition firms are eyeing other markets in 2022. Some will also seek regulatory approval as treatments for conditions such as diabetes and migraine.

Wearable health trackers

Remote medical consultations have become commonplace. That could transform the prospects for wearable health trackers such as the Fitbit or Apple Watch. They are currently used primarily as fitness trackers, measuring steps taken, running and swimming speeds, heart rates during workouts, and so forth. But the line between consumer and medical uses of such devices is now blurring, say analysts at Gartner, a consultancy.

Smart watches can already measure blood oxygenation, perform ECGs and detect atrial fibrillation. The next version of the Apple Watch, expected in 2022, may include new sensors capable of measuring levels of glucose and alcohol in the blood, along with blood pressure and body temperature. Rockley Photonics, the company supplying the sensor technology, calls its system a “clinic on the wrist”. Regulatory approval for such functions may take a while, but in the meantime doctors, not just users, will be paying more attention to data from wearables.

The metaverse

Coined in 1992 by Neal Stephenson in his novel “Snow Crash”, the word “metaverse” referred to a persistent virtual world, accessible via special goggles, where people could meet, flirt, play games, buy and sell things, and much more besides. In 2022 it refers to the fusion of video games, social networking and entertainment to create new, immersive experiences, like swimming inside your favourite song at an online concert. Games such as Minecraft, Roblox and Fortnite are all stepping-stones to an emerging new medium. Facebook has renamed itself Meta to capitalise on the opportunity—and distract from its other woes.

Quantum computing

An idea that existed only on blackboards in the 1990s has grown into a multi-billion dollar contest between governments, tech giants and startups: harnessing the counter-intuitive properties of quantum physics to build a new kind of computer. For some kinds of mathematics a quantum computer could outperform any non-quantum machine that could ever be built, making quick work of calculations used in cryptography, chemistry and finance.

But when will such machines arrive? One measure of a quantum computer’s capability is its number of qubits. A Chinese team has built a computer with 66 qubits. IBM, an American firm, hopes to hit 433 qubits in 2022 and 1,000 by 2023. But existing machines have a fatal flaw: the delicate quantum states on which they depend last for just a fraction of a second. Fixing that will take years. But if existing machines can be made useful in the meantime, quantum computing could become a commercial reality much sooner than expected.

Virtual influencers

Unlike a human influencer, a virtual influencer will never be late to a photoshoot, get drunk at a party or get old. That is because virtual influencers are computer-generated characters who plug products on Instagram, Facebook and TikTok.

The best known is Miquela Sousa, or “Lil Miquela”, a fictitious Brazilian-American 19-year-old with 3m Instagram followers. With $15bn expected to be spent on influencer marketing in 2022, virtual influencers are proliferating. Aya Stellar—an interstellar traveller crafted by Cosmiq Universe, a marketing agency—will land on Earth in February. She has already released a song on YouTube.

Brain interfaces

In April 2021 the irrepressible entrepreneur Elon Musk excitedly tweeted that a macaque monkey was “literally playing a video game telepathically using a brain chip”. His company, Neuralink, had implanted two tiny sets of electrodes into the monkey’s brain. Signals from these electrodes, transmitted wirelessly and then decoded by a nearby computer, enabled the monkey to move the on-screen paddle in a game of Pong using thought alone.

In 2022 Neuralink hopes to test its device in humans, to enable people who are paralysed to operate a computer. Another firm, Synchron, has already received approval from American regulators to begin human trials of a similar device. Its “minimally invasive” neural prosthetic is inserted into the brain via blood vessels in the neck. As well as helping paralysed people, Synchron is also looking at other uses, such as diagnosing and treating nervous-system conditions including epilepsy, depression and hypertension.

Artificial meat and fish

Winston Churchill once mused about “the absurdity of growing a whole chicken to eat the breast or wing”. Nearly a century later, around 70 companies are “cultivating” meats in bioreactors. Cells taken from animals, without harming them, are nourished in soups rich in proteins, sugars, fats, vitamins and minerals. In 2020 Eat Just, an artificial-meat startup based in San Francisco, became the first company certified to sell its products, in Singapore.

It is expected to be joined by a handful of other firms in 2022. In the coming year an Israeli startup, SuperMeat, expects to win approval for commercial sales of cultivated chicken burgers, grown for $10 a pop—down from $2,500 in 2018, the company says. Finless Foods, based in California, hopes for approval to sell cultivated bluefin tuna, grown for $440 a kilogram—down from $660,000 in 2017. Bacon, turkey and other cultivated meats are in the pipeline. Eco-conscious meat-lovers will soon be able to have their steak—and eat it.

By the Science and technology correspondents of The Economist

This article appeared in the What next? section of the print edition of The World Ahead 2022 under the headline “What next?”

India Should Demand International, Political Oversight for Geoengineering R&D (The Wire)

thewire.in

Some ‘high-level’ scientific pronouncements have assumed stewardship of climate geoengineering in the absence of other agents. This is dangerous, as effects on the Indian monsoons will show.

Prakash Kashwan – 28/Dec/2018


Multilateral climate negotiations led by the UN have ended on disappointing notes of late. This has prompted climate scientists to weigh the pros and cons of climate geoengineering. Indian scientists, policymakers, and the public must also engage in these debates, especially given the potentially major implications of geoengineering for the monsoons in South Asia and Africa.

But while a proper scientific and technological assessment of potential risks is important, it wouldn’t be enough.

Since 2016, an academic working group (AWG) of 14 global governance experts (including the author) has deliberated on the wisdom and merits of geoengineering. In a report, we argue that we ought to develop ‘anticipatory governance mechanisms’.

While people often equate governance with top-down regulations, the AWG’s vision emphasises a combination of regulatory and voluntary strategies adopted by diverse state and non-state actors.

In the same vein, it’s also important to unpack the umbrella terminology of ‘geoengineering’. It comprises two sets of technologies with different governance implications: carbon geoengineering and solar geoengineering.

Carbon geoengineering, or carbon-dioxide removal, seeks to remove large quantities of the greenhouse gas from the atmosphere. The suite of options it presents includes bioenergy with carbon capture and storage (BECCS). This would require planting bioenergy crops over an area up to five times the size of India by 2100. Obviously, such large-scale and rapid land-use change would strain already precarious global food security and violate the land, forest and water rights of hundreds of millions.

The second cluster of geoengineering technologies, solar geoengineering, a.k.a. solar radiation management (SRM), seeks to cool the planet by reflecting a fraction of sunlight back into space. While this could help avoid some of the more severe effects of climate change, SRM doesn’t help reduce the stock of carbon already present in the atmosphere. Scientists also caution that geoengineering may distract us from investing in emissions reduction. But we know from experience that policymakers could ignore such cautions in the policymaking process.

This means problems like air pollution and ocean acidification will continue unabated in the absence of profound climate mitigation actions. On the other hand, by altering atmospheric temperature, SRM could significantly disrupt the hydrological cycle and affect the monsoons.

India’s interest in minimising disruptions to the monsoons should by itself encourage it to help develop international geoengineering governance.

But before we can get into the nitty-gritty, there’s a question that must be answered. Why should the global community think about governing climate engineering at this stage, when all that exists of SRM are computer simulations of its pros and cons?

Some reasons follow:

First, the suggestion that geoengineering technologies merely fill a void left open by a “lack of political will” doesn’t capture the full array of possibilities. The IPCC special report on global warming of 1.5°C includes a scenario in which the Paris Agreement’s goals are secured by 2050. This pathway banks on social, business and technological innovations, and doesn’t require resorting to radical climate responses or sacrificing improvements in basic living standards in the developing world.

On the other hand, $8 trillion worth of investments has already been redirected away from fossil fuel operations, thanks to a global divestment movement led by environmental activists and student groups. (Such an outcome was thought to be politically infeasible only a few years ago.)

Second, recent research has shown that some geoengineering technologies, such as BECCS, could compete with the pursuit of more “ecologically sound, economical, and scalable” methods for enhancing natural climate sinks.

Third, despite a lot of progress in recent years, we don’t know enough to support a full assessment of the intended and unintended effects of geoengineering.

Decisions about which unresolved questions of geoengineering deserve public investment can’t be left only to scientists and policymakers. The community of climate engineering scientists tends to frame geoengineering in certain ways, privileging them over other equally valid alternatives.

This includes considering the global average surface temperature as the central climate impact indicator and ignoring vested interests linked to capital-intensive geoengineering infrastructure. This could bias future R&D trajectories in this area.

And these priorities, together with the assessments produced by eminent scientific bodies, have contributed to the rise of a de facto form of governance. In other words, some ‘high-level’ scientific pronouncements have assumed stewardship of climate geoengineering in the absence of other agents.

Such technocratic modes of governance don’t enjoy broad-based social or political legitimacy.

Individual research groups (e.g. Harvard University’s Solar Geoengineering Research Program) have opened themselves up to public scrutiny. They don’t support commercial work on solar geoengineering and have decided not to patent technologies being developed in their labs. While this is commendable, none of this can substitute more politically legitimate arrangements.

The case of the Indian monsoons illustrates these challenges well. Various models in the Geoengineering Model Intercomparison Project have shown that SRM, once in use, would likely cause net summer monsoon precipitation to decline by between 6.4% and 12.7%. (These predictions are based on average changes in atmospheric temperature, which means bigger or smaller variations could occur over different parts of India.)

So politically legitimate international governance is important to ensure global responses to climate change account for these and other domestic consequences.

As a first step, the AWG report recommends the UN secretary-general establish a high-level representative body to engage in international dialogue on various questions of governing SRM R&D, supported by a General Assembly resolution. Among other things, the mandate of this ‘World Commission’ could include debating whether, and to what end, SRM should be researched and developed and how it could fit within broader climate response strategies.

Then again, debates over solar geoengineering can’t be limited to global bodies and commissions. So the AWG also recommends the UN create a global forum for stakeholder dialogue to facilitate discussions on solar geoengineering. Such a forum could engage a variety of stakeholders, including local governments, communities, indigenous peoples and other climate-vulnerable groups, youth organisations and women’s groups. Only such a process is likely to effectively represent Indian peasants and farmers at the receiving end of a longstanding agrarian crisis.

These proposals for geoengineering governance build on various precedents. For example, in the late 1990s the World Commission on Dams demonstrated the feasibility and value of an extensive multi-level governance arrangement.

In 2018, policy experts finally recognised that global climate governance can’t ignore the general public’s concerns. It would be best to avoid rediscovering this wheel in the international governance domain of climate geoengineering.

Prakash Kashwan is an associate professor at the University of Connecticut, Storrs, and was a member of the AWG. The South Asia edition of his book Democracy in the Woods (2017) is due out later this month.

Geoengineering: We should not play dice with the planet (The Hill)

thehill.com

Kim Cobb and Michael E. Mann, opinion contributors

10/12/21 11:30 AM EDT


The fate of the Biden administration’s agenda on climate remains uncertain, captive to today’s toxic atmosphere in Washington, DC. But the headlines of 2021 leave little in the way of ambiguity — the era of dangerous climate change is already upon us, in the form of wildfires, hurricanes, droughts and flooding that have upended lives across America. A recent UN report on climate is clear these impacts will worsen in the coming two decades if we fail to halt the continued accumulation of greenhouse gases in the atmosphere.

To avert disaster, we must chart a different climate course, beginning this year, to achieve steep emissions reductions this decade. Meeting this moment demands an all-hands-on-deck approach. And no stone should be left unturned in our quest for meaningful options for decarbonizing our economy.

But while it is tempting to pin our hopes on future technology that might reduce the scope of future climate damages, we must pursue such strategies based on sound science, with a keen eye for potential false leads and dead ends. And we must not allow ourselves to be distracted from the task at hand — reducing fossil fuel emissions — by technofixes that, at best, may not pan out and, at worst, may open the door to potentially disastrous unintended consequences.

So-called “geoengineering,” the intentional manipulation of our planetary environment in a dubious effort to offset the warming from carbon pollution, is the poster child for such potentially dangerous gambits. As the threat of climate change becomes more apparent, an increasingly desperate public — and the policymakers who represent them — seem willing to entertain geoengineering schemes. And some prominent individuals, such as former Microsoft CEO Bill Gates, have been willing to advocate for this risky path forward.

The New York Times recently injected momentum into the push for geoengineering strategies with an op-ed by Harvard scientist and geoengineering advocate David Keith. Keith argues that even in a world where emissions cuts are quick enough and large enough to limit warming to 1.5 degrees Celsius by 2050, we would face centuries of elevated atmospheric CO2 concentrations and global temperatures combined with rising sea levels.

The solution proposed by geoengineering proponents? A combination of slow but steady CO2 removal factories (including Keith’s own for-profit company) and a quick-acting temperature fix — likened to a “band-aid” — delivered by a fleet of airplanes dumping vast quantities of chemicals into the upper atmosphere.

This latter scheme is sometimes called “solar geoengineering” or “solar radiation management,” but that’s really a euphemism for efforts to inject potentially harmful chemicals into the stratosphere with potentially disastrous side effects, including more widespread drought, reduced agricultural productivity, and unpredictable shifts in regional climate patterns. Solar geoengineering does nothing to slow the pace of ocean acidification, which will increase with emissions.

On top of that is the risk of “termination shock” (a scenario in which we suffer the cumulative warming from decades of increasing emissions in a matter of several years, should we abruptly end solar geoengineering efforts). Herein lies the moral hazard of this scheme: It could well be used to justify delays in reducing carbon emissions, addicting human civilization writ large to these dangerous regular chemical injections into the atmosphere. 

While this is the time to apply bold, creative thinking to accelerate progress toward climate stability, this is not the time to play fast and loose with the planet, in service of any agenda, be it political or scientific in nature. As the recent UN climate report makes clear, any emissions trajectory consistent with peak warming of 1.5 degrees Celsius by mid-century will pave the way for substantial drawdown of atmospheric CO2 thereafter. Such drawdown prevents further increases in surface temperatures once net emissions decline to zero, followed by global-scale cooling shortly after emissions go negative.

Natural carbon sinks — over land as well as the ocean — play a critical role in this scenario. They have sequestered half of our historic CO2 emissions, and are projected to continue to do so in coming decades. Their buffering capacity may be reduced with further warming, however, which is yet another reason to limit warming to 1.5 degrees Celsius this century. But if we are to achieve negative emissions this century — manifest as steady reductions of atmospheric CO2 concentrations — it will be because we reduce emissions below the level of uptake by natural carbon sinks. So, carbon removal technology trumpeted as a scalable solution to our emissions challenge is unlikely to make a meaningful dent in atmospheric CO2 concentrations.

As to the issue of climate reversibility, it’s naïve to think that we could reverse nearly two centuries of cumulative emissions and associated warming in a matter of decades. Nonetheless, the latest science tells us that surface warming responds immediately to reductions in carbon emissions. Land responds the fastest, so we can expect a rapid halt to the worsening of heatwaves, droughts, wildfires and floods once we reach net-zero emissions. Climate impacts tied to the ocean, such as marine heat waves and hurricanes, would respond somewhat more slowly. And the polar ice sheets may continue to lose mass and contribute to sea-level rise for centuries, but coastal communities can more easily adapt to sea-level rise if warming is limited to 1.5 degrees Celsius. 

While it’s appealing to think that a climate “band-aid” could protect us from the worst climate impacts, solar geoengineering is more like risky elective surgery than preventative medicine. This supposed “climate fix” might very well be worse than the disease, drying the continents, reducing crop yields, and potentially having other unforeseen negative consequences. The notion that such an intervention might somehow aid the plight of the global poor seems misguided at best.

When considering how to advance climate justice in the world, it is critical to ask, “Who wins — and who loses?” in a geoengineered future. If the winners are petrostates and large corporations who, if history is any guide, will likely be granted preferred access to the planetary thermostat, and the losers are the global poor — who already suffer disproportionately from dirty fossil fuels and climate impacts — then we might simply be adding insult to injury.

To be clear, the world should continue to invest in research and development of science and technology that might hasten societal decarbonization and climate stabilization, and eventually the return to a cooler climate. But those technologies must be measured, in both efficacy and safety, against the least risky and most surefire path to a net-zero world: the path from a fossil fuel-driven to a clean energy-driven society.

Kim Cobb is the director of the Global Change Program at the Georgia Institute of Technology and professor in the School of Earth and Atmospheric Sciences. She was a lead author on the recent UN Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report. Follow her on Twitter: @coralsncaves

Michael E. Mann is distinguished professor of atmospheric science and director of the Earth System Science Center at Penn State University. He is author of the recently released book, “The New Climate War: The Fight to Take Back our Planet.” Follow him on Twitter: @MichaelEMann

How big science failed to unlock the mysteries of the human brain (MIT Technology Review)

technologyreview.com

Large, expensive efforts to map the brain started a decade ago but have largely fallen short. It’s a good reminder of just how complex this organ is.

Emily Mullin

August 25, 2021


In September 2011, a group of neuroscientists and nanoscientists gathered at a picturesque estate in the English countryside for a symposium meant to bring their two fields together. 

At the meeting, Columbia University neurobiologist Rafael Yuste and Harvard geneticist George Church made a not-so-modest proposal: to map the activity of the entire human brain at the level of individual neurons and detail how those cells form circuits. That knowledge could be harnessed to treat brain disorders like Alzheimer’s, autism, schizophrenia, depression, and traumatic brain injury. And it would help answer one of the great questions of science: How does the brain bring about consciousness? 

Yuste, Church, and their colleagues drafted a proposal that would later be published in the journal Neuron. Their ambition was extreme: “a large-scale, international public effort, the Brain Activity Map Project, aimed at reconstructing the full record of neural activity across complete neural circuits.” Like the Human Genome Project a decade earlier, they wrote, the brain project would lead to “entirely new industries and commercial ventures.” 

New technologies would be needed to achieve that goal, and that’s where the nanoscientists came in. At the time, researchers could record activity from just a few hundred neurons at once—but with around 86 billion neurons in the human brain, it was akin to “watching a TV one pixel at a time,” Yuste recalled in 2017. The researchers proposed tools to measure “every spike from every neuron” in an attempt to understand how the firing of these neurons produced complex thoughts. 

The audacious proposal intrigued the Obama administration and laid the foundation for the multi-year Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, announced in April 2013. President Obama called it the “next great American project.” 

But it wasn’t the first audacious brain venture. In fact, a few years earlier, Henry Markram, a neuroscientist at the École Polytechnique Fédérale de Lausanne in Switzerland, had set an even loftier goal: to make a computer simulation of a living human brain. Markram wanted to build a fully digital, three-dimensional model at the resolution of the individual cell, tracing all of those cells’ many connections. “We can do it within 10 years,” he boasted during a 2009 TED talk.

In January 2013, a few months before the American project was announced, the EU awarded Markram $1.3 billion to build his brain model. The US and EU projects sparked similar large-scale research efforts in countries including Japan, Australia, Canada, China, South Korea, and Israel. A new era of neuroscience had begun. 

An impossible dream?

A decade later, the US project is winding down, and the EU project faces its deadline to build a digital brain. So how did it go? Have we begun to unwrap the secrets of the human brain? Or have we spent a decade and billions of dollars chasing a vision that remains as elusive as ever? 

From the beginning, both projects had critics.

EU scientists worried about the costs of the Markram scheme and thought it would squeeze out other neuroscience research. And even at the original 2011 meeting in which Yuste and Church presented their ambitious vision, many of their colleagues argued it simply wasn’t possible to map the complex firings of billions of human neurons. Others said it was feasible but would cost too much money and generate more data than researchers would know what to do with. 

In a blistering article appearing in Scientific American in 2013, Partha Mitra, a neuroscientist at the Cold Spring Harbor Laboratory, warned against the “irrational exuberance” behind the Brain Activity Map and questioned whether its overall goal was meaningful. 

Even if it were possible to record all spikes from all neurons at once, he argued, a brain doesn’t exist in isolation: in order to properly connect the dots, you’d need to simultaneously record external stimuli that the brain is exposed to, as well as the behavior of the organism. And he reasoned that we need to understand the brain at a macroscopic level before trying to decode what the firings of individual neurons mean.  

Others had concerns about the impact of centralizing control over these fields. Cornelia Bargmann, a neuroscientist at Rockefeller University, worried that it would crowd out research spearheaded by individual investigators. (Bargmann was soon tapped to co-lead the BRAIN Initiative’s working group.)

While the US initiative sought input from scientists to guide its direction, the EU project was decidedly more top-down, with Markram at the helm. But as Noah Hutton documents in his 2020 film In Silico, Markram’s grand plans soon unraveled. As an undergraduate studying neuroscience, Hutton had been assigned to read Markram’s papers and was impressed by his proposal to simulate the human brain; when he started making documentary films, he decided to chronicle the effort. He soon realized, however, that the billion-dollar enterprise was characterized more by infighting and shifting goals than by breakthrough science.

In Silico shows Markram as a charismatic leader who needed to make bold claims about the future of neuroscience to attract the funding to carry out his particular vision. But the project was troubled from the outset by a major issue: there isn’t a single, agreed-upon theory of how the brain works, and not everyone in the field agreed that building a simulated brain was the best way to study it. It didn’t take long for those differences to arise in the EU project. 

In 2014, hundreds of experts across Europe penned a letter citing concerns about oversight, funding mechanisms, and transparency in the Human Brain Project. The scientists felt Markram’s aim was premature and too narrow and would exclude funding for researchers who sought other ways to study the brain. 

“What struck me was, if he was successful and turned it on and the simulated brain worked, what have you learned?” Terry Sejnowski, a computational neuroscientist at the Salk Institute who served on the advisory committee for the BRAIN Initiative, told me. “The simulation is just as complicated as the brain.” 

The Human Brain Project’s board of directors voted to change its organization and leadership in early 2015, replacing a three-member executive committee led by Markram with a 22-member governing board. Christoph Ebell, a Swiss entrepreneur with a background in science diplomacy, was appointed executive director. “When I took over, the project was at a crisis point,” he says. “People were openly wondering if the project was going to go forward.”

But a few years later he was out too, after a “strategic disagreement” with the project’s host institution. The project is now focused on providing a new computational research infrastructure to help neuroscientists store, process, and analyze large amounts of data—unsystematic data collection has been an issue for the field—and develop 3D brain atlases and software for creating simulations.

The US BRAIN Initiative, meanwhile, underwent its own changes. Early on, in 2014, responding to the concerns of scientists and acknowledging the limits of what was possible, it evolved into something more pragmatic, focusing on developing technologies to probe the brain. 

New day

Those changes have finally started to produce results—even if they weren’t the ones that the founders of each of the large brain projects had originally envisaged. 

Last year, the Human Brain Project released a 3D digital map that integrates different aspects of human brain organization at the millimeter and micrometer level. It’s essentially a Google Earth for the brain. 

And earlier this year Alipasha Vaziri, a neuroscientist funded by the BRAIN Initiative, and his team at Rockefeller University reported in a preprint paper that they’d simultaneously recorded the activity of more than a million neurons across the mouse cortex. It’s the largest recording of animal cortical activity yet made, if far from listening to all 86 billion neurons in the human brain as the original Brain Activity Map hoped.

The US effort has also shown some progress in its attempt to build new tools to study the brain. It has sped up the development of optogenetics, an approach that uses light to control neurons, and its funding has led to new high-density silicon electrodes capable of recording from hundreds of neurons simultaneously. And it has arguably accelerated the development of single-cell sequencing. In September, researchers using these advances will publish a detailed classification of cell types in the mouse and human motor cortexes—the biggest single output from the BRAIN Initiative to date.

While these are all important steps forward, though, they’re far from the initial grand ambitions. 

Lasting legacy

We are now heading into the last phase of these projects—the EU effort will conclude in 2023, while the US initiative is expected to have funding through 2026. What happens in these next years will determine just how much impact they’ll have on the field of neuroscience.

When I asked Ebell what he sees as the biggest accomplishment of the Human Brain Project, he didn’t name any one scientific achievement. Instead, he pointed to EBRAINS, a platform launched in April of this year to help neuroscientists work with neurological data, perform modeling, and simulate brain function. It offers researchers a wide range of data and connects many of the most advanced European lab facilities, supercomputing centers, clinics, and technology hubs in one system. 

“If you ask me ‘Are you happy with how it turned out?’ I would say yes,” Ebell said. “Has it led to the breakthroughs that some have expected in terms of gaining a completely new understanding of the brain? Perhaps not.” 

Katrin Amunts, a neuroscientist at the University of Düsseldorf, who has been the Human Brain Project’s scientific research director since 2016, says that while Markram’s dream of simulating the human brain hasn’t been realized yet, it is getting closer. “We will use the last three years to make such simulations happen,” she says. But it won’t be a big, single model—instead, several simulation approaches will be needed to understand the brain in all its complexity. 

Meanwhile, the BRAIN Initiative has provided more than 900 grants to researchers so far, totaling around $2 billion. The National Institutes of Health is projected to spend nearly $6 billion on the project by the time it concludes. 

For the final phase of the BRAIN Initiative, scientists will attempt to understand how brain circuits work by diagramming connected neurons. But claims for what can be achieved are far more restrained than in the project’s early days. The researchers now realize that understanding the brain will be an ongoing task—it’s not something that can be finalized by a project’s deadline, even if that project meets its specific goals.

“With a brand-new tool or a fabulous new microscope, you know when you’ve got it. If you’re talking about understanding how a piece of the brain works or how the brain actually does a task, it’s much more difficult to know what success is,” says Eve Marder, a neuroscientist at Brandeis University. “And success for one person would be just the beginning of the story for another person.” 

Yuste and his colleagues were right that new tools and techniques would be needed to study the brain in a more meaningful way. Now, scientists will have to figure out how to use them. But instead of answering the question of consciousness, developing these methods has, if anything, only opened up more questions about the brain—and shown just how complex it is. 

“I have to be honest,” says Yuste. “We had higher hopes.”

Emily Mullin is a freelance journalist based in Pittsburgh who focuses on biotechnology.

Dress-sneaker denialism (Folha de S.Paulo)

It is not with disinformation that journalism will contribute to the climate issue

Thiago Amparo – original article here.

Aug 11, 2021, 10:05 PM

The perversity of denialism lies in swearing that you are saying the opposite of what you are in fact saying. In this newspeak, denialism wears the dress sneakers of anti-alarmism. Leandro Narloch's argument in this Folha on Tuesday (10) verges on tedious, because it is stale. Stale because, as Michael Mann recounts in “The New Climate War,” it is nothing more than the same denialist rhetoric 2.0.

In essence, Narloch argues that there are climate-damaging activities that should be “celebrated and spread” because they make us “less vulnerable to nature.” Narloch is scientifically wrong. And he errs while subscribing to one of the most nefarious forms of denialism: he masks it, selling solutions that not only fail to mitigate the climate crisis or adapt societies to it, but have the opposite effect. Blow up the Amazon in order to save it: that is the argument.

These and other denialist discourses had already been mapped in the Cambridge journal Global Sustainability in July 2020: they are not new. Instead of touching 21st-century taboos, they sell untruths as if they were science. Narloch gets the concept of vulnerability wrong: from the wildfires in California to the floods in Germany, we are not protected against nature, because we are embedded in it. He also ignores the vast IPCC literature on vulnerability.

Narloch disregards the climate-science concept of feedback loops: the climate crisis pulls a series of triggers of incalculable dimension, a chain reaction never seen before. Destroying the climate will not protect us from the climate, because it is the absence of a drastic energy transition that has deepened the climate crisis. Investing in the opposite is inefficient.

If the IPCC report switched on the red warning light, it is not with disinformation that journalism will contribute to the subject. Pluralism is a river where ideas move within the banks of truth and science. Do not complain when the river runs dry, blowing up the banks that journalism should have protected.

Bill Gates and the problem with climate solutionism (MIT Technology Review)

Focusing on technological solutions to climate change looks like an attempt to dodge the more challenging political obstacles.

By MIT Technology Review, April 6, 2021

In his new book, How to Avoid a Climate Disaster, Bill Gates takes a technological approach to understanding the climate crisis. Gates starts with the 51 billion tons of greenhouse gases created per year. He breaks that pollution down into sectors by impact, moving through electricity, industry, and agriculture to transportation and buildings. Throughout, Gates proves adept at cutting the complexities of the climate challenge down to size, giving the reader useful heuristics for distinguishing the bigger technological problems (cement) from the smaller ones (aircraft).

Present at the 2015 Paris climate negotiations, Gates and dozens of wealthy individuals launched Breakthrough Energy, an interlocking venture capital fund, lobbying operation, and research effort. Gates and his fellow investors argued that both the federal government and the private sector were underinvesting in energy innovation. Breakthrough aims to fill that gap, investing in everything from next-generation nuclear technology to plant-based meat that tastes like beef. The fund's first $1 billion round had some early successes, such as Impossible Foods, a maker of plant-based burgers. The fund announced a second round of equal size in January.

A parallel effort, an international agreement called Mission Innovation, says it has convinced its members (the European Union's executive arm along with 24 countries including China, the US, India, and Brazil) to invest an additional $4.6 billion a year since 2015 in clean-energy research and development.

These various initiatives are the through line of Gates's latest book, written from a techno-optimist perspective. “Everything I've learned about climate and technology makes me optimistic ... if we act fast enough, [we can] avoid a climate catastrophe,” he writes in the opening pages.

As many have pointed out, much of the necessary technology already exists; much can be done now. Gates does not dispute this, but his book focuses on the technological challenges he believes must still be overcome to achieve deeper decarbonization. He spends less time on the political snags, writing that he thinks “more like an engineer than a political scientist.” Yet politics, in all its messiness, is the main impediment to progress on climate change. And engineers should understand how complex systems can have feedback loops that go wrong.

Yes, Minister

Kim Stanley Robinson, by contrast, does think like a political scientist. His latest novel, The Ministry for the Future, opens just a few years from now, in 2025, when a massive heat wave hits India, killing millions. The book's protagonist, Mary Murphy, runs a UN agency charged with representing the interests of future generations, in an attempt to unite the world's governments behind a climate solution. Throughout the book, intergenerational equity and various forms of distributive politics take center stage.

If you have seen the scenarios the Intergovernmental Panel on Climate Change (IPCC) develops for the future, Robinson's book will feel familiar. His story works through the policies needed to solve the climate crisis, and he has clearly done his homework. Though an exercise in imagination, the novel at times reads more like a social-science graduate seminar than a work of escapist fiction. The climate refugees at the center of the story illustrate how the consequences of pollution fall hardest on the world's poorest, while the rich produce far more carbon.

Reading Gates after Robinson lays bare the inextricable connection between inequality and climate change. Gates's efforts on climate are laudable. But when he tells us that the combined wealth of the people backing his investment fund is $170 billion, it is a little puzzling that they have dedicated only $2 billion to climate solutions, less than 2% of their assets. That fact alone is an argument for taxing wealth: the climate crisis demands government action. It cannot be left to the whims of billionaires.

As billionaires go, Gates is arguably one of the good ones. He tells stories of how he uses his fortune to help the poor and the planet. The irony of writing a book about climate change while flying a private jet and keeping a 6,132 m² mansion is not lost on the reader, nor on Gates, who calls himself an “imperfect messenger on climate change.” Still, he is unquestionably an ally of the climate movement.

But by focusing on technological innovation, Gates plays down the role of fossil-fuel interests in obstructing that progress. Oddly, climate denial goes unmentioned in the book. Washing his hands of political polarization, Gates never draws the connection to his fellow billionaires Charles and David Koch, who got rich on petrochemicals and have played a prominent role in propagating climate denialism.

Gates marvels, for example, that for the vast majority of Americans electric heaters are actually cheaper than continuing to burn fossil fuels. To him it is a puzzle that people do not adopt these cheaper, more sustainable options. But it is not. As the journalists Rebecca Leber and Sammy Roth have reported in Mother Jones and the Los Angeles Times, the gas industry is funding advocates and mounting marketing campaigns to oppose electrification and keep people hooked on fossil fuels.

These opposing forces come through better in Robinson's book than in Gates's. Gates would have benefited from drawing on the work that Naomi Oreskes, Eric Conway, Geoffrey Supran, and others have done to document fossil-fuel companies' persistent efforts to sow public doubt about climate science.

One thing Gates and Robinson do share, however, is the view that geoengineering (monumental interventions to fight the symptoms rather than the causes of climate change) may prove inevitable. In The Ministry for the Future, solar geoengineering, the spraying of fine particles into the atmosphere to reflect more of the sun's heat back into space, is deployed in the aftermath of the deadly heat wave that opens the story. Later, scientists go to the poles and devise elaborate methods for pumping meltwater out from under glaciers to keep them from sliding into the sea. Despite some setbacks, they hold off several meters of sea-level rise. One can imagine Gates turning up in the novel as an early funder of these efforts. As he notes in his own book, he has funded research into solar geoengineering for years.

The worst part

The title of Elizabeth Kolbert's new book, Under a White Sky, is a reference to this nascent technology, since deploying it at scale could change the color of the sky from blue to white.

Kolbert notes that the first report on climate change landed on President Lyndon Johnson's desk in 1965. That report did not argue for cutting carbon emissions by moving away from fossil fuels. Instead, it advocated changing the climate through solar geoengineering, though the term had not yet been coined. It is worrying that some people leap straight to such risky solutions rather than addressing the root causes of climate change.

Reading Under a White Sky, we are reminded of the ways interventions like these can go wrong. For example, the scientist and writer Rachel Carson advocated importing non-native species as an alternative to using pesticides. The year after her book Silent Spring was published in 1962, the US Fish and Wildlife Service brought Asian carp to America for the first time to control aquatic weeds. The approach solved one problem but created another: the spread of this invasive species threatened native ones and caused environmental damage.

As Kolbert observes, her book is about “people trying to solve problems created by people trying to solve problems.” Her account covers examples including ill-fated efforts to stop the spread of those carp, the pumping stations in New Orleans that are speeding the city's sinking, and attempts to selectively breed corals that can tolerate higher temperatures and ocean acidification. Kolbert has a sense of humor and a sharp eye for unintended consequences. If you like your apocalypse with a bit of wit, she will make you laugh while Rome burns.

By contrast, although Gates is aware of the potential pitfalls of technological fixes, he still celebrates inventions such as plastic and fertilizer as vital. Tell that to the sea turtles swallowing plastic waste, or to the fertilizer-fed algal blooms wrecking the Gulf of Mexico's ecosystem.

With dangerous levels of carbon dioxide already in the atmosphere, geoengineering may indeed prove necessary, but we should not be naive about the risks. Gates's book has plenty of good ideas and is worth reading. But for a full picture of the crisis we face, be sure to read Robinson and Kolbert too.

The Petabyte Age: Because More Isn’t Just More — More Is Different (Wired)

WIRED Staff, Science, 06.23.2008 12:00 PM

Introduction:

Sensors everywhere. Infinite storage. Clouds of processors. Our ability to capture, warehouse, and understand massive amounts of data is changing science, medicine, business, and technology. As our collection of facts and figures grows, so will the opportunity to find answers to fundamental questions. Because in the era of big data, more isn’t just more. More is different.

The End of Theory:

The Data Deluge Makes the Scientific Method Obsolete

Feeding the Masses:
Data In, Crop Predictions Out

Chasing the Quark:
Sometimes You Need to Throw Information Away

Winning the Lawsuit:
Data Miners Dig for Dirt

Tracking the News:
A Smarter Way to Predict Riots and Wars

Spotting the Hot Zones:
Now We Can Monitor Epidemics Hour by Hour

Sorting the World:
Google Invents New Way to Manage Data

Watching the Skies:
Space Is Big — But Not Too Big to Map

Scanning Our Skeletons:
Bone Images Show Wear and Tear

Tracking Air Fares:
Elaborate Algorithms Predict Ticket Prices

Predicting the Vote:
Pollsters Identify Tiny Voting Blocs

Pricing Terrorism:
Insurers Gauge Risks, Costs

Visualizing Big Data:
Bar Charts for Words

Big data and the end of theory? (The Guardian)

theguardian.com

Mark Graham, Fri 9 Mar 2012 14.39 GMT

Does big data have the answers? Maybe some, but not all, says Mark Graham

In 2008, Chris Anderson, then editor of Wired, wrote a provocative piece titled The End of Theory. Anderson was referring to the ways that computers, algorithms, and big data can potentially generate more insightful, useful, accurate, or true results than specialists or domain experts who traditionally craft carefully targeted hypotheses and research strategies.

This revolutionary notion has now entered not just the popular imagination, but also the research practices of corporations, states, journalists and academics. The idea being that the data shadows and information trails of people, machines, commodities and even nature can reveal secrets to us that we now have the power and prowess to uncover.

In other words, we no longer need to speculate and hypothesise; we simply need to let machines lead us to the patterns, trends, and relationships in social, economic, political, and environmental relationships.

It is quite likely that you yourself have been the unwitting subject of a big data experiment carried out by Google, Facebook and many other large Web platforms. Google, for instance, has been able to collect extraordinary insights into what specific colours, layouts, rankings, and designs make people more efficient searchers. They do this by slightly tweaking their results and website for a few million searches at a time and then examining the often subtle ways in which people react.
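
As a sketch of the kind of statistical check behind such an experiment, here is a two-proportion z-test in Python comparing a control arm against a tweaked variant. The click counts and arm sizes below are hypothetical, not Google's; they only illustrate how a "subtle" difference becomes detectable at the scale of millions of searches.

```python
from math import sqrt

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Z-statistic for the difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    return (p_b - p_a) / se

# Hypothetical numbers: a few million searches per arm, a subtle shift in behaviour.
z = two_proportion_z(clicks_a=310_000, n_a=2_000_000,
                     clicks_b=312_500, n_b=2_000_000)
print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at the 5% level
```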

Most large retailers similarly analyse enormous quantities of data from their databases of sales (which are linked to you by credit card numbers and loyalty cards) in order to make uncanny predictions about your future behaviours. In a now famous case, the American retailer Target upset a Minneapolis man by knowing more about his teenage daughter’s sex life than he did. Target was able to predict his daughter’s pregnancy by monitoring her shopping patterns and comparing that information to an enormous database detailing billions of dollars of sales.

More significantly, national intelligence agencies are mining vast quantities of non-public Internet data to look for weak signals that might indicate planned threats or attacks.

There can be no denying the significant power and potential of big data. And the huge resources being invested in both the public and private sectors to study it are a testament to this.

However, crucially important caveats are needed when using such datasets: caveats that, worryingly, seem to be frequently overlooked.

The raw informational material for big data projects is often derived from large user-generated or social media platforms (e.g. Twitter or Wikipedia). Yet, in all such cases we are necessarily only relying on information generated by an incredibly biased or skewed user-base.

Gender, geography, race, income, and a range of other social and economic factors all play a role in how information is produced and reproduced. People from different places and different backgrounds tend to produce different sorts of information. And so we risk ignoring a lot of important nuance if relying on big data as a social/economic/political mirror.

We can of course account for such bias by segmenting our data. Take the case of using Twitter to gain insights into last summer’s London riots. About a third of all UK Internet users have a Twitter profile; a subset of that group are the active tweeters who produce the bulk of content; and then a tiny subset of that group (about 1%) geocode their tweets (essential information if you want to know about where your information is coming from).

Despite the fact that we have a database of tens of millions of data points, we are necessarily working with subsets of subsets of subsets. Big data no longer seems so big. Such data thus serves to amplify the information produced by a small minority (a point repeatedly made by UCL’s Muki Haklay), and skew, or even render invisible, ideas, trends, people, and patterns that aren’t mirrored or represented in the datasets that we work with.
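
As a back-of-the-envelope illustration of that shrinkage, here is the arithmetic in Python. Only the one-third and 1% figures come from the text; the base population and the share of active tweeters are assumptions made for the sake of the calculation.

```python
uk_internet_users = 50_000_000   # hypothetical base figure
twitter_share     = 1 / 3        # "about a third ... have a Twitter profile"
active_share      = 0.2          # assumed share of profiles that tweet actively
geocoded_share    = 0.01         # "about 1% geocode their tweets"

sample = uk_internet_users * twitter_share * active_share * geocoded_share
print(f"{sample:,.0f} users")    # ~33,000 people standing in for a whole country
```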

Big data is undoubtedly useful for addressing and overcoming many important issues faced by society. But we need to ensure that we aren’t seduced by the promises of big data to render theory unnecessary.

We may one day get to the point where sufficient quantities of big data can be harvested to answer all of the social questions that most concern us. I doubt it though. There will always be digital divides; always be uneven data shadows; and always be biases in how information and technology are used and produced.

And so we shouldn’t forget the important role of specialists to contextualise and offer insights into what our data do, and maybe more importantly, don’t tell us.

Mark Graham is a research fellow at the Oxford Internet Institute and is one of the creators of the Floating Sheep blog

The End of Theory: The Data Deluge Makes the Scientific Method Obsolete (Wired)

wired.com

Chris Anderson, Science, 06.23.2008 12:00 PM


“All models are wrong, but some are useful.”

So proclaimed statistician George Box 30 years ago, and he was right. But what choice did we have? Only models, from cosmological equations to theories of human behavior, seemed to be able to consistently, if imperfectly, explain the world around us. Until now. Today companies like Google, which have grown up in an era of massively abundant data, don’t have to settle for wrong models. Indeed, they don’t have to settle for models at all.

Sixty years ago, digital computers made information readable. Twenty years ago, the Internet made it reachable. Ten years ago, the first search engine crawlers made it a single database. Now Google and like-minded companies are sifting through the most measured age in history, treating this massive corpus as a laboratory of the human condition. They are the children of the Petabyte Age.

The Petabyte Age is different because more is different. Kilobytes were stored on floppy disks. Megabytes were stored on hard disks. Terabytes were stored in disk arrays. Petabytes are stored in the cloud. As we moved along that progression, we went from the folder analogy to the file cabinet analogy to the library analogy to — well, at petabytes we ran out of organizational analogies.

At the petabyte scale, information is not a matter of simple three- and four-dimensional taxonomy and order but of dimensionally agnostic statistics. It calls for an entirely different approach, one that requires us to lose the tether of data as something that can be visualized in its totality. It forces us to view data mathematically first and establish a context for it later. For instance, Google conquered the advertising world with nothing more than applied mathematics. It didn’t pretend to know anything about the culture and conventions of advertising — it just assumed that better data, with better analytical tools, would win the day. And Google was right.

Google’s founding philosophy is that we don’t know why this page is better than that one: If the statistics of incoming links say it is, that’s good enough. No semantic or causal analysis is required. That’s why Google can translate languages without actually “knowing” them (given equal corpus data, Google can translate Klingon into Farsi as easily as it can translate French into German). And why it can match ads to content without any knowledge or assumptions about the ads or the content.
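
A toy version of that link-statistics idea, ranking pages purely by how many incoming links they receive over an invented web graph. PageRank proper also weights each link by the rank of its source page; this sketch keeps only the counting, and still needs no semantic analysis of any page.

```python
from collections import Counter

# Hypothetical web graph: page -> pages it links to.
links = {
    "a.com": ["c.com", "b.com"],
    "b.com": ["c.com"],
    "d.com": ["c.com", "a.com"],
}

# Rank pages purely by how many incoming links they receive.
in_links = Counter(target for targets in links.values() for target in targets)
for page, score in in_links.most_common():
    print(page, score)   # c.com ranks first: 3 incoming links, no "understanding"
```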

Speaking at the O’Reilly Emerging Technology Conference this past March, Peter Norvig, Google’s research director, offered an update to George Box’s maxim: “All models are wrong, and increasingly you can succeed without them.”

This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.

The big target here isn’t advertising, though. It’s science. The scientific method is built around testable hypotheses. These models, for the most part, are systems visualized in the minds of scientists. The models are then tested, and experiments confirm or falsify theoretical models of how the world works. This is the way science has worked for hundreds of years.

Scientists are trained to recognize that correlation is not causation, that no conclusions should be drawn simply on the basis of correlation between X and Y (it could just be a coincidence). Instead, you must understand the underlying mechanisms that connect the two. Once you have a model, you can connect the data sets with confidence. Data without a model is just noise.
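
The point is easy to demonstrate numerically. In the synthetic sketch below, two series are both driven by a shared confounder, so they correlate strongly with each other even though neither causes the other; all numbers are invented.

```python
import random

random.seed(0)
temperature = [random.uniform(10, 35) for _ in range(1000)]        # the confounder
ice_cream   = [2.0 * t + random.gauss(0, 3) for t in temperature]  # driven by temperature
drownings   = [0.5 * t + random.gauss(0, 3) for t in temperature]  # also driven by temperature

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

print(round(pearson(ice_cream, drownings), 2))  # strongly correlated, causally unrelated
```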

But faced with massive data, this approach to science — hypothesize, model, test — is becoming obsolete. Consider physics: Newtonian models were crude approximations of the truth (wrong at the atomic level, but still useful). A hundred years ago, statistically based quantum mechanics offered a better picture — but quantum mechanics is yet another model, and as such it, too, is flawed, no doubt a caricature of a more complex underlying reality. The reason physics has drifted into theoretical speculation about n-dimensional grand unified models over the past few decades (the “beautiful story” phase of a discipline starved of data) is that we don’t know how to run the experiments that would falsify the hypotheses — the energies are too high, the accelerators too expensive, and so on.

Now biology is heading in the same direction. The models we were taught in school about “dominant” and “recessive” genes steering a strictly Mendelian process have turned out to be an even greater simplification of reality than Newton’s laws. The discovery of gene-protein interactions and other aspects of epigenetics has challenged the view of DNA as destiny and even introduced evidence that environment can influence inheritable traits, something once considered a genetic impossibility.

In short, the more we learn about biology, the further we find ourselves from a model that can explain it.

There is now a better way. Petabytes allow us to say: “Correlation is enough.” We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.

The best practical example of this is the shotgun gene sequencing by J. Craig Venter. Enabled by high-speed sequencers and supercomputers that statistically analyze the data they produce, Venter went from sequencing individual organisms to sequencing entire ecosystems. In 2003, he started sequencing much of the ocean, retracing the voyage of Captain Cook. And in 2005 he started sequencing the air. In the process, he discovered thousands of previously unknown species of bacteria and other life-forms.

If the words “discover a new species” call to mind Darwin and drawings of finches, you may be stuck in the old way of doing science. Venter can tell you almost nothing about the species he found. He doesn’t know what they look like, how they live, or much of anything else about their morphology. He doesn’t even have their entire genome. All he has is a statistical blip — a unique sequence that, being unlike any other sequence in the database, must represent a new species.

This sequence may correlate with other sequences that resemble those of species we do know more about. In that case, Venter can make some guesses about the animals — that they convert sunlight into energy in a particular way, or that they descended from a common ancestor. But besides that, he has no better model of this species than Google has of your MySpace page. It’s just data. By analyzing it with Google-quality computing resources, though, Venter has advanced biology more than anyone else of his generation.
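
A crude sketch of what such a "statistical blip" amounts to in practice: compare a read's k-mer overlap against a reference set and call it novel when the best match is weak. The sequences, the k-mer size, and the 0.3 cutoff are all invented for illustration; real metagenomic pipelines are vastly more sophisticated, but the model-free logic is the same.

```python
def kmers(seq, k=4):
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=4):
    """Jaccard similarity of k-mer sets: a rough, model-free comparison."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

database = {
    "known_microbe_1": "ATGCGTACGTTAGCATCGATCGTACG",
    "known_microbe_2": "TTGACGGTACCGATGGCTAAGCTAGC",
}
read = "CCGTATTAGGCCGGATATCCGGTTAA"   # hypothetical read from an ocean sample

best = max(similarity(read, ref) for ref in database.values())
print("novel" if best < 0.3 else "seen before", f"(best similarity {best:.2f})")
```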

This kind of thinking is poised to go mainstream. In February, the National Science Foundation announced the Cluster Exploratory, a program that funds research designed to run on a large-scale distributed computing platform developed by Google and IBM in conjunction with six pilot universities. The cluster will consist of 1,600 processors, several terabytes of memory, and hundreds of terabytes of storage, along with the software, including IBM’s Tivoli and open source versions of Google File System and MapReduce.[1] Early CluE projects will include simulations of the brain and the nervous system and other biological research that lies somewhere between wetware and software.
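
The MapReduce pattern itself reduces to two functions: a map step that emits key-value pairs and a reduce step that folds together all values sharing a key. Here is a single-process Python sketch of the canonical word count; a real deployment shards both steps across thousands of machines.

```python
from collections import defaultdict

def map_step(document):
    for word in document.split():
        yield word.lower(), 1          # emit (key, value) pairs

def reduce_step(pairs):
    counts = defaultdict(int)
    for word, n in pairs:              # group values by key and fold them
        counts[word] += n
    return counts

docs = ["more is different", "more is more"]
pairs = (pair for doc in docs for pair in map_step(doc))
print(dict(reduce_step(pairs)))        # {'more': 3, 'is': 2, 'different': 1}
```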

Learning to use a “computer” of this scale may be challenging. But the opportunity is great: The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.

There’s no reason to cling to our old ways. It’s time to ask: What can science learn from Google?

Chris Anderson (canderson@wired.com) is the editor in chief of Wired.

Correction:
[1] This story originally stated that the cluster software would include the actual Google File System.
06.27.08

Artificial intelligence already imitates Guimarães Rosa and could change the way we think (Folha de S.Paulo)

www1.folha.uol.com.br

Hermano Vianna, anthropologist; he writes at the blog hermanovianna.wordpress.com

August 22, 2020


[summary] Amazed by the feats of technologies capable of producing text, even improvising on a sentence by Guimarães Rosa, the anthropologist analyzes the impact of artificial intelligence, points to ethical dilemmas in its use, fears a deepening dependence on the software-producing countries, and hopes the new practices will make more diverse and collaborative ways of thinking flourish in Brazil.

GPT-3 is the name of the new star in the quest for AI (artificial intelligence). It was released in May of this year by OpenAI, a company closing in on five years since its billion-dollar founding, financed by, among others, Elon Musk.

So far, access to its already legendary giga-capacity for generating surprising text on any subject is the privilege of a few rich and powerful people. There are, however, fun shortcuts for us poor mortals: one of them is the game “AI Dungeon,” the creation of a Mormon student, which has been running on GPT-3 fuel since July.

The players' goal is to create works of literary fiction with the help of this AI model. The starting language is English, but I used Portuguese, and the little creature showed admirable footwork in dodging my trap.

I was even more obnoxious about it. I did not just use Portuguese, I used Guimarães Rosa. I copied and pasted, from the first page of “Grande Sertão: Veredas”: “Alvejei mira em árvore, no quintal, no baixo do córrego” (“I took aim at a tree, in the backyard, down by the creek”). “AI Dungeon,” which up to that point had been speaking English, took the cue and carried on like this: “Uma fogueira crepitante brinca e lambiça em torno de um lindo carvalho” (“A crackling bonfire plays and ‘lambiça’ around a beautiful oak”).

Fine: Rosa would never have written that sentence. I ran a search: crepitar (“to crackle”) never once appears in “Grande Sertão: Veredas,” and oaks are not usually neighbors of buriti palms. Still, GPT-3 understood that it had to switch languages to play with me, and it decided to take a risk: a bonfire is not out of place in my backyard, even less a playful one. And it did me the favor of confusing Rosa with James Joyce, inventing the verb lambiçar, which my spell-checker does not recognize, perhaps to suggest a lavish or subtly greedy licking.

I was stunned. It is not every day that I get such a disconcerting reply. I ran another search, courtesy of Google: there is no record of the full sentence “AI Dungeon” proposed. It really was an original creation. A “very creative” creation.

(I tested Joyce too: when I inserted “Introibo ad altare Dei,” likewise sampled from the first page of “Ulysses,” the game was only slightly less surprising and sent back the translation of the Latin into English.)

Originality. Creativity. The combination of it all really does look like the attribute of an intelligent being, one conscious of what it is doing or thinking.

From what I understand (my own scant intelligence is not well trained in this matter), GPT-3, certainly the beefiest model yet for artificially generating text meant to have a head and a tail, has a very particular way of thinking, one I am unable to distinguish from what happens between our neurons: its method is statistical, probabilistic.

It is grounded in the analysis of an overwhelming quantity of text, nearly everything that exists on the internet, in several languages, including computer languages. Its simplest strategy, and I am certainly oversimplifying, is to identify which words tend to appear most often after other words. In its replies, it then guesses what its “thinking” rates as the most “probable” continuations.
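
To caricature that mechanism in code: count, over a training corpus, which word follows which, then "guess" the most frequent continuation. A minimal bigram sketch in Python, with an invented corpus; GPT-3 operates at incomparably greater scale, over subword tokens, with a neural network rather than a lookup table.

```python
from collections import Counter, defaultdict

corpus = "a fogueira brinca no quintal e a fogueira lambe a árvore no quintal".split()

# Count, for every word, which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def guess_next(word):
    """Return the continuation seen most often after `word` in training."""
    return following[word].most_common(1)[0][0] if word in following else None

print(guess_next("fogueira"))  # 'brinca' (ties broken by first appearance)
```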

Of course it does not “know” what it is talking about. Perhaps, in my Rosa test, if I had written fish, a “beautiful shark” might have surfaced in place of the oak; and that would not mean this AI deeply understands the fish/tree distinction.

But how deep does understanding need to go before we recognize it as genuinely intelligent? And is the guess not, after all, an everyday feature of the devices of our own intelligence? Am I not guessing shamelessly right here, talking about what I neither master nor understand?

I am not writing this to try to define intelligence or consciousness; better to return to more concrete territory: probability. There is something unusual about a bonfire that plays. That association of ideas or words cannot be all that common; but tree calling up oak points to machine-learning training that did not happen in Brazil.

Other trees are statistically more likely to sprout in our “national” memories when they enter the plant kingdom. I am thinking, of course, of a well-worn theme in the AI debate: the “bias” that is inevitable in these models, a consequence of the data that fed their learning, no matter how deep the deep learning was.

The most prejudiced examples are well known, such as the photo-identification AI that classified Black people as gorillas, because nearly all the human beings it “saw” during training were white. A problem with the databases? We need to go deeper still.

That brings me back to the first article signed by Kai-Fu Lee, an entrepreneur based in China, that I read in The New York Times. In summary: in the AI race, the US and China hold the top spots, far ahead of the other countries. A few big companies will emerge as winners.

Each advance demands enormous resources, including conventional energy; consider the unsustainable electricity consumption required for GPT-3 to learn to “lambiçar.” Many jobs will disappear. Everyone will need something like a universal basic income. Where will the money come from?

Kai-Fu Lee's frightening answer, which the original column quoted in a raw Google Translator rendering, uncorrected, runs in substance: “So if most countries are not able to tax ultra-profitable AI companies to subsidize their workers, what options will they have? I foresee only one: unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their AI software (China or the United States) to become essentially that country's economic dependent, taking in welfare subsidies in exchange for letting the ‘parent’ nation's AI companies keep profiting from the dependent country's users. Such economic arrangements would reshape today's geopolitical alliances.”

Machine-translation mangling aside, the conclusion is perfectly comprehensible: a new dependency theory. Post-colonialism, or cyber-colonialism, as humanity's inevitable destiny?

And that is without touching something central to the package to be negotiated: the colony would also submit to the set of “biases” of the AI “mother nation.” Brace yourself: oak forests, without buritis.

Recently, though before the GPT-3 hype, the same Kai-Fu Lee made news by giving AI a B- for its performance during the pandemic. He spent his quarantine in Beijing. He says the couriers delivering his purchases were always robots, and from what I saw in the 2019 season of Expresso Futuro, filmed in China by Ronaldo Lemos and company, I believe it.

He was disappointed, however, by machine learning's lack of protagonism in the development of vaccines and treatments. With my own ill-prepared audacity, I would guess a similar grade, maybe a C+, to follow the American university bias.

I applauded, for example, when IBM opened up Watson's services to organizations fighting the coronavirus. Or when giant companies such as Google and Amazon barred the use of their facial-recognition technologies after the anti-racist demonstrations around the world.

Smaller companies with no less potent surveillance AIs, however, took advantage of the absence of competition to grow their client base. And we saw how contact- and contagion-tracing apps herald the totalitarian transparency of our every movement, through algorithms that have already made old notions of privacy obsolete.

All quite frightening, for anyone who defends democratic principles. And yet not even the most authoritarian state will have any guarantee of controlling its own secrets.

These problems are acknowledged across the community of AI developers. Many groups, such as The Partnership on AI, which runs from OpenAI to the Electronic Frontier Foundation, have spent years debating the ethical questions around the use of artificial intelligence.

It is an extremely complex debate, full of dangerous blind alleys, as the trajectory of Mustafa Suleyman, one of the most fascinating personalities of the 21st century, demonstrates. He was one of the three founders of DeepMind, the British company, later bought by Google, that created the famous AI that beat the world champion of Go, the board game invented in China more than 2,500 years ago.

The trio's biographies could inspire films or series. Demis Hassabis has a Greek-Cypriot father and a mother from Singapore; Shane Legg was born in the north of New Zealand; and Mustafa Suleyman is the son of a Syrian taxi driver who immigrated to London.

Suleyman's pre-DeepMind story is a curious one: while studying at the University of Oxford, he set up a telephone service to care for the mental health of young Muslims. He later worked as a consultant in conflict resolution. In the AI world (today he handles “policy” at Google) he has never minced words. Look up his talks and interviews on YouTube: he has always pressed on every wound, like an outside critic speaking from the most powerful center.

I am especially fond of his Royal Society talk, delivered in his post-punk style and introduced by Princess Anne. Even so, for all his very clear political conscience and ethical concerns that strike me as quite sincere, Mustafa Suleyman found himself caught up in a scandal involving accusations that data from NHS (Britain's public health service) patients was used without authorization to develop apps intended to help monitor hospital patients in critical condition.

DeepMind, Google, and the NHS offered many explanations. It is an example of the kind of problem we will live with more and more, one that demands new regulatory frameworks to determine which algorithms may meddle in our lives, and, above all, who gets to understand what an algorithm can do and what the company that owns that algorithm can do.

One thing I have already learned from thinking about this kind of problem: diversity matters not only in the databases used in machine-learning processes, but also in the ways each AI “thinks,” and in the security systems that audit the algorithms shaping those thoughts.

That need has been explored best in experiments that bring AI developers together with artists. I follow with enormous interest the work of Kenric McDowell, who leads the engagement between artists and Google's machine-learning labs.

His most recent work bets on the possible existence of non-human intelligences and on seeking collaboration between different kinds of intelligences and modes of thinking, including inspiration from the cosmotechnics of the Chinese philosopher Yuk Hui, who passed through Paraíba and Rio de Janeiro last year.

On the same trail, I follow the evolution of the art-and-robotics practice of Ken Goldberg, a professor at the University of California, Berkeley. He published an article in the Wall Street Journal in 2017 defending the idea that has become my current motto: forget the singularity, long live multiplicity.

It was also through Ken Goldberg that I learned what a random forest is: a machine-learning method that uses not just one algorithm but an Atlantic rainforest of algorithms, preferably each thinking in its own way, with decisions taken jointly, thereby, among other advantages, avoiding “individual” biases.
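
A minimal sketch of the technique using scikit-learn, with a bundled toy dataset standing in for any real problem: a hundred decision trees, each trained on a random slice of the data, deciding jointly.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 100 trees, each seeing a bootstrap sample of the data and a random
# subset of features at every split: the "forest" voting as one.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print(f"accuracy: {forest.score(X_test, y_test):.2f}")
```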

My desperate utopia for Brazil: may the random forest here grow ever greener. With the development of other AIs, or of truly other AIs. Anthropophagic artificial intelligences. GPTs-n to infinity, capable of thinking in the 200 Indigenous languages that exist/resist in this country. Chatbots that rap with the tecnobrega accent of Pará, announcing the formulas for solving all of humanity's food problems.

Intelligence is not what we lack. Intelligence like that of the young engineer Marianne Linhares, who went straight from her undergraduate studies at the Federal University of Campina Grande to DeepMind in London.

In another possible world, she could have stayed on here, collaborating with the machine-learning crowd at UFPB (and, via GitHub, with the whole world), perhaps inventing an AI that truly understands the literature of Guimarães Rosa. Or one that could answer the question in “Meu Tio o Iauaretê,” “do you know what the jaguar thinks?”, by thinking like a jaguar. Good. Beautiful.

Scientists plan digital resurrection with bots and humanoids (Canal Tech)

By Natalie Rosa | June 25, 2020, 4:40 PM

In February of this year, the world was startled by the story of Jang Ji-sung, a South Korean woman who was “reunited” with her deceased daughter thanks to artificial intelligence. The girl died in 2016 of a blood disease.

In the simulated encounter, the image of little Nayeon is shown to her mother, who stands against a green screen, also known as a chroma key, wearing a virtual-reality headset. The interaction was not only visual: it was also possible to talk and play with the child. According to Jang, the experience was like a dream she had always wanted to have.

Jang Ji-sung's encounter with the digitized form of her daughter (Image: Reprodução)

However hard this trend may be to carry out at scale in real life, and however old a preoccupation of science fiction it may be, there are people interested in this form of immortality. The question that remains, though, is whether we should do it, and how it would happen.

In an interview with CNET, John Troyer, director of the Centre for Death and Society at the University of Bath, in England, and author of the book Technologies of the Human Corpse, says the modern interest in immortality took off in the 1960s. At the time, many people believed in the idea of cryonic preservation, in which a corpse, or just a human head, was frozen in the hope of being resuscitated in the future. To date, no attempt has been made to revive any of them.

“There was a shift in the science of death at that time, and the idea that somehow humans could defeat death,” Troyer explains. He also notes that there is still no peer-reviewed research showing that investing millions in uploading the brain's data, or in keeping a body alive, is worth it.

In 2016, a study published in the academic journal PLOS ONE found that exposing a preserved brain to chemical and electrical probes can make it function again. “All of this is a bet on what is possible in the future. But I'm not convinced it is possible in the way they are describing or wishing for,” he adds.

Overcoming grief

The South Korean case is not the only one involving grief. In 2015, Eugenia Kuyda, co-founder and CEO of the software company Replika, suffered the loss of her best friend, Roman, who was struck by a car in Moscow. She then decided to create a chatbot trained on thousands of text messages the two had exchanged over the years, resulting in a digital version of Roman that could talk with friends and family.

“It was very emotional. I wasn't expecting to feel that way, because I had worked on that chatbot and knew how it was built,” Kuyda recalls. The experience is strongly reminiscent of an episode of Black Mirror, the series about technology's dystopian future. In “Be Right Back,” from 2013, a young woman loses her boyfriend in a car accident and signs up for a project that lets her communicate with “him” digitally, thanks to artificial intelligence.

Kuyda notes, however, that the project was not created to be commercialized, but as a personal way of coping with the loss of her best friend. Anyone trying to reproduce the feat, she says, will run into a series of obstacles, such as deciding what kind of information counts as public or private, or deciding who the chatbot may talk to. The way you speak with a friend, for example, is not the way you speak with family members, and Kuyda says there is no way to make that distinction.

A digital version of a person will not produce new conversations or voice new opinions; it replays sentences and words already said, fitting them, as best it can, to the conversation at hand. “We leave an insane amount of data, but most of it isn't personal, private, or based on what kind of person we are,” Kuyda says. She told CNET that 100% accurate data about a person is impossible to obtain, because no existing technology can capture what goes on in our minds.
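
A minimal sketch of that kind of replay, assuming a purely hypothetical message log: given a new prompt, return the stored reply whose original context best matches it, scored here by bare word overlap. Replika's early bot used far richer models; only the retrieval principle is illustrated.

```python
def overlap(a, b):
    """Crude similarity: fraction of shared words between two messages."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

# Hypothetical log of (message_received, reply_sent) pairs from years of chats.
history = [
    ("how was your day", "long day, but the music helped"),
    ("want to grab dinner", "only if we get georgian food"),
    ("did you see that movie", "saw it twice, the ending is everything"),
]

def reply(prompt):
    # Replay the reply whose original context most resembles the new prompt.
    _, best_reply = max(history, key=lambda pair: overlap(prompt, pair[0]))
    return best_reply

print(reply("how is your day going"))  # -> "long day, but the music helped"
```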

Data collection thus ends up being the biggest barrier to creating any kind of software that represents a person after death. Part of the reason is that most content posted online belongs to a company, becoming the platform's property. If the company shuts down one day, the data goes with it. For Troyer, memory technology does not tend to outlive its moment.

Fresh brains

The startup Nectome has been devoting itself to brain preservation, with an eye to one day extracting memories after death. For that to happen, however, the organ needs to be “fresh,” which would mean a death by euthanasia.

The startup's goal is to run its tests with volunteers who are terminally ill and permitted physician-assisted death. So far, Nectome has collected $10,000 in refundable deposits for a waiting list for the procedure, should the opportunity one day exist. For now, the company still has clinical trials ahead of it.

The startup has raised a million dollars in funding and had been collaborating with an MIT neuroscientist. But when the story was published, it drew a storm of criticism from scientists and ethicists, and MIT terminated its contract with the startup. Critics argued that the company's project simply cannot be done.

Here is the statement MIT released at the time:

“Neuroscience has not sufficiently advanced to the point where we know whether any brain-preservation method is powerful enough to preserve the different kinds of biomolecules related to memory and the mind. It is also not known whether it is possible to recreate a person's consciousness,” the note said, back in 2018.

Eternalization through augmented reality

While some think about extracting the mind from a brain, other companies opt for a simpler, though no less invasive, “resurrection.” Augmented Eternity, for example, aims to help people live on in digital form, transmitting the knowledge of people today to future generations.

Hossein Rahnama, founder and CEO of the computing company FlyBits and a professor at the MIT Media Lab, has been trying to build software agents that can act as digital heirs. “Millennials are creating gigabytes of data on a daily basis, and we are reaching a level of maturity where we can actually create a digital version of ourselves,” he says.

To put the project into action, Augmented Eternity feeds a machine-learning engine with people's emails, photos, and social-media activity, analyzing how they think and act. That makes it possible to produce a digital copy of a real person, one that can interact via chatbot, digitally edited video, or even a humanoid robot.

Speaking of humanoids: the Intelligent Robotics laboratory at Osaka University, in Japan, already houses more than 30 human-like androids, including a robotic version of Hiroshi Ishiguro, the lab's director. The scientist has broken new ground in research on human-robot interaction, studying the importance of details such as subtle eye movements and facial expressions.

(Image: Hiroshi Ishiguro Laboratory, ATR)

When Ishiguro dies, he says, his robot could take his place and teach his classes, even though the machine will never really be him and cannot generate new ideas. “We cannot transmit our consciousness to robots. We share, perhaps, the memories. A robot can say ‘I am Hiroshi Ishiguro,’ but even so the consciousness is independent,” he says.

For Ishiguro, none of this will look like science fiction in the future. Downloading memories, for example, is something that will not happen, because it simply is not possible. “We need to have different ways of making a copy of our brains, but we don't yet know how to do that,” he adds.

The new astrology (Aeon)

By fetishising mathematical models, economists turned economics into a highly paid pseudoscience

04 April, 2016

Alan Jay Levinovitz is an assistant professor of philosophy and religion at James Madison University in Virginia. His most recent book is The Gluten Lie: And Other Myths About What You Eat (2015). Edited by Sam Haselby.

 

What would make economics a better discipline?

Since the 2008 financial crisis, colleges and universities have faced increased pressure to identify essential disciplines, and cut the rest. In 2009, Washington State University announced it would eliminate the department of theatre and dance, the department of community and rural sociology, and the German major – the same year that the University of Louisiana at Lafayette ended its philosophy major. In 2012, Emory University in Atlanta did away with the visual arts department and its journalism programme. The cutbacks aren’t restricted to the humanities: in 2011, the state of Texas announced it would eliminate nearly half of its public undergraduate physics programmes. Even when there’s no downsizing, faculty salaries have been frozen and departmental budgets have shrunk.

But despite the funding crunch, it’s a bull market for academic economists. According to a 2015 sociological study in the Journal of Economic Perspectives, the median salary of economics teachers in 2012 increased to $103,000 – nearly $30,000 more than sociologists. For the top 10 per cent of economists, that figure jumps to $160,000, higher than the next most lucrative academic discipline – engineering. These figures, stress the study’s authors, do not include other sources of income such as consulting fees for banks and hedge funds, which, as many learned from the documentary Inside Job (2010), are often substantial. (Ben Bernanke, a former academic economist and ex-chairman of the Federal Reserve, earns $200,000-$400,000 for a single appearance.)

Unlike engineers and chemists, economists cannot point to concrete objects – cell phones, plastic – to justify the high valuation of their discipline. Nor, in the case of financial economics and macroeconomics, can they point to the predictive power of their theories. Hedge funds employ cutting-edge economists who command princely fees, but routinely underperform index funds. Eight years ago, Warren Buffett made a 10-year, $1 million bet that a portfolio of hedge funds would lose to the S&P 500, and it looks like he’s going to collect. In 1998, a fund that boasted two Nobel Laureates as advisors collapsed, nearly causing a global financial crisis.

The failure of the field to predict the 2008 crisis has also been well-documented. In 2003, for example, only five years before the Great Recession, the Nobel Laureate Robert E Lucas Jr told the American Economic Association that ‘macroeconomics […] has succeeded: its central problem of depression prevention has been solved’. Short-term predictions fare little better – in April 2014, for instance, a survey of 67 economists yielded 100 per cent consensus: interest rates would rise over the next six months. Instead, they fell. A lot.

Nonetheless, surveys indicate that economists see their discipline as ‘the most scientific of the social sciences’. What is the basis of this collective faith, shared by universities, presidents and billionaires? Shouldn’t successful and powerful people be the first to spot the exaggerated worth of a discipline, and the least likely to pay for it?

In the hypothetical worlds of rational markets, where much of economic theory is set, perhaps. But real-world history tells a different story, of mathematical models masquerading as science and a public eager to buy them, mistaking elegant equations for empirical accuracy.

As an extreme example, take the extraordinary success of Evangeline Adams, a turn-of-the-20th-century astrologer whose clients included the president of Prudential Insurance, two presidents of the New York Stock Exchange, the steel magnate Charles M Schwab, and the banker J P Morgan. To understand why titans of finance would consult Adams about the market, it is essential to recall that astrology used to be a technical discipline, requiring reams of astronomical data and mastery of specialised mathematical formulas. ‘An astrologer’ is, in fact, the Oxford English Dictionary’s second definition of ‘mathematician’. For centuries, mapping stars was the job of mathematicians, a job motivated and funded by the widespread belief that star-maps were good guides to earthly affairs. The best astrology required the best astronomy, and the best astronomy was done by mathematicians – exactly the kind of person whose authority might appeal to bankers and financiers.

In fact, when Adams was arrested in 1914 for violating a New York law against astrology, it was mathematics that eventually exonerated her. During the trial, her lawyer Clark L Jordan emphasised mathematics in order to distinguish his client’s practice from superstition, calling astrology ‘a mathematical or exact science’. Adams herself demonstrated this ‘scientific’ method by reading the astrological chart of the judge’s son. The judge was impressed: the defendant, he observed, went through a ‘mathematical process to get at her conclusions… I am satisfied that the element of fraud… is absent here.’

The enchanting force of mathematics blinded the judge – and Adams’s prestigious clients – to the fact that astrology relies upon a highly unscientific premise, that the position of stars predicts personality traits and human affairs such as the economy. It is this enchanting force that explains the enduring popularity of financial astrology, even today. The historian Caley Horan at the Massachusetts Institute of Technology described to me how computing technology made financial astrology explode in the 1970s and ’80s. ‘Within the world of finance, there’s always a superstitious, quasi-spiritual trend to find meaning in markets,’ said Horan. ‘Technical analysts at big banks, they’re trying to find patterns in past market behaviour, so it’s not a leap for them to go to astrology.’ In 2000, USA Today quoted Robin Griffiths, the chief technical analyst at HSBC, the world’s third largest bank, saying that ‘most astrology stuff doesn’t check out, but some of it does’.

Ultimately, the problem isn’t with worshipping models of the stars, but rather with uncritical worship of the language used to model them, and nowhere is this more prevalent than in economics. The economist Paul Romer at New York University has recently begun calling attention to an issue he dubs ‘mathiness’ – first in the paper ‘Mathiness in the Theory of Economic Growth’ (2015) and then in a series of blog posts. Romer believes that macroeconomics, plagued by mathiness, is failing to progress as a true science should, and compares debates among economists to those between 16th-century advocates of heliocentrism and geocentrism. Mathematics, he acknowledges, can help economists to clarify their thinking and reasoning. But the ubiquity of mathematical theory in economics also has serious downsides: it creates a high barrier to entry for those who want to participate in the professional dialogue, and makes checking someone’s work excessively laborious. Worst of all, it imbues economic theory with unearned empirical authority.

‘I’ve come to the position that there should be a stronger bias against the use of math,’ Romer explained to me. ‘If somebody came and said: “Look, I have this Earth-changing insight about economics, but the only way I can express it is by making use of the quirks of the Latin language”, we’d say go to hell, unless they could convince us it was really essential. The burden of proof is on them.’

Right now, however, there is widespread bias in favour of using mathematics. The success of math-heavy disciplines such as physics and chemistry has endowed mathematical formulas with decisive authoritative force. Lord Kelvin, the 19th-century mathematical physicist, expressed this quantitative obsession:

When you can measure what you are speaking about and express it in numbers you know something about it; but when you cannot measure it… in numbers, your knowledge is of a meagre and unsatisfactory kind.

The trouble with Kelvin’s statement is that measurement and mathematics do not guarantee the status of science – they guarantee only the semblance of science. When the presumptions or conclusions of a scientific theory are absurd or simply false, the theory ought to be questioned and, eventually, rejected. The discipline of economics, however, is presently so blinkered by the talismanic authority of mathematics that theories go overvalued and unchecked.

Romer is not the first to elaborate the mathiness critique. In 1886, an article in Science accused economics of misusing the language of the physical sciences to conceal ‘emptiness behind a breastwork of mathematical formulas’. More recently, Deirdre N McCloskey’s The Rhetoric of Economics (1998) and Robert H Nelson’s Economics as Religion (2001) both argued that mathematics in economic theory serves, in McCloskey’s words, primarily to deliver the message ‘Look at how very scientific I am.’

After the Great Recession, the failure of economic science to protect our economy was once again impossible to ignore. In 2009, the Nobel Laureate Paul Krugman tried to explain it in The New York Times with a version of the mathiness diagnosis. ‘As I see it,’ he wrote, ‘the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.’ Krugman named economists’ ‘desire… to show off their mathematical prowess’ as the ‘central cause of the profession’s failure’.

The mathiness critique isn’t limited to macroeconomics. In 2014, the Stanford financial economist Paul Pfleiderer published the paper ‘Chameleons: The Misuse of Theoretical Models in Finance and Economics’, which helped to inspire Romer’s understanding of mathiness. Pfleiderer called attention to the prevalence of ‘chameleons’ – economic models ‘with dubious connections to the real world’ that substitute ‘mathematical elegance’ for empirical accuracy. Like Romer, Pfleiderer wants economists to be transparent about this sleight of hand. ‘Modelling,’ he told me, ‘is now elevated to the point where things have validity just because you can come up with a model.’

The notion that an entire culture – not just a few eccentric financiers – could be bewitched by empty, extravagant theories might seem absurd. How could all those people, all that math, be mistaken? This was my own feeling as I began investigating mathiness and the shaky foundations of modern economic science. Yet, as a scholar of Chinese religion, it struck me that I’d seen this kind of mistake before, in ancient Chinese attitudes towards the astral sciences. Back then, governments invested incredible amounts of money in mathematical models of the stars. To evaluate those models, government officials had to rely on a small cadre of experts who actually understood the mathematics – experts riven by ideological differences, who couldn’t even agree on how to test their models. And, of course, despite collective faith that these models would improve the fate of the Chinese people, they did not.

Astral Science in Early Imperial China, a forthcoming book by the historian Daniel P Morgan, shows that in ancient China, as in the Western world, the most valuable type of mathematics was devoted to the realm of divinity – to the sky, in their case (and to the market, in ours). Just as astrology and mathematics were once synonymous in the West, the Chinese spoke of li, the science of calendrics, which early dictionaries also glossed as ‘calculation’, ‘numbers’ and ‘order’. Li models, like macroeconomic theories, were considered essential to good governance. In the classic Book of Documents, the legendary sage king Yao transfers the throne to his successor with mention of a single duty: ‘Yao said: “Oh thou, Shun! The li numbers of heaven rest in thy person.”’

China’s oldest mathematical text invokes astronomy and divine kingship in its very title – The Arithmetical Classic of the Gnomon of the Zhou. The title’s inclusion of ‘Zhou’ recalls the mythic Eden of the Western Zhou dynasty (1045–771 BCE), implying that paradise on Earth can be realised through proper calculation. The book’s introduction to the Pythagorean theorem asserts that ‘the methods used by Yu the Great in governing the world were derived from these numbers’. It was an unquestioned article of faith: the mathematical patterns that govern the stars also govern the world. Faith in a divine, invisible hand, made visible by mathematics. No wonder that a newly discovered text fragment from 200 BCE extols the virtues of mathematics over the humanities. In it, a student asks his teacher whether he should spend more time learning speech or numbers. His teacher replies: ‘If my good sir cannot fathom both at once, then abandon speech and fathom numbers, [for] numbers can speak, [but] speech cannot number.’

Modern governments, universities and businesses underwrite the production of economic theory with huge amounts of capital. The same was true for li production in ancient China. The emperor – the ‘Son of Heaven’ – spent astronomical sums refining mathematical models of the stars. Take the armillary sphere, such as the two-metre cage of graduated bronze rings in Nanjing, made to represent the celestial sphere and used to visualise data in three dimensions. As Morgan emphasises, the sphere was literally made of money. Bronze being the basis of the currency, governments were smelting cash by the metric ton to pour it into li. A divine, mathematical world-engine, built of cash, sanctifying the powers that be.

The enormous investment in li depended on a huge assumption: that good government, successful rituals and agricultural productivity all depended upon the accuracy of li. But there were, in fact, no practical advantages to the continued refinement of li models. The calendar rounded off decimal points such that the difference between two models, hotly contested in theory, didn’t matter to the final product. The work of selecting auspicious days for imperial ceremonies thus benefited only in appearance from mathematical rigour. And of course the comets, plagues and earthquakes that these ceremonies promised to avert kept on coming. Farmers, for their part, went about business as usual. Occasional governmental efforts to scientifically micromanage farm life in different climes using li ended in famine and mass migration.

Like many economic models today, li models were less important to practical affairs than their creators (and consumers) thought them to be. And, like today, only a few people could understand them. In 101 BCE, Emperor Wudi tasked high-level bureaucrats – including the Great Director of the Stars – with creating a new li that would glorify the beginning of his path to immortality. The bureaucrats refused the task because ‘they couldn’t do the math’, and recommended the emperor outsource it to experts.

The debates of these ancient li experts bear a striking resemblance to those of present-day economists. In 223 CE, a petition was submitted to the emperor asking him to approve tests of a new li model developed by the assistant director of the astronomical office, a man named Han Yi.

At the time of the petition, Han Yi’s model, and its competitor, the so-called Supernal Icon, had already been subjected to three years of ‘reference’, ‘comparison’ and ‘exchange’. Still, no one could agree which one was better. Nor, for that matter, was there any agreement on how they should be tested.

In the end, a live trial involving the prediction of eclipses and heliacal risings was used to settle the debate. With the benefit of hindsight, we can see this trial was seriously flawed. The heliacal rising (first visibility) of planets depends on non-mathematical factors such as eyesight and atmospheric conditions. That’s not to mention the scoring of the trial, which was modelled on archery competitions. Archers scored points for proximity to the bullseye, with no consideration for overall accuracy. The equivalent in economic theory might be to grant a model high points for success in predicting short-term markets, while failing to deduct for missing the Great Recession.
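
To make that scoring asymmetry concrete, here is a toy sketch in Python. All of the numbers – the forecasts, the thresholds, the point values – are invented for illustration; this is not drawn from the li trials or any real forecasting benchmark:

```python
# Toy illustration of an archery-style scoring rule: points for landing
# close to the bullseye on routine forecasts, but no deduction for
# missing a rare extreme event. All data and thresholds are invented.

def archery_score(predicted, actual, bullseye=0.5, near=2.0):
    """Score one forecast: 2 points within `bullseye` percentage points,
    1 point within `near`, 0 otherwise -- the score is never negative."""
    error = abs(predicted - actual)
    if error <= bullseye:
        return 2
    if error <= near:
        return 1
    return 0

# Hypothetical quarterly growth forecasts (per cent). The final quarter
# is a crash the model misses completely.
forecasts = [2.1, 2.3, 1.9, 2.2, 2.0]
outcomes  = [2.0, 2.5, 1.8, 2.1, -8.0]   # last entry: the "crash"

total = sum(archery_score(p, a) for p, a in zip(forecasts, outcomes))
print(total)  # 8 out of a possible 10, despite entirely missing the crash
```

Under a proper scoring rule the final miss would dominate the total; an archery-style rule, by construction, never subtracts anything for it.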

None of this is to say that li models were useless or inherently unscientific. For the most part, li experts were genuine mathematical virtuosos who valued the integrity of their discipline. Despite being based on an inaccurate assumption – that the Earth was at the centre of the cosmos – their models really did work to predict celestial motions. Imperfect though the live trial might have been, it indicates that superior predictive power was a theory’s most important virtue. All of this is consistent with real science, and Chinese astronomy progressed as a science, until it reached the limits imposed by its assumptions.

However, there was no science to the belief that accurate li would improve the outcome of rituals, agriculture or government policy. No science to the Hall of Light, a temple for the emperor built on the model of a magic square. There, by numeric ritual gesture, the Son of Heaven was thought to channel the invisible order of heaven for the prosperity of man. This was quasi-theology, the belief that heavenly patterns – mathematical patterns – could be used to model every event in the natural world, in politics, even the body. Macro- and microcosm were scaled reflections of one another, yin and yang in a unifying, salvific mathematical vision. The expensive gadgets, the personnel, the bureaucracy, the debates, the competition – all of this testified to the divinely authoritative power of mathematics. The result, then as now, was overvaluation of mathematical models based on unscientific exaggerations of their utility.

In ancient China it would have been unfair to blame li experts for the pseudoscientific exploitation of their theories. These men had no way to evaluate the scientific merits of assumptions and theories – ‘science’, in a formalised, post-Enlightenment sense, didn’t really exist. But today it is possible to distinguish, albeit roughly, science from pseudoscience, astronomy from astrology. Hypothetical theories, whether those of economists or conspiracists, aren’t inherently pseudoscientific. Conspiracy theories can be diverting – even instructive – flights of fancy. They become pseudoscience only when promoted from fiction to fact without sufficient evidence.

Romer believes that fellow economists know the truth about their discipline, but don’t want to admit it. ‘If you get people to lower their shield, they’ll tell you it’s a big game they’re playing,’ he told me. ‘They’ll say: “Paul, you may be right, but this makes us look really bad, and it’s going to make it hard for us to recruit young people.”’

Demanding more honesty seems reasonable, but it presumes that economists understand the tenuous relationship between mathematical models and scientific legitimacy. In fact, many assume the connection is obvious – just as in ancient China, the connection between li and the world was taken for granted. When reflecting in 1999 on what makes economics more scientific than the other social sciences, the Harvard economist Richard B Freeman explained that economics ‘attracts stronger students than [political science or sociology], and our courses are more mathematically demanding’. In Lives of the Laureates (2004), Robert E Lucas Jr writes rhapsodically about the importance of mathematics: ‘Economic theory is mathematical analysis. Everything else is just pictures and talk.’ Lucas’s veneration of mathematics leads him to adopt a method that can only be described as a subversion of empirical science:

The construction of theoretical models is our way to bring order to the way we think about the world, but the process necessarily involves ignoring some evidence or alternative theories – setting them aside. That can be hard to do – facts are facts – and sometimes my unconscious mind carries out the abstraction for me: I simply fail to see some of the data or some alternative theory.

Even for those who agree with Romer, conflict of interest still poses a problem. Why would skeptical astronomers question the emperor’s faith in their models? In a phone conversation, Daniel Hausman, a philosopher of economics at the University of Wisconsin, put it bluntly: ‘If you reject the power of theory, you demote economists from their thrones. They don’t want to become like sociologists.’

George F DeMartino, an economist and an ethicist at the University of Denver, frames the issue in economic terms. ‘The interest of the profession is in pursuing its analysis in a language that’s inaccessible to laypeople and even some economists,’ he explained to me. ‘What we’ve done is monopolise this kind of expertise, and we of all people know how that gives us power.’

Every economist I interviewed agreed that conflicts of interest were highly problematic for the scientific integrity of their field – but only tenured ones were willing to go on the record. ‘In economics and finance, if I’m trying to decide whether I’m going to write something favourable or unfavourable to bankers, well, if it’s favourable that might get me a dinner in Manhattan with movers and shakers,’ Pfleiderer said to me. ‘I’ve written articles that wouldn’t curry favour with bankers but I did that when I had tenure.’

Then there’s the additional problem of sunk-cost bias. If you’ve invested in an armillary sphere, it’s painful to admit that it doesn’t perform as advertised. When confronted with their profession’s lack of predictive accuracy, some economists find it difficult to admit the truth. Easier, instead, to double down, like the economist John H Cochrane at the University of Chicago. The problem isn’t too much mathematics, he writes in response to Krugman’s 2009 post-Great-Recession mea culpa for the field, but rather ‘that we don’t have enough math’. Astrology doesn’t work, sure, but only because the armillary sphere isn’t big enough and the equations aren’t good enough.

If overhauling economics depended solely on economists, then mathiness, conflict of interest and sunk-cost bias could easily prove insurmountable. Fortunately, non-experts also participate in the market for economic theory. If people remain enchanted by PhDs and Nobel Prizes awarded for the production of complicated mathematical theories, those theories will remain valuable. If they become disenchanted, the value will drop.

Economists who rationalise their discipline’s value can be convincing, especially with prestige and mathiness on their side. But there’s no reason to keep believing them. The pejorative verb ‘rationalise’ itself warns of mathiness, reminding us that we often deceive each other by making prior convictions, biases and ideological positions look ‘rational’, a word that confuses truth with mathematical reasoning. To be rational is, simply, to think in ratios, like the ratios that govern the geometry of the stars. Yet when mathematical theory is the ultimate arbiter of truth, it becomes difficult to see the difference between science and pseudoscience. The result is people like the judge in Evangeline Adams’s trial, or the Son of Heaven in ancient China, who trust the mathematical exactitude of theories without considering their performance – that is, who confuse math with science, rationality with reality.

There is no longer any excuse for making the same mistake with economic theory. For more than a century, the public has been warned, and the way forward is clear. It’s time to stop wasting our money and recognise the high priests for what they really are: gifted social scientists who excel at producing mathematical explanations of economies, but who fail, like astrologers before them, at prophecy.

Transgenics and hydroelectric dams (Estadão); and a response (JC)

Transgenics and hydroelectric dams

Recently, a hundred scientists who have received Nobel Prizes in various fields of knowledge signed an appeal to the environmental organization Greenpeace to abandon its campaign, now many years old, against the use of transgenic crops for food production. Transgenics are products in which alterations to the genetic code give them special characteristics, such as protecting them from pests, withstanding periods of drought better, increasing productivity, and others.

José Goldemberg*

August 15, 2016 | 5:00 a.m.

The success of transgenics is evident in many crops, such as soybean production, of which Brazil is an example. However, when transgenic products first came into use, objections were raised, since the genetic modifications could have unforeseeable consequences. Greenpeace became the champion of the campaigns against their use, which was banned in several countries.

The initial objections rested on two kinds of consideration: one of a scientific character, which was seriously investigated by scientists; the other of a more general character, based on the “precautionary principle”, which tells us, basically, that it falls to the proponent of a new product to demonstrate that it has no undesirable or dangerous consequences. The “precautionary principle” has been used, with greater or lesser success, to block the introduction of innovations.

This principle has a strong moral and political component and has been invoked very unevenly over time. For example, it was not invoked when nuclear energy began to be used, about 60 years ago, to produce electricity; as a result, hundreds of nuclear reactors were installed in many countries, and some of them caused accidents of great proportions. In the case of climate change originating in human action – the consumption of fossil fuels and the release into the atmosphere of gases that warm the planet – the principle was incorporated into the Climate Convention in 1992 and is leading countries to reduce the use of those fuels.

The Nobel laureates’ manifesto argues that experience has shown that concerns about possible negative consequences of transgenics are not justified, and that opposing them no longer makes sense.

In a few countries, the “precautionary principle” has also been invoked to hinder the installation of hydroelectric plants, given that their construction affects riverside populations and has environmental impacts. This is indeed a serious problem in countries with high population density, such as India, whose territory is about a third the size of Brazil’s and whose population is four times larger. Any hydroelectric plant in India affects hundreds of thousands of people. That is not the case in Brazil, much of whose territory lies in the Amazon, where the population is small. Even so, the construction of plants in the Amazon to supply the more populous regions and large industrial centers in the Southeast has faced serious objections from activist groups.

In the past, hydroelectric plants were planned with reservoirs. When such reservoirs are not built, electricity production varies over the course of the year. To avoid this, artificial lakes are built to store water for the times of year when little rain falls.

Until recently, almost all the electricity used in Brazil was produced by hydroelectric plants with reservoirs, which guaranteed supply throughout the year even when rainfall was low. Since 1990 this practice has been abandoned because of complaints from the populations affected in the flooded areas. Plants came to be built without reservoirs – that is, “run-of-river” – using only the flowing water of the rivers. This is the case of the Jirau, Santo Antônio and Belo Monte plants, whose cost has risen greatly relative to the electricity produced: they are sized for the rivers’ maximum flow, which occurs in a few months of the year, and generate much less in the dry months.

In these cases the problem was overstated. In general, for each person affected by the construction of a plant, more than a hundred people benefit from the electricity produced. It happens that the few thousand people affected live around the plant and have organized to claim compensation (in some cases they are exploited by political groups), whereas the beneficiaries, who number in the millions, live far from the site and are not organized.

It falls to the public authorities to weigh the interests of the population as a whole, comparing the risks and losses suffered by some against the benefits received by many. This has not been done, and the federal government has lacked the firmness to explain to society where the general interests of the Nation lie.

The same occurs with other large public works, such as roads, ports and infrastructure in general. One example is the Rodoanel Mário Covas, around the city of São Paulo, whose construction faced strong opposition both from those affected by the works and from some environmental groups. The firmness of the São Paulo state government and the clarifications it provided made the project viable, and today it is considered positive by the great majority: it removes tens of thousands of trucks per day from São Paulo’s urban traffic and reduces the pollution they discharge onto the population.

The lesson of this case should be applied to the hydroelectric plants of the Amazon, which have been contested by some groups of insufficiently informed environmentalists. What is called for here is an action like the one the Nobel laureates took on transgenics: accepting hydroelectric plants built to the best technical and environmental standards, including reservoirs, without which they become barely viable, opening the way to the use of other, more polluting energy sources such as coal and petroleum derivatives.

*PRESIDENT OF FAPESP; FORMER PRESIDENT OF CESP


Researcher comments on the article

JC 5485, August 19, 2016

Nagib Nassar, professor emeritus at UnB, questions the article “Transgenics and hydroelectric dams”, from O Estado de S. Paulo, circulated in the Jornal da Ciência last Tuesday

Read the comment below:

I refer to the article by Professor José Goldemberg, published in the Estadão and circulated by the Jornal da Ciência.

I disagree with the illustrious scientist, beginning with his statement that transgenics are made to protect plants from pests. It is known that the only transgenic planted for this purpose in Brazil is Bt maize. The professor thus forgot, or would have us forget, that for this purpose a gene producing an insect-killing toxin is introduced into the plant, and the plant consequently comes to function as an insecticide!

The Bt toxin, just as it kills insects, is toxic to human beings themselves. The literature frequently cites the high risk, even a fatal one, to the individual. One example of these Bt maize varieties is MON 810: banned for human consumption by its own producing country and by France, Germany, England and other European countries. Unfortunately, the variety is authorized in Brazil, and those who authorized it did not mind turning us into mere guinea pigs! In poor African countries it was rejected even as a gift. Zambia preferred to see its people suffer hunger rather than die poisoned! Besides killing invading insects, the Bt toxin kills useful insects, such as honeybees and the other pollinators that plants need in order to form fruit.

When this type of transgenic dies at the end of the growing season, its roots leave toxic residues in the soil that kill nitrogen-fixing bacteria, turning the soil into a poisoned environment for the growth of the nitrogen-fixing bacteria that produce fertilizer. This prevents the growth of any leguminous crop. The manufacturer of this transgenic spends millions of reais on advertising of every kind, in every form and at every level; the result is the extremely high cost of transgenic seeds, which can reach 130 times the normal price. Small farmers deceived and misled by the advertising, when they cannot pay their debts, rush toward a tragic fate: suicide. There are many known cases from India, which registered 180 deaths in a single year.

It is fine for a physicist to speak about hydroelectric dams, but it is questionable to pronounce dogmatically on transgenics. And why did he choose to pair transgenics with hydroelectric dams? Could it be a façade to hide the harm of transgenics? This reminds me of the manifesto signed by a hundred Nobel laureates in favor of transgenics, hiding behind golden rice. Among those Nobel winners were physicists, chemists, even literature laureates – and, on top of everything, three dead men!

I also remember a scientist far from the field who went to the Chamber of Deputies ten years ago with arguments and requests for the release of transgenic soybeans – not on the basis of scientific results, which were never presented and did not exist, but so as not to harm farmers who were smuggling soybeans.

Nagib Nassar

Professor emeritus at the University of Brasília

Founding president of the FUNAGIB foundation (www.funagib.geneconserve.pro.br)

‘Neuroscience studies have superseded psychoanalysis,’ says Brazilian researcher (Folha de S.Paulo)

Juliana Cunha, June 18, 2016

With a 60-year career, 22,794 citations in journals, 60 awards and 710 published articles, Ivan Izquierdo, 78, is the most cited neuroscientist in Latin America and one of the most respected. Born in Argentina, he has lived in Brazil for 40 years and became a naturalized Brazilian in 1981. He now coordinates the Memory Center of the Brain Institute at PUC-RS.

His research has helped in understanding the different types of memory and in dispelling the idea that specific areas of the brain are dedicated exclusively to one type of activity.

He spoke to Folha during the World Congress on Brain, Behavior and Emotions, held this week in Buenos Aires. Izquierdo was the honoree of this edition of the congress.

In the interview, the scientist talks about the usefulness of traumatic memories and his disbelief in methods that promise to erase memories, and says that psychoanalysis has been superseded by neuroscience studies and today functions as a mere aesthetic exercise.

Bruno Todeschini
The neuroscientist Ivan Izquierdo during the congress in Buenos Aires

*

Folha – Is it possible to erase memories?
Ivan Izquierdo – It is possible to prevent a memory from expressing itself, yes. It is normal – human, even – to avoid the expression of certain memories. When a given memory goes unused, the synapse it relies on falls into disuse and gradually atrophies.

Beyond that, it can’t be done. There is no technique for picking out memories and then erasing them, not least because the same information is stored several times over in the brain, through a mechanism we call plasticity. Talk of erasing memories is pyrotechnics – media and movie stuff.

You work a great deal with fear memory. Is our inability to erase it a misfortune or something to celebrate?
Fear memory is what keeps us alive. It is the memory that can be accessed most quickly, and it is the most useful. Every time you go through a threatening situation, the fundamental piece of information your brain needs to store is that the thing is dangerous. People want to erase fear memories because they are often uncomfortable, but if those memories were not there, we would put ourselves in bad situations.

Of course, this process causes enormous stress. To get around a city, my brain triggers countless fear memories. Between having them and not having them, I prefer to have them – they are what got me this far – but if we can reduce our exposure to risk, so much the better. The problem is often the stimulus, not the fear response.

But some fear memories are paralyzing, and can be riskier than the situation they avoid. How should we deal with them?
Better paralyzed than dead. The brain acts to preserve us; that is the priority. Of course, this mechanism is subject to failure. If we understand that the response to a fear memory is exaggerated, we can try to make the brain give new meaning to a stimulus. It is possible, for example, to expose the patient repeatedly to the stimuli that created that memory, but without the trauma. That dissociates the experience from the fear.

Wouldn’t that be similar to what Freud tried to do with phobias?
Yes, Freud was one of the first to use extinction in the treatment of phobias, although he did not exactly believe in extinction. With extinction, the memory persists – it is not erased – but the trauma is no longer there.

But many neuroscientists consider Freud dated.
Every theory ages. Freud is a great reference and made important contributions. But psychoanalysis has been superseded by neuroscience studies; it belongs to a time when we had no way to run tests, to see what was happening in the brain. Today someone is going to talk to me about the unconscious? Where is it? I am a scientist; I cannot believe in something just because it is interesting.

To me, psychoanalysis today is an aesthetic exercise, not a health treatment. If a person enjoys it, fine, it does no harm, but it is a pity when someone with a real problem that could be treated fails to seek medical treatment in the belief that psychoanalysis is an alternative.

And types of analysis other than the Freudian?
Cognitive therapy, certainly. There are ways of making a person change their response to a stimulus.

You came to Brazil because of the dictatorship in Argentina. Now Brazil is going through a process that some call a coup – a memory in dispute. What do you make of this as a scientist?
I came because of a threat. I do not consider it a coup, but it is a very clever process. Changing one word reframes an entire memory. There is indeed a dispute over how that collective memory will be constructed. The left uses the word coup to evoke fear memories of a country that has already been through a coup. As the word is repeated, it creates a powerful effect. We still do not know how this memory will be consolidated, but the strategy is very clever.

The journalist JULIANA CUNHA traveled at the invitation of the World Congress on Brain, Behavior and Emotions

Curtailing global warming with bioengineering? Iron fertilization won’t work in much of Pacific (Science Daily)

Earth’s own experiments during ice ages showed little effect

Date:
May 16, 2016
Source:
The Earth Institute at Columbia University
Summary:
Over the past half-million years, the equatorial Pacific Ocean has seen five spikes in the amount of iron-laden dust blown in from the continents. In theory, those bursts should have turbo-charged the growth of the ocean’s carbon-capturing algae — algae need iron to grow — but a new study shows that the excess iron had little to no effect.

With the right mix of nutrients, phytoplankton grow quickly, creating blooms visible from space. This image, created from MODIS data, shows a phytoplankton bloom off New Zealand. Credit: Robert Simmon and Jesse Allen/NASA

The results are important today, because as groups search for ways to combat climate change, some are exploring fertilizing the oceans with iron as a solution.

Algae absorb carbon dioxide (CO2), a greenhouse gas that contributes to global warming. Proponents of iron fertilization argue that adding iron to the oceans would fuel the growth of algae, which would absorb more CO2 and sink it to the ocean floor. The most promising ocean regions are those high in nutrients but low in chlorophyll, a sign that algae aren’t as productive as they could be. The Southern Ocean, the North Pacific, and the equatorial Pacific all fit that description. What’s missing, proponents say, is enough iron.

The new study, published this week in the Proceedings of the National Academy of Sciences, adds to growing evidence, however, that iron fertilization might not work in the equatorial Pacific as suggested.

Essentially, Earth has already run its own large-scale iron fertilization experiments. During the ice ages, nearly three times more airborne iron blew into the equatorial Pacific than during non-glacial periods, but the new study shows that that increase didn’t affect biological productivity. At some points, as levels of iron-bearing dust increased, productivity actually decreased.

What matters instead in the equatorial Pacific is how iron and other nutrients are stirred up from below by upwelling fueled by ocean circulation, said lead author Gisela Winckler, a geochemist at Columbia University’s Lamont-Doherty Earth Observatory. The study found seven to 100 times more iron was supplied from the equatorial undercurrent than from airborne dust at sites spread across the equatorial Pacific. The authors write that although all of the nutrients might not be used immediately, they are used up over time, so the biological pump is already operating at full efficiency.

“Capturing carbon dioxide is what it’s all about: does iron raining in with airborne dust drive the capture of atmospheric CO2? We found that it doesn’t, at least not in the equatorial Pacific,” Winckler said.

The new findings don’t rule out iron fertilization elsewhere. Winckler and coauthor Robert Anderson of Lamont-Doherty Earth Observatory are involved in ongoing research that is exploring the effects of iron from dust on the Southern Ocean, where airborne dust supplies a larger share of the iron reaching the surface.

The PNAS paper follows another paper Winckler and Anderson coauthored earlier this year in Nature with Lamont graduate student Kassandra Costa, looking at the biological response to iron in the equatorial Pacific during just the last glacial maximum, some 20,000 years ago. The new paper expands that study from a snapshot in time to a time series across the past 500,000 years. It confirms that Costa’s finding – that iron fertilization had no effect then – fits a pattern that extends across the past five glacial periods.

To gauge how productive the algae were, the scientists in the PNAS paper used deep-sea sediment cores from three locations in the equatorial Pacific that captured 500,000 years of ocean history. They tested along those cores for barium, a measure of how much organic matter is exported to the sea floor at each point in time, and for opal, a silicate mineral that comes from diatoms. Measures of thorium-232 reflected the amount of dust that blew in from land at each point in time.

“Neither natural variability of iron sources in the past nor purposeful addition of iron to equatorial Pacific surface water today, proposed as a mechanism for mitigating the anthropogenic increase in atmospheric CO2 inventory, would have a significant impact,” the authors concluded.

Past experiments with iron fertilization have had mixed results. The European Iron Fertilization Experiment (EIFEX) in 2004, for example, added iron in the Southern Ocean and was able to produce a burst of diatoms, which captured CO2 in their organic tissue and sank to the ocean floor. However, the German-Indian LOHAFEX project in 2009 experimented in a nearby location in the South Atlantic and found few diatoms. Instead, most of its algae were eaten up by tiny marine creatures, passing CO2 into the food chain rather than sinking it. In the LOHAFEX case, the scientists determined that another nutrient that diatoms need — silicic acid — was lacking.

The Intergovernmental Panel on Climate Change (IPCC) cautiously discusses iron fertilization in its latest report on climate change mitigation. It warns of potential risks, including the impact that higher productivity in one area may have on nutrients needed by marine life downstream, and the potential for expanding low-oxygen zones, increasing acidification of the deep ocean, and increasing nitrous oxide, a greenhouse gas more potent than CO2.

“While it is well recognized that atmospheric dust plays a significant role in the climate system by changing planetary albedo, the study by Winckler et al. convincingly shows that dust and its associated iron content is not a key player in regulating the oceanic sequestration of CO2 in the equatorial Pacific on large spatial and temporal scales,” said Stephanie Kienast, a marine geologist and paleoceanographer at Dalhousie University who was not involved in the study. “The classic paradigm of ocean fertilization by iron during dustier glacials can thus be rejected for the equatorial Pacific, similar to the Northwest Pacific.”


Journal Reference:

  1. Gisela Winckler, Robert F. Anderson, Samuel L. Jaccard, and Franco Marcantonio. Ocean dynamics, not dust, have controlled equatorial Pacific productivity over the past 500,000 years. PNAS, May 16, 2016. DOI: 10.1073/pnas.1600616113

Is there a limit to technological advances? (OESP)

May 16, 2016 | 3:00 a.m.

It is becoming popular among politicians and governments to claim that the stagnation of the world economy is due to the end of the “golden century” of scientific and technological innovation. This “golden century” is usually defined as the period from 1870 to 1970, in which the foundations of the technological era we live in were laid.

Indeed, that period saw great advances in our knowledge, ranging from Darwin’s theory of evolution to the discovery of the laws of electromagnetism, which led to the production of electricity on a large scale and to telecommunications, including radio and television, with the resulting benefits for the well-being of populations. Other advances, in medicine, such as vaccines and antibiotics, extended the average human lifespan. The discovery and use of oil and natural gas fall within this period.

Many argue that in no other one-century period – across the 10,000 years of human history – was so much progress achieved. This view of history, however, can be and has been questioned. In the preceding century, from 1770 to 1870, for example, there was also great progress, arising from the development of coal-burning engines, which made locomotives possible and set off the Industrial Revolution.

Even so, the nostalgic believe that the “golden period” of innovation has been exhausted, and governments consequently adopt measures of a purely economic character to revive “progress” – subsidies to specific sectors, tax cuts and social policies to reduce inequality, among others – while neglecting support for science and technology.

Some of these policies could help, but they do not touch the fundamental aspect of the problem, which is keeping alive the advance of science and technology, which solved problems in the past and can help solve problems in the future.

To analyze the question properly, remember that it is not the number of new discoveries that guarantees their relevance. The advance of technology somewhat resembles what sometimes happens with the natural selection of living beings: some species are so well adapted to their environment that they stop “evolving”. That is the case of the beetles that existed at the height of Egypt, 5,000 years ago, and are still there today, or of “fossil” species of fish that have evolved little in millions of years.

Other examples are products of modern technology, such as the magnificent DC-3 aircraft, produced more than 50 years ago, which still account for a significant share of world air traffic.

Even in more sophisticated areas, such as information technology, this seems to be happening. The basis of progress in this area was the “miniaturization” of the electronic chips that carry the transistors. In 1971 the chips produced by Intel (the leading company in the field) had 2,300 transistors on a die of 12 square millimetres. Today’s chips are only slightly larger but have 5 billion transistors. This is what made possible personal computers, mobile phones and countless other products. And it is for this reason that fixed telephony is being abandoned and communication via Skype is practically free and has revolutionized the world of communications.
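
As a rough back-of-the-envelope check on those figures – a sketch that assumes the 5-billion-transistor count refers to roughly 2015, a date the article does not state – the implied doubling time lands close to the two years of Moore’s law:

```python
# Back-of-the-envelope check of the transistor counts quoted above:
# 2,300 transistors (Intel, 1971) versus 5 billion today. The end year
# 2015 is an assumption; the article does not give one.
import math

t0, n0 = 1971, 2_300
t1, n1 = 2015, 5_000_000_000

doublings = math.log2(n1 / n0)            # about 21 doublings in 44 years
years_per_doubling = (t1 - t0) / doublings

print(f"{doublings:.1f} doublings, one every {years_per_doubling:.1f} years")
# -> 21.1 doublings, one every 2.1 years (Moore's-law pace)
```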

There are now indications that this miniaturization has reached its limits, which causes a certain depression among the “priests” of the sector. That is a mistaken view. The level of success has been such that further progress in this direction is genuinely unnecessary – which is what happened to countless living beings in the past.

What seems to be the solution to the problems of long-term economic growth is the advance of technology in other areas that have not received the necessary attention: new materials, artificial intelligence, industrial robots, genetic engineering, disease prevention and, above all, understanding the human brain, the most sophisticated product of the evolution of life on Earth.

Understanding how a combination of atoms and molecules can generate an organ as creative as the brain – capable of possessing consciousness and the creativity to compose symphonies like Beethoven’s, and at the same time of promoting the extermination of millions of human beings – will probably be the most extraordinary advance Homo sapiens can achieve.

Advances in these areas could create a wave of innovation and material progress superior in quantity and quality to what the “golden century” produced. Moreover, we face today a new, global problem: environmental degradation, resulting in part from the very success of 20th-century technology. The task of reducing the emissions of the gases that cause global warming (the result of burning fossil fuels) will alone be herculean.

Before that, and on a much more pedestrian level, the advances being made in the efficiency with which natural resources are used are extraordinary, and they have not received the credit and recognition they deserve.

To give just one example: in 1950 Americans spent, on average, 30% of their income on food. By 2013 that percentage had fallen to 10%. Spending on energy has also fallen, thanks to improvements in the efficiency of automobiles and of other uses such as lighting and heating – which, incidentally, explains why the price of a barrel of oil fell from US$150 to less than US$30. There is simply too much oil in the world, just as there is idle capacity in steel and cement.

One example of a country following this path is Japan, whose economy is not growing much, but whose population enjoys a high standard of living and continues to benefit gradually from the advances of modern technology.

*José Goldemberg is professor emeritus at the University of São Paulo (USP) and president of the Fundação de Amparo à Pesquisa do Estado de São Paulo (Fapesp)

If The UAE Builds A Mountain Will It Actually Bring More Rain? (Vocativ)

You’re not the only one who thinks constructing a rain-inducing mountain in the desert is a bonkers idea

May 03, 2016 at 6:22 PM ET

Photo Illustration: R. A. Di ISO

The United Arab Emirates wants to build a mountain so the nation can control the weather—but some experts are skeptical about the effectiveness of this project, which may sound more like a James Bond villain’s diabolical plan than a solution to drought.

The actual construction of a mountain isn’t beyond the engineering prowess of the UAE. The small country on the Arabian Peninsula has pulled off grandiose environmental projects before, like the artificial Palm Islands off the coast of Dubai and an indoor ski hill in the Mall of the Emirates. But the scientific purpose of the mountain is questionable.

The UAE’s National Center for Meteorology and Seismology (NCMS) is currently collaborating with the U.S.-based University Corporation for Atmospheric Research (UCAR) for the first planning phase of the ambitious project, according to Arabian Business. The UAE government gave the two groups $400,000 in funding to determine whether they can bring more rain to the region by constructing a mountain that will foster better cloud-seeding.

Last week the NCMS revealed that the UAE spent $588,000 on cloud-seeding in 2015. Throughout the year, 186 flights dispersed potassium chloride, sodium chloride and magnesium into clouds—a process that can trigger precipitation. Now, the UAE is hoping they can enhance the chemical process by forcing air up around the artificial mountain, creating clouds that can be seeded more easily and efficiently.

“What we are looking at is basically evaluating the effects on weather through the type of mountain, how high it should be and how the slopes should be,” NCAR lead researcher Roelof Bruintjes told Arabian Business. “We will have a report of the first phase this summer as an initial step.”

But some scientists don’t expect NCAR’s research will lead to a rain-inducing alp. “I really doubt that it would work,” Raymond Pierrehumbert, a professor of physics at the University of Oxford, told Vocativ. “You’d need to build a long ridge, not just a cone, otherwise the air would just go around. Even if you could do that, mountains cause local enhanced rain on the upslope side, but not much persistent cloud downwind, and if you need cloud seeding to get even the upslope rain, it’s really unlikely to work as there is very little evidence that cloud seeding produces much rainfall.”

Pierrehumbert, who specializes in geophysics and climate change, believes the regional environment would make the project especially difficult. “UAE is a desert because of the wind patterns arising from global atmospheric circulations, and any mountain they build is not going to alter those,” he said. 

Pierrehumbert concedes that NCAR is a respectable organization that will be able to use the “small amount of money to research the problem.” He thinks some good scientific study will come of the effort—perhaps helping to determine why a hot, humid area bordered by the ocean receives so little rainfall.

But he believes the minimal sum should go into another project: “They’d be way better off putting the money into solar-powered desalination plants.”

If the project doesn’t work out, at least wealthy Emirates have a 125,000-square-foot indoor snow park to look forward to in 2018.

God of Thunder (NPR)

October 17, 2014, 11:09 AM ET

In 1904, Charles Hatfield claimed he could turn around the Southern California drought. Little did he know, he was going to get much, much more water than he bargained for.

GLYNN WASHINGTON, HOST:

From PRX and NPR, welcome back to SNAP JUDGMENT the Presto episode. Today we’re calling on mysterious forces and we’re going to strap on the SNAP JUDGMENT time machine. Our own Eliza Smith takes the controls and spins the dial back 100 years into the past.

ELIZA SMITH, BYLINE: California, 1904. In the fields, oranges dry in their rinds. In the ‘burbs, lawns yellow. Poppies wilt on the hillsides. Meanwhile, Charles Hatfield sits at a desk in his father’s Los Angeles sewing machine business. His dad wants him to take over someday, but Charlie doesn’t want to spend the rest of his life knocking on doors and convincing housewives to buy his bobbins and thread. Charlie doesn’t look like the kind of guy who changes the world. He’s impossibly thin with a vanishing patch of mousy hair. He always wears the same drab tweed suit. But he thinks to himself just maybe he can quench the Southland’s thirst. So when he punches out his timecard, he doesn’t go home for dinner. Instead, he sneaks off to the Los Angeles Public Library and pores over stacks of books. He reads about shamans who believed that fumes from a pyre of herbs and alcohols could force rain from the sky. He reads modern texts too, about the pseudoscience of pluviculture – rainmaking, the theory that explosives and pyrotechnics could crack the clouds. Charlie conducts his first weather experiment on his family ranch, just northeast of Los Angeles in the city of Pasadena. One night he pulls his youngest brother, Paul, out of bed to keep watch with a shotgun as he climbs atop a windmill, pours a cocktail of chemicals into a shallow pan and then waits.

He doesn’t have a burner or a fan or some hybrid, no – he just waits for the chemicals to evaporate into the clouds. Paul slumped into a slumber long ago and is now leaning against the foundation of the windmill, when the first droplet hits Charlie’s cheek. Then another. And another.

Charlie pulls out his rain gauge and measures .65 inches. It’s enough to convince him he can make rain.

That’s right, Charlie has the power. Word spreads in local papers and one by one, small towns – Hemet, Volta, Gustine, Newman, Crows Landing, Patterson – come to him begging for rain. And wherever Charlie goes, rain seems to follow. After he gives their town seven more inches of water than his contract stipulated, the Hemet News raves, Mr. Hatfield is proving beyond doubt that rain can be produced.

Within weeks he’s signing contracts with towns from the Pacific Coast to the Mississippi. Of course, there are doubters who claim that he tracks the weather, who claim he’s a fool chasing his luck.

But then Charlie gets an invitation to prove himself. San Diego, a major city, is starting to talk water rations and they call on him. Of course, most of the city councilmen are dubious of Charlie’s charlatan claims. But still, cows are keeling over in their pastures and farmers are worrying over dying crops. It won’t hurt to hire him. They reason if Charlie Hatfield can fill San Diego’s biggest reservoir, Morena Dam, with 10 billion gallons of water, he’ll earn himself $10,000. If he can’t, well then he’ll just walk away and the city will laugh the whole thing off.

One councilman jokes…

UNIDENTIFIED MAN #1: It’s heads – the city wins. Tails – Hatfield loses.

SMITH: Charlie and Paul set up camp in the remote hills surrounding the Morena Reservoir. This time they work for weeks building several towers. This is to be Charlie’s biggest rain yet. When visitors come to observe his experiments, Charlie turns his back to them, hiding his notebooks and chemicals and Paul fingers the trigger on his trusty rifle. And soon enough it’s pouring. Winds reach record speeds of over 60 miles per hour. But that isn’t good enough – Charlie needs the legitimacy a satisfied San Diego can grant him. And so he works non-stop dodging lightning bolts, relishing thunderclaps. He doesn’t care that he’s soaked to the bone – he can wield weather. The water downs power lines, floods streets, rips up rail tracks.

A Mission Valley man who had to be rescued by a row boat as he clung to a scrap of lumber wraps himself in a towel and shivers as he suggests…

UNIDENTIFIED MAN #2: Let’s pay Hatfield $100,000 to quit.

SMITH: But Charlie isn’t quitting. The rain comes down harder and harder. Dams and reservoirs across the county explode and the flood devastates every farm, every house in its wake. One winemaker is surfacing from the protection of his cellar when he spies a wave twice the height of a telephone pole tearing down his street. He grabs his wife and they run as fast as they can, only to turn and watch their house washed downstream.

And yet, Charlie smiles as he surveys his success. The Morena Reservoir is full. He grabs Paul and the two leave their camp to march the 50-odd miles to City Hall. He expects the indebted populace to kiss his mud-covered shoes. Instead, he’s met with glares and threats. By the time Charlie and Paul reach San Diego’s city center, they’ve stopped answering to the name Hatfield. They call themselves Benson to avoid bodily harm.

Still, when he stands before the city councilmen, Charlie declares his operations successful and demands his payment. The men glower at him.

San Diego is in ruins and worst of all – they’ve got blood on their hands. The flood drowned more than 50 people. It also destroyed homes, farms, telephone lines, railroads, streets, highways and bridges. San Diegans file millions of dollars in claims but Charlie doesn’t budge. He folds his arms across his chest, holds his head high and proclaims, the time is coming when drought will overtake this portion of the state. It will be then that you call for my services again.

So the city councilmen tell Charlie that if he’s sure he made it rain, they’ll give him his $10,000 – he’ll just have to take full responsibility for the flood. Charlie grits his teeth and tells them, it was coincidence. It rained because Mother Nature made it so. I am no rainmaker.

And then Charlie disappears. He goes on selling sewing machines and keeping quiet.

WASHINGTON: I’ll tell you what, California these days could use a little Charlie Hatfield. Big thanks to Eliza Smith for sharing that story and thanks as well to Leon Morimoto for sound design. Mischief managed – you’ve just gotten to the other side by means of other ways.

If you missed any part of this show, no need for a rampage – head on over to snapjudgment.org. There you’ll find the award-winning podcast – Mark, what award did we win? Movies, pictures, stuff. Amazing stories await. Get in on the conversation. SNAP JUDGMENT’s on Facebook, Twitter @snapjudgment.

Did you ever wind up in the Slytherin sitting room when you’re supposed to be in Gryffindor’s parlor? Well, me neither, but I’m sure it’s nothing like wandering the halls of the Corporation for Public Broadcasting. Completely different, but many thanks to them. PRX, Public Radio Exchange, hosts a similar annual Quidditch championship but instead of brooms they ride radios. Not quite the same visual effect, but it’s good clean fun all the same – prx.org.

WBEZ in Chicago has tricks up their sleeve and you may have reckoned that this is not the news. No way is this the news. In fact, if you’d just thrown that book with Voldemort trapped in it, thrown it in the fire, been done with the nonsense – and you would still not be as far away from the news as this is. But this is NPR.

Hit Steyerl | Politics of Post-Representation (Dis Blog)

[Accessed Nov 23, 2015]

In conversation with Marvin Jordan

From the militarization of social media to the corporatization of the art world, Hito Steyerl’s writings represent some of the most influential bodies of work in contemporary cultural criticism today. As a documentary filmmaker, she has created multiple works addressing the widespread proliferation of images in contemporary media, deepening her engagement with the technological conditions of globalization. Steyerl’s work has been exhibited in numerous solo and group exhibitions including documenta 12, Taipei Biennial 2010, and 7th Shanghai Biennial. She currently teaches New Media Art at Berlin University of the Arts.

Hito Steyerl, How Not To Be Seen: A Fucking Didactic Educational .MOV File (2013)

Marvin Jordan I’d like to open our dialogue by acknowledging the central theme for which your work is well known — broadly speaking, the socio-technological conditions of visual culture — and move toward specific concepts that underlie your research (representation, identification, the relationship between art and capital, etc). In your essay titled “Is a Museum a Factory?” you describe a kind of ‘political economy’ of seeing that is structured in contemporary art spaces, and you emphasize that a social imbalance — an exploitation of affective labor — takes place between the projection of cinematic art and its audience. This analysis leads you to coin the term “post-representational” in service of experimenting with new modes of politics and aesthetics. What are the shortcomings of thinking in “representational” terms today, and what can we hope to gain from transitioning to a “post-representational” paradigm of art practices, if we haven’t arrived there already?

Hito Steyerl Let me give you one example. A while ago I met an extremely interesting developer in Holland. He was working on smartphone camera technology. A representational mode of thinking about photography is: there is something out there and it will be represented by means of optical technology, ideally via an indexical link. But the technology of the phone camera is quite different. As the lenses are tiny and basically crap, about half of the data captured by the sensor is noise. The trick is to create an algorithm to clean the picture of the noise, or rather to define the picture from within the noise. But how does the camera know how to do this? Very simple. It scans all the other pictures stored on the phone or on your social media networks and sifts through your contacts. It looks through the pictures you have already taken, or those that are networked to you, and tries to match faces and shapes. In short: it creates the picture based on earlier pictures, on your/its memory. It not only knows what you saw but also what you might like to see based on your previous choices. In other words, it speculates on your preferences and offers an interpretation of data based on affinities to other data. The link to the thing in front of the lens is still there, but there are also links to past pictures that help create the picture. You don’t really photograph the present, as the past is woven into it.
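
To caricature this logic in code – a toy sketch only, with hypothetical names and no claim to any vendor’s actual pipeline – the capture is not cleaned in isolation but pulled toward pictures the device already holds:

import numpy as np

def denoise_with_priors(noisy, library, weight=0.5):
    """Blend a noisy capture with its closest match among prior images."""
    # `library` stands in for the phone's store of earlier, networked
    # pictures; the closest one (by L2 distance) supplies the "memory".
    closest = min(library, key=lambda prior: np.linalg.norm(noisy - prior))
    # The output is a bet: part present-moment sensor data, part past
    # images. The higher `weight`, the more the "photograph of the
    # present" merely repeats pictures already taken.
    return (1 - weight) * noisy + weight * closest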

The result might be a picture that never existed in reality, but that the phone thinks you might like to see. It is a bet, a gamble, some combination between repeating those things you have already seen and coming up with new versions of these, a mixture of conservatism and fabulation. The paradigm of representation stands to the present condition as traditional lens-based photography does to an algorithmic, networked photography that works with probabilities and bets on inertia. Consequently, it makes seeing unforeseen things more difficult. The noise will increase, and random interpretation too. We might think that the phone sees what we want, but actually we will see what the phone thinks it knows about us. A complicated relationship – like a very neurotic marriage. I haven’t even mentioned external interference with what your phone is recording. All sorts of applications are able to remotely switch your camera on or off: companies, governments, the military. It could be disabled for whole regions. One could, for example, disable recording functions close to military installations, or conversely, live broadcast whatever you are up to. Similarly, the phone might be programmed to auto-pixellate secret or sexual content. It might be fitted with a so-called dick algorithm to screen out NSFW content or auto-modify pubic hair, stretch or omit bodies, exchange or collage context, or insert AR advertisements, pop-up windows or live feeds. Now let’s apply this shift to the question of representative politics or democracy. The representational paradigm assumes that you vote for someone who will represent you, and thus the interests of the population will be proportionally represented. But current democracies work rather like smartphone photography, algorithmically clearing the noise and boosting some data over others. It is a system in which the unforeseen has a hard time happening because it is not yet in the database. It is about what to define as noise – something Jacques Rancière has defined as the crucial act in separating political subjects from domestic slaves, women and workers. Now this act is hardwired into technology, but instead of the traditional division of people and rabble, the results are post-representative militias, brands, customer loyalty schemes, open source insurgents and tumblrs.

Additionally, Rancière’s democratic solution – there is no noise, it is all speech, everyone has to be seen and heard – has been realized online as some sort of meta-noise in which everyone is monologuing incessantly and no one is listening. Aesthetically, one might describe this condition as opacity in broad daylight: you could see anything, but what exactly, and why, is quite unclear. There are a lot of brightly lit glossy surfaces, yet they don’t reveal anything but themselves as surface. Whatever there is – it’s all there to see, but in the form of an incomprehensible, Kafkaesque glossiness, written in extraterrestrial code, perhaps subject to secret legislation. It certainly expresses something: a format, a protocol or executive order, but it effectively obfuscates its meaning. This is a far cry from a situation in which something – an image, a person, a notion – stood in for another and presumably acted in its interest. Today it stands in, but its relation to whatever it stands in for is cryptic, shiny, unstable; the link flickers on and off. Art could revel in this shiny instability – it does already. It could also be less baffled and mesmerised and see the gloss for what it mostly is: the not-so-discreet, consumer-friendly veneer of new and old oligarchies and plutotechnocracies.

MJ In your insightful essay, “The Spam of the Earth: Withdrawal from Representation”, you extend your critique of representation by focusing on an irreducible excess at the core of image spam, a residue of unattainability, or the “dark matter” of which it’s composed. It seems as though an unintelligible horizon circumscribes image spam by image spam itself, a force of un-identifiability, which you detect by saying that it is “an accurate portrayal of what humanity is actually not… a negative image.” Do you think this vacuous core of image spam — a distinctly negative property — serves as an adequate ground for a general theory of representation today? How do you see today’s visual culture affecting people’s behavior toward identification with images?

HS Think of Twitter bots, for example. Bots are entities supposed to be mistaken for humans on social media websites. But they have become formidable political armies too – brilliant examples of how representative politics have mutated. Bot armies distort discussion on Twitter hashtags by spamming them with advertisements, tourist pictures or whatever. Bot armies have been active in Mexico, Syria, Russia and Turkey, where most political parties, above all the ruling AKP, are said to control 18,000 fake Twitter accounts using photos of Robbie Williams, Megan Fox and gay porn stars. A recent article revealed that, “in order to appear authentic, the accounts don’t just tweet out AKP hashtags; they also quote philosophers such as Thomas Hobbes and movies like PS: I Love You.” It is ever more difficult to identify bots – partly because humans are being paid to enter CAPTCHAs on their behalf (1,000 CAPTCHAs earn 50 US cents). So what is a bot army? And how, and whom, does it represent, if anyone? Who is an AKP bot that wears the face of a gay porn star and quotes Hobbes’ Leviathan – extolling the need to transform the rule of militias into statehood in order to escape the war of everyone against everyone else? Bot armies are a contemporary vox pop, the voice of the people, the voice of what the people are today. A bot army can be a Facebook militia, your low-cost personalized mob, your digital mercenaries. Imagine your photo being used for one of these bots. It is the moment when your picture becomes quite autonomous, active, even militant. Bot armies are celebrity militias, wildly jump-cutting between glamour, sectarianism, porn, corruption and post-Baath Party ideology. Think of the meaning of the term “affirmative action” after Twitter bots and like farms! What does it represent?

MJ You have provided a compelling account of the depersonalization of the status of the image: a new process of de-identification that favors materialist participation in the circulation of images today.  Within the contemporary technological landscape, you write that “if identification is to go anywhere, it has to be with this material aspect of the image, with the image as thing, not as representation. And then it perhaps ceases to be identification, and instead becomes participation.” How does this shift from personal identification to material circulation — that is, to cybernetic participation — affect your notion of representation? If an image is merely “a thing like you and me,” does this amount to saying that identity is no more, no less than a .jpeg file?

HS Social media makes the shift from representation to participation very clear: people participate in the launch and life span of images, and indeed their life span, spread and potential are defined by participation. Think of the image not as surface but as all the tiny light impulses running through fiber at any one point in time. Some images will look like deep-sea swarms, some like cities from space, some are utter darkness. We could see the energy imparted to images by capital or quantified participation very literally; we could probably measure its popular energy in lumens. By partaking in circulation, people participate in this energy and create it.
What this means is a different question though — by now this type of circulation seems a little like the petting zoo of plutotechnocracies. It’s where kids are allowed to make a mess — but just a little one — and if anyone organizes serious dissent, the seemingly anarchic sphere of circulation quickly reveals itself as a pedantic police apparatus aggregating relational metadata. It turns out to be an almost Althusserian ISA (Internet State Apparatus), hardwired behind a surface of ‘kawaii’ apps and online malls. As to identity, Heartbleed and more deliberate governmental hacking exploits certainly showed that identity goes far beyond a relationship with images: it entails a set of private keys, passwords, etc., that can be expropriated and detourned. More generally, identity is the name of the battlefield over your code — be it genetic, informational, pictorial. It is also an option that might provide protection if you fall beyond any sort of modernist infrastructure. It might offer sustenance, food banks, medical service, where common services either fail or don’t exist. If the Hezbollah paradigm is so successful it is because it provides an infrastructure to go with the Twitter handle, and as long as there is no alternative many people need this kind of container for material survival. Huge religious and quasi-religious structures have sprung up in recent decades to take up the tasks abandoned by states, providing protection and survival in a reversal of the move described in Leviathan. Identity happens when the Leviathan falls apart and nothing is left of the commons but a set of policed relational metadata, Emoji and hijacked hashtags. This is the reason why the gay AKP pornstar bots are desperately quoting Hobbes’ book: they are already sick of the war of Robbie Williams (Israel Defense Forces) against Robbie Williams (Electronic Syrian Army) against Robbie Williams (PRI/AAP) and are hoping for just any entity to organize day care and affordable dentistry.

But beyond all the portentous vocabulary relating to identity, I believe that a widespread standard of the contemporary condition is exhaustion. The interesting thing about Heartbleed — to come back to one of the current threats to identity (as privacy) — is that it is produced by exhaustion and not effort. It is a bug introduced by open source developers not being paid for something that is used by software giants worldwide. Nor were there apparently enough resources to audit the code in the big corporations that just copy-pasted it into their applications and passed on the bug, fully relying on free volunteer labour to produce their proprietary products. Heartbleed records exhaustion by trying to stay true to an ethics of commonality and exchange that has long since been exploited and privatized. So, that exhaustion found its way back into systems. For many people and for many reasons — and on many levels — identity is just that: shared exhaustion.

MJ This is an opportune moment to address the labor conditions of social media practice in the context of the art space. You write that “an art space is a factory, which is simultaneously a supermarket — a casino and a place of worship whose reproductive work is performed by cleaning ladies and cellphone-video bloggers alike.” Incidentally, DIS launched a website called ArtSelfie just over a year ago, which encourages social media users to participate quite literally in “cellphone-video blogging” by aggregating their Instagram #artselfies in a separately integrated web archive. Given this uncanny coincidence, how can we grasp the relationship between social media blogging and the possibility of participatory co-curating on equal terms? Is there an irreconcilable antagonism between exploited affective labor and a genuinely networked art practice? Or can we move beyond – to use a phrase of yours – a museum crowd “struggling between passivity and overstimulation”?

HS I wrote this in relation to something my friend Carles Guerra noticed as early as 2009: big museums like the Tate were actively expanding their online marketing tools, encouraging people to basically build the museum experience for them by sharing, etc. It was clear to us that audience participation on this level was a tool of extraction and outsourcing, following a logic that has turned online consumers into involuntary data providers across the board. As in the previous example – Heartbleed – the paradigm of participation and generous contribution toward a commons tilts quickly into an asymmetrical relation, in which only a minority of participants benefits from everyone’s input: the digital 1 percent reaping the attention value generated by the remaining 99 percent.

Brian Kuan Wood put it very beautifully recently: love is debt; an economy of love and sharing is what you end up with when left to your own devices. However, an economy based on love ends up being an economy of exhaustion – after all, love is utterly exhausting – of deregulation, extraction and lawlessness. And I don’t even want to mention likes, notes and shares, which are the child-friendly, sanitized versions of affect as currency.
All is fair in love and war. It doesn’t mean that love isn’t true or passionate, but just that love is usually uneven, utterly unfair and asymmetric, just as capital tends to be distributed nowadays. It would be great to have a little bit less love, a little more infrastructure.

MJ Long before Edward Snowden’s NSA revelations reshaped our discussions of mass surveillance, you wrote that “social media and cell-phone cameras have created a zone of mutual mass-surveillance, which adds to the ubiquitous urban networks of control,” underscoring the voluntary, localized, and bottom-up mutuality intrinsic to contemporary systems of control. You go on to say that “hegemony is increasingly internalized, along with the pressure to conform and perform, as is the pressure to represent and be represented.” But now mass government surveillance is common knowledge on a global scale — ‘externalized’, if you will — while social media representation practices remain as revealing as they were before. Do these recent developments, as well as the lack of change in social media behavior, contradict or reinforce your previous statements? In other words, how do you react to the irony that, in the same year as the unprecedented NSA revelations, “selfie” was deemed word of the year by Oxford Dictionaries?

HS Haha — good question!

Essentially I think it makes sense to compare our moment with the end of the twenties in the Soviet Union, when euphoria about electrification, NEP (New Economic Policy), and montage gives way to bureaucracy, secret directives and paranoia. Today this corresponds to the sheer exhilaration of having a World Wide Web being replaced by the drudgery of corporate apps, waterboarding, and “normcore”. I am not trying to say that Stalinism might happen again – this would be plain silly – but trying to acknowledge emerging authoritarian paradigms, some forms of algorithmic consensual governance techniques developed within neoliberal authoritarianism, heavily relying on conformism, “family” values and positive feedback, and backed up by all-out torture and secret legislation if necessary. On the other hand things are also falling apart into uncontrollable love. One also has to remember that people did really love Stalin. People love algorithmic governance too, if it comes with watching unlimited amounts of Game of Thrones. But anyone slightly interested in digital politics and technology is by now acquiring at least basic skills in disappearance and subterfuge.

Hito Steyerl, How Not To Be Seen: A Fucking Didactic Educational .MOV File (2013)

MJ In “Politics of Art: Contemporary Art and the Transition to Post-Democracy,” you point out that the contemporary art industry “sustains itself on the time and energy of unpaid interns and self-exploiting actors on pretty much every level and in almost every function,” while maintaining that “we have to face up to the fact that there is no automatically available road to resistance and organization for artistic labor.” Bourdieu theorized qualitatively different dynamics in the composition of cultural capital vs. that of economic capital, arguing that the former is constituted by the struggle for distinction, whose value is irreducible to financial compensation. This basically translates to: everyone wants a piece of the art-historical pie, and is willing to go through economic self-humiliation in the process. If striving for distinction is antithetical to solidarity, do you see a possibility of reconciling it with collective political empowerment on behalf of those economically exploited by the contemporary art industry?

HS In Art and Money, William Goetzmann, Luc Renneboog, and Christophe Spaenjers conclude that income inequality correlates with art prices. The bigger the difference between top income and no income, the higher the prices paid for some artworks. This means that the art market will benefit not only if fewer people have more money but also if more people have no money. It also means that increasing the number of zero incomes is likely, especially under current circumstances, to raise the price of some artworks. The poorer many people are (and the richer a few), the better the art market does; the more unpaid interns, the more expensive the art. But the art market itself may be following a similar pattern of inequality, basically creating a divide between the 0.01 percent (if not less) of artworks able to concentrate the bulk of sales and the remaining 99.99 percent. There is no short-term solution for this feedback loop, except of course not to accept this situation, individually or preferably collectively, on all levels of the industry – including from the point of view of employers. There is a long-term benefit to this, not only for interns and artists but for everyone. Cultural industries that are too exclusively profit-oriented lose their appeal. If you want exciting things to happen you need a bunch of young and inspiring people creating a dynamic by doing risky, messy and confusing things. If they cannot afford to do this, they will eventually do it somewhere else. There needs to be space and resources for experimentation, even failure; otherwise things go stale. If these people move on to more accommodating sectors, the art sector will mentally shut down even more and become somewhat North Korean in its outlook – just like contemporary blockbuster CGI industries. Let me explain: there is a managerial sleekness and awe-inspiring military perfection to every pixel in these productions, as in North Korean pixel parades, where thousands of soldiers wave color posters to form ever-new pixel patterns. The result is quite something, but this something is definitely neither inspiring nor exciting. If the art world keeps going down the road of raising art prices via starvation of its workers – and there is no reason to believe it will not – it will become the Disney version of Kim Jong Un’s pixel parades. 12K starving interns waving pixels for giant CGI renderings of Marina Abramovic! Imagine the price it will fetch!


No escaping the Blue Marble (The Conversation)

August 20, 2015 6.46pm EDT

The Earth seen from Apollo, a photo now known as the “Blue Marble”. NASA

It is often said that the first full image of the Earth, “Blue Marble”, taken by the Apollo 17 space mission in December 1972, revealed Earth to be precious, fragile and protected only by a wafer-thin atmospheric layer. It reinforced the imperative for better stewardship of our “only home”.

But there was another way of seeing the Earth revealed by those photographs. For some the image showed the Earth as a total object, a knowable system, and validated the belief that the planet is there to be used for our own ends.

In this way, the “Blue Marble” image was not a break from technological thinking but its affirmation. A few years earlier, reflecting on the spiritual consequences of space flight, the theologian Paul Tillich wrote of how the possibility of looking down at the Earth gives rise to “a kind of estrangement between man and earth” so that the Earth is seen as a totally calculable material body.

For some, by objectifying the planet this way the Apollo 17 photograph legitimised the Earth as a domain of technological manipulation, a domain from which any unknowable and unanalysable element has been banished. It prompts the idea that the Earth as a whole could be subject to regulation.

This metaphysical possibility is today a physical reality in work now being carried out on geoengineering – technologies aimed at deliberate, large-scale intervention in the climate system designed to counter global warming or offset some of its effects.

While some proposed schemes are modest and relatively benign, the more ambitious ones – each now with a substantial scientific-commercial constituency – would see humanity mobilising its technological power to seize control of the climate system. And because the climate system cannot be separated from the rest of the Earth System, that means regulating the planet, probably in perpetuity.

Dreams of escape

Geoengineering is often referred to as Plan B, one we should be ready to deploy because Plan A, cutting global greenhouse gas emissions, seems unlikely to be implemented in time. Others are now working on what might be called Plan C. It was announced last year in The Times:

British scientists and architects are working on plans for a “living spaceship” like an interstellar Noah’s Ark that will launch in 100 years’ time to carry humans away from a dying Earth.

This version of Plan C is known as Project Persephone, which is curious as Persephone in Greek mythology was the queen of the dead. The project’s goal is to build “prototype exovivaria – closed ecosystems inside satellites, to be maintained from Earth telebotically, and democratically governed by a global community.”

NASA and DARPA, the US Defense Department’s advanced technologies agency, are also developing a “worldship” designed to take a multi-generational community of humans beyond the solar system.

Paul Tillich noticed the intoxicating appeal that space travel holds for certain kinds of people. Those first space flights became symbols of a new ideal of human existence, “the image of the man who looks down at the earth, not from heaven, but from a cosmic sphere above the earth”. A more common reaction to Project Persephone is summed up by a reader of the Daily Mail: “Only the ‘elite’ will go. The rest of us will be left to die.”

Perhaps being left to die on the home planet would be a more welcome fate. Imagine being trapped on this “exovivarium”, a self-contained world in which exported nature becomes a tool for human survival; a world where there is no night and day; no seasons; no mountains, streams, oceans or bald eagles; no ice, storms or winds; no sky; no sunrise; a closed world whose occupants would work to keep alive by simulation the archetypal habits of life on Earth.

Into the endless void

What kind of person imagines himself or herself living in such a world? What kind of being, after some decades, would such a post-terrestrial realm create? What kind of children would be bred there?

According to Project Persephone’s sociologist, Steve Fuller: “If the Earth ends up a no-go zone for human beings [sic] due to climate change or nuclear or biological warfare, we have to preserve human civilisation.”

Why would we have to preserve human civilisation? What is the value of a civilisation if not to raise human beings to a higher level of intellectual sophistication and moral responsibility? What is a civilisation worth if it cannot protect the natural conditions that gave birth to it?

Those who blast off leaving behind a ruined Earth would carry into space a fallen civilisation. As the Earth receded into the all-consuming blackness those who looked back on it would be the beings who had shirked their most primordial responsibility, beings corroded by nostalgia and survivor guilt.

He’s now mostly forgotten, but in the 1950s and 1960s the Swedish poet Harry Martinson was famous for his haunting epic poem Aniara, which told the story of a spaceship carrying a community of several thousand humans out into space escaping an Earth devastated by nuclear conflagration. At the end of the epic the spaceship’s controller laments the failure to create a new Eden:

“I had meant to make them an Edenic place,

but since we left the one we had destroyed

our only home became the night of space

where no god heard us in the endless void.”

So from the cruel fantasy of Plan C we are obliged to return to Plan A, and do all we can to slow the geological clock that has ticked over into the Anthropocene. If, on this Earth we have provoked, a return to the halcyon days of an undisturbed climate is no longer possible, at least we can resolve to calm the agitations of “the wakened giant” and so make this new and unwanted epoch one in which humans can survive.

Geoengineering proposal may backfire: Ocean pipes ‘not cool,’ would end up warming climate (Science Daily)

Date: March 19, 2015

Source: Carnegie Institution

Summary: There are a variety of proposals that involve using vertical ocean pipes to move seawater to the surface from the depths in order to reap different potential climate benefits. One idea involves using ocean pipes to facilitate direct physical cooling of the surface ocean by replacing warm surface ocean waters with colder, deeper waters. New research shows that these pipes could actually increase global warming quite drastically.


To combat global climate change caused by greenhouse gases, alternative energy sources and other types of environmental recourse actions are needed. There are a variety of proposals that involve using vertical ocean pipes to move seawater to the surface from the depths in order to reap different potential climate benefits. A new study from a group of Carnegie scientists determines that these types of pipes could actually increase global warming quite drastically. It is published in Environmental Research Letters.

One proposed strategy–called Ocean Thermal Energy Conversion, or OTEC–involves using the temperature difference between deeper and shallower water to power a heat engine and produce clean electricity. A second proposal is to move carbon from the upper ocean down into the deep, where it wouldn’t interact with the atmosphere. Another idea, and the focus of this particular study, proposes that ocean pipes could facilitate direct physical cooling of the surface ocean by replacing warm surface ocean waters with colder, deeper waters.
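
For a sense of why OTEC has to move so much water, here is a minimal sketch of the Carnot limit imposed by the ocean’s small vertical temperature difference (illustrative temperatures, not figures from the study):

# Carnot bound for an OTEC heat engine (illustrative values).
T_SURFACE = 298.15  # ~25 C tropical surface water, in kelvin
T_DEEP = 278.15     # ~5 C water drawn from roughly 1 km depth, in kelvin

carnot_max = 1 - T_DEEP / T_SURFACE  # ideal efficiency of any heat engine
print(f"maximum theoretical OTEC efficiency: {carnot_max:.1%}")  # ~6.7%

Real plants convert only a few percent of the heat they move, so any meaningful OTEC deployment implies pumping enormous volumes of deep water – the same vertical transport whose climatic side effects this study models.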

“Our prediction going into the study was that vertical ocean pipes would effectively cool the Earth and remain effective for many centuries,” said Ken Caldeira, one of the three co-authors.

The team, which also included lead author Lester Kwiatkowski as well as Katharine Ricke, configured a model to test this idea and what they found surprised them. The model mimicked the ocean-water movement of ocean pipes if they were applied globally reaching to a depth of about a kilometer (just over half a mile). The model simulated the motion created by an idealized version of ocean pipes, not specific pipes. As such the model does not include real spacing of pipes, nor does it calculate how much energy they would require.

Their simulations showed that while global temperatures could be cooled by ocean pipe systems in the short term, warming would actually start to increase just 50 years after the pipes go into use. Their model showed that vertical movement of ocean water resulted in a decrease of clouds over the ocean and a loss of sea-ice.

Colder air is denser than warm air. Because of this, the air over the ocean surface that has been cooled by water from the depths has a higher atmospheric pressure than the air over land. The cool air over the ocean sinks downward reducing cloud formation over the ocean. Since more of the planet is covered with water than land, this would result in less cloud cover overall, which means that more of the Sun’s rays are absorbed by Earth, rather than being reflected back into space by clouds.

Water mixing caused by ocean pipes would also bring sea ice into contact with warmer waters, resulting in melting. What’s more, this would further decrease the reflection of the Sun’s radiation, which bounces off ice as well as clouds.
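
The direction of both effects follows from basic radiation balance. A zero-dimensional energy-balance sketch – a textbook toy under standard values, not the Carnegie team’s model – shows how even a one-point drop in planetary albedo, from fewer clouds and less ice, raises the planet’s equilibrium temperature:

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant, W m^-2

def equilibrium_temp(albedo):
    """Effective radiating temperature (K) for a given planetary albedo."""
    absorbed = S0 * (1 - albedo) / 4  # global-mean absorbed solar flux
    return (absorbed / SIGMA) ** 0.25

baseline = equilibrium_temp(0.30)  # ~255 K, the canonical modern value
darker = equilibrium_temp(0.29)    # a slightly less reflective planet
print(f"warming from albedo 0.30 -> 0.29: {darker - baseline:.2f} K")

The roughly 0.9 K answer applies at the effective radiating level and ignores every feedback; the point is the sign and the sensitivity, not the magnitude of the coupled model’s 8.5-degree result.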

After 60 years, the pipes would cause an increase in global temperature of up to 1.2 degrees Celsius (2.2 degrees Fahrenheit). Over several centuries, the pipes put the Earth on a warming trend towards a temperature increase of 8.5 degrees Celsius (15.3 degrees Fahrenheit).

“I cannot envisage any scenario in which a large scale global implementation of ocean pipes would be advisable,” Kwiatkowski said. “In fact, our study shows it could exacerbate long-term warming and is therefore highly inadvisable at global scales.”

The authors do say, however, that ocean pipes might be useful on a small scale to help aerate ocean dead zones.


Journal Reference:

  1. Lester Kwiatkowski, Katharine L. Ricke and Ken Caldeira. Atmospheric consequences of disruption of the ocean thermocline. Environmental Research Letters, 2015. DOI: 10.1088/1748-9326/10/3/034016

Butterflies, Ants and the Internet of Things (Wired)

[Isn’t it scary that there are bright people who are that innocent? Or perhaps this is just a propaganda piece. – RT]

BY GEOFF WEBB, NETIQ

12.10.14  |  12:41 PM

Buckminster Fuller once wrote, “there is nothing in the caterpillar that tells you it’s going to be a butterfly.” It’s true that our capacity to look at things and truly understand their final form is often very limited. Nor can we necessarily predict what happens when many small changes combine – when small pebbles roll down a hillside and turn into a landslide that dams a river and floods a plain.

This is the situation we face now as we try to understand the final form and impact of the Internet of Things (IoT). Countless small, technological pebbles have begun to roll down the hillside from initial implementation to full realization.  In this case, the “pebbles” are the billions of sensors, actuators, and smart technologies that are rapidly forming the Internet of Things. And like the caterpillar in Fuller’s quote, the final shape of the IoT may look very different from our first guesses.

Whatever the world looks like once the IoT bears full fruit, the experience of our lives will be markedly different. The world around us will not only be aware of our presence, it will know who we are, and it will react to us, often before we are even aware of it. The day-to-day process of living will change because almost every piece of technology we touch (and many we do not) will begin to tailor its behavior to our specific needs and desires. Our car will talk to our house.

Walking into a store will be very different, as the displays around us could modify their behavior based on our preferences and buying habits.  The office of the future will be far more adaptive, less rigid, more connected – the building will know who we are and will be ready for us when we arrive.  Everything, from the way products are built and packaged and the way our buildings and cities are managed, to the simple process of travelling around, interacting with each other, will change and change dramatically. And it’s happening now.

We’re already seeing mainstream manufacturers building IoT awareness into their products, such as Whirlpool building Internet-aware washing machines, and specialized IoT consumer tech such as LIFX light bulbs which can be managed from a smartphone and will respond to events in your house. Even toys are becoming more and more connected as our children go online at even younger ages.  And while many of the consumer purchases may already be somehow “IoT” aware, we are still barely scratching the surface of the full potential of a fully connected world. The ultimate impact of the IoT will run far deeper, into the very fabric of our lives and the way we interact with the world around us.

One example is the German port of Hamburg. The Hamburg Port Authority is building what it refers to as a smartPort, literally embedding millions of sensors in everything from container handling systems to street lights to provide the data and management capabilities to move cargo through the port more efficiently, avoid traffic snarl-ups, and even predict environmental impacts through sensors that respond to noise and air pollution.

Securing all those devices and sensors will require a new way of thinking about technology and the interactions of “things,” people, and data. What we must do, then, is to adopt an approach that scales to manage the staggering numbers of these sensors and devices, while still enabling us to identify when they are under attack or being misused.

This is essentially the same problem we already face when dealing with human beings – how do I know when someone is doing something they shouldn’t? Specifically how can I identify a bad person in a crowd of law-abiding citizens?

The best answer is what I like to call the “Vegas Solution.” Rather than adopting a model that screens every person as they enter a casino, the security folks out in Nevada watch for behavior that indicates someone is up to no good, and then respond accordingly. It’s low impact for everyone else, but works with ruthless efficiency (as anyone who has ever tried counting cards in a casino will tell you).

This approach focuses on known behaviors and looks for anomalies. It is, at its most basic, the practical application of “identity.” If I understand the identity of the people I am watching, and as a result, their behavior, I can tell when someone is acting badly.

Now scale this up to the vast number of devices and sensors out there in the nascent IoT. If I understand the “identity” of all those washing machines, smart cars, traffic light sensors, industrial robots, and so on, I can determine what they should be doing, see when that behavior changes (even in subtle ways such as how they communicate with each other) and respond quickly when I detect something potentially bad.
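
A minimal sketch of that idea – hypothetical names throughout, not NetIQ’s product – keeps a per-device behavioral baseline and flags deviations from it:

from collections import defaultdict
from statistics import mean, stdev

class DeviceMonitor:
    def __init__(self, threshold=3.0):
        self.history = defaultdict(list)  # device id -> past hourly counts
        self.threshold = threshold        # std-devs away that counts as odd

    def observe(self, device_id, msgs_per_hour):
        """Record one observation; return True if it looks anomalous."""
        past = self.history[device_id]
        anomalous = False
        if len(past) >= 10:  # need a baseline before judging anyone
            mu, sigma = mean(past), stdev(past)
            # e.g. a washing machine that suddenly chatters like a scanner
            if sigma > 0 and abs(msgs_per_hour - mu) > self.threshold * sigma:
                anomalous = True
        past.append(msgs_per_hour)
        return anomalous

As in the casino, there is no gate at the door: just cheap, continuous observation of each identity against its own expected behavior.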

The approach is sound; in fact, it’s probably the only approach that will scale to meet the complexity of all those billions upon billions of “things” that make up the IoT. The challenge is that a concept of identity must be applied to vastly more “things” than we have ever managed before. If there is an “Internet of Everything,” there will have to be an “Identity of Everything” to go with it. And those identities will tell us what each device is, when it was created, how it should behave, what it is capable of, and so on. There are already proposed standards for this kind of thing, such as the UK’s HyperCat standard, which lets one device figure out what another device it can talk to actually does, and therefore what kind of information it might want to share.

Where things get really interesting, however, is when we start to watch the interactions of all these identities – and especially the interactions of the “thing” identities and our own. How we humans, as distinct from the “things,” interact with all the devices around us will provide even more insight into our lives, wants, and behaviors. Watching how I interact with my car, and the car with the road, and so on, will help manage city traffic far more efficiently than broad-brush traffic studies. Likewise, as the wearable technology I have on my person (or in my person) interacts with the sensors around me, so my experience of almost everything, from shopping to public services, can be tailored and managed more efficiently. This, ultimately, is the promise of the IoT: a world that is responsive, intelligent and tailored for every situation.

As we continue to add more and more sensors and smart devices, the potential power of the IoT grows.  Many small, slightly smart things have a habit of combining to perform amazing feats. Taking another example from nature, leaf-cutter ants (tiny in the extreme) nevertheless combine to form the second most complex social structures on earth (after humans) and can build staggeringly large homes.

When we combine the billions of smart devices into the final IoT, we should expect to be surprised by the final form all those interactions take, and by the complexity of the thing we create.  Those things can and will work together, and how they behave will be defined by the identities we give them today.

Geoff Webb is Director of Solution Strategy at NetIQ.

Climate manipulation may cause unwanted effects (N.Y.Times/FSP)

Olivine, a green-tinted mineral said to remove carbon dioxide from the atmosphere, in the hands of retired geochemist Olaf Schuiling in Maasland, Netherlands. (Ilvy Njiokiktjien/The New York Times)

HENRY FOUNTAIN
THE NEW YORK TIMES

Nov. 18, 2014, 2:01 a.m.

For Olaf Schuiling, the solution to global warming lies beneath our feet.

Schuiling, a retired geochemist, believes that climate salvation lies in olivine, a green-tinted mineral that is abundant worldwide. When exposed to the elements, it slowly extracts carbon dioxide from the atmosphere.

Olivine has been doing this naturally for billions of years, but Schuiling wants to speed up the process by spreading it on fields and beaches and using it in dikes, pathways and even playgrounds. Sprinkle the right amount of crushed rock, he says, and it will eventually remove enough carbon dioxide to slow the rise in global temperatures.
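
The weathering chemistry itself caps what the idea can deliver. A back-of-the-envelope calculation (standard stoichiometry, not figures from the article):

# Forsterite weathering:
#   Mg2SiO4 + 4 CO2 + 4 H2O -> 2 Mg(2+) + 4 HCO3(-) + H4SiO4
M_OLIVINE = 2 * 24.305 + 28.086 + 4 * 15.999  # g/mol for Mg2SiO4
M_CO2 = 12.011 + 2 * 15.999                   # g/mol

co2_per_tonne = 4 * M_CO2 / M_OLIVINE  # tonnes CO2 bound per tonne olivine
print(f"~{co2_per_tonne:.2f} t CO2 per t olivine fully weathered")  # ~1.25

That roughly 1.25-tonne figure is an upper bound: it ignores the emissions from mining, grinding and transporting the rock, and says nothing about how slowly the reaction proceeds.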

“Let the Earth help us to save it,” said Schuiling, 82, in his office at Utrecht University.

Ideas for countering climate change like these geoengineering proposals were once considered mere fantasy.

But the effects of climate change may become so severe that such solutions could come to be considered seriously.

Schuiling’s idea is one of several that aim to reduce levels of carbon dioxide, the main gas responsible for the greenhouse effect, so that the atmosphere retains less heat.

Other approaches, potentially faster and more feasible but also riskier, would create the equivalent of a sunshade around the planet by dispersing reflective droplets in the stratosphere or spraying seawater to form more clouds over the oceans. With less sunlight reaching the Earth’s surface, less heat would be retained, resulting in a rapid drop in temperatures.

No one is sure that any geoengineering technique would work, and many approaches in the field seem impractical. Schuiling’s approach, for example, would take decades to have even a small impact, and the mining, grinding and transport of the billions of tons of olivine required would itself produce enormous carbon emissions.

Children play on a playground surfaced with olivine, a mineral said to remove carbon dioxide from the atmosphere, in Arnhem, Netherlands. (Jasper Juinen/The New York Times)

Many people regard the idea of geoengineering as a desperate response to climate change, one that would distract the world from the goal of eliminating the emissions at the root of the problem.

The climate is a highly complex system, so manipulating temperatures could also have consequences, such as changes in rainfall, that are either catastrophic or that benefit one region at the expense of another. Critics also point out that geoengineering could be deployed unilaterally by one country, creating another source of geopolitical tension.

Experts argue, however, that the current situation is becoming calamitous. “We may soon be left with only a choice between geoengineering and suffering,” said Andy Parker of the Institute for Advanced Sustainability Studies in Potsdam, Germany.

In 1991, a volcanic eruption in the Philippines expelled the largest cloud of sulfur dioxide ever recorded into the upper atmosphere. The gas formed droplets of sulfuric acid, which reflected sunlight back into space. For three years, average global temperatures dropped by about 0.5 degrees Celsius. One geoengineering technique would mimic this effect by spraying sulfuric acid droplets into the stratosphere.

David Keith, a researcher at Harvard University, said this geoengineering technique, called solar radiation management (SRM), should only be deployed slowly and carefully, so that it can be halted if it disrupts weather patterns or creates other problems.

Some critics of geoengineering doubt that its impacts could ever be balanced. People in developing countries are affected by climate change largely caused by the actions of industrialized nations. Why, then, should they trust that spreading droplets in the sky would help them?

“Nobody likes to be the rat in someone else’s laboratory,” said Pablo Suarez of the Red Cross/Red Crescent Climate Centre.

Ideas for removing carbon dioxide from the air cause less alarm. Although they raise thorny issues – olivine, for example, contains small amounts of metals that could contaminate the environment – they would work far more slowly and indirectly, affecting the climate over decades by altering the atmosphere.

Because Dr. Schuiling has been promoting his idea in the Netherlands for years, the country has taken to olivine. Anyone aware of it can spot the crushed rock on pathways, in gardens and in play areas.

Eddy Wijnker, a former acoustical engineer, founded the company greenSand in the small town of Maasland. It sells olivine sand for home or commercial use. The company also sells “green sand certificates” that finance the spreading of the sand along highways.

Schuiling’s persistence has also spurred research. At the Royal Netherlands Institute for Sea Research in Yerseke, the ecologist Francesc Montserrat is investigating the possibility of spreading olivine on the seabed. In Belgium, researchers at the University of Antwerp are studying the effects of olivine on crops such as barley and wheat.

Most geoengineering researchers point to the need for further study, and to the limits of computer simulations.

Little money worldwide goes to geoengineering research. Yet even the suggestion of field experiments can cause a public outcry. “People like bright lines, and a very obvious one is that it’s fine to test things on a computer or on a lab bench,” said Matthew Watson of the University of Bristol in Britain. “But they react badly as soon as you start to move into the real world.”

Watson knows those lines well. He led a project, funded by the British government, that included a relatively innocuous test of one technology. In 2011, the researchers planned to loft a balloon to an altitude of about one kilometer and try to pump a small amount of water up to it through a hose. The proposal set off protests in Britain, was delayed for half a year and was finally canceled.

Today there is little prospect of government support for any kind of geoengineering test in the United States, where many politicians deny that climate change is even real.

“The conventional wisdom is that the right doesn’t want to talk about it because it acknowledges the problem,” said Rafe Pomerance, who worked on environmental issues at the State Department. “And the left is worried about the impact of emissions.”

So it would be good to discuss the subject openly, Pomerance said. “It will still take some time, but it’s inevitable,” he added.