Humans Are Evolving Faster Than Ever. The Reason Is Not Genetic, Study Claims (Science Alert)

Cameron Duke, Live Science – 15 JUNE 2021

At the mercy of natural selection since the dawn of life, our ancestors adapted, mated and died, passing on tiny genetic mutations that eventually made humans what we are today. 

But evolution isn’t bound strictly to genes anymore, a new study suggests. Instead, human culture may be driving evolution faster than genetic mutations can work.

In this conception, evolution no longer requires genetic mutations that confer a survival advantage to be passed on and become widespread. Instead, learned behaviors passed on through culture are the “mutations” that provide survival advantages.

This so-called cultural evolution may now shape humanity’s fate more strongly than natural selection, the researchers argue.

“When a virus attacks a species, it typically becomes immune to that virus through genetic evolution,” study co-author Zach Wood, a postdoctoral researcher in the School of Biology and Ecology at the University of Maine, told Live Science.

Such evolution works slowly, as those who are more susceptible die off and only those who survive pass on their genes. 

But nowadays, humans mostly don’t need to adapt to such threats genetically. Instead, we adapt by developing vaccines and other medical interventions, which are not the results of one person’s work but rather of many people building on the accumulated “mutations” of cultural knowledge.

By developing vaccines, human culture improves its collective “immune system,” said study co-author Tim Waring, an associate professor of social-ecological systems modeling at the University of Maine.

And sometimes, cultural evolution can lead to genetic evolution. “The classic example is lactose tolerance,” Waring told Live Science. “Drinking cow’s milk began as a cultural trait that then drove the [genetic] evolution of a group of humans.”

In that case, cultural change preceded genetic change, not the other way around. 

The concept of cultural evolution began with the father of evolution himself, Waring said. Charles Darwin understood that behaviors could evolve and be passed to offspring just as physical traits are, though scientists in his day assumed such behavioral changes were inherited genetically. For example, if a mother had a trait that inclined her to teach a daughter to forage for food, she would pass on this inherited trait to her daughter. In turn, her daughter might be more likely to survive, and as a result, that trait would become more common in the population.

Waring and Wood argue in their new study, published June 2 in the journal Proceedings of the Royal Society B, that at some point in human history, culture began to wrest evolutionary control from our DNA. And now, they say, cultural change is allowing us to evolve in ways biological change alone could not.

Here’s why: Culture is group-oriented, and people in those groups talk to, learn from and imitate one another. These group behaviors allow people to pass on adaptations they learned through culture faster than genes can transmit similar survival benefits.

An individual can learn skills and information from a nearly unlimited number of people in a small amount of time and, in turn, spread that information to many others. And the more people available to learn from, the better. Large groups solve problems faster than smaller groups, and intergroup competition stimulates adaptations that might help those groups survive.

As ideas spread, cultures develop new traits.

In contrast, a person inherits genetic information from only two parents and racks up relatively few random mutations in their eggs or sperm, mutations that take about 20 years to be passed on to their small handful of children. That’s just a much slower pace of change.

“This theory has been a long time coming,” said Paul Smaldino, an associate professor of cognitive and information sciences at the University of California, Merced who was not affiliated with this study. “People have been working for a long time to describe how evolutionary biology interacts with culture.”

It’s possible, the researchers suggest, that the appearance of human culture represents a key evolutionary milestone.

“Their big argument is that culture is the next evolutionary transition state,” Smaldino told Live Science.

Throughout the history of life, key transition states have had huge effects on the pace and direction of evolution. The evolution of cells with DNA was a big transitional state, and then when larger cells with organelles and complex internal structures arrived, it changed the game again. Cells coalescing into plants and animals was another big sea change, as was the evolution of sex, the transition to life on land and so on.

Each of these events changed the way evolution acted, and now humans might be in the midst of yet another evolutionary transformation. We might still evolve genetically, but that may not control human survival very much anymore.

“In the very long term, we suggest that humans are evolving from individual genetic organisms to cultural groups which function as superorganisms, similar to ant colonies and beehives,” Waring said in a statement.

But genetics drives bee colonies, while the human superorganism will exist in a category all its own. What that superorganism looks like in the distant future is unclear, but it will likely take a village to figure it out. 

If DNA is like software, can we just fix the code? (MIT Technology Review)

In a race to cure his daughter, a Google programmer enters the world of hyper-personalized drugs.

Erika Check Hayden

February 26, 2020

To create atipeksen, Yu borrowed from recent biotech successes like gene therapy. Some new drugs, including cancer therapies, treat disease by directly manipulating genetic information inside a patient’s cells. Now doctors like Yu find they can alter those treatments as if they were digital programs. Change the code, reprogram the drug, and there’s a chance of treating many genetic diseases, even those as unusual as Ipek’s.

The new strategy could in theory help millions of people living with rare diseases, the vast majority of which are caused by genetic typos and have no treatment. US regulators say they fielded more than 80 requests last year to allow genetic treatments for individuals or very small groups, and they may take steps to make tailor-made medicines easier to try. New technologies, including custom gene-editing treatments using CRISPR, are coming next.

“I never thought we would be in a position to even contemplate trying to help these patients,” says Stanley Crooke, a biotechnology entrepreneur and founder of Ionis Pharmaceuticals, based in Carlsbad, California. “It’s an astonishing moment.”

Antisense drug

Right now, though, insurance companies won’t pay for individualized gene drugs, and no company is making them (though some plan to). Only a few patients have ever gotten them, usually after heroic feats of arm-twisting and fundraising. And it’s no mistake that programmers like Mehmet Kuzu, who works on data privacy, are among the first to pursue individualized drugs. “As computer scientists, they get it. This is all code,” says Ethan Perlstein, chief scientific officer at the Christopher and Dana Reeve Foundation.

A nonprofit, the A-T Children’s Project, funded most of the cost of designing and making Ipek’s drug. For Brad Margus, who created the foundation in 1993 after his two sons were diagnosed with A-T, the change between then and now couldn’t be more dramatic. “We’ve raised so much money, we’ve funded so much research, but it’s so frustrating that the biology just kept getting more and more complex,” he says. “Now, we’re suddenly presented with this opportunity to just fix the problem at its source.”

Ipek was only a few months old when her father began looking for a cure. A geneticist friend sent him a paper describing a possible treatment for her exact form of A-T, and Kuzu flew from Sunnyvale, California, to Los Angeles to meet the scientists behind the research. But they said no one had tried the drug in people: “We need many more years to make this happen,” they told him.

Timothy Yu, of Boston Children’s Hospital. Courtesy photo (Yu)

Kuzu didn’t have years. After he returned from Los Angeles, Margus handed him a thumb drive with a video of a talk by Yu, a doctor at Boston Children’s Hospital, who described how he planned to treat a young girl with Batten disease (a different neurodegenerative condition) in what press reports would later dub “a stunning illustration of personalized genomic medicine.” Kuzu realized Yu was using the very same gene technology the Los Angeles scientists had dismissed as a pipe dream.

That technology is called “antisense.” Inside a cell, DNA encodes information to make proteins. Between the DNA and the protein, though, come messenger molecules called RNA that ferry the gene information out of the nucleus. Think of antisense as mirror-image molecules that stick to specific RNA messages, letter for letter, blocking them from being made into proteins. It’s possible to silence a gene this way, and sometimes to overcome errors, too.
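The letter-for-letter pairing can be made concrete with a few lines of code. This is a toy sketch, not from the article and a vast simplification of real oligo design; the sequence below is made up for illustration. It just computes the reverse complement of an RNA fragment, the antisense strand that would bind it base by base (A pairs with U, G pairs with C):

```python
# Toy illustration of the "mirror image" idea behind antisense:
# an antisense strand is the reverse complement of its RNA target,
# so the two sequences pair letter for letter (A-U, G-C).

RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(rna: str) -> str:
    """Return the reverse complement of an RNA sequence."""
    return "".join(RNA_COMPLEMENT[base] for base in reversed(rna))

target = "AUGGCUUCA"       # hypothetical fragment of a messenger RNA
oligo = antisense(target)  # the strand that would stick to it

print(oligo)  # UGAAGCCAU
```

Real antisense drugs like milasen also depend on chemical modifications to the molecule's backbone, which this sketch ignores entirely.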

Though the first antisense drugs appeared 20 years ago, the concept achieved its first blockbuster success only in 2016. That’s when a drug called nusinersen, made by Ionis, was approved to treat children with spinal muscular atrophy, a genetic disease that would otherwise kill them by their second birthday.

Yu, a specialist in gene sequencing, had not worked with antisense before, but once he’d identified the genetic error causing Batten disease in his young patient, Mila Makovec, it became apparent to him he didn’t have to stop there. If he knew the gene error, why not create a gene drug? “All of a sudden a lightbulb went off,” Yu says. “Couldn’t one try to reverse this? It was such an appealing idea, and such a simple idea, that we basically just found ourselves unable to let that go.”

Yu admits it was bold to suggest his idea to Mila’s mother, Julia Vitarello. But he was not starting from scratch. In a demonstration of how modular biotech drugs may become, he based milasen on the same chemistry backbone as the Ionis drug, except he made Mila’s particular mutation the genetic target. Where it had taken decades for Ionis to perfect a drug, Yu now set a record: it took only eight months for him to make milasen, try it on animals, and convince the US Food and Drug Administration to let him inject it into Mila’s spine.

“What’s different now is that someone like Tim Yu can develop a drug with no prior familiarity with this technology,” says Art Krieg, chief scientific officer at Checkmate Pharmaceuticals, based in Cambridge, Massachusetts.

Source code

As word got out about milasen, Yu heard from more than a hundred families asking for his help. That’s put the Boston doctor in a tough position. Yu has plans to try antisense to treat a dozen kids with different diseases, but he knows it’s not the right approach for everyone, and he’s still learning which diseases might be most amenable. And nothing is ever simple—or cheap. Each new version of a drug can behave differently and requires costly safety tests in animals.

Kuzu had the advantage that the Los Angeles researchers had already shown antisense might work. What’s more, Margus agreed that the A-T Children’s Project would help fund the research. But it wouldn’t be fair to make the treatment just for Ipek if the foundation was paying for it. So Margus and Yu decided to test antisense drugs in the cells of three young A-T patients, including Ipek. Whichever kid’s cells responded best would get picked.

Ipek at play. She may not survive past her 20s without treatment. Photo: Matthew Monteith

While he waited for the test results, Kuzu raised about $200,000 from friends and coworkers at Google. One day, an email landed in his in-box from another Google employee who was fundraising to help a sick child. As he read it, Kuzu felt a jolt of recognition: his coworker, Jennifer Seth, was also working with Yu.

Seth’s daughter Lydia was born in December 2018. The baby, with beautiful chubby cheeks, carries a mutation that causes seizures and may lead to severe disabilities. Seth’s husband Rohan, a well-connected Silicon Valley entrepreneur, refers to the problem as a “tiny random mutation” in her “source code.” The Seths have raised more than $2 million, much of it from co-workers.

Custom drug

By then, Yu was ready to give Kuzu the good news: Ipek’s cells had responded the best. So last September the family packed up and moved from California to Cambridge, Massachusetts, so Ipek could start getting atipeksen. The toddler got her first dose this January, under general anesthesia, through a lumbar puncture into her spine.

After a year, the Kuzus hope to learn whether or not the drug is helping. Doctors will track her brain volume and measure biomarkers in Ipek’s cerebrospinal fluid as a readout of how her disease is progressing. And a team at Johns Hopkins will help compare her movements with those of other kids, both with and without A-T, to observe whether the expected disease symptoms are delayed.

One serious challenge facing gene drugs for individuals is that short of a healing miracle, it may ultimately be impossible to be sure they really work. That’s because the speed with which diseases like A-T progress can vary widely from person to person. Proving a drug is effective, or revealing that it’s a dud, almost always requires collecting data from many patients, not just one. “It’s important for parents who are ready to pay anything, try anything, to appreciate that experimental treatments often don’t work,” says Holly Fernandez Lynch, a lawyer and ethicist at the University of Pennsylvania. “There are risks. Trying one could foreclose other options and even hasten death.”

Kuzu says his family weighed the risks and benefits. “Since this is the first time for this kind of drug, we were a little scared,” he says. But, he concluded, “there’s nothing else to do. This is the only thing that might give hope to us and the other families.”

Another obstacle to ultra-personal drugs is that insurance won’t pay for them. And so far, pharmaceutical companies aren’t interested either. They prioritize drugs that can be sold thousands of times, but as far as anyone knows, Ipek is the only person alive with her exact mutation. That leaves families facing extraordinary financial demands that only the wealthy, lucky, or well connected can meet. Developing Ipek’s treatment has already cost $1.9 million, Margus estimates.

Some scientists think agencies such as the US National Institutes of Health should help fund the research, and will press their case at a meeting in Bethesda, Maryland, in April. Help could also come from the Food and Drug Administration, which is developing guidelines that may speed the work of doctors like Yu. The agency will receive updates on Mila and other patients if any of them experience severe side effects.

The FDA is also considering giving doctors more leeway to modify genetic drugs to try in new patients without securing new permissions each time. Peter Marks, director of the FDA’s Center for Biologics Evaluation and Research, likens traditional drug manufacturing to factories that mass-produce identical T-shirts. But, he points out, it’s now possible to order an individual basic T-shirt embroidered with a company logo. So drug manufacturing could become more customized too, Marks believes.

Custom drugs carrying exactly the message a sick kid’s body needs? If we get there, credit will go to companies like Ionis that developed the new types of gene medicine. But it should also go to the Kuzus—and to Brad Margus, Rohan Seth, Julia Vitarello, and all the other parents who are trying to save their kids. In doing so, they are turning hyper-personalized medicine into reality.

Erika Check Hayden is director of the science communication program at the University of California, Santa Cruz.

This story was part of our March 2020 issue.


An elegy for cash: the technology we might never replace (MIT Technology Review)

Cash is gradually dying out. Will we ever have a digital alternative that offers the same mix of convenience and freedom?

Mike Orcutt

January 3, 2020

If you’d rather keep all that to yourself, you’re in luck. The person in the store (or on the street corner) may remember your face, but as long as you didn’t reveal any identifying information, there is nothing that links you to the transaction.

This is a feature of physical cash that payment cards and apps do not have: freedom. Called “bearer instruments,” banknotes and coins are presumed to be owned by whoever holds them. We can use them to transact with another person without a third party getting in the way. Companies cannot build advertising profiles or credit ratings out of our data, and governments cannot track our spending or our movements. And while a credit card can be declined and a check mislaid, handing over money works every time, instantly.

We shouldn’t take this freedom for granted. Much of our commerce now happens online. It relies on banks and financial technology companies to serve as middlemen. Transactions are going digital in the physical world, too: electronic payment tools, from debit cards to Apple Pay to Alipay, are increasingly replacing cash. While notes and coins remain popular in many countries, including the US, Japan, and Germany, in others they are nearing obsolescence.

This trend has civil liberties groups worried. Without cash, there is “no chance for the kind of dignity-preserving privacy that undergirds an open society,” writes Jerry Brito, executive director of Coin Center, a policy advocacy group based in Washington, DC. In a recent report, Brito contends that we must “develop and foster electronic cash” that is as private as physical cash and doesn’t require permission to use.

The central question is who will develop and control the electronic payment systems of the future. Most of the existing ones, like Alipay, Zelle, PayPal, Venmo, and Kenya’s M-Pesa, are run by private firms. Afraid of leaving payments solely in their hands, many governments are looking to develop some sort of electronic stand-in for notes and coins. Meanwhile, advocates of stateless, ownerless cryptocurrencies like Bitcoin say they’re the only solution as surveillance-proof as cash. But can such systems work at scale?

We tend to take it for granted that new technologies work better than old ones—safer, faster, more accurate, more efficient, more convenient. Purists may extol the virtues of vinyl records, but nobody can dispute that a digital music collection is easier to carry and sounds almost exactly as good. Cash is a paradox—a technology thousands of years old that may just prove impossible to re-create in a more advanced form.

In (government) money we trust?

We call banknotes and coins “cash,” but the term really refers to something more abstract: cash is essentially money that your government owes you. In the old days this was a literal debt. “I promise to pay the bearer on demand the sum of …” still appears on British banknotes, a notional guarantee that the Bank of England will hand over the same value in gold in exchange for your note. Today it represents the more abstract guarantee that you will always be able to use that note to pay for things.

The digits in your bank account, on the other hand, refer to what your bank owes you. When you go to an ATM, you are effectively converting the bank’s promise to pay into a government promise.

Most people would say they trust the government’s promise more, says Gabriel Söderberg, an economist at the Riksbank, the central bank of Sweden. Their bet—correct, in most countries—is that their government is much less likely to go bust.

That’s why it would be a problem if Sweden were to go completely “cashless,” Söderberg says. He and his colleagues fear that if people lose the option to convert their bank money to government money at will and use it to pay for whatever they need, they might start to lose trust in the whole money system. A further worry is that if the private sector is left to dominate digital payments, people who can’t or won’t use these systems could be shut out of the economy.

This is fast becoming more than just a thought experiment in Sweden. Nearly everyone there uses a mobile app called Swish to pay for things. Economists have estimated that retailers in Sweden could completely stop accepting cash by 2023.

Creating an electronic version of Sweden’s sovereign currency—an “e-krona”—could mitigate these problems, Söderberg says. If the central bank were to issue digital money, it would design it to be a public good, not a profit-making product for a corporation. “Easily accessible, simple and user-friendly versions could be developed for those who currently have difficulty with digital technology,” the bank asserted in a November report covering Sweden’s payment landscape.

The Riksbank plans to develop and test an e-krona prototype. It has examined a number of technologies that might underlie it, including cryptocurrency systems like Bitcoin. But the central bank has also called on the Swedish government to lead a broad public inquiry into whether such a system should ever go live. “In the end, this decision is too big for a central bank alone, at least in the Swedish context,” Söderberg says.

The death of financial privacy

China, meanwhile, appears to have made its decision: the digital renminbi is coming. Mu Changchun, head of the People’s Bank of China’s digital currency research institute, said in September that the currency, which the bank has been working on for years, is “close to being out.” In December, a local news report suggested that the PBOC is nearly ready to start tests in the cities of Shenzhen and Suzhou. And the bank has been explicit about its intention to use it to replace banknotes and coins.

Cash is already dying out on its own in China, thanks to Alipay and WeChat Pay, the QR-code-based apps that have become ubiquitous in just a few years. It’s been estimated that mobile payments made up more than 80% of all payments in China in 2018, up from less than 20% in 2013.

A street musician takes WeChat Pay. AP Images

It’s not clear how much access the government currently has to transaction data from WeChat Pay and Alipay. Once it issues a sovereign digital currency—which officials say will be compatible with those two services—it will likely have access to a lot more. Martin Chorzempa, a research fellow at the Peterson Institute for International Economics in Washington, DC, told the New York Times in October that the system will give the PBOC “extraordinary power and visibility into the financial system, more than any central bank has today.”

We don’t know for sure what technology the PBOC plans to use as the basis for its digital renminbi, but we have at least two revealing clues. First, the bank has been researching blockchain technology since 2014, and the government has called the development of this technology a priority. Second, Mu said in September that China’s system will bear similarities to Libra, the electronic currency Facebook announced last June. Indeed, PBOC officials have implied in public statements that the unveiling of Libra inspired them to accelerate the development of the digital renminbi, which has been in the works for years.

As currently envisioned, Libra will run on a blockchain, a type of accounting ledger that can be maintained by a network of computers instead of a single central authority. However, it will operate very differently from Bitcoin, the original blockchain system.

The computers in Bitcoin’s network use open-source software to automatically verify and record every single transaction. In the process, they generate a permanent public record of the currency’s entire transaction history: the blockchain. As envisioned, Libra’s network will do something similar. But whereas anyone with a computer and an internet connection can participate anonymously in Bitcoin’s network, the “nodes” that make up Libra’s network will be companies that have been vetted and given membership in a nonprofit association.
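The “permanent public record” described above can be illustrated with a minimal hash-chained ledger. This is a toy sketch, not Bitcoin’s actual protocol (it omits mining, digital signatures, and the peer-to-peer network); it only shows why a chain of hashes makes rewriting history detectable:

```python
# Minimal sketch of a blockchain's core data structure: each block
# stores the hash of the previous block, so altering any earlier
# block breaks the chain of hashes that follows it.

import hashlib
import json

def block_hash(block: dict) -> str:
    # Serialize deterministically, then hash.
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain: list) -> bool:
    # Every block must reference the current hash of its predecessor.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger: list = []
add_block(ledger, ["alice -> bob: 5"])
add_block(ledger, ["bob -> carol: 2"])
print(is_valid(ledger))  # True

ledger[0]["transactions"][0] = "alice -> bob: 500"  # rewrite history
print(is_valid(ledger))  # False: the next block's prev_hash no longer matches
```

In a real network, thousands of independent nodes each hold a copy of this record and check it against every new block, which is what makes the history hard to falsify.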

Unlike Bitcoin, which is notoriously volatile, Libra will be designed to maintain a stable value. To pull this off, the so-called Libra Association will be responsible for maintaining a reserve of government-issued currencies (the latest plan is for it to be half US dollars, with the other half composed of British pounds, euros, Japanese yen, and Singapore dollars). This reserve is supposed to serve as backing for the digital units of value.

Both Libra and the digital renminbi, however, face serious questions about privacy. To start with, it’s not clear if people will be able to use them anonymously.

With Bitcoin, although transactions are public, users don’t have to reveal who they really are; each person’s “address” on the public blockchain is just a random string of letters and numbers. But in recent years, law enforcement officials have grown skilled at combining public blockchain data with other clues to unmask people using cryptocurrencies for illicit purposes. Indeed, in a July blog post, Libra project head David Marcus argued that the currency would be a boon for law enforcement, since it would help “move more cash transactions—where a lot of illicit activities happen—to a digital network.”

As for the Chinese digital currency, Mu has said it will feature some level of anonymity. “We know the demand from the general public is to keep anonymity by using paper money and coins … we will give those people who demand it anonymity,” he said at a November conference in Singapore. “But at the same time we will keep the balance between ‘controllable anonymity’ and anti-money-laundering, CTF [counter-terrorist financing], and also tax issues, online gambling, and any electronic criminal activities,” he added. He did not, however, explain how that “balance” would work.

Sweden and China are leading the charge to issue consumer-focused electronic money, but according to John Kiff, an expert on financial stability for the International Monetary Fund, more than 30 countries have explored or are exploring the idea.  In some, the rationale is similar to Sweden’s: dwindling cash and a growing private-sector payments ecosystem. Others are countries where commercial banks have decided not to set up shop. Many see an opportunity to better monitor for illicit transactions. All will have to wrestle with the same thorny privacy issues that Libra and the digital renminbi are raising.

Robleh Ali, a research scientist at MIT’s Digital Currency Initiative, says digital currency systems from central banks may need to be designed so that the government can “consciously blind itself” to the information. Something like that might be technically possible thanks to cutting-edge cryptographic tools like zero-knowledge proofs, which are used in systems like Zcash to shield blockchain transaction information from public view.

However, there’s no evidence that any governments are even thinking about deploying tools like this. And regardless, can any government—even Sweden’s—really be trusted to blind itself?

Cryptocurrency: A workaround for freedom

That’s wishful thinking, says Alex Gladstein, chief strategy officer for the Human Rights Foundation. While you may trust your government or think you’ve got nothing to hide, that might not always remain true. Politics evolves, governments get pushed out by elections or other events, what constitutes a “crime” changes, and civil liberties are not guaranteed. “Financial privacy is not going to be gifted to you by your government, regardless of how ‘free’ they are,” Gladstein says. He’s convinced that it has to come in the form of a stateless, decentralized digital currency like Bitcoin.

In fact, “electronic cash” was what Bitcoin’s still-unknown inventor, the pseudonymous Satoshi Nakamoto, claimed to be trying to create (before disappearing). Eleven years into its life, Nakamoto’s technology still lacks some of the signature features of cash. It is difficult to use, transactions can take more than an hour to process, and the currency’s value can fluctuate wildly. And as already noted, the supposedly anonymous transactions it enables can sometimes be traced.

But in some places people just need something that works, however imperfectly. Take Venezuela. Cash in the crisis-ridden country is scarce, and the Venezuelan bolivar is constantly losing value to hyperinflation. Many Venezuelans seek refuge in US dollars, storing them under the proverbial (and literal) mattress, but that also makes them vulnerable to thieves.

What many people want is access to stable cash in digital form, and there’s no easy way to get that, says Alejandro Machado, cofounder of the Open Money Initiative. Owing to government-imposed capital controls, Venezuelan banks have largely been cut off from foreign banks. And due to restrictions by US financial institutions, digital money services like PayPal and Zelle are inaccessible to most people.  So a small number of tech-savvy Venezuelans have turned to a service called LocalBitcoins.

It’s like Craigslist, except that the only things for sale are bitcoins and bolivars. On Venezuela’s LocalBitcoins site, people advertise varying quantities of currency for sale at varying exchange rates. The site holds the money in escrow until trades are complete, and tracks the sellers’ reputations.

It’s not for the masses, but it’s “very effective” for people who can make it work, says Machado. For instance, he and his colleagues met a young woman who mines Bitcoin and keeps her savings in the currency. She doesn’t have a foreign bank account, so she’s willing to deal with the constant fluctuations in Bitcoin’s price. Using LocalBitcoins, she can cash out into bolivars whenever she needs them—to buy groceries, for example. “Niche power users” like this are “leveraging the best features of Bitcoin, which is to be an asset that is permissionless and that is very easy to trade electronically,” Machado says.

However, this is possible only because there are enough people using LocalBitcoins to create what finance people call “local liquidity,” meaning you can easily find a buyer for your bitcoins or bolivars. Bitcoin is the only cryptocurrency that has achieved this in Venezuela, says Machado, and it’s mostly thanks to LocalBitcoins.

This is a long way from the dream of cryptocurrency as a widely used substitute for stable, government-issued money. Most Venezuelans can’t use Bitcoin, and few merchants there even know what it is, much less how to accept it.

Still, it’s a glimpse of what a cryptocurrency can offer—a functional financial system that anyone can join and that offers the kind of freedom cash provides in most other places.

Decentralize this

Could something like Bitcoin ever be as easy to use and reliable as today’s cash is for everyone else? The answer is philosophical as well as technical.

To begin with, what does it even mean for something to be like Bitcoin? Central banks and corporations will adapt certain aspects of Bitcoin and apply them to their own ends. Will those be cryptocurrencies? Not according to purists, who say that though Libra or some future central bank-issued digital currency may run on blockchain technology, they won’t be cryptocurrencies because they will be under centralized control.

True cryptocurrencies are “decentralized”—they have no one entity in charge and no single points of failure, no weak spots that an adversary (including a government) could attack. With no middleman like a bank attesting that a transaction took place, each transaction has to be validated by the nodes in a cryptocurrency’s network, which can number many thousands. But this requires an immense expenditure of computing power, and it’s the reason Bitcoin transactions can take more than an hour to settle.
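The "immense expenditure of computing power" behind that validation is proof-of-work mining. A minimal toy sketch (the block data and difficulty here are made up for illustration, and real Bitcoin mining is vastly harder) shows the asymmetry that makes the scheme work: finding a valid nonce takes many hash attempts, while any node can verify it with a single hash.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Toy proof-of-work: find a nonce such that SHA-256(block_data + nonce)
    starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Finding the nonce takes ~16**4 hash attempts at this difficulty;
# checking it takes exactly one. Each extra hex zero multiplies the
# expected work by 16.
nonce = mine("txs: A->B 5 BTC", difficulty=4)
check = hashlib.sha256(f"txs: A->B 5 BTC{nonce}".encode()).hexdigest()
print(check.startswith("0000"))  # True
```

That verify-cheap, produce-expensive trade-off is what lets thousands of mutually distrustful nodes agree without a middleman, and it is also why settlement is slow.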

A currency like Libra wouldn’t have this problem, because only a few authorized entities would be able to operate nodes. The trade-off is that its users wouldn’t be able to trust those entities to guarantee their privacy, any more than they can trust a bank, a government, or Facebook.

Is it technically possible to achieve Bitcoin’s level of decentralization and the speed, scale, privacy, and ease of use that we’ve come to expect from traditional payment methods? That’s a problem many talented researchers are still trying to crack. But some would argue that shouldn’t necessarily be the goal.  

In a recent essay, Jill Carlson, cofounder of the Open Money Initiative, argued that perhaps decentralized cryptocurrency systems were “never supposed to go mainstream.” Rather, they were created explicitly for “censored transactions,” from paying for drugs or sex to supporting political dissidents or getting money out of countries with restrictive currency controls. Their slowness is inherent, not a design flaw; they “forsake scale, speed, and cost in favor of one key feature: censorship resistance.” A world in which they went mainstream would be “a very scary place indeed,” she wrote.

In summary, we have three avenues for the future of digital money, none of which offers the same mix of freedom and ease of use that characterizes cash. Private companies have an obvious incentive to monetize our data and pursue profits over public interest. Digital government money may still be used to track us, even by well-intentioned governments, and for less benign ones it’s a fantastic tool for surveillance. And cryptocurrency can prove useful when freedoms are at risk, but it likely won’t work at scale anytime soon, if ever.

How big a problem is this? That depends on where you live, how much you trust your government and your fellow citizens, and why you wish to use cash. And if you’d rather keep that to yourself, you’re in luck. For now.

What AI still can’t do (MIT Technology Review)

Brian Bergstein

February 19, 2020

Machine-learning systems can be duped or confounded by situations they haven’t seen before. A self-driving car gets flummoxed by a scenario that a human driver could handle easily. An AI system laboriously trained to carry out one task (identifying cats, say) has to be taught all over again to do something else (identifying dogs). In the process, it’s liable to lose some of the expertise it had in the original task. Computer scientists call this problem “catastrophic forgetting.”

These shortcomings have something in common: they exist because AI systems don’t understand causation. They see that some events are associated with other events, but they don’t ascertain which things directly make other things happen. It’s as if you knew that the presence of clouds made rain likelier, but you didn’t know clouds caused rain.

Elias Bareinboim: AI systems are clueless when it comes to causation.

Understanding cause and effect is a big aspect of what we call common sense, and it’s an area in which AI systems today “are clueless,” says Elias Bareinboim. He should know: as the director of the new Causal Artificial Intelligence Lab at Columbia University, he’s at the forefront of efforts to fix this problem.

His idea is to infuse artificial-intelligence research with insights from the relatively new science of causality, a field shaped to a huge extent by Judea Pearl, a Turing Award–winning scholar who considers Bareinboim his protégé.

As Bareinboim and Pearl describe it, AI’s ability to spot correlations—e.g., that clouds make rain more likely—is merely the simplest level of causal reasoning. It’s good enough to have driven the boom in the AI technique known as deep learning over the past decade. Given a great deal of data about familiar situations, this method can lead to very good predictions. A computer can calculate the probability that a patient with certain symptoms has a certain disease, because it has learned just how often thousands or even millions of other people with the same symptoms had that disease.
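The kind of prediction described above is, at bottom, a conditional probability estimated from counts. A minimal sketch with fabricated patient numbers (the counts are invented for illustration) makes the mechanism concrete:

```python
from collections import Counter

# Hypothetical records: (has_symptoms, has_disease)
records = (
    [(True, True)] * 80      # symptomatic and sick
    + [(True, False)] * 20   # symptomatic but healthy
    + [(False, True)] * 5    # sick without symptoms
    + [(False, False)] * 895
)

counts = Counter(records)
with_symptoms = counts[(True, True)] + counts[(True, False)]

# P(disease | symptoms) is just the relative frequency among
# symptomatic patients: 80 / (80 + 20)
p_disease_given_symptoms = counts[(True, True)] / with_symptoms
print(p_disease_given_symptoms)  # 0.8
```

Deep learning estimates far richer versions of the same quantity, but it remains this first, purely associational rung of the causal ladder.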

But there’s a growing consensus that progress in AI will stall if computers don’t get better at wrestling with causation. If machines could grasp that certain things lead to other things, they wouldn’t have to learn everything anew all the time—they could take what they had learned in one domain and apply it to another. And if machines could use common sense, we’d be able to put more trust in them to take actions on their own, knowing that they aren’t likely to make dumb errors.

Today’s AI has only a limited ability to infer what will result from a given action. In reinforcement learning, a technique that has allowed machines to master games like chess and Go, a system uses extensive trial and error to discern which moves will essentially cause them to win. But this approach doesn’t work in messier settings in the real world. It doesn’t even leave a machine with a general understanding of how it might play other games.

An even higher level of causal thinking would be the ability to reason about why things happened and ask “what if” questions. A patient dies while in a clinical trial; was it the fault of the experimental medicine or something else? School test scores are falling; what policy changes would most improve them? This kind of reasoning is far beyond the current capability of artificial intelligence.

Performing miracles

The dream of endowing computers with causal reasoning drew Bareinboim from Brazil to the United States in 2008, after he completed a master’s in computer science at the Federal University of Rio de Janeiro. He jumped at an opportunity to study under Judea Pearl, a computer scientist and statistician at UCLA. Pearl, 83, is a giant—the giant—of causal inference, and his career helps illustrate why it’s hard to create AI that understands causality.

Even well-trained scientists are apt to misinterpret correlations as signs of causation—or to err in the opposite direction, hesitating to call out causation even when it’s justified. In the 1950s, for example, a few prominent statisticians muddied the waters around whether tobacco caused cancer. They argued that without an experiment randomly assigning people to be smokers or nonsmokers, no one could rule out the possibility that some unknown—stress, perhaps, or some gene—caused people both to smoke and to get lung cancer.

Eventually, the fact that smoking causes cancer was definitively established, but it needn’t have taken so long. Since then, Pearl and other statisticians have devised a mathematical approach to identifying what facts would be required to support a causal claim. Pearl’s method shows that, given the prevalence of smoking and lung cancer, an independent factor causing both would be extremely unlikely.

Conversely, Pearl’s formulas also help identify when correlations can’t be used to determine causation. Bernhard Schölkopf, who researches causal AI techniques as a director at Germany’s Max Planck Institute for Intelligent Systems, points out that you can predict a country’s birth rate if you know its population of storks. That isn’t because storks deliver babies or because babies attract storks, but probably because economic development leads to more babies and more storks. Pearl has helped give statisticians and computer scientists ways of attacking such problems, Schölkopf says.
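Schölkopf's stork example is easy to reproduce in simulation. In the sketch below (the numbers are invented; only the causal structure matters), a hidden "development" variable drives both stork counts and birth rates, and a strong correlation appears between two quantities that never influence each other:

```python
import random

random.seed(0)

# Confounder: level of economic development, standardized.
development = [random.gauss(0, 1) for _ in range(10_000)]
# Less development -> more rural land -> more storks.
storks = [-d + random.gauss(0, 0.5) for d in development]
# Less development -> higher birth rate.
births = [-d + random.gauss(0, 0.5) for d in development]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# Strong positive correlation despite zero causal connection.
print(round(corr(storks, births), 2))
```

Intervening on storks (say, adding nesting platforms) would leave births unchanged, which is exactly the distinction between observing a correlation and identifying a cause.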

Judea Pearl: His theory of causal reasoning has transformed science.

Pearl’s work has also led to the development of causal Bayesian networks—software that sifts through large amounts of data to detect which variables appear to have the most influence on other variables. For example, GNS Healthcare, a company in Cambridge, Massachusetts, uses these techniques to advise researchers about experiments that look promising.

In one project, GNS worked with researchers who study multiple myeloma, a kind of blood cancer. The researchers wanted to know why some patients with the disease live longer than others after getting stem-cell transplants, a common form of treatment. The software churned through data with 30,000 variables and pointed to a few that seemed especially likely to be causal. Biostatisticians and experts in the disease zeroed in on one in particular: the level of a certain protein in patients’ bodies. Researchers could then run a targeted clinical trial to see whether patients with the protein did indeed benefit more from the treatment. “It’s way faster than poking here and there in the lab,” says GNS cofounder Iya Khalil.

Nonetheless, the improvements that Pearl and other scholars have achieved in causal theory haven’t yet made many inroads in deep learning, which identifies correlations without too much worry about causation. Bareinboim is working to take the next step: making computers more useful tools for human causal explorations.

One of his systems, which is still in beta, can help scientists determine whether they have sufficient data to answer a causal question. Richard McElreath, an anthropologist at the Max Planck Institute for Evolutionary Anthropology, is using the software to guide research into why humans go through menopause (we are the only apes that do).

The hypothesis is that the decline of fertility in older women benefited early human societies because women who put more effort into caring for grandchildren ultimately had more descendants. But what evidence might exist today to support the claim that children do better with grandparents around? Anthropologists can’t just compare the educational or medical outcomes of children who have lived with grandparents and those who haven’t. There are what statisticians call confounding factors: grandmothers might be likelier to live with grandchildren who need the most help. Bareinboim’s software can help McElreath discern which studies about kids who grew up with their grandparents are least riddled with confounding factors and could be valuable in answering his causal query. “It’s a huge step forward,” McElreath says.

The last mile

Bareinboim talks fast and often gestures with two hands in the air, as if he’s trying to balance two sides of a mental equation. It was halfway through the semester when I visited him at Columbia in October, but it seemed as if he had barely moved into his office—hardly anything on the walls, no books on the shelves, only a sleek Mac computer and a whiteboard so dense with equations and diagrams that it looked like a detail from a cartoon about a mad professor.

He shrugged off the provisional state of the room, saying he had been very busy giving talks about both sides of the causal revolution. Bareinboim believes work like his offers the opportunity not just to incorporate causal thinking into machines, but also to improve it in humans.

Getting people to think more carefully about causation isn’t necessarily much easier than teaching it to machines, he says. Researchers in a wide range of disciplines, from molecular biology to public policy, are sometimes content to unearth correlations that are not actually rooted in causal relationships. For instance, some studies suggest drinking alcohol will kill you early, while others indicate that moderate consumption is fine and even beneficial, and still other research has found that heavy drinkers outlive nondrinkers. This phenomenon, known as the “reproducibility crisis,” crops up not only in medicine and nutrition but also in psychology and economics. “You can see the fragility of all these inferences,” says Bareinboim. “We’re flipping results every couple of years.”

He argues that anyone asking “what if”—medical researchers setting up clinical trials, social scientists developing pilot programs, even web publishers preparing A/B tests—should start not merely by gathering data but by using Pearl’s causal logic and software like Bareinboim’s to determine whether the available data could possibly answer a causal hypothesis. Eventually, he envisions this leading to “automated scientist” software: a human could dream up a causal question to go after, and the software would combine causal inference theory with machine-learning techniques to rule out experiments that wouldn’t answer the question. That might save scientists from a huge number of costly dead ends.

Bareinboim described this vision while we were sitting in the lobby of MIT’s Sloan School of Management, after a talk he gave last fall. “We have a building here at MIT with, I don’t know, 200 people,” he said. How do those social scientists, or any scientists anywhere, decide which experiments to pursue and which data points to gather? By following their intuition: “They are trying to see where things will lead, based on their current understanding.”

That’s an inherently limited approach, he said, because human scientists designing an experiment can consider only a handful of variables in their minds at once. A computer, on the other hand, can see the interplay of hundreds or thousands of variables. Encoded with “the basic principles” of Pearl’s causal calculus and able to calculate what might happen with new sets of variables, an automated scientist could suggest exactly which experiments the human researchers should spend their time on. Maybe some public policy that has been shown to work only in Texas could be made to work in California if a few causally relevant factors were better appreciated. Scientists would no longer be “doing experiments in the darkness,” Bareinboim said.

He also doesn’t think it’s that far off: “This is the last mile before the victory.”

What if?

Finishing that mile will probably require techniques that are just beginning to be developed. For example, Yoshua Bengio, a computer scientist at the University of Montreal who shared the 2018 Turing Award for his work on deep learning, is trying to get neural networks—the software at the heart of deep learning—to do “meta-learning” and notice the causes of things.

As things stand now, if you wanted a neural network to detect when people are dancing, you’d show it many, many images of dancers. If you wanted it to identify when people are running, you’d show it many, many images of runners. The system would learn to distinguish runners from dancers by identifying features that tend to be different in the images, such as the positions of a person’s hands and arms. But Bengio points out that fundamental knowledge about the world can be gleaned by analyzing the things that are similar or “invariant” across data sets. Maybe a neural network could learn that movements of the legs physically cause both running and dancing. Maybe after seeing these examples and many others that show people only a few feet off the ground, a machine would eventually understand something about gravity and how it limits human movement. Over time, with enough meta-learning about variables that are consistent across data sets, a computer could gain causal knowledge that would be reusable in many domains.

For his part, Pearl says AI can’t be truly intelligent until it has a rich understanding of cause and effect. Although causal reasoning wouldn’t be sufficient for an artificial general intelligence, it’s necessary, he says, because it would enable the introspection that is at the core of cognition. “What if” questions “are the building blocks of science, of moral attitudes, of free will, of consciousness,” Pearl told me.

You can’t draw Pearl into predicting how long it will take for computers to get powerful causal reasoning abilities. “I am not a futurist,” he says. But in any case, he thinks the first move should be to develop machine-learning tools that combine data with available scientific knowledge: “We have a lot of knowledge that resides in the human skull which is not utilized.”

Brian Bergstein, a former editor at MIT Technology Review, is deputy opinion editor at the Boston Globe.

This story was part of our March 2020 issue.

The predictions issue

We’re not prepared for the end of Moore’s Law (MIT Technology Review)

David Rotman

February 24, 2020

Moore’s argument was an economic one. Integrated circuits, with multiple transistors and other electronic devices interconnected with aluminum metal lines on a tiny square of silicon wafer, had been invented a few years earlier by Robert Noyce at Fairchild Semiconductor. Moore, the company’s R&D director, realized, as he wrote in 1965, that with these new integrated circuits, “the cost per component is nearly inversely proportional to the number of components.” It was a beautiful bargain—in theory, the more transistors you added, the cheaper each one got. Moore also saw that there was plenty of room for engineering advances to increase the number of transistors you could affordably and reliably put on a chip.

Soon these cheaper, more powerful chips would become what economists like to call a general purpose technology—one so fundamental that it spawns all sorts of other innovations and advances in multiple industries. A few years ago, leading economists credited the information technology made possible by integrated circuits with a third of US productivity growth since 1974. Almost every technology we care about, from smartphones to cheap laptops to GPS, is a direct reflection of Moore’s prediction. It has also fueled today’s breakthroughs in artificial intelligence and genetic medicine, by giving machine-learning techniques the ability to chew through massive amounts of data to find answers.

But how did a simple prediction, based on extrapolating from a graph of the number of transistors by year—a graph that at the time had only a few data points—come to define a half-century of progress? In part, at least, because the semiconductor industry decided it would.

The April 1965 issue of Electronics magazine, in which Moore’s article appeared. (Wikimedia)

Moore wrote that “cramming more components onto integrated circuits,” the title of his 1965 article, would “lead to such wonders as home computers—or at least terminals connected to a central computer—automatic controls for automobiles, and personal portable communications equipment.” In other words, stick to his road map of squeezing ever more transistors onto chips and it would lead you to the promised land. And for the following decades, a booming industry, the government, and armies of academic and industrial researchers poured money and time into upholding Moore’s Law, creating a self-fulfilling prophecy that kept progress on track with uncanny accuracy. Though the pace of progress has slipped in recent years, the most advanced chips today have nearly 50 billion transistors.

Every year since 2001, MIT Technology Review has chosen the 10 most important breakthrough technologies of the year. It’s a list of technologies that, almost without exception, are possible only because of the computation advances described by Moore’s Law.

For some of the items on this year’s list the connection is obvious: consumer devices, including watches and phones, infused with AI; climate-change attribution made possible by improved computer modeling and data gathered from worldwide atmospheric monitoring systems; and cheap, pint-size satellites. Others on the list, including quantum supremacy, molecules discovered using AI, and even anti-aging treatments and hyper-personalized drugs, are due largely to the computational power available to researchers.

But what happens when Moore’s Law inevitably ends? Or what if, as some suspect, it has already died, and we are already running on the fumes of the greatest technology engine of our time?


“It’s over. This year that became really clear,” says Charles Leiserson, a computer scientist at MIT and a pioneer of parallel computing, in which multiple calculations are performed simultaneously. The newest Intel fabrication plant, meant to build chips with minimum feature sizes of 10 nanometers, was much delayed, delivering chips in 2019, five years after the previous generation of chips with 14-nanometer features. Moore’s Law, Leiserson says, was always about the rate of progress, and “we’re no longer on that rate.” Numerous other prominent computer scientists have also declared Moore’s Law dead in recent years. In early 2019, the CEO of the large chipmaker Nvidia agreed.

In truth, it’s been more a gradual decline than a sudden death. Over the decades, some, including Moore himself at times, fretted that they could see the end in sight, as it got harder to make smaller and smaller transistors. In 1999, an Intel researcher worried that the industry’s goal of making transistors smaller than 100 nanometers by 2005 faced fundamental physical problems with “no known solutions,” like the quantum effects of electrons wandering where they shouldn’t be.

For years the chip industry managed to evade these physical roadblocks. New transistor designs were introduced to better corral the electrons. New lithography methods using extreme ultraviolet radiation were invented when the wavelengths of visible light were too thick to precisely carve out silicon features of only a few tens of nanometers. But progress grew ever more expensive. Economists at Stanford and MIT have calculated that the research effort going into upholding Moore’s Law has risen by a factor of 18 since 1971.

Likewise, the fabs that make the most advanced chips are becoming prohibitively pricey. The cost of a fab is rising at around 13% a year, and is expected to reach $16 billion or more by 2022. Not coincidentally, the number of companies with plans to make the next generation of chips has now shrunk to only three, down from eight in 2010 and 25 in 2002.
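At a steady 13% annual increase, the compounding is brutal: a fab's cost doubles roughly every six years. A quick sketch of the arithmetic (the 13% rate is the article's figure; the starting cost is normalized to 1):

```python
# How many years does it take a cost growing 13% per year to double?
growth = 1.13
cost = growth          # cost after year 1, relative to the start
years_to_double = 1
while cost < 2:
    cost *= growth
    years_to_double += 1
print(years_to_double)  # 6  (1.13**6 is about 2.08)
```

By the same compounding, a $16 billion fab in 2022 would be on track to cost over $30 billion by 2028 if the trend held.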

Nonetheless, Intel—one of those three chipmakers—isn’t expecting a funeral for Moore’s Law anytime soon. Jim Keller, who took over as Intel’s head of silicon engineering in 2018, is the man with the job of keeping it alive. He leads a team of some 8,000 hardware engineers and chip designers at Intel. When he joined the company, he says, many were anticipating the end of Moore’s Law. If they were right, he recalls thinking, “that’s a drag” and maybe he had made “a really bad career move.”

But Keller found ample technical opportunities for advances. He points out that there are probably more than a hundred variables involved in keeping Moore’s Law going, each of which provides different benefits and faces its own limits. It means there are many ways to keep doubling the number of devices on a chip—innovations such as 3D architectures and new transistor designs.

These days Keller sounds optimistic. He says he has been hearing about the end of Moore’s Law for his entire career. After a while, he “decided not to worry about it.” He says Intel is on pace for the next 10 years, and he will happily do the math for you: 65 billion (number of transistors) times 32 (if chip density doubles every two years) is 2 trillion transistors. “That’s a 30 times improvement in performance,” he says, adding that if software developers are clever, we could get chips that are a hundred times faster in 10 years.

Still, even if Intel and the other remaining chipmakers can squeeze out a few more generations of even more advanced microchips, the days when you could reliably count on faster, cheaper chips every couple of years are clearly over. That doesn’t, however, mean the end of computational progress.

Time to panic

Neil Thompson is an economist, but his office is at CSAIL, MIT’s sprawling AI and computer center, surrounded by roboticists and computer scientists, including his collaborator Leiserson. In a new paper, the two document ample room for improving computational performance through better software, algorithms, and specialized chip architecture.

One opportunity is in slimming down so-called software bloat to wring the most out of existing chips. When chips could always be counted on to get faster and more powerful, programmers didn’t need to worry much about writing more efficient code. And they often failed to take full advantage of changes in hardware architecture, such as the multiple cores, or processors, seen in chips used today.

Thompson and his colleagues showed that they could get a computationally intensive calculation to run some 47 times faster just by switching from Python, a popular general-purpose programming language, to the more efficient C. That’s because C, while it requires more work from the programmer, greatly reduces the required number of operations, making a program run much faster. Further tailoring the code to take full advantage of a chip with 18 processing cores sped things up even more. In just 0.41 seconds, the researchers got a result that took seven hours with Python code.

That sounds like good news for continuing progress, but Thompson worries it also signals the decline of computers as a general purpose technology. Rather than “lifting all boats,” as Moore’s Law has, by offering ever faster and cheaper chips that were universally available, advances in software and specialized architecture will now start to selectively target specific problems and business opportunities, favoring those with sufficient money and resources.

Indeed, the move to chips designed for specific applications, particularly in AI, is well under way. Deep learning and other AI applications increasingly rely on graphics processing units (GPUs) adapted from gaming, which can handle parallel operations, while companies like Google, Microsoft, and Baidu are designing AI chips for their own particular needs. AI, particularly deep learning, has a huge appetite for computer power, and specialized chips can greatly speed up its performance, says Thompson.

But the trade-off is that specialized chips are less versatile than traditional CPUs. Thompson is concerned that chips for more general computing are becoming a backwater, slowing “the overall pace of computer improvement,” as he writes in an upcoming paper, “The Decline of Computers as a General Purpose Technology.”

At some point, says Erica Fuchs, a professor of engineering and public policy at Carnegie Mellon, those developing AI and other applications will miss the decreases in cost and increases in performance delivered by Moore’s Law. “Maybe in 10 years or 30 years—no one really knows when—you’re going to need a device with that additional computation power,” she says.

The problem, says Fuchs, is that the successors to today’s general purpose chips are unknown and will take years of basic research and development to create. If you’re worried about what will replace Moore’s Law, she suggests, “the moment to panic is now.” There are, she says, “really smart people in AI who aren’t aware of the hardware constraints facing long-term advances in computing.” What’s more, she says, because application-specific chips are proving hugely profitable, there are few incentives to invest in new logic devices and ways of doing computing.

Wanted: A Marshall Plan for chips

In 2018, Fuchs and her CMU colleagues Hassan Khan and David Hounshell wrote a paper tracing the history of Moore’s Law and identifying the changes behind today’s lack of the industry and government collaboration that fostered so much progress in earlier decades. They argued that “the splintering of the technology trajectories and the short-term private profitability of many of these new splinters” means we need to greatly boost public investment in finding the next great computer technologies.

If economists are right, and much of the growth in the 1990s and early 2000s was a result of microchips—and if, as some suggest, the sluggish productivity growth that began in the mid-2000s reflects the slowdown in computational progress—then, says Thompson, “it follows you should invest enormous amounts of money to find the successor technology. We’re not doing it. And it’s a public policy failure.”

There’s no guarantee that such investments will pay off. Quantum computing, carbon nanotube transistors, even spintronics, are enticing possibilities—but none are obvious replacements for the promise that Gordon Moore first saw in a simple integrated circuit. We need the research investments now to find out, though. Because one prediction is pretty much certain to come true: we’re always going to want more computing power.

This story was part of our March 2020 issue.

The predictions issue

Humans were apex predators for two million years (Eureka Alert!)

News Release 5-Apr-2021

What did our ancestors eat during the stone age? Mostly meat

Tel-Aviv University

IMAGE: Human Brain. Credit: Dr. Miki Ben Dor

Researchers at Tel Aviv University were able to reconstruct the nutrition of stone age humans. In a paper published in the Yearbook of the American Physical Anthropology Association, Dr. Miki Ben-Dor and Prof. Ran Barkai of the Jacob M. Alkov Department of Archaeology at Tel Aviv University, together with Raphael Sirtoli of Portugal, show that humans were an apex predator for about two million years. Only the extinction of larger animals (megafauna) in various parts of the world, and the decline of animal food sources toward the end of the stone age, led humans to gradually increase the vegetable element in their nutrition, until finally they had no choice but to domesticate both plants and animals – and became farmers.

“So far, attempts to reconstruct the diet of stone-age humans were mostly based on comparisons to 20th century hunter-gatherer societies,” explains Dr. Ben-Dor. “This comparison is futile, however, because two million years ago hunter-gatherer societies could hunt and consume elephants and other large animals – while today’s hunter gatherers do not have access to such bounty. The entire ecosystem has changed, and conditions cannot be compared. We decided to use other methods to reconstruct the diet of stone-age humans: to examine the memory preserved in our own bodies, our metabolism, genetics and physical build. Human behavior changes rapidly, but evolution is slow. The body remembers.”

In a process unprecedented in its extent, Dr. Ben-Dor and his colleagues collected about 25 lines of evidence from about 400 scientific papers from different scientific disciplines, dealing with the focal question: Were stone-age humans specialized carnivores or were they generalist omnivores? Most evidence was found in research on current biology, namely genetics, metabolism, physiology and morphology.

“One prominent example is the acidity of the human stomach,” says Dr. Ben-Dor. “The acidity in our stomach is high when compared to omnivores and even to other predators. Producing and maintaining strong acidity require large amounts of energy, and its existence is evidence for consuming animal products. Strong acidity provides protection from harmful bacteria found in meat, and prehistoric humans, hunting large animals whose meat sufficed for days or even weeks, often consumed old meat containing large quantities of bacteria, and thus needed to maintain a high level of acidity. Another indication of being predators is the structure of the fat cells in our bodies. In the bodies of omnivores, fat is stored in a relatively small number of large fat cells, while in predators, including humans, it’s the other way around: we have a much larger number of smaller fat cells. Significant evidence for the evolution of humans as predators has also been found in our genome. For example, geneticists have concluded that ‘areas of the human genome were closed off to enable a fat-rich diet, while in chimpanzees, areas of the genome were opened to enable a sugar-rich diet.’”

Evidence from human biology was supplemented by archaeological evidence. For instance, research on stable isotopes in the bones of prehistoric humans, as well as hunting practices unique to humans, show that humans specialized in hunting large and medium-sized animals with high fat content. Comparing humans to large social predators of today, all of whom hunt large animals and obtain more than 70% of their energy from animal sources, reinforced the conclusion that humans specialized in hunting large animals and were in fact hypercarnivores.

“Hunting large animals is not an afternoon hobby,” says Dr. Ben-Dor. “It requires a great deal of knowledge, and lions and hyenas attain these abilities after long years of learning. Clearly, the remains of large animals found in countless archaeological sites are the result of humans’ high expertise as hunters of large animals. Many researchers who study the extinction of the large animals agree that hunting by humans played a major role in this extinction – and there is no better proof of humans’ specialization in hunting large animals. Most probably, like in current-day predators, hunting itself was a focal human activity throughout most of human evolution. Other archaeological evidence – like the fact that specialized tools for obtaining and processing vegetable foods only appeared in the later stages of human evolution – also supports the centrality of large animals in the human diet, throughout most of human history.”

The multidisciplinary reconstruction conducted by TAU researchers for almost a decade proposes a complete change of paradigm in the understanding of human evolution. Contrary to the widespread hypothesis that humans owe their evolution and survival to their dietary flexibility, which allowed them to combine the hunting of animals with vegetable foods, the picture emerging here is of humans evolving mostly as predators of large animals.

“Archaeological evidence does not overlook the fact that stone-age humans also consumed plants,” adds Dr. Ben-Dor. “But according to the findings of this study, plants only became a major component of the human diet toward the end of the era.”

Evidence of genetic changes and the appearance of unique stone tools for processing plants led the researchers to conclude that, starting about 85,000 years ago in Africa, and about 40,000 years ago in Europe and Asia, a gradual rise occurred in the consumption of plant foods as well as dietary diversity – in accordance with varying ecological conditions. This rise was accompanied by an increase in the local uniqueness of the stone tool culture, which is similar to the diversity of material cultures in 20th-century hunter-gatherer societies. In contrast, during the two million years when, according to the researchers, humans were apex predators, long periods of similarity and continuity were observed in stone tools, regardless of local ecological conditions.

“Our study addresses a major current controversy – both scientific and non-scientific,” says Prof. Barkai. “For many people today, the Paleolithic diet is a critical issue, not only with regard to the past but also concerning the present and future. It is hard to convince a devout vegetarian that his or her ancestors were not vegetarians, and people tend to confuse personal beliefs with scientific reality. Our study is both multidisciplinary and interdisciplinary. We propose a picture that is unprecedented in its inclusiveness and breadth, which clearly shows that humans were initially apex predators who specialized in hunting large animals. As Darwin discovered, the adaptation of species to obtaining and digesting food is the main source of evolutionary change, and thus the claim that humans were apex predators throughout most of their development may provide a broad basis for fundamental insights into the biological and cultural evolution of humans.”

The Concept of Technology Must Be Thought in Light of Diversity, Says Chinese Philosopher (Folha de S.Paulo)

Ronaldo Lemos, 31 January 2021

[summary] One of the most original contemporary thinkers, the Chinese philosopher Yuk Hui rejects the Western idea that technology is a single, universal phenomenon guided solely by rationality. In this interview he discusses the concept of technodiversity, the subject of a book just published in Brazil, in which he proposes a more plural view of the topic, treating technology as the product of varied local knowledge and contexts, something that may contribute to ways of thinking capable of overcoming today's political, social, and ecological impasses.

The Chinese philosopher Yuk Hui has become one of the central thinkers for understanding the contemporary world. The originality of his work lies precisely in opening a new perspective on the question of technology.

While we in the West marvel at the power of the very technological platforms we have created, viewing them through reductionist ideas such as the concept of the singularity (roughly described as the moment when machines acquire intelligence), Yuk Hui went somewhere else entirely.

He embraced the concept of technodiversity, which demolishes the idea of technology as a universal phenomenon. In his view, the way we deal with technology is limiting and obscures our relationship with the “cosmos” and its infinite possibilities.

In this interview, Yuk Hui discusses his first book published in Brazil. Titled “Tecnodiversidade,” it is the result of a careful compilation of the thinker's texts by the publisher Ubu, in direct contact with the author.

The originality of his work is intertwined with his personal history. Born in China, he trained in computer engineering in Hong Kong. He then moved to London, where he studied at the prestigious Goldsmiths College. From there he circulated through several European institutions, including the innovation institute of the Centre Pompidou in Paris and the Leuphana (Lüneburg) and Bauhaus (Weimar) universities, both in Germany.

He then returned to China, where he began editing the philosophy of media and technology series of the Academy of Social Sciences in Shanghai and teaching at the University of Hong Kong.

In 2019 he visited Brazil for conferences at UFPB, UFRJ, and ITS Rio. Hui thus weaves a dialogue between Eastern and Western thought to confront the central riddle of our times: what to do with technology?

From Hong Kong, Hui gave the following interview via an internet calling app.

You occupy a unique position as a thinker. You were born in China; speak Mandarin, Cantonese, Teochew, English, French, and German; and have worked at renowned institutions in both the West and the East. Let me start by asking: is it easier to philosophize in Chinese or in German? [Laughs]. Philosophy is indeed about articulating and elaborating concepts whose possibilities lie in language itself. One cannot think without language, which is why, for the Greeks, the idea of logos also meant the capacity for language.

German, like ancient Greek, has the advantage of allowing the precise articulation of meanings. That is why Martin Heidegger once said that these two languages are the languages of philosophy.

Chinese, on the other hand, is a language based on pictograms, embodying a way of thinking driven by images. That does not mean, however, that Chinese is an imprecise language. In European languages the notion of time is expressed through verb tenses. In Chinese, time is expressed not in verbs but in adverbs.

Curiously, it is in Japanese that we find an interesting combination of pictograms (“kanji”) and verb conjugation. Our brains are shaped by our experience of learning a language, which synthesizes different modes of thinking.

So, in the end, I no longer know which language I am thinking in. Just as in this interview we are communicating in English, but it is not really English that we are “speaking.”

How do the West and the East perceive the idea of technology? Is technology a universal concept? First, I think it is important to clarify the term universal. In traditional thought, the universal is opposed to the particular, in a duality of either/or. When we understand the universal in this way, the statement “technology is a universal” becomes problematic.

Or again, we believe that the progress of humanity is defined by a universal rationality that realizes itself in time. Yet this concept of technology is merely a historical product of Western modernity.

Indeed, between the Greek concept of “technē” and modern technology lies an epistemological and methodological rupture. There is no single concept of technology, neither epistemologically nor ontologically. At most, we can say that the concept of technology was universalized through the history of colonization and globalization.

Arnold Toynbee once asked: “Why did the Far East close its doors to the Europeans in the 16th century, yet open those doors to them in the 19th century?” His analysis is that in the 16th century the Europeans wanted to export both their technology and their religion to the East. In the 19th century, knowing that religion could be an obstacle, the Europeans exported only the technology.

This stems from a way of thinking in which technology would be a mere instrument, one that could be controlled by one's own mode of thought. The Japanese called this idea “wakon yōsai” (“Japanese soul, Western knowledge”). The Chinese called it “zhōng ti xī yòng” (“adopting Western knowledge for practical use while maintaining Chinese values and customs”).

Looking back, however, this duality has proved untenable. We still do not understand technology, and for that reason we are not yet able to overcome modernity.

Your work draws on many concepts from the anthropologist Eduardo Viveiros de Castro, such as Amerindian perspectivism, to build a broader understanding of technology. How did perspectivism and Viveiros de Castro reach your work? I admire Professor Viveiros de Castro's work, and what he called multinaturalism is very inspiring. He and colleagues such as Philippe Descola have tried to show that nature is not a universal concept.

I can probably say that what I was trying to do with the concept of technology echoes his long-term effort to articulate an ontological pluralism. My feeling is that, because of the orientation of and conflicts between certain schools of anthropology (for example, between Claude Lévi-Strauss and André Leroi-Gourhan), the question of technology has not achieved the clarity it deserves.

As a result, thinking about nature has somehow left this other aspect of it underanalyzed. This is a subject I had wanted to discuss with Viveiros de Castro for some years. The news is that we now have a whole correspondence between us, which will be published in the journal Philosophy Today starting in April this year.

The West is fascinated by the idea of the singularity, which can be described as the moment when machines become intelligent and gain primacy over humanity, leading to a point of convergence. Versions of this idea appear in Ray Kurzweil, but also in the book “Homo Deus,” in which Yuval Harari basically gives up on the idea of humanism in favor of a coming technology. Does this make sense? These discourses around the singularity and “homo deus” have become very popular and dangerous. They reveal a partial truth. And, being partial, they leave the most fundamental questions unanswered. From a historical point of view, the process of becoming human implies the invention and use of technology.

Thus the theses that human beings make tools and that tools make human beings are both valid, as paleontology has shown, in terms of continuity and discontinuity, in the trajectory from Zinjanthropus to the neanthropians (Homo sapiens).

The fact that symbolic language and art (such as cave paintings) appeared only among the neanthropians, but not among the paleanthropians (Neanderthals), suggests that there may be a cognitive and existential rupture. This rupture consists in the capacity for anticipation, according to André Leroi-Gourhan, or, if we follow Georges Bataille in his text on the cave paintings of Lascaux in southern France, in the capacity to reflect on one's own death.

In other words, the question of mortality conditions the horizon of meanings that define the development of art, science, and politics. The introduction of the idea of immortality, through notions such as the singularity or “homo deus,” belongs to an era in which technological acceleration constantly disrupts our habits, opening the way for science fiction to take over our imaginations.

Nietzsche's Zarathustra has innocently ended up becoming the spokesman of this transhumanist business, which promotes optimism about overcoming humanity through enhancements in lifespan, intelligence, and emotion.

If we are to become “homo deus” or immortal, then we would have to reassess all the values and meanings conditioned by mortality. These questions remain unaddressed in transhumanism, and therefore transhumanism is fundamentally a form of humanism and of nihilism.

To be able to inquire into the future of the human or the posthuman, we will first have to confront a nihilism of the 21st century. Otherwise we will merely be herds taking part in the campaigns of biotech companies and book publishers.

Does it still make sense to speak of distinctions such as organic and mechanical, given the rise of digital technology and the virtual? The distinction between organic and mechanical arose from a historical necessity in Europe, specifically the 17th-century decline of mechanistic philosophy and the 18th-century emergence of biology, a discipline that was only recognized at the beginning of the 19th century.

No similar trajectory can be identified in Chinese thought, even though many people claim that Chinese thought is organic; that claim betrays a lack of understanding of the subject.

In my book “Recursivity and Contingency” (my effort to offer a new interpretation of the history of European philosophy, from the beginning of the modern period to today), I explain that this distinction was fundamental to the philosophical projects of Kant and of post-Kantian idealists such as Fichte, Schelling, and Hegel.

I also discuss how this distinction was called into question by cybernetics, as well as the need to formulate a new philosophical condition for our times. That is why the first sentence of the book declares that it is a treatise on cybernetics.

There is a huge debate about big tech, the large technology companies such as Google, Facebook, Apple, Amazon, and so on. China has a different ecosystem, with companies such as Alibaba, Tencent, ByteDance, and Baidu. Is the question of big tech the same in the West and in China? Indeed, the Chinese ecosystem you mention is not so much about technological innovation; it is far more conditioned by the social and political systems. As a consequence, the processes of normalization differ between those political and social systems.

By normalization I mean the means by which something acquires legitimacy and becomes a social norm. People in the West tend to think that the Chinese do not care about privacy, but that is not true. The fact is that in China the process of normalization is largely determined by the state, and so we see different dynamics in the reception and use of technology.

However, as in the West, technology companies use data collection, profiling, and behavioral analysis in a way that surpasses the capacity of state administration, which can become a threat to state power. I think the recent monopoly accusations against Alibaba may serve as a good case study for this question.

In emerging as a technological superpower, has China been able to create a different model of technology and its devices, or is it all the same thing under the sun, in the West and in the East? Unfortunately, I do not think we are witnessing technodiversity in its real sense. I once went to Hong Kong and Shenzhen with a group of Russian students who were very interested in understanding technological development in China.

They were disappointed, however, to find that the apps in use looked familiar, the only difference being that the interface was in Chinese. That is another reason why the question of technodiversity has yet to be formulated.

How can your philosophical work be translated into action? Is there a political program guiding your work? I am a philosopher, and what I can do within my personal capacity is formulate the questions I consider important and elaborate on the necessity of those questions.

I can only hope that what I have been thinking resonates with those who care about the question of technology, and that together we can think about what such a program might look like and how it might work.

If you ask me the best way to start something like this, I would say we need to reform our universities, especially the system of knowledge, its divisions and structures, forms defined centuries ago.

Last year I listened to your conversation with Aleksandr Dugin, a thinker whom many people label, in a crude simplification, the theorist of the contemporary anti-globalist movement. What is interesting in Dugin's thought, and how does it relate to yours? Aleksandr Dugin's proposal, like the legacy of the Kyoto School, is for me a very difficult subject to address. When I say difficult, I do not mean that their theories are hard to understand, but rather that it is difficult to discredit them as fascists and reactionaries.

If we want to approach the question of pluralism, we cannot avoid the tensions among these differences. Since the Enlightenment, the search for the universal has been central to political thought. Accepting the universal too easily eliminates diversities, reducing them to mere representations, as in the case of multiculturalism (which Viveiros de Castro rightly criticizes).

The easy refusal of the universal in the name of particularities likewise justifies nationalism and state violence. I do not have the impression that Dugin or the philosophers of the Kyoto School ignore this question, yet there have been obstacles to its understanding. This is what I call “the dilemma of homecoming” in my book on the question of technology in China. I think this has to be the central question of political philosophy in the 21st century.

One striking point in Dugin is the claim that nature itself should not be a universal phenomenon. Crudely put, his view would accommodate those who believe the earth is flat. In your work, you say that technology is not a universal concept. How do your perspectives differ? In my debate with Dugin, he praised Professor Eduardo Viveiros de Castro for his idea of multinaturalism, but it would be inconceivable to imagine Professor Viveiros de Castro saying that the earth is flat. Nature as a concept, like technics, is a historical and cultural (geographical) construction.

However, that does not mean it is a purely social and therefore arbitrary construction. When I say that technology is not a universal, that does not mean the theory of causality applied to a mechanical machine is arbitrary, or that we could reverse the causalities. Rejecting a universal concept of nature and technology in no way means being anti-science or anti-technology.

My suggestion is that, instead of understanding technology as a universal substance underpinned by rationality, it has to be understood within a “genesis,” for example in juxtaposition with other forms of thinking, such as aesthetics, religion, and philosophy, as proposed by Gilbert Simondon, all of it historical and cultural.

You have been to Brazil, where you lectured in Paraíba and Rio. You now have a network of interlocutors in the country, such as Professor Carlos Dowling, Hermano Vianna, Eduardo Viveiros de Castro, and me as well. What were your impressions of Brazil, and what role falls to us in terms of technology? It was my first time in Latin America, and it gave me a better understanding of the colonial legacy and of the social and political unrest in the region. I was very touched by the reception I had in Brazil, and I see many conversations to come.

I think Latin America in general, and Brazil in particular, will have a very important role to play in the development of a technodiversity, given the need for decolonization and the distinctive character of its thought.

You yourself told me that more than a decade ago there were many attempts in Brazil to articulate a technodiversity [as in Gilberto Gil's work at the Ministry of Culture], including work on copyright and on the perspective of indigenous groups. I hope new opportunities arise to continue that work.

Your book “Tecnodiversidade” has just been published in Brazil. It is a unique volume, for which you worked directly with the publisher Ubu to compile some of your most important writings, as well as writing a special introduction. What did you think of the result? The publisher Florencia Ferrari was very kind to offer to publish an anthology of my texts. Both she and I must thank you for the introductory text.

I chose several political texts and some lectures I gave in Taipei in 2019, together with my former adviser Bernard Stiegler. I was glad these articles were translated into Portuguese, since they give a good sense of my trajectory from 2016 to 2020, as well as of what the implications of the concepts I have been developing might be.

A personal question: how do you deal with technology? Do you have a smartphone? Are you on social media? Do you spend long hours online, or do you have some compulsion to check your phone all the time, like the rest of us? My first training was as a computer scientist, so I am not really a Luddite. I am on Twitter, Facebook, WeChat, and other social media. If you want to understand these media, you have to use them. You cannot criticize them without at least knowing what they are, as many philosophers still do today.

However, in order to concentrate, I only allow myself to check social media during a specific period of the day. In general, I reserve mornings for study.

Have you watched the documentary “The Social Dilemma” on Netflix? What is your opinion? I watched only part of it, but for me the problem is not so much manipulation as the lack of alternatives. In 2012 I was working on Bernard Stiegler's team in Paris to develop an alternative to social networks like Facebook. It would be what I called a group-based social network, which would allow for an alternative design.

The problem I see today is that we are unable to provide real alternatives. When you are tired of Facebook, you switch to another Facebook, which may differ only in its data policy and ownership, but you end up doing the same things there and suffering the same problems on these new platforms. Creating alternatives is also part of what I call technodiversity.

What are your current projects? Are you teaching online? How does that work for you? I am awaiting the publication of my new book, “Art and Cosmotechnics”; I hope it will be out from April 2021. After that, I will embark on other adventures. I am currently teaching in Hong Kong.

Because of Covid-19, most of it has been online. It is interesting that in this digital age face-to-face teaching is still considered by most people to be more “authentic,” while teaching online is seen as secondary. At the same time, it is astonishing that there are not more digital online teaching tools and that all universities end up using practically the same ones. That says a great deal.

Was the pandemic an alarm reminding us of our place in nature and in the cosmos? Do you think it has brought any reconfiguration of thought? I do hope it is an alarm. However, that does not imply any great change in our political orientation. On the contrary, it may be that economic, social, or political recovery only leads to even more aggressive forms of exploitation. So we should not wait for the end of the pandemic to change anything.

Some new orientations and strategies have to be formulated right away. I still hope that, with digital technology, there is something we can do in terms of building new institutions and new forms of exchange.

Does the idea of technodiversity apply to the life sciences, such as genetic research or vaccine development? Does it make sense to think of technodiversity in those cases? In terms of technodiversity, we can perhaps say that there are two perspectives, even though they are interrelated.

One is the perspective of culture, which I elaborated in my book on cosmotechnics, against the concept of the universality of technology. The other is the perspective of epistemology, as in what I said about social networks. Diversification is an imperative, not only from the perspective of the market but also from that of imagining the future.

What is the role of religion in today's world? If I remember correctly, you studied at a Catholic school as a young man. And is technology, in some way, a new form of or a substitute for religion? After Nietzsche's announcement of the death of God, we see that the Christian religion still exists, but Nietzsche is no longer here. The death of God is, for Nietzsche, a moment for overcoming the concept of the human, for inventing the idea of the “Übermensch.”

God is a transcendence that cannot be replaced by technology itself. However, fantasies about technology, such as the ideas of “homo deus,” the singularity, and the other terms we invoked earlier, can play that role.

Our knowledge is limited; many things are unknown to us. With the advance of science and technology, those things have come to seem even more mystical than before. The unknown takes the place of God; some continue to find it in religion, some in poetry, and some in art. For me, the question is the relation between technology and the unknown. That is the key theme of my studies in the book “Art and Cosmotechnics.”

Finally, are we living through an exhaustion of the paradigms for understanding the world? Is it necessary to build a new language, based on computer science, as Stephen Wolfram proposes, to explain nature and the world? Or do we still have the tools we need? The language for apprehending things in themselves, in their totality, is called metaphysics. In that sense, cybernetics, which understands the world through feedback loops and the organization of information, is not only metaphysics but also its completion, its realization.

The language of cybernetics is today made concrete in algorithms, and some people may be seduced by the idea that it will soon be possible to decipher the secrets of the universe, just as people aspired to do shortly after the discovery of DNA. And indeed, some biologists today use a linguistic metaphor to understand DNA, as if it were algorithms and information encoded through them.

However, we now live in a post-metaphysical world, one that comes after the idea of the death of God and after the end of metaphysics and the battles of the 20th century. We will need an openness to understand this world and to seek a place for ourselves in the cosmos.

I am not saying that understanding the world through science is unpromising, or that we should give up science or technology. That is in no way what I claim. Rather, no matter how advanced our knowledge becomes, we must always remind ourselves of our own finitude in the face of the world. Otherwise, we will only beat a retreat back into metaphysics.

Something like what Rainer Maria Rilke said in the “Duino Elegies”: “With all its eyes, the natural world sees the Open. Our gaze, however, has been turned around and, like a trap, lies hidden along the free path. What lies beyond, we sense only in the animal's expression; for from childhood on we have turned our gaze backward and lost the free space, ah, that deep space there in the animal's face.”

Artificial Intelligence Already Imitates Guimarães Rosa and May Change the Way We Think (Folha de S.Paulo)

Hermano Vianna, anthropologist, writes on the blog

22 August 2020

[summary] Astonished by the feats of text-generating technologies, which can even build on a sentence by Guimarães Rosa, the anthropologist analyzes the impacts of artificial intelligence, points out ethical dilemmas in its use, worries about growing dependence on software-producing countries, and hopes that the new practices will allow more diverse and collaborative ways of thinking to flourish in Brazil.

GPT-3 is the name of the new star in the pursuit of AI (artificial intelligence). It was released in May of this year by OpenAI, a company about to complete five years since its billion-dollar founding, financed by, among others, Elon Musk.

Until now, access to its already legendary giga-capacity for generating surprising texts on any subject has been the privilege of a few rich and powerful people. There are, however, fun shortcuts for us poor mortals: one of them is the game “AI Dungeon,” created by a Mormon student, which since July has been running on GPT-3 fuel.

The players' goal is to create works of literary fiction with the help of this AI model. The starting language is English, but I used Portuguese, and the little creature showed admirable agility in dodging my trap.

I was even more demanding. I did not just use Portuguese; I used Guimarães Rosa. I copied and pasted, from the first page of “Grande Sertão: Veredas”: “Alvejei mira em árvore, no quintal, no baixo do córrego.” “AI Dungeon,” which until that point had been speaking English, took the cue and continued like this: “Uma fogueira crepitante brinca e lambiça em torno de um lindo carvalho” (a crackling bonfire plays and “lambiça” around a beautiful oak).

Granted, Rosa would never have written that sentence. I ran a search: “crepitar” (to crackle) appears nowhere in “Grande Sertão: Veredas,” and oaks are not usually neighbors of buriti palms. Yet GPT-3 understood that it needed to switch languages to play with me and decided to take a risk: a bonfire is not out of place in my backyard, still less a playful one. And it did me the favor of confusing Rosa with James Joyce, inventing the verb “lambiçar,” which my spell checker does not recognize, perhaps to suggest an elaborate or subtly gluttonous licking.

I was astonished. It is not every day that I receive such a disconcerting reply. I ran another search, using Google's services: there is no record of the complete sentence that “AI Dungeon” proposed. It was a genuinely original creation. A “very creative” one.

(I tested Joyce too: when I inserted “Introibo ad altare Dei,” likewise sampled from the first page of “Ulysses,” the game was only slightly less surprising; it sent back the translation from Latin into English.)

Originality. Creativity. The combination of all this really does seem like the attribute of an intelligent being, one conscious of what it is doing or thinking.

From what I understand, since my own modest intelligence is not well trained in this subject, GPT-3, certainly the most muscular model for artificially generating text meant to have a head and a tail, has a very particular way of thinking, one I am unable to distinguish from what happens among our neurons: its method is statistical, probabilistic.

It is grounded in the analysis of an overwhelming quantity of text, nearly everything that exists on the internet, in several languages, including computer languages. Its simplest strategy, and I am certainly oversimplifying, is to identify which words tend to appear most frequently after others. Thus, in its replies, it guesses what in its "thinking" seem to be the most "probable" answers.
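That next-word strategy can be sketched in a few lines of Python. This is a toy illustration only, not GPT-3 itself (which is a neural network over subword tokens trained on vastly more data); the corpus here is invented for the example:

```python
from collections import Counter, defaultdict

# Count which word most often follows each word in a tiny corpus,
# then "guess" the statistically most probable continuation.
corpus = "the fire crackles and the fire dances and the oak burns".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent successor of `word` seen in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "fire" follows "the" twice, "oak" once
```

Scaled up from one preceding word to long contexts, and from raw counts to learned probabilities, this is the family of methods the text is describing: the model does not "know" anything; it bets on the likeliest continuation.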

Of course it doesn't "know" what it is talking about. Perhaps, in my Rosa test, had I written fish, a "beautiful shark" might have appeared in place of the oak; and that would not mean this AI deeply understands the fish-tree distinction.

But how deep must understanding go before it is recognized as genuinely intelligent? And isn't guessing, after all, a commonplace feature of the artifices of our own intelligence? Am I not guessing shamelessly right here, talking about what I neither master nor understand?

I am not writing this to try to define intelligence or consciousness; better to return to more concrete territory: probability. There is something unusual about a bonfire that plays. That association of ideas or words can't be all that common; but for "tree" to evoke "oak" points to machine-learning training that did not happen in Brazil.

Other trees, other Brazilian "pés de quê" ("foot-of-what", our way of naming trees), would be statistically more likely to sprout in our "national" memories on entering the vegetable kingdom. I am thinking, of course, of a well-worn theme in the AI debate: the "bias" inevitable in these models, a consequence of the data that fed their learning, no matter how deep the "deep learning" was.

The most prejudiced examples are well known, such as the photo-identification AI that classified Black people as gorillas, because nearly all the human beings it "saw" during training were white. A problem with the databases? We need to go "deeper".

Then I remember the first article signed by Kai-Fu Lee, an entrepreneur based in China, that I read in The New York Times. In summary: in the AI race, the US and China hold the top positions, far ahead of every other country. A few big companies will be the winners.

Each advance demands enormous resources, including conventional energy; consider the unsustainable electricity consumption required for GPT-3 to learn to "lambiçar". Many jobs will disappear. Everyone will need something like a "universal income". Where will the money come from?

Kai-Fu Lee's frightening answer (which I first read in a garbled Google Translate rendering, without my corrections; restored here to his original English): "So if most countries will not be able to tax ultra-profitable AI companies to subsidize their workers, what options will they have? I foresee only one: unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their AI software (China or the United States) to become, essentially, that country's economic dependent, taking in welfare subsidies in exchange for letting the 'parent' nation's AI companies continue to profit from the dependent country's users. Such economic arrangements would reshape today's geopolitical alliances."

Even through the machine translation's many errors, the conclusion was perfectly comprehensible: a new dependency theory. Is this post-colonialism, or cyber-colonialism, as humanity's inevitable destiny?

And that is without touching something central in the package to be negotiated: the colony will also submit to the set of "biases" of the AI "mother nation". Brace yourself: forests of oak, without buritis.

Recently, though before the GPT-3 "hype", the same Kai-Fu Lee made news by giving AI a B- for its performance during the pandemic. He spent his quarantine in Beijing. He says the couriers delivering his purchases were always robots, and, from what I saw in the 2019 season of Expresso Futuro, filmed in China by Ronaldo Lemos and company, I believe him.

He was disappointed, however, by machine learning's lack of protagonism in the development of vaccines and treatments. With my ill-prepared audacity, I would guess a similar grade, perhaps a C+, following the American-university bias.

I applauded, for instance, when IBM opened up Watson's services to organizations fighting the coronavirus. Or when giant companies such as Google and Amazon barred the use of their facial-recognition technologies after the anti-racist demonstrations around the world.

Meanwhile, smaller companies, with no less potent surveillance AIs, took advantage of the reduced competition to grow their client base. And we have seen how contact-tracing and contagion-tracking apps herald the totalitarian transparency of our every movement, through algorithms that have already made old notions of privacy obsolete.

All quite frightening, for anyone who defends democratic principles. Yet not even the most authoritarian state will be guaranteed control over its own secrets.

These problems are acknowledged across the community of AI developers. Many groups, such as The Partnership on AI, which ranges from OpenAI to the Electronic Frontier Foundation, have been dedicated for years to the debate over the ethics of artificial intelligence.

An extremely complex debate, full of dangerous blind alleys, as the trajectory of Mustafa Suleyman, one of the most fascinating personalities of the 21st century, demonstrates. He was one of the three founders of DeepMind, the British company, later acquired by Google, that created the famous AI that beat the world champion of Go, the board game invented in China more than 2,500 years ago.

The trio's biographies could inspire films or series. Demis Hassabis has a Greek-Cypriot father and a mother from Singapore; Shane Legg was born in the north of New Zealand; and Mustafa Suleyman is the son of a Syrian taxi driver who immigrated to London.

Suleyman's pre-DeepMind story is a curious one: while studying at the University of Oxford, he set up a telephone counseling service for the mental health of young Muslims. He later worked as a conflict-resolution consultant. In the AI world (today he handles "policy" at Google) he has never minced words. Look up his talks and interviews on YouTube: he has always prodded every wound, as if he were an outside critic, yet speaking from the most powerful center of all.

I am especially fond of his talk at the Royal Society, delivered in his post-punk style and introduced by Princess Anne. Even so, with all his lucid political awareness and ethical concerns that strike me as quite sincere, Mustafa Suleyman found himself caught up in a scandal involving the alleged unauthorized use of data from patients of the NHS (the British public health service) to develop apps meant to help monitor critically ill hospital patients.

DeepMind, Google and the NHS offered many explanations. It is an example of the kind of problem we will live with more and more, one that demands new regulatory frameworks to determine which algorithms may meddle with our lives, and, above all, who gets to understand what an algorithm can do and what the company that owns that algorithm can do.

One thing I have already learned from thinking about this type of problem: diversity matters not only in the databases used in machine-learning processes, but also in the ways each AI "thinks" and in the security systems that audit the algorithms shaping those thoughts.

That need has been best explored in experiments that bring AI developers together with artists. I follow with enormous interest the work of Kenric McDowell, who leads the engagement between artists and Google's machine-learning laboratories.

His most recent work bets on the possible existence of non-human intelligences and seeks collaboration among different kinds of intelligence and modes of thinking, including inspiration from the cosmotechnics of the Chinese philosopher Yuk Hui, who passed through Paraíba and Rio de Janeiro last year.

Along the same trail, I follow the evolving practice in art and robotics of Ken Goldberg, a professor at the University of California, Berkeley. In 2017 he published an article in the Wall Street Journal defending the idea that has become my current motto: forget the singularity, long live multiplicity.

It was also through Ken Goldberg that I learned what a random forest is: a machine-learning method that uses not just one algorithm but an entire Atlantic rainforest of algorithms, each preferably thinking in its own way, with decisions taken collectively, thereby seeking, among other advantages, to avoid "individual" biases.
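The collective-decision idea can be sketched with a handful of deliberately different toy classifiers. All names and thresholds below are invented for illustration; a real random forest trains many decision trees on random subsets of the data and features, but the voting principle is the same:

```python
from collections import Counter

# Three hypothetical "stumps", each judging tree-vs-shrub by a different
# feature, each with its own individual bias.
def stump_height(x): return "tree" if x["height_m"] > 3 else "shrub"
def stump_trunk(x):  return "tree" if x["trunk_cm"] > 10 else "shrub"
def stump_leaves(x): return "tree" if x["leaves"] > 1000 else "shrub"

forest = [stump_height, stump_trunk, stump_leaves]

def predict(forest, sample):
    # Each model votes; the majority decision dilutes any single bias.
    votes = Counter(model(sample) for model in forest)
    return votes.most_common(1)[0][0]

sample = {"height_m": 12, "trunk_cm": 5, "leaves": 50_000}
print(predict(forest, sample))  # two of the three stumps vote "tree"
```

The thin trunk misleads one "thinker", but the ensemble still gets it right, which is exactly the advantage the passage above attributes to a forest of algorithms over any single one.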

My desperate utopia for Brazil: that the "random forest" grow ever greener here. With the development of other AIs, or truly other AIs. Anthropophagous artificial intelligences. GPTs-n to infinity, able to think in the 200 Indigenous languages that exist/resist here. Chatbots that rap with a Pará tecnobrega accent, announcing the formulas to solve all of humanity's food problems.

We do not lack intelligence. Intelligence like that of the young engineer Marianne Linhares, who went straight from her undergraduate degree at the Federal University of Campina Grande to DeepMind in London.

In another possible world, she could have stayed here, collaborating with the machine-learning crowd at UFPB (and, via GitHub, with the whole world), perhaps inventing an AI that truly understands the literature of Guimarães Rosa. Or one that could answer the question in "Meu Tio o Iauaretê" ("My Uncle the Jaguar"), "do you know what the jaguar thinks?", by thinking like a jaguar. Good. Beautiful.

Lilia Schwarcz: Pandemic marks the end of the 20th century and exposes the limits of technology (UOL Universa)

Camila Brandalise and Andressa Rovani, April 9, 2020

One and a half million people infected around the world, a third of them in the last week. Eighty-seven thousand dead at a disconcerting speed. The end of travel. Millions of people forced to refit their routines to the limits of their homes. One hundred days ago, the world stopped.

On December 31, 2019, a notice from the Chinese government alerted the World Health Organization to cases of a pneumonia "of unknown origin" recorded in the south of the country. Still unnamed, the novel coronavirus would go on to reach 180 countries and territories. "It is incredible to reflect on how radically the world has changed in such a short period of time," says WHO Director-General Tedros Ghebreyesus.

For one of Brazil's leading historians, teachers in the future will need to devote a few classes to explaining what we are living through today, a moment she compares to the 1929 New York stock market crash. "The crash also seemed unimaginable," says Lilia Schwarcz, a professor at the University of São Paulo and at Princeton, in the US. "The class will be called: The Day the Earth Stood Still."

Schwarcz further suggests that the crisis caused by the spread of covid-19 marks the end of the 20th century, a period defined by technology. "We had great technological development, but now the pandemic is showing its limits," she says.

Below are excerpts from the interview, in which the historian compares the coronavirus to the Spanish flu of 1918, says that denialism about disease has always existed, and argues that great health crises have built national heroes, such as Oswaldo Cruz and Carlos Chagas, and reinforced faith in science.

It has been 100 days since the first case of coronavirus, in China, was reported to the World Health Organization. Can we say these 100 days have changed the world?

It is astonishing that such a tiny thing, minuscule, invisible, has the capacity to paralyze the planet. It is an astonishing experience to watch. I was teaching at Princeton [a university in the US], and it was striking to see the institutions closing one after another. It was something we knew only from the past, or from dystopias; it was closer to fantasy.

One never leaves a state of anomaly the same way one entered it. Crises of this kind close doors and open them. We are deprived of our routines, unable to see the people we love and sorely miss, unable to keep our commitments.

But it also opens doors: we are reflecting a little on whether this accelerated routine is really necessary, whether all those plane trips are necessary, whether everyone needs to leave home and return at the same hour. Whether we could be more flexible, less congested, with less pollution.

So perhaps it opens [an opportunity] to reflect on certain values, such as solidarity. Anyone who claims to know what will happen is mistaken; humanity is very stubborn. But I think we are living through a very singular situation, of another temporality, in a different kind of time. That may break down some barriers: we are living in a country of deep denialism. In Brazil we live a paradoxical situation, with a president who denies the pandemic.

But is the world, at this moment, a different one?

At this exact moment, as we speak, the world has changed. We who were so unerring with our draconian schedules: suddenly someone invites me to an event in September and I say, "Look, I don't know if I can go, if I'll be able to confirm." That humanization of our schedules, of our time, I think has already changed, yes.

Staying home means reinventing your routine, discovering yourself as a stranger [to the new routine]. I know myself as a person who wakes up in the morning, goes running, goes to work, then to the next thing, and comes home exhausted. Now it is me having to invent myself in a different temporality, one that looks like vacation but isn't. It is an inner movement of rediscovery.

I insist that not everyone goes through this. [The French philosopher] Montaigne said: "Humanity is various." Not everyone is going through this in the same way; it depends on race, on class; there are differences, it varies a great deal.

And regarding the social roles of men and women?

We women already have a knowledge distinct from men's in the notion of care, in the home. I think the change will be greater for men, who are not used to the day-to-day of the household, to cooking, to tidying. The idea of care has been an eminently female function.

And I am very interested in seeing how men will deal with this idea of staying home and having to care as well. It is a very singular experience we are living.

Some argue that the 20th century lacked a "milestone" to mark its end and that the first decades of the 21st were still dealing with the previous century's legacy. Do you agree? Can this pandemic serve as that dividing line?

Yes. [The British historian Eric] Hobsbawm said that the long 19th century only ended after the First World War [1914-1918]. We use the marker of time: the century turned, everything changed. But it doesn't work that way; it is human experience that constructs time. He was right: the long 19th century ended with the First World War, with deaths, with the experience of mourning, but also with what it revealed about our destructive capacity.

I think this pandemic marks the end of the 20th century, which was the century of technology. We had great technological development, but now the pandemic is showing its limits.

It shows that technology cannot contain a pandemic like this one, nor keep your routine going in a situation like this. The great word of the late 19th century was progress. Euclides da Cunha said: "We are condemned to progress." It seemed almost natural, culminating in that society which liked to call itself civilization.

What did the First World War show? That [the world] was not as civilized as it imagined. People fought face to face. And that exposed, at that moment, the limits of the notions of civilization and evolution, perhaps the great myth of the late 19th and early 20th centuries. And we are shifting limits again. We invested so much in technology, but not in the health and prevention systems that could have contained this great invisible enemy.

You have pointed out that the Spanish flu killed far more people than the two World Wars combined and that, just as in Brazil today, there was a great deal of denialism and slowness in decision-making. Did we not learn that lesson? Why is it so hard not to repeat mistakes?

Disease, whatever it is, produces a feeling of fear and insecurity. Faced with this kind of health crisis, our first reaction is to say: "No, not here, it won't get in here." Before it becomes a pandemic, the deaths are distant; that "not here" discourse is very clear, very "natural", with all the scare quotes one can add, because the state we want is health. But we are also a society that forgets its own body; the body exists to put clothes on, to comb one's hair, as if it were not there at all.

Accepting it takes time; denialism has always existed. At the start of the century, in 1903, life expectancy was 33 years. Brazil was called a great hospital and had every kind of disease: leprosy, syphilis, tuberculosis, bubonic plague, yellow fever. When [President] Rodrigues Alves took office and appointed a sanitary physician to fight yellow fever, bubonic plague and smallpox, they began by killing rats and mosquitoes and then moved on to vaccinating against smallpox.

But at the time the population did not understand, was not informed, and reacted. The same president who appointed Oswaldo Cruz was in power in the context of the Spanish flu. Oswaldo Cruz had already died, so he appointed Cruz's heir, Carlos Chagas. [With the Spanish flu] The Brazilian authorities already knew what was happening, and even so they took no action. The flu arrived aboard ships that docked in Brazil, and then it exploded. But the attitude was always the same: "Not here, this is a country with a hot climate, not a country of old people."

How can anyone speak of lower risk in Brazil because its population is younger, when it is far more unequal than the European countries already suffering? Denialism creates scapegoats; it is a recurring pattern.

But why don't we learn from the mistakes of the past?

Because denialism denies history too. It says: "In 1918 we didn't have the conditions we have now, we didn't have the technology." So history itself can be used in a denialist way, denying the past and claiming that what happened then cannot happen now.

When we talk about war, what happens? Why does every country keep an army, and reserves? Because, in the event of a war, we must have an army; there is an entire reserve population in case war comes.

If the Brazilian state took the war metaphor seriously, what should already have been done? A structure for fighting wars on the health front. And this is not only Brazil: states do not build it; there is no system for preventing pandemics.

A disease only exists when people agree that it exists; the population has to be taught. Without that leadership, people do not construct the disease and go on denying it.

The reactions to the Spanish flu were very similar to today's: few people in the streets, those who went out wore masks, churches closed, theaters washed with disinfectant. Humanity has yet to invent another way of dealing with a pandemic other than waiting for the drug or the vaccine.

We have grown used to the discourse that the elderly will die almost inevitably if infected. What does that reveal about the way we treat older people?

It shows that we are a society that prizes youth, and what does it do with history and with the elderly? It turns everything into junk. I personally do not think youth is a quality. It is a way of being in the world. You can be young in old age, or be a young old person. This construction of ours around youth does a great deal of harm.

And there is the question each of us must ask: does anyone have the right to say who may die and who may not? If we take better care of vulnerable populations, and that includes the elderly, we will be taking better care of ourselves, not only symbolically but practically.

What does refusing to deal with old age mean? It is our way of not dealing with death; we do not know how to speak of mourning. We do not see the president utter a single word of solidarity with the families of those who died; it is as if he refused to speak of death.

We keep stretching our timeline, people are not allowed to age, and at the same time we are destroying our subjective capacity. Old age is seen only as a moment of decrepitude. These are not values esteemed by the population, or by our century.

It has to do with technology too: the old person is the one who cannot handle it. So isolate him. And let him wait for death.

Are miracle cures also part of the history of pandemics?

We all always wait for a miracle. Our arrogance is more or less this: we think we are a very rational society, guided by technology, yet we all forever wait for a miracle.

Everyone wants to hear the president say: "I have a drug that will end all of this." What magical thinking is that? Will the crisis change the world? It depends on how far people step out of magical thinking and reflect more on their castles of certainties.

Does the pandemic bring any change to the history of women?

The question of women is also a question of gender and social class. Middle- and upper-class women have many resources and can handle work more freely. It is very different for poor, Black women, who experience this situation even more acutely. Many nurses are Black and brown. The nurse's position is also one of care, for patients, even for doctors; she performs in the health system the same role she holds inside her own home.

And these women are vulnerable both because many of them are on the hospital front lines, without the necessary protection, and because they are on the front lines of their own homes.

The 20th and 21st centuries belong to the feminist revolution, as is already becoming apparent. Women will not turn back. We will have a reality marked by a new position for women.

I hope people use this moment to rethink their certainties, and among the many certainties [that need rethinking] is this often invisible question of gender: women occupy the positions of care without being seen.

How will a history teacher explain the 2020 pandemic 100 years from now?

They will explain it the way the New York stock market crash is explained today. This pandemic will deserve a few classes. The crash also seemed unimaginable, and we are living through situations that are anomalies in that sense, because they are unimaginable.

The history teacher will have to deal with the fact that the pandemic may mark the end of one century and the beginning of another, and also with how it managed to stop a world of such activity, such turnover, such speed. We accelerated enormously, and now we have had to stop.

The title of the class will be: "The Day the Earth Stood Still."

The threat of the pandemic has also amplified the voices of those trying to call attention to the precarious housing and health conditions of a significant share of Brazilians. Is the crisis also an opportunity for social change?

Brazil keeps climbing the rankings of social inequality; there are social classes with vastly different access to the benefits of this much-proclaimed civilization. Brazil is the 6th most unequal country in the world. We tend to deny inequality too. I do not think it will be worse for the lower classes than for the elderly; both are highly vulnerable groups [at risk of severe illness].

During the Spanish flu, the hardest-hit groups were the poor populations of the suburbs. The victims were between 20 and 40 years old, but many more died in the name of civilization, because poverty had been expelled [from the center]. And epidemics are pitiless. When people say "Stay home, keep your distance," we have to reflect on the conditions in which those populations live.

In a Brazil so multiple, with such different social conditions, the poorest will be the populations hit hardest. Brazil also has the world's third-largest prison population. What will happen if the pandemic enters the prisons keeps me up at night. If it hasn't already, without our knowing. If that happens, when it reaches the poorest, we will have to confront how perverse the correlation between pandemic and social inequality is.

In Brazil, where health care is split between private and public, higher-income people never even consider using the public system. Disease does this: it levels, because it strikes across social classes.

Can we already glimpse any learning from the current crisis?

I think so. Several countries are already starting to think about the reserve army, about how to build a structure that not only reacts to a pandemic but anticipates it.

The problem is that in Brazil we live under a government that does not believe in science. Let's see if we learn once and for all to value what science produces. At times like this it becomes clearer: the way out will come from science, with the vaccine or drug that brings the pandemic under control.

I would not be surprised if our next presidents were doctors. What we are learning in country after country is the importance of the Ministry of Health, and of having real specialists in the ministries, counting not just on a politician but on a specialist politician.

What major political change can we already say the pandemic has brought to Brazil?

It is happening now. The president has been eclipsed by his Health Minister. Mandetta was going to be fired, but the president backed down under pressure. You can already see these figures rising, as happened at the time of the Vaccine Revolt [1904], when the great hero of the moment was Oswaldo Cruz, and during the Spanish flu, when Carlos Chagas became a great national hero.

I hope those people, if they reach such positions, do not use them to secure more power; I very much hope they use the position generously.

Politics is like cachaça: whoever has had a taste never gives it up. So let us not lower our civic vigilance over physician-politicians. Mandetta, who is filling his post well, was deeply ideological, with a career tied to private health insurance, and, out of ideology, he ended the Mais Médicos program.

People used to look at us academics and say: "You are parasites." I hope people reflect and understand that the world of production has different temporalities.

One thing is the time of industry, of technology, a matter of seconds. Another is the time of the scientist, who uses a more extended temporality to discover new ways out. People will begin to understand, as they did at the time of the Spanish flu, why Carlos Chagas became more popular than singers and football players; the cartoons of the day said as much.

Science, once the villain, is today the great utopia.

An anthropologist and historian, Lilia Schwarcz is a full professor at the University of São Paulo and a visiting professor at Princeton University, in the US. She is the author of a series of books, among them "Sobre o autoritarismo brasileiro", "O espetáculo das raças" and "Brasil: uma biografia". She is an editor at Companhia das Letras, a columnist for the journal Nexo, and adjunct curator of histories at Masp.

Is there a limit to technological advances? (OESP)

May 16, 2016 | 3:00 a.m.

It is becoming popular among politicians and governments to claim that the stagnation of the world economy is due to the end of the "golden century" of scientific and technological innovation. This "golden century" is usually defined as the period from 1870 to 1970, in which the foundations of the technological era we live in were laid.

Indeed, that period saw great advances in our knowledge, ranging from Darwin's theory of evolution to the discovery of the laws of electromagnetism, which led to large-scale electricity production and to telecommunications, including radio and television, with the resulting benefits for human well-being. Other advances, in medicine, such as vaccines and antibiotics, extended the average human lifespan. The discovery and use of oil and natural gas also fall within this period.

Many argue that in no other one-century span, across the 10,000 years of human history, was so much progress achieved. That view of history, however, can be and has been questioned. In the preceding century, from 1770 to 1870, there was also great progress, driven by the development of coal-burning engines, which made locomotives possible and launched the Industrial Revolution.

Even so, the nostalgic believe that the "golden period" of innovation has run dry, and governments accordingly adopt measures of a purely economic character to revive "progress": subsidies to specific sectors, tax cuts and social policies to reduce inequality, among others, while neglecting support for science and technology.

Some of these policies might help, but they do not touch the fundamental aspect of the problem, which is keeping alive the advance of science and technology, which solved problems in the past and can help solve problems in the future.

To analyze the question properly, remember that it is not the number of new discoveries that guarantees their relevance. The advance of technology somewhat resembles what sometimes happens in the natural selection of living beings: some species are so well adapted to their environment that they stop "evolving". That is the case of the beetles that existed at the height of ancient Egypt, 5,000 years ago, and are still there today; or of "fossil" fish species that have evolved little in millions of years.

Other examples are products of modern technology, such as the magnificent DC-3 airplanes, produced more than 50 years ago and still carrying a significant share of world air traffic.

Even in more sophisticated areas, such as computing, this seems to be happening. The basis for progress there was the "miniaturization" of electronic chips, which carry the transistors. In 1971 the chips produced by Intel (the leading company in the field) had 2,300 transistors on a die of 12 square millimeters. Today's chips are only slightly larger but carry 5 billion transistors. That is what made possible personal computers, mobile phones and countless other products. And that is why fixed-line telephony is being abandoned and communication via Skype is practically free, revolutionizing the world of communications.
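Those two figures imply a remarkably steady pace. Taking roughly 45 years between the 2,300-transistor chip of 1971 and today's 5-billion-transistor chips (using 2016, the article's publication year, as the end point is an assumption), the doubling time works out to about two years, which is Moore's law:

```python
import math

# Figures from the text: 2,300 transistors in 1971, 5 billion in ~2016
# (the end year is assumed from the article's date).
t0, n0 = 1971, 2_300
t1, n1 = 2016, 5_000_000_000

doublings = math.log2(n1 / n0)               # about 21 doublings
years_per_doubling = (t1 - t0) / doublings   # about 2.1 years each

print(f"{doublings:.1f} doublings, one every {years_per_doubling:.1f} years")
```

Twenty-one consecutive doublings over four and a half decades is the kind of sustained exponential growth no other technology has matched, which is why the prospect of its end weighs so heavily on the sector.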

There are now indications that this miniaturization has reached its limits, which causes a certain depression among the "priests" of the sector. That is a mistaken view. The level of success has been such that further progress in that direction is genuinely unnecessary, which is what happened with countless living beings in the past.

What does seem to be the solution to the problems of long-term economic growth is the advance of technology in other areas that have not received the attention they deserve: new materials, artificial intelligence, industrial robots, genetic engineering, disease prevention and, above all, understanding the human brain, the most sophisticated product of the evolution of life on Earth.

Understanding how a combination of atoms and molecules can generate an organ as creative as the brain, capable of possessing consciousness and the creativity to compose symphonies like Beethoven's, and at the same time of promoting the extermination of millions of human beings, will probably be the most extraordinary advance Homo sapiens can achieve.

Advances in these areas could create a wave of innovation and material progress superior in quantity and quality to what the "golden century" produced. Moreover, we now face a new, global problem: environmental degradation, partly a result of the very success of 20th-century technology. The task of reducing the emissions of gases that cause global warming (the result of burning fossil fuels) will alone be herculean.

Antes disso, e num plano muito mais pedestre, os avanços que estão sendo feitos na melhoria da eficiência no uso de recursos naturais é extraordinário e não tem tido o crédito e o reconhecimento que merecem.

To give just one example: in 1950 Americans spent, on average, 30% of their income on food; by 2013 that share had fallen to 10%. Spending on energy has also fallen, thanks to the improved efficiency of automobiles and of other uses such as lighting and heating, which, incidentally, explains why the price of a barrel of oil fell from US$ 150 to less than US$ 30: there is simply too much oil in the world, just as there is idle capacity in steel and cement.

One example of a country following this path is Japan, whose economy is not growing much, but whose population enjoys a high standard of living and continues to benefit gradually from the advances of modern technology.

*José Goldemberg is professor emeritus at the University of São Paulo (USP) and president of the São Paulo Research Foundation (Fapesp)

If The UAE Builds A Mountain Will It Actually Bring More Rain? (Vocativ)

You’re not the only one who thinks constructing a rain-inducing mountain in the desert is a bonkers idea

May 03, 2016 at 6:22 PM ET

Photo Illustration: R. A. Di ISO

The United Arab Emirates wants to build a mountain so the nation can control the weather—but some experts are skeptical about the effectiveness of this project, which may sound more like a James Bond villain’s diabolical plan than a solution to drought.

The actual construction of a mountain isn’t beyond the engineering prowess of the UAE. The small country on the Arabian Peninsula has pulled off grandiose environmental projects before, like the artificial Palm Islands off the coast of Dubai and an indoor ski hill in the Mall of the Emirates. But the scientific purpose of the mountain is questionable.

The UAE’s National Center for Meteorology and Seismology (NCMS) is currently collaborating with the U.S.-based University Corporation for Atmospheric Research (UCAR) for the first planning phase of the ambitious project, according to Arabian Business. The UAE government gave the two groups $400,000 in funding to determine whether they can bring more rain to the region by constructing a mountain that will foster better cloud-seeding.

Last week the NCMS revealed that the UAE spent $588,000 on cloud-seeding in 2015. Throughout the year, 186 flights dispersed potassium chloride, sodium chloride and magnesium into clouds—a process that can trigger precipitation. Now, the UAE is hoping they can enhance the chemical process by forcing air up around the artificial mountain, creating clouds that can be seeded more easily and efficiently.

“What we are looking at is basically evaluating the effects on weather through the type of mountain, how high it should be and how the slopes should be,” NCAR lead researcher Roelof Bruintjes told Arabian Business. “We will have a report of the first phase this summer as an initial step.”

But some scientists don’t expect NCAR’s research will lead to a rain-inducing alp. “I really doubt that it would work,” Raymond Pierrehumbert, a professor of physics at the University of Oxford, told Vocativ. “You’d need to build a long ridge, not just a cone, otherwise the air would just go around. Even if you could do that, mountains cause local enhanced rain on the upslope side, but not much persistent cloud downwind, and if you need cloud seeding to get even the upslope rain, it’s really unlikely to work as there is very little evidence that cloud seeding produces much rainfall.”

Pierrehumbert, who specializes in geophysics and climate change, believes the regional environment would make the project especially difficult. “UAE is a desert because of the wind patterns arising from global atmospheric circulations, and any mountain they build is not going to alter those,” he said. 

Pierrehumbert concedes that NCAR is a respectable organization that will be able to use the “small amount of money to research the problem.” He thinks some good scientific study will come of the effort—perhaps helping to determine why a hot, humid area bordered by the ocean receives so little rainfall.

But he believes the minimal sum should go into another project: “They’d be way better off putting the money into solar-powered desalination plants.”

If the project doesn’t work out, at least wealthy Emiratis have a 125,000-square-foot indoor snow park to look forward to in 2018.

Hito Steyerl | Politics of Post-Representation (Dis Blog)

[Accessed Nov 23, 2015]

In conversation with Marvin Jordan

From the militarization of social media to the corporatization of the art world, Hito Steyerl’s writings represent some of the most influential bodies of work in contemporary cultural criticism today. As a documentary filmmaker, she has created multiple works addressing the widespread proliferation of images in contemporary media, deepening her engagement with the technological conditions of globalization. Steyerl’s work has been exhibited in numerous solo and group exhibitions including documenta 12, Taipei Biennial 2010, and 7th Shanghai Biennial. She currently teaches New Media Art at Berlin University of the Arts.

Hito Steyerl, How Not To Be Seen: A Fucking Didactic Educational .MOV File (2013)


Marvin Jordan I’d like to open our dialogue by acknowledging the central theme for which your work is well known — broadly speaking, the socio-technological conditions of visual culture — and move toward specific concepts that underlie your research (representation, identification, the relationship between art and capital, etc). In your essay titled “Is a Museum a Factory?” you describe a kind of ‘political economy’ of seeing that is structured in contemporary art spaces, and you emphasize that a social imbalance — an exploitation of affective labor — takes place between the projection of cinematic art and its audience. This analysis leads you to coin the term “post-representational” in service of experimenting with new modes of politics and aesthetics. What are the shortcomings of thinking in “representational” terms today, and what can we hope to gain from transitioning to a “post-representational” paradigm of art practices, if we haven’t arrived there already?

Hito Steyerl Let me give you one example. A while ago I met an extremely interesting developer in Holland. He was working on smart phone camera technology. A representational mode of thinking photography is: there is something out there and it will be represented by means of optical technology ideally via indexical link. But the technology for the phone camera is quite different. As the lenses are tiny and basically crap, about half of the data captured by the sensor are noise. The trick is to create the algorithm to clean the picture from the noise, or rather to define the picture from within noise. But how does the camera know this? Very simple. It scans all other pictures stored on the phone or on your social media networks and sifts through your contacts. It looks through the pictures you already made, or those that are networked to you and tries to match faces and shapes. In short: it creates the picture based on earlier pictures, on your/its memory. It does not only know what you saw but also what you might like to see based on your previous choices. In other words, it speculates on your preferences and offers an interpretation of data based on affinities to other data. The link to the thing in front of the lens is still there, but there are also links to past pictures that help create the picture. You don’t really photograph the present, as the past is woven into it.

The result might be a picture that never existed in reality, but that the phone thinks you might like to see. It is a bet, a gamble, some combination between repeating those things you have already seen and coming up with new versions of these, a mixture of conservatism and fabulation. The paradigm of representation stands to the present condition as traditional lens-based photography does to an algorithmic, networked photography that works with probabilities and bets on inertia. Consequently, it makes seeing unforeseen things more difficult. The noise will increase and random interpretation too. We might think that the phone sees what we want, but actually we will see what the phone thinks it knows about us. A complicated relationship — like a very neurotic marriage. I haven’t even mentioned external interference into what your phone is recording. All sorts of applications are able to remotely switch your camera on or off: companies, governments, the military. It could be disabled for whole regions. One could, for example, disable recording functions close to military installations, or conversely, live broadcast whatever you are up to. Similarly, the phone might be programmed to auto-pixellate secret or sexual content. It might be fitted with a so-called dick algorithm to screen out NSFW content or auto-modify pubic hair, stretch or omit bodies, exchange or collage context or insert AR advertisement and pop-up windows or live feeds. Now let’s apply this shift to the question of representative politics or democracy. The representational paradigm assumes that you vote for someone who will represent you. Thus the interests of the population will be proportionally represented. But current democracies work rather like smartphone photography, algorithmically clearing the noise and boosting some data over others. It is a system in which the unforeseen has a hard time happening because it is not yet in the database.
It is about what to define as noise — something Jacques Rancière has defined as the crucial act in separating political subjects from domestic slaves, women and workers. Now this act is hardwired into technology, but instead of the traditional division of people and rabble, the results are post-representative militias, brands, customer loyalty schemes, open source insurgents and tumblrs.

Additionally, Rancière’s democratic solution (there is no noise, it is all speech; everyone has to be seen and heard) has been realized online as some sort of meta-noise in which everyone is monologuing incessantly, and no one is listening. Aesthetically, one might describe this condition as opacity in broad daylight: you could see anything, but what exactly, and why, is quite unclear. There are a lot of brightly lit glossy surfaces, yet they don’t reveal anything but themselves as surface. Whatever there is — it’s all there to see, but in the form of an incomprehensible, Kafkaesque glossiness, written in extraterrestrial code, perhaps subject to secret legislation. It certainly expresses something: a format, a protocol or executive order, but it effectively obfuscates its meaning. This is a far cry from a situation in which something — an image, a person, a notion — stood in for another and presumably acted in its interest. Today it stands in, but its relation to whatever it stands in for is cryptic, shiny, unstable; the link flickers on and off. Art could relish this shiny instability — it does already. It could also be less baffled and mesmerised, and see the gloss for what it mostly is: the not-so-discreet, consumer-friendly veneer of new and old oligarchies and plutotechnocracies.

MJ In your insightful essay, “The Spam of the Earth: Withdrawal from Representation”, you extend your critique of representation by focusing on an irreducible excess at the core of image spam, a residue of unattainability, or the “dark matter” of which it’s composed. It seems as though an unintelligible horizon circumscribes image spam by image spam itself, a force of un-identifiability, which you detect by saying that it is “an accurate portrayal of what humanity is actually not… a negative image.” Do you think this vacuous core of image spam — a distinctly negative property — serves as an adequate ground for a general theory of representation today? How do you see today’s visual culture affecting people’s behavior toward identification with images?

HS Think of Twitter bots, for example. Bots are entities supposed to be mistaken for humans on social media web sites. But they have become formidable political armies too — brilliant examples of how representative politics have mutated nowadays. Bot armies distort discussion on Twitter hashtags by spamming them with advertisement, tourist pictures or whatever. Bot armies have been active in Mexico, Syria, Russia and Turkey, where most political parties, above all the ruling AKP, are said to control 18,000 fake Twitter accounts using photos of Robbie Williams, Megan Fox and gay porn stars. A recent article revealed that, “in order to appear authentic, the accounts don’t just tweet out AKP hashtags; they also quote philosophers such as Thomas Hobbes and movies like PS: I Love You.” It is ever more difficult to identify bots – partly because humans are being paid to enter CAPTCHAs on their behalf (1,000 CAPTCHAs earn 50 US cents). So what is a bot army? And how, and whom, does it represent, if anyone? Who is an AKP bot that wears the face of a gay porn star and quotes Hobbes’ Leviathan — extolling the need to transform the rule of militias into statehood in order to escape the war of everyone against everyone else? Bot armies are a contemporary vox pop, the voice of the people, the voice of what the people are today. A bot army can be a Facebook militia, your low-cost personalized mob, your digital mercenaries. Imagine your photo being used for one of these bots. That is the moment when your picture becomes quite autonomous, active, even militant. Bot armies are celebrity militias, wildly jump-cutting between glamour, sectarianism, porn, corruption and post-Baath Party ideology. Think of the meaning of the term “affirmative action” after Twitter bots and like farms! What does it represent?

MJ You have provided a compelling account of the depersonalization of the status of the image: a new process of de-identification that favors materialist participation in the circulation of images today.  Within the contemporary technological landscape, you write that “if identification is to go anywhere, it has to be with this material aspect of the image, with the image as thing, not as representation. And then it perhaps ceases to be identification, and instead becomes participation.” How does this shift from personal identification to material circulation — that is, to cybernetic participation — affect your notion of representation? If an image is merely “a thing like you and me,” does this amount to saying that identity is no more, no less than a .jpeg file?

HS Social media makes the shift from representation to participation very clear: people participate in the launch and life span of images, and indeed their life span, spread and potential are defined by participation. Think of the image not as a surface but as all the tiny light impulses running through fiber at any one point in time. Some images will look like deep-sea swarms, some like cities from space, some are utter darkness. We could see the energy imparted to images by capital or quantified participation very literally; we could probably measure their popular energy in lumen. By partaking in circulation, people participate in this energy and create it.
What this means is a different question though — by now this type of circulation seems a little like the petting zoo of plutotechnocracies. It’s where kids are allowed to make a mess — but just a little one — and if anyone organizes serious dissent, the seemingly anarchic sphere of circulation quickly reveals itself as a pedantic police apparatus aggregating relational metadata. It turns out to be an almost Althusserian ISA (Internet State Apparatus), hardwired behind a surface of ‘kawaii’ apps and online malls. As to identity, Heartbleed and more deliberate governmental hacking exploits certainly showed that identity goes far beyond a relationship with images: it entails a set of private keys, passwords, etc., that can be expropriated and detourned. More generally, identity is the name of the battlefield over your code — be it genetic, informational, pictorial. It is also an option that might provide protection if you fall beyond any sort of modernist infrastructure. It might offer sustenance, food banks, medical service, where common services either fail or don’t exist. If the Hezbollah paradigm is so successful it is because it provides an infrastructure to go with the Twitter handle, and as long as there is no alternative many people need this kind of container for material survival. Huge religious and quasi-religious structures have sprung up in recent decades to take up the tasks abandoned by states, providing protection and survival in a reversal of the move described in Leviathan. Identity happens when the Leviathan falls apart and nothing is left of the commons but a set of policed relational metadata, Emoji and hijacked hashtags. This is the reason why the gay AKP pornstar bots are desperately quoting Hobbes’ book: they are already sick of the war of Robbie Williams (Israel Defense Forces) against Robbie Williams (Electronic Syrian Army) against Robbie Williams (PRI/AAP) and are hoping for just any entity to organize day care and affordable dentistry.


But beyond all the portentous vocabulary relating to identity, I believe that a widespread standard of the contemporary condition is exhaustion. The interesting thing about Heartbleed — to come back to one of the current threats to identity (as privacy) — is that it is produced by exhaustion and not effort. It is a bug introduced by open source developers not being paid for something that is used by software giants worldwide. Nor were there apparently enough resources to audit the code in the big corporations that just copy-pasted it into their applications and passed on the bug, fully relying on free volunteer labour to produce their proprietary products. Heartbleed records exhaustion by trying to stay true to an ethics of commonality and exchange that has long since been exploited and privatized. So, that exhaustion found its way back into systems. For many people and for many reasons — and on many levels — identity is just that: shared exhaustion.

MJ This is an opportune moment to address the labor conditions of social media practice in the context of the art space. You write that “an art space is a factory, which is simultaneously a supermarket — a casino and a place of worship whose reproductive work is performed by cleaning ladies and cellphone-video bloggers alike.” Incidentally, DIS launched a website called ArtSelfie just over a year ago, which encourages social media users to participate quite literally in “cellphone-video blogging” by aggregating their Instagram #artselfies in a separately integrated web archive. Given our uncanny coincidence, how can we grasp the relationship between social media blogging and the possibility of participatory co-curating on equal terms? Is there an irreconcilable antagonism between exploited affective labor and a genuinely networked art practice? Or can we move beyond — to use a phrase of yours — a museum crowd “struggling between passivity and overstimulation?”

HS I wrote this in relation to something my friend Carles Guerra noticed as early as 2009: big museums like the Tate were actively expanding their online marketing tools, encouraging people to basically build the museum experience for them by sharing, etc. It was clear to us that audience participation on this level was a tool of extraction and outsourcing, following a logic that has turned online consumers into involuntary data providers overall. As in the previous example – Heartbleed – the paradigm of participation and generous contribution towards a commons tilts quickly into an asymmetrical relation, where only a minority of participants benefits from everyone’s input, the digital 1 percent reaping the attention value generated by the 99 percent rest.

Brian Kuan Wood put it very beautifully recently: Love is debt, an economy of love and sharing is what you end up with when left to your own devices. However, an economy based on love ends up being an economy of exhaustion – after all, love is utterly exhausting — of deregulation, extraction and lawlessness. And I don’t even want to mention likes, notes and shares, which are the child-friendly, sanitized versions of affect as currency.
All is fair in love and war. It doesn’t mean that love isn’t true or passionate, but just that love is usually uneven, utterly unfair and asymmetric, just as capital tends to be distributed nowadays. It would be great to have a little bit less love, a little more infrastructure.

MJ Long before Edward Snowden’s NSA revelations reshaped our discussions of mass surveillance, you wrote that “social media and cell-phone cameras have created a zone of mutual mass-surveillance, which adds to the ubiquitous urban networks of control,” underscoring the voluntary, localized, and bottom-up mutuality intrinsic to contemporary systems of control. You go on to say that “hegemony is increasingly internalized, along with the pressure to conform and perform, as is the pressure to represent and be represented.” But now mass government surveillance is common knowledge on a global scale — ‘externalized’, if you will — while social media representation practices remain as revealing as they were before. Do these recent developments, as well as the lack of change in social media behavior, contradict or reinforce your previous statements? In other words, how do you react to the irony that, in the same year as the unprecedented NSA revelations, “selfie” was deemed word of the year by Oxford Dictionaries?

HS Haha — good question!

Essentially I think it makes sense to compare our moment with the end of the twenties in the Soviet Union, when euphoria about electrification, NEP (New Economic Policy), and montage gives way to bureaucracy, secret directives and paranoia. Today this corresponds to the sheer exhilaration of having a World Wide Web being replaced by the drudgery of corporate apps, waterboarding, and “normcore”. I am not trying to say that Stalinism might happen again – this would be plain silly – but trying to acknowledge emerging authoritarian paradigms, some forms of algorithmic consensual governance techniques developed within neoliberal authoritarianism, heavily relying on conformism, “family” values and positive feedback, and backed up by all-out torture and secret legislation if necessary. On the other hand things are also falling apart into uncontrollable love. One also has to remember that people did really love Stalin. People love algorithmic governance too, if it comes with watching unlimited amounts of Game of Thrones. But anyone slightly interested in digital politics and technology is by now acquiring at least basic skills in disappearance and subterfuge.

Hito Steyerl, How Not To Be Seen: A Fucking Didactic Educational .MOV File (2013)


MJ In “Politics of Art: Contemporary Art and the Transition to Post-Democracy,” you point out that the contemporary art industry “sustains itself on the time and energy of unpaid interns and self-exploiting actors on pretty much every level and in almost every function,” while maintaining that “we have to face up to the fact that there is no automatically available road to resistance and organization for artistic labor.” Bourdieu theorized qualitatively different dynamics in the composition of cultural capital vs. that of economic capital, arguing that the former is constituted by the struggle for distinction, whose value is irreducible to financial compensation. This basically translates to: everyone wants a piece of the art-historical pie, and is willing to go through economic self-humiliation in the process. If striving for distinction is antithetical to solidarity, do you see a possibility of reconciling it with collective political empowerment on behalf of those economically exploited by the contemporary art industry?

HS In Art and Money, William Goetzmann, Luc Renneboog, and Christophe Spaenjers conclude that income inequality correlates with art prices. The bigger the difference between top income and no income, the higher the prices paid for some artworks. This means that the art market will benefit not only if fewer people have more money but also if more people have no money. It also means that increasing the number of zero incomes is likely, especially under current circumstances, to raise the price of some artworks. The poorer many people are (and the richer a few), the better the art market does; the more unpaid interns, the more expensive the art. But the art market itself may be following a similar pattern of inequality, basically creating a divide between the 0.01 percent (if not fewer) of artworks that are able to concentrate the bulk of sales and the 99.99 percent rest. There is no short-term solution for this feedback loop, except of course not to accept this situation, individually or preferably collectively, on all levels of the industry — and that includes employers. There is a long-term benefit to this, not only for interns and artists but for everyone. Cultural industries that are too exclusively profit-oriented lose their appeal. If you want exciting things to happen you need a bunch of young and inspiring people creating a dynamic by doing risky, messy and confusing things. If they cannot afford to do this, they will eventually do it somewhere else. There needs to be space and resources for experimentation, even failure; otherwise things go stale. If these people move on to more accommodating sectors, the art sector will mentally shut down even more and become somewhat North Korean in its outlook — just like contemporary blockbuster CGI industries.
Let me explain: there is a managerial sleekness and awe-inspiring military perfection to every pixel in these productions, as in North Korean pixel parades, where thousands of soldiers wave color posters to form ever-new pixel patterns. The result is quite something, but this something is definitely not inspiring or exciting. If the art world keeps going down the road of raising art prices via the starvation of its workers – and there is no reason to believe it will not continue to do so – it will become the Disney version of Kim Jong Un’s pixel parades. 12K starving interns waving pixels for giant CGI renderings of Marina Abramovic! Imagine the price it will fetch!


Hunter chimpanzees offer clues about the first humans (El País)

Spear-wielding primates may shed light on the origin of human societies

 12 MAY 2015 – 18:14 BRT

An old chimpanzee drinks water at a pond in Fongoli, Senegal. / FRANS LANTING

The hot Senegalese savanna is home to the only known group of chimpanzees that uses spears to hunt the animals they feed on. The occasional chimpanzee group has been seen carrying tools to capture small mammals, but these, in the Fongoli community, hunt regularly using sharpened branches. This way of obtaining food has become an established cultural practice for this group of chimpanzees.

Beyond this technological innovation, Fongoli also displays a social novelty that sets these chimpanzees apart from the others studied in Africa: there is more tolerance, greater parity between the sexes in hunting, and the most powerfully built males do not so frequently use their strength to override the interests of the others. For the researchers who have been observing this behavior for a decade, these practices could also offer clues about the evolution of human ancestors.

“They are the only known non-human population that systematically hunts vertebrates with tools, which makes them an important source, by analogy, for hypotheses about the behavior of the first hominids,” explain the researchers behind the study, who formulated their conclusions after ten years of observing the Fongoli hunts. This group, led by anthropologist Jill Pruetz, considers these animals a good example of what the first primates to stand upright on two legs may have been like.

The strongest males of this community respect the females when hunting

In Fongoli society, females carry out exactly half of the spear hunts. Thanks to the technological innovation of turning branches into small spears, with which they hunt galagos (small primates very common in this environment), the females gain a degree of dietary independence. In the Gombe community, studied for many years by Jane Goodall, males account for about 90% of all prey; in Fongoli, only 70%. Moreover, in other chimpanzee groups the strongest males steal one in four prey animals caught by females (without tools); in Fongoli, only 5%.

A female chimpanzee picks up and examines a branch she will use to capture her prey. / J. PRUETZ

“In Fongoli, when a female or a low-ranking male captures prey, they are allowed to keep it and eat it. Elsewhere, the alpha male or another dominant male usually takes the prey away. So females gain little benefit from hunting if another chimpanzee takes their prey,” says Pruetz. In other words, the respect shown by Fongoli males for prey obtained by their companions would serve as an incentive for the females to hunt more often than those of other communities. Over these years of observation, practically every chimpanzee in the group, about 30 individuals, has hunted with tools.

The dry climate means that the most accessible prey in Fongoli are the small galagos, rather than red colobus monkeys, the favorites of chimpanzees elsewhere in Africa, which are larger and difficult to capture for any but the fastest and most powerfully built males. Almost all of the observed episodes of spear hunting (some three hundred) took place in the wet months, when other food sources are scarce.

The Senegalese savanna, with its few trees, is an ecosystem that bears an important resemblance to the setting in which human ancestors evolved. Unlike other African communities, the Fongoli chimpanzees spend most of their time on the ground rather than in the branches. Fongoli's exceptional form of hunting leads the researchers to suggest in their study that the first hominids probably intensified their use of technological tools to overcome environmental pressures, and that they were even "sophisticated enough to perfect hunting tools."

“We know that the environment has an important impact on chimpanzee behavior,” says primatologist Joseph Call of the Max Planck Institute. “The distribution of trees determines the type of hunt: where the vegetation is denser, hunting is more cooperative than in environments where it is easier to follow the prey, and there they are more individualistic,” Call notes.

However, Call doubts that these Fongoli practices can be considered spear hunting properly speaking, since to him they are more reminiscent of capturing ants and termites with sticks, something more common among primates. “The definition of hunting the researchers establish in their study is not very different from inserting a twig into a hole to get insects to eat,” says Call. The Fongoli chimpanzees poke the galagos with sticks when they hide in tree cavities, to force them out, and once the animals are outside, bite their heads off. “It is something in between one thing and the other,” he argues.

These anthropologists believe the finding suggests that the first upright hominids also used spears

Pruetz responds to this kind of criticism by saying that it is a strategy to keep the animal from biting them or escaping, a situation very different from inserting a branch into a hole to capture insects. If it were the same, Pruetz and her colleagues argue, the question is "why chimpanzees in other groups do not hunt more."

Beyond this particular case, even the debate over whether chimpanzees should be treated as models of what human ancestors were like remains open. “We have to bear in mind that the bonobo does none of this and is as closely related to us as the chimpanzee,” Call argues. “We pick the chimpanzee because it suits us when pointing to certain shared influences. We must be very careful not to study a species according to what we want to find,” he proposes.

On Reverse Engineering (Anthropology and Algorithms)

Nick Seaver

Looking for the cultural work of engineers

The Atlantic welcomed 2014 with a major feature on web behemoth Netflix. If you didn’t know, Netflix has developed a system for tagging movies and for assembling those tags into phrases that look like hyper-specific genre names: Visually-striking Foreign Nostalgic Dramas, Critically-acclaimed Emotional Underdog Movies, Romantic Chinese Crime Movies, and so on. The sometimes absurd specificity of these names (or “altgenres,” as Netflix calls them) is one of the peculiar pleasures of the contemporary web, recalling the early days of website directories and Usenet newsgroups, when it seemed like the internet would be a grand hotel, providing a room for any conceivable niche.

Netflix’s weird genres piqued the interest of Atlantic editor Alexis Madrigal, who set about scraping the whole list. Working from the US in late 2013, his scraper bot turned up a startling 76,897 genre names — clearly the emanations of some unseen algorithmic force. How were they produced? What was their generative logic? What made them so good—plausible, specific, with some inexpressible touch of the human? Pursuing these mysteries brought Madrigal to the world of corpus analysis software and eventually to Netflix’s Silicon Valley offices.
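Scraper bots of the kind described here typically enumerate candidate URLs and harvest whatever comes back. The sketch below is hypothetical (the URL pattern, the `agid` parameter, and the title parsing are assumptions for illustration, not Netflix's actual endpoints), but it shows the general shape of the approach:

```python
import re
import urllib.request

def parse_title(html):
    """Pull the page title out of raw HTML (crude, but enough for a survey)."""
    m = re.search(r"<title>(.*?)</title>", html, re.S | re.I)
    return m.group(1).strip() if m else None

def scrape_genres(base_url, id_range):
    """Try each candidate genre ID and collect whatever titles come back."""
    genres = {}
    for agid in id_range:
        try:
            with urllib.request.urlopen(f"{base_url}?agid={agid}", timeout=5) as resp:
                title = parse_title(resp.read().decode("utf-8", errors="replace"))
        except OSError:
            continue  # dead or missing ID: skip it and keep going
        if title:
            genres[agid] = title
    return genres
```

Pointed at a real endpoint with a large enough ID range, a loop like this is how tens of thousands of genre names can pile up.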

The resulting article is an exemplary piece of contemporary web journalism — a collaboratively produced, tech-savvy 5,000-word “long read” that is both an exposé of one of the largest internet companies (by volume) and a reflection on what it is like to be human with machines. It is supported by a very entertaining altgenre-generating widget, built by professor and software carpenter Ian Bogost and illustrated by Twitter mystery darth. Madrigal pieces the story together with his signature curiosity and enthusiasm, and the result feels so now that future corpus analysts will be able to use it as a model to identify texts written in the United States from 2013–14. You really should read it.

A Māori eel trap. The design and construction of traps (or filters) like this are classic topics of interest for anthropologists of technology. cc-by-sa-3.0

As a cultural anthropologist in the middle of a long-term research project on algorithmic filtering systems, I am very interested in how people think about companies like Netflix, which take engineering practices and apply them to cultural materials. In the popular imagination, these do not go well together: engineering is about universalizable things like effectiveness, rationality, and algorithms, while culture is about subjective and particular things, like taste, creativity, and artistic expression. Technology and culture, we suppose, make an uneasy mix. When Felix Salmon, in his response to Madrigal’s feature, complains about “the systematization of the ineffable,” he is drawing on this common sense: engineers who try to wrangle with culture inevitably botch it up.

Yet, in spite of their reputations, we always seem to find technology and culture intertwined. The culturally-oriented engineering of companies like Netflix is a quite explicit case, but there are many others. Movies, for example, are a cultural form dependent on a complicated system of technical devices — cameras, editing equipment, distribution systems, and so on. Technologies that seem strictly practical — like the Māori eel trap pictured above—are influenced by ideas about effectiveness, desired outcomes, and interpretations of the natural world, all of which vary cross-culturally. We may talk about technology and culture as though they were independent domains, but in practice, they never stay where they belong. Technology’s straightforwardness and culture’s contingency bleed into each other.

This can make it hard to talk about what happens when engineers take on cultural objects. We might suppose that it is a kind of invasion: The rationalizers and quantifiers are over the ridge! They’re coming for our sensitive expressions of the human condition! But if technology and culture are already mixed up with each other, then this doesn’t make much sense. Aren’t the rationalizers expressing their own cultural ideas? Aren’t our sensitive expressions dependent on our tools? In the present moment, as companies like Netflix proliferate, stories trying to make sense of the relationship between culture and technology also proliferate. In my own research, I examine these stories, as told by people from a variety of positions relative to the technology in question. There are many such stories, and they can have far-reaching consequences for how technical systems are designed, built, evaluated, and understood.

The story Madrigal tells in The Atlantic is framed in terms of “reverse engineering.” The engineers of Netflix have not invaded cultural turf — they’ve reverse engineered it and figured out how it works. To report on this reverse engineering, Madrigal has done some of his own, trying to figure out the organizing principles behind the altgenre system. So, we have two uses of reverse engineering here: first, it is a way to describe what engineers do to cultural stuff; second, it is a way to figure out what engineers do.

So what does “reverse engineering” mean? What kind of things can be reverse engineered? What assumptions does reverse engineering make about its objects? Like any frame, reverse engineering constrains as well as enables the presentation of certain stories. I want to suggest here that, while reverse engineering might be a useful strategy for figuring out how an existing technology works, it is less useful for telling us how it came to work that way. Because reverse engineering starts from a finished technical object, it misses the accidents that happened along the way — the abandoned paths, the unusual stories behind features that made it to release, moments of interpretation, arbitrary choice, and failure. Decisions that seemed rather uncertain and subjective as they were being made come to appear necessary in retrospect. Engineering looks a lot different in reverse.

This is especially evident in the case of explicitly cultural technologies. Where “technology” brings to mind optimization, functionality, and necessity, “culture” seems to represent the opposite: variety, interpretation, and arbitrariness. Because it works from a narrowly technical view of what engineering entails, reverse engineering has a hard time telling us about the cultural work of engineers. It is telling that the word “culture” never appears in this piece about the contemporary state of the culture industry.

Inspired by Madrigal’s article, here are some notes on the consequences of reverse engineering for how we think about the cultural lives of engineers. As culture and technology continue to escape their designated places and intertwine, we need ways to talk about them that don’t assume they can be cleanly separated.

Ben Affleck, fact extractor.

There is a terrible movie about reverse engineering, based on a short story by Philip K. Dick. It is called Paycheck, stars Ben Affleck, and is not currently available for streaming on Netflix. In it, Affleck plays a professional reverse engineer (the “best in the business”), who is hired by companies to figure out the secrets of their competitors. After doing this, his memory of the experience is wiped and in return, he is compensated very well. Affleck is a sort of intellectual property conduit: he extracts secrets from devices, and having moved those secrets from one company to another, they are then extracted from him. As you might expect, things go wrong: Affleck wakes up one day to find that he has forfeited his payment in exchange for an envelope of apparently worthless trinkets and, even worse, his erstwhile employer now wants to kill him. The trinkets turn out to be important in unexpected ways as Affleck tries to recover the facts that have been stricken from his memory. The movie’s tagline is “Remember the Future”—you get the idea.

Paycheck illustrates a very popular way of thinking about engineering knowledge. To know about something is to know the facts about how it works. These facts are like physical objects — they can be hidden (inside of technologies, corporations, envelopes, or brains), and they can be retrieved and moved around. In this way of thinking about knowledge, facts that we don’t yet know are typically hidden on the other side of some barrier. To know through reverse engineering is to know by trying to pull those pre-existing facts out.

This is why reverse engineering is sometimes used as a metaphor in the sciences to talk about revealing the secrets of Nature. When biologists “reverse engineer” a cell, for example, they are trying to uncover its hidden functional principles. This kind of work is often described as “pulling back the curtain” on nature (or, in older times, as undressing a sexualized, female Nature — the kind of thing we in academia like to call “problematic”). Nature, if she were a person, holds the secrets her reverse engineers want.

In the more conventional sense of the term, reverse engineering is concerned with uncovering secrets held by engineers. Unlike its use in the natural sciences, here reverse engineering presupposes that someone already knows what we want to find out. Accessing this kind of information is often described as “pulling back the curtain” on a company. (This is likely the unfortunate naming logic behind Kimono, a new service for scraping websites and automatically generating APIs to access the scraped data.) Reverse engineering is not concerned with producing “new” knowledge, but with extracting facts from one place and relocating them to another.

Reverse engineering (and I guess this is obvious) is concerned with finished technologies, so it presumes that there is a straightforward fact of the matter to be worked out. Something happened to Ben Affleck before his memory was wiped, and eventually he will figure it out. This is not Rashomon, which suggests there might be multiple interpretations of the same event (although that isn’t available for streaming either). The problem is that this narrow scope doesn’t capture everything we might care about: why this technology and not another one? If a technology is constantly changing, like the algorithms and data structures under the hood at Netflix, then why is it changing as it does? Reverse engineering, at best, can only tell you the what, not the why or the how. But it even has some trouble with the what.

“Fantastic powers at his command / And I’m sure that he will understand / He’s the Wiz and he lives in Oz”

Netflix, like most companies today, is surrounded by a curtain of non-disclosure agreements and intellectual property protections. This curtain animates Madrigal’s piece, hiding the secrets that his reverse engineering is aimed at. For people inside the curtain, nothing in his article is news. What is newsworthy, Madrigal writes, is that “no one outside the company has ever assembled this data before.” The existence of the curtain shapes what we imagine knowledge about Netflix to be: something possessed by people on the inside and lacked by people on the outside.

So, when Madrigal’s reverse engineering runs out of steam, the climax of the story comes and the curtain is pulled back to reveal the “Wizard of Oz, the man who made the machine”: Netflix’s VP of Product Innovation Todd Yellin. Here is the guy who holds the secrets behind the altgenres, the guy with the knowledge about how Netflix has tried to bridge the world of engineering and the world of cultural production. According to the logic of reverse engineering, Yellin should be able to tell us everything we want to know.

From Yellin, Madrigal learns about the extensiveness of the tagging that happens behind the curtain. He learns some things that he can’t share publicly, and he learns of the existence of even more secrets — the contents of the training manual which dictate how movies are to be entered into the system. But when it comes to how that massive data and intelligence infrastructure was put together, he learns this:

“It’s a real combination: machine-learned, algorithms, algorithmic syntax,” Yellin said, “and also a bunch of geeks who love this stuff going deep.”

This sentence says little more than “we did it with computers,” and it illustrates a problem for the reverse engineer: there is always another curtain to get behind. Scraping altgenres will only get you so far, and even when you get “behind the curtain,” companies like Netflix are only willing to sketch out their technical infrastructure in broad strokes. In more technically oriented venues or the academic research community, you may learn more, but you will never get all the way to the bottom of things. The Wizard of Oz always holds on to his best secrets.

But not everything we want to know is a trade secret. While reverse engineers may be frustrated by the first part of Yellin’s sentence — the vagueness of “algorithms, algorithmic syntax” — it’s the second part that hides the encounter between culture and technology: What does it look like when “geeks who love this stuff go deep”? How do the people who make the algorithms understand the “deepness” of cultural stuff? How do the loves of geeks inform the work of geeks? The answers to these questions are not hidden away as proprietary technical information; they’re often evident in the ways engineers talk about and work with their objects. But because reverse engineering focuses narrowly on revealing technical secrets, it fails to piece together how engineers imagine and engage with culture. For those of us interested in the cultural ramifications of algorithmic filtering, these imaginings and engagements—not usually secret, but often hard to access — are more consequential than the specifics of implementation, which are kept secret and frequently change.

“My first goal was: tear apart content!”

While Yellin may not have told us enough about the technical secrets of Netflix to create a competitor, he has given us some interesting insights into the way he thinks about movies and how to understand them. If you’re familiar with research on algorithmic recommenders, you’ll recognize the system he describes as an example of content-based recommendation. Where “classic” recommender systems rely on patterns in ratings data and have little need for other information, content-based systems try to understand the material they recommend, through various forms of human or algorithmic analysis. These analyses are a lot of work, but over the past decade, with the increasing availability of data and analytical tools, content-based recommendation has become more popular. Most big recommender systems today (including Netflix’s) are hybrids, drawing on both user ratings and data about the content of recommended items.
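The content-based side of that distinction can be made concrete in a few lines. This is a minimal sketch with an invented catalog and tags (nothing here reflects Netflix's actual schema): each item is a set of descriptive tags, and items are ranked by tag overlap with what a user already liked.

```python
def jaccard(a, b):
    """Overlap between two tag sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical catalog: items described purely by their content tags.
CATALOG = {
    "Movie A": {"foreign", "nostalgic", "drama", "visually-striking"},
    "Movie B": {"romantic", "chinese", "crime"},
    "Movie C": {"nostalgic", "drama", "underdog"},
}

def recommend(liked_tags, catalog, k=2):
    """Rank catalog items by similarity of their tags to the user's taste."""
    ranked = sorted(catalog, key=lambda m: jaccard(liked_tags, catalog[m]),
                    reverse=True)
    return ranked[:k]

print(recommend({"nostalgic", "drama"}, CATALOG))  # ['Movie C', 'Movie A']
```

A hybrid system would blend content scores like these with collaborative signals mined from the ratings matrix.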

The “reverse engineering of Hollywood” is the content side of things: Netflix’s effort to parse movies into its database so that they can be recommended based on their content. By calling this parsing “reverse engineering,” Madrigal implies that there is a singular fact of the matter to be retrieved from these movies, and as a result, he focuses his description on Netflix’s thoroughness. What is tagged? “Everything. Everyone.” But the kind of parsing Yellin describes is not the only way to understand cultural objects; rather, it is a specific and recognizable mode of interpretation. It bears a strong resemblance to structuralism — a style of cultural analysis that had its heyday in the humanities and social sciences during the mid-20th century.

Structuralism, according to Roland Barthes, is a way of interpreting objects by decomposing them into parts and then recomposing those parts into new wholes. By breaking a text apart and putting it back together, the structuralist aims to understand its underlying structure: what order lurks under the surface of apparently idiosyncratic objects?

For example, the arch-structuralist anthropologist Claude Lévi-Strauss took such an approach in his study of myth. Take the Oedipus myth: there are many different ways to tell the same basic story, in which a baby is abandoned in the wilderness and then grows up to unknowingly kill his father, marry his mother, and blind himself when he finds out (among other things). But, across different tellings of the myth, there is a fairly persistent set of elements that make up the story. Lévi-Strauss called these elements “mythemes” (after linguistic “phonemes”). By breaking myths down into their constituent parts, you could see patterns that linked them together, not only across different tellings of the “same” myth, but even across apparently disparate myths from other cultures. Through decomposition and recomposition, structuralists sought what Barthes called the object’s “rules of functioning.” These rules, governing the combination of mythemes, were the object of Lévi-Strauss’s cultural analysis.

Todd Yellin is, by all appearances, a structuralist. He tells Madrigal that his goal was to “tear apart content” and create a “Netflix Quantum Theory,” under which movies could be broken down into their constituent parts — into “quanta” or the “little ‘packets of energy’ that compose each movie.” Those quanta eventually became “microtags,” which Madrigal tells us are used to describe everything in the movie. Large teams of human taggers are trained, using a 36-page secret manual, and they go to town, decomposing movies into microtags. Take those tags, recompose them, and you get the altgenres, a weird sort of structuralist production intended to help you find things in Netflix’s pool of movies. If Lévi-Strauss had lived to be 104 instead of just 100, he might have had some thoughts about this computerized structuralism: in his 1955 article on the structural study of myth, he suggested that further advances would require mathematicians and “I.B.M. equipment” to handle the complicated analysis. Structuralism and computers go way back.
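The recomposition step is easy to caricature in code. This toy generator, with a vocabulary and phrase grammar invented for illustration (in the spirit of Bogost's widget, not Netflix's real system), shows how a small set of tags combinatorially explodes into hyper-specific genre names:

```python
import itertools

# Invented microtag vocabulary, grouped by the slot it fills in the phrase.
ADJECTIVES = ["Visually-striking", "Critically-acclaimed", "Emotional"]
REGIONS = ["Foreign", "Chinese", ""]          # "" = no region tag
MOODS = ["Nostalgic", "Romantic", "Underdog"]
NOUNS = ["Dramas", "Crime Movies"]

def altgenres():
    """Recompose the tags into every possible genre phrase."""
    for combo in itertools.product(ADJECTIVES, REGIONS, MOODS, NOUNS):
        yield " ".join(part for part in combo if part)

names = list(altgenres())
print(len(names), "genres, e.g.:", names[0])
# 54 genres, e.g.: Visually-striking Foreign Nostalgic Dramas
```

Eleven tags in four slots already yield 54 phrases; a real vocabulary of thousands of microtags easily accounts for the 76,897 names Madrigal scraped.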

Although structuralism sounds like a fairly technical way to analyze cultural material, it is not, strictly speaking, objective. When you break an object down into its parts and put it back together again, you have not simply copied it — you’ve made something new. A movie’s set of microtags, no matter how fine-grained, is not the same thing as the movie. It is, as Barthes writes, a “directed, interested simulacrum” of the movie, a re-creation made with particular goals in mind. If you had different goals — different ideas about what the significant parts of movies were, different imagined use-cases — you might decompose differently. There is more than one way to tear apart content.

This does not jibe well with common-sense ideas about what engineering is like. Instead of the cold, rational pursuit of optimal solutions, we have something a little more creative. We have options, a variety of choices which are all potentially valid, depending on a range of contextual factors not exhausted by obviously “technical” concerns. Barthes suggested that composing a structuralist analysis was like composing a poem, and engineering is likewise expressive. Netflix’s altgenres are in no way the final statement on the movies. They are, rather, one statement among many — a cultural production in their own right, influenced by local assumptions about meaning, relevance, and taste. “Reverse engineering” seems a poor name for this creative practice, because it implies a singular right answer — a fact of the matter that merely needs to be retrieved from the insides of the movies. We might instead, more accurately, call this work “interpretation.”

So, where does this leave us with reverse engineering? There are two questions at issue here:

  1. Does “reverse engineering” as a term adequately describe the work that engineers like those employed at Netflix do when they interact with cultural objects?
  2. Is reverse engineering a useful strategy for figuring out what engineers do?

The answer to both of these questions, I think, is a measured “no,” and for the same reason: reverse engineering, as both a descriptor and a research strategy, misses the things engineers do that do not fit into conventional ideas about engineering. In the ongoing mixture of culture and technology, reverse engineering sticks too closely to the idealized vision of technical work. Because it assumes engineers care strictly about functionality and efficiency, it is not very good at telling stories about accidents, interpretations, and arbitrary choices. It assumes that cultural objects or practices (like movies or engineering) can be reduced to singular, universally-intelligible logics. It takes corporate spokespeople at their word when they claim that there was a straight line from conception to execution.

As Nicholas Diakopoulos has written, reverse engineering can be a useful way to figure out what obscured technologies do, but it cannot get us answers to “the question of why.” As these obscured technologies — search engines, recommender systems, and other algorithmic filters — are constantly refined, we need better ways to talk about the whys and hows of engineering as a practice, not only the what of engineered objects that immediately change.

The risk of reverse engineering is that we come to imagine that the only things worth knowing about companies like Netflix are the technical details hidden behind the curtain. In my own research, I argue that the cultural lives and imaginations of the people behind the curtain are as important, if not more, for understanding how these systems come to exist and function as they do. Moreover, these details are not generally considered corporate secrets, so they are accessible if we look for them. Not everything worth knowing has been actively hidden, and transparency can conceal as much as it reveals.

All engineering mixes culture and technology. Even Madrigal’s “reverse engineering” does not stay put in technical bounds: he supplements the work of his bot by talking with people, drawing on their interpretations and offering his own, reading the altgenres, populated with serendipitous algorithmic accidents, as “a window unto the American soul.” Engineers, reverse and otherwise, have cultural lives, and these lives inform their technical work. To see these effects, we need to get beyond the idea that the technical and the cultural are necessarily distinct. But if we want to understand the work of companies like Netflix, it is not enough to simply conclude that culture and technology — humans and computers — are mixed. The question we need to answer is how.

‘Technological Disobedience’: How Cubans Manipulate Everyday Technologies For Survival (WLRN)

12:05 PM, Mon July 1, 2013

In Cuban Spanish, there is a word for overcoming great obstacles with minimal resources: resolver.

Literally, it means to resolve, but to many Cubans on the island and living in South Florida, resolviendo is an enlightened reality born of necessity.

When the Soviet Union collapsed in 1991, Cuba entered a “Special Period in Times of Peace”, which saw unprecedented shortages of everyday items. Previously, the Soviets had been Cuba’s principal trading partner, sending goods at low prices and buying staple export commodities like sugar at above-market prices.

Rationing had long been a normal part of life, but without Soviet support Cubans found themselves in dire straits. As the crisis worsened, people had to get ever more creative.

Verde Olivo, the publishing house for the Cuban Revolutionary Armed Forces, published a largely crowdsourced book shortly after the Special Period began. Titled Con Nuestros Propios Esfuerzos (With Our Own Efforts), the book detailed all the possible ways that household items could be manipulated and turned inside out in order to fulfill the needs of a starving population.

Included in the book is a famous recipe for turning grapefruit rind into makeshift beef steak (after heavy seasoning).

Cuban artist and designer Ernesto Oroza watched with amazement as new uses sprang from everyday items, and he soon began collecting these objects from this sad but ingeniously creative period of Cuban history.

A Cuban rikimbili, the word for a bicycle that has been converted into a motorcycle. The engine, typically 100cc or less, is often built from motorized misting backpacks or Russian tank AC generators.

“People think beyond the normal capacities of an object, and try to surpass the limitations that it imposes on itself,” Oroza explains in a recently published Motherboard documentary that originally aired in 2011.

Oroza coined the phrase “Technological Disobedience”, which he says summarizes how Cubans reacted to technology during this time.

After graduating from design school into an abysmal economy, Oroza and a friend began to travel the island, collecting these unique items from every province.

These post-apocalyptic contraptions reflect a hunger for more, and a resistance to fatalism, within the Cuban community.

“The same way a surgeon, after having opened so many bodies, becomes insensitive to blood, to the smell of blood and organs… It’s the same for a Cuban,” Oroza explains.

“Once he has opened a fan, he is used to seeing everything from the inside… All the symbols that unify an object, that make it a unique entity: for a Cuban, those don’t exist.”

When Exponential Progress Becomes Reality (Medium)

Niv Dror

“I used to say that this is the most important graph in all the technology business. I’m now of the opinion that this is the most important graph ever graphed.”

Steve Jurvetson

Moore’s Law

The expectation that your iPhone keeps getting thinner and faster every two years. Happy 50th anniversary.

Components get cheaper, computers get smaller, and there are a lot of comparison tweets.

In 1965, Intel co-founder Gordon Moore made his original observation: over the history of computing hardware, the number of transistors in a dense integrated circuit had doubled approximately every two years. The prediction was specific to semiconductors and stretched out only a decade. Its demise has long been predicted, and it will eventually come to an end, but it has continued to hold to this day.
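The observation is just compound doubling, which a line of arithmetic makes concrete (the 50-year figure below follows from the doubling period alone, not from actual transistor counts):

```python
def moore_factor(years, doubling_period=2.0):
    """Growth factor implied by one doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Fifty years of two-year doublings is 25 doublings:
print(f"{moore_factor(50):,.0f}x")  # 33,554,432x
```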

Expanding beyond semiconductors, and reshaping all kinds of businesses, including those not traditionally thought of as tech.

Yes, Box co-founder Aaron Levie is the official spokesperson for Moore’s Law, and we’re all perfectly okay with that. His cloud computing company would not be around without it. He’s grateful. We’re all grateful. In conversations Moore’s Law constantly gets referenced.

It has become both a prediction and an abstraction.

Expanding far beyond its origin as a transistor-centric metric.

But Moore’s Law of integrated circuits is only the most recent paradigm in a much longer and even more profound technological trend.

Humanity’s capacity to compute has been compounding for as long as we could measure it.

5 Computing Paradigms: electromechanical tabulating machines (Herman Hollerith’s; his company later became part of IBM) used for the 1890 U.S. Census → Alan Turing’s relay-based computer that cracked the Nazi Enigma → the vacuum-tube computer that predicted Eisenhower’s win in 1952 → transistor-based machines used in the first space launches → the integrated-circuit-based personal computer

The Law of Accelerating Returns

In his 1999 book The Age of Spiritual Machines, Google’s Director of Engineering, futurist, and author Ray Kurzweil proposed “The Law of Accelerating Returns”, according to which the rate of change in a wide variety of evolutionary systems tends to increase exponentially. A specific paradigm, a method or approach to solving a problem (e.g., shrinking transistors on an integrated circuit as an approach to making more powerful computers), provides exponential growth until that paradigm exhausts its potential. When this happens, a paradigm shift occurs: a fundamental change in the technological approach that enables the exponential growth to continue.

Kurzweil explains:

It is important to note that Moore’s Law of Integrated Circuits was not the first, but the fifth paradigm to provide accelerating price-performance. Computing devices have been consistently multiplying in power (per unit of time) from the mechanical calculating devices used in the 1890 U.S. Census, to Turing’s relay-based machine that cracked the Nazi enigma code, to the vacuum tube computer that predicted Eisenhower’s win in 1952, to the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer.

This graph, which venture capitalist Steve Jurvetson describes as the most important concept ever to be graphed, is Kurzweil’s 110-year version of Moore’s Law. It spans the five paradigms that have contributed to the exponential growth in computing.

Each dot represents the best computational price-performance device of its day, and when plotted on a logarithmic scale, the dots fit on the same double exponential curve spanning over a century. This is a very long-lasting and predictable trend. It enables us to plan for a time beyond Moore’s Law without knowing the specifics of the paradigm shift ahead. The next paradigm will advance our ability to compute on such a massive scale that it will be beyond our current ability to comprehend.
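The "predictable" look of such a chart is a property of the logarithmic axis: an exponential y = a * r**t plots as a straight line in log(y), because each doubling adds the same vertical increment. A quick check:

```python
import math

growth = [2 ** t for t in range(6)]            # 1, 2, 4, 8, 16, 32
steps = [math.log10(b) - math.log10(a)         # vertical spacing on a log axis
         for a, b in zip(growth, growth[1:])]

# Every step is identical (log10(2) ≈ 0.301): exponential growth is a
# straight line on a log scale, which is why the trend looks so regular.
print([round(s, 3) for s in steps])  # [0.301, 0.301, 0.301, 0.301, 0.301]
```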

The Power of Exponential Growth

Human perception is linear, technological progress is exponential. Our brains are hardwired to have linear expectations because that has always been the case. Technology today progresses so fast that the past no longer looks like the present, and the present is nowhere near the future ahead. Then seemingly out of nowhere, we find ourselves in a reality quite different than what we would expect.

Kurzweil uses the overall growth of the internet as an example. The bottom chart is linear, which makes internet growth seem sudden and unexpected, whereas the top chart, with the same data graphed on a logarithmic scale, tells a very predictable story. On the logarithmic graph, internet growth doesn’t come out of nowhere; it’s just presented in a way that is more intuitive for us to comprehend.

We are still prone to underestimate the progress that is coming, because it’s difficult to internalize the reality that we’re living in a world of exponential technological change. It is a fairly recent development. And it’s important to get an understanding of the massive scale of advancements that the technologies of the future will enable. Particularly now, as we’ve reached what Kurzweil calls the “Second Half of the Chessboard.”

(The story: an inventor presents the emperor with the game of chess and asks to be paid in rice, one grain on the first square, doubling on every square thereafter. In the end the emperor realizes that he’s been tricked, by exponents, and has the inventor beheaded. In another version of the story, the inventor becomes the new emperor.)

It’s important to note that as the emperor and inventor went through the first half of the chessboard things were fairly uneventful. The inventor was first given spoonfuls of rice, then bowls of rice, then barrels, and by the end of the first half of the chess board the inventor had accumulated one large field’s worth — 4 billion grains — which is when the emperor started to take notice. It was only as they progressed through the second half of the chessboard that the situation quickly deteriorated.

# of Grains on 1st half: 4,294,967,295

# of Grains on 2nd half: 18,446,744,069,414,584,320
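These totals follow directly from the doubling rule (square k holds 2^(k-1) grains), which a few lines of Python confirm:

```python
# Square k of the chessboard holds 2**(k-1) grains of rice.
first_half = sum(2**k for k in range(32))        # squares 1-32
second_half = sum(2**k for k in range(32, 64))   # squares 33-64

print(f"{first_half:,}")    # 4,294,967,295
print(f"{second_half:,}")   # 18,446,744,069,414,584,320
```

The second half alone holds more than four billion times as many grains as the entire first half, which is why things deteriorate so quickly once it begins.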

Mind-bending nonlinear gains in computing are about to become very real in our lifetime: there have been slightly more than 32 doublings of performance since the first programmable computers were invented, which places us at the start of the second half of the chessboard.

Kurzweil’s Predictions

Kurzweil is known for making mind-boggling predictions about the future. And his track record is pretty good.

“…Ray is the best person I know at predicting the future of artificial intelligence.” —Bill Gates

Ray’s predictions for the future may sound crazy (they do sound crazy), but it’s important to note that it’s not about the specific prediction or the exact year. What’s important is what they represent. These predictions are based on an understanding of Moore’s Law and Ray’s Law of Accelerating Returns, an awareness of the power of exponential growth, and an appreciation that information technology follows an exponential trend. They may sound crazy, but they are not plucked out of thin air.

And with that being said…

Second Half of the Chessboard Predictions

“By the 2020s, most diseases will go away as nanobots become smarter than current medical technology. Normal human eating can be replaced by nanosystems. The Turing test begins to be passable. Self-driving cars begin to take over the roads, and people won’t be allowed to drive on highways.”

“By the 2030s, virtual reality will begin to feel 100% real. We will be able to upload our mind/consciousness by the end of the decade.”


Not quite there yet…

“By the 2040s, non-biological intelligence will be a billion times more capable than biological intelligence (a.k.a. us). Nanotech foglets will be able to make food out of thin air and create any object in the physical world at a whim.”


“By 2045, we will multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud.”

Multiplying our intelligence a billionfold by linking our neocortex to a synthetic neocortex in the cloud — what does that actually mean?

In March 2014 Kurzweil gave an excellent talk at the TED Conference. It was appropriately called: Get ready for hybrid thinking.

Here is a summary:


These are the highlights:

Nanobots will connect our neocortex to a synthetic neocortex in the cloud, providing an extension of our neocortex.

Our thinking will then be a hybrid of biological and non-biological thinking (the non-biological portion is subject to the Law of Accelerating Returns, and it will grow exponentially).

The frontal cortex and neocortex are not really qualitatively different, so it’s a quantitative expansion of the neocortex (like adding processing power).

The last time we expanded our neocortex was about two million years ago. That additional quantity of thinking was the enabling factor for us to take a qualitative leap and advance language, science, art, technology, etc.

We’re going to expand our neocortex again, only this time it won’t be limited by a fixed architecture of enclosure. It will be expanded without limits, by connecting our brain directly to the cloud.

We already carry a supercomputer in our pocket. We have unlimited access to all the world’s knowledge at our fingertips. Keeping in mind that we are prone to underestimate technological advancements (and that 2045 is not a hard deadline) is it really that far of a stretch to imagine a future where we’re always connected directly from our brain?

Progress is underway. We’ll be able to reverse-engineer the neocortex within five years; Kurzweil predicts that by 2030 we’ll be able to reverse-engineer the entire brain. His latest book is called How to Create a Mind… This is the reason Google hired Kurzweil.

Hybrid Human Machines


“We’re going to become increasingly non-biological…”

“We’ll also have non-biological bodies…”

“If the biological part went away it wouldn’t make any difference…”

“They will be as realistic as real reality.”

Impact on Society

The technological singularity — “the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing civilization” — is beyond the scope of this article, but these advancements will absolutely have an impact on society. Which way is yet to be determined.

There may be some regret.

Politicians will not know who/what to regulate.

Evolution may take an unexpected twist.

The rich-poor gap will expand.

The unimaginable will become reality and society will change.

What to Expect from Science in 2015 (Zero Hora)

We bet on five things likely to appear this year

19/01/2015 | 06h01

Photo: SpaceX/Youtube

In 2014, science managed to land on a comet, discovered it had been wrong about the genetic evolution of birds, and revealed the largest fossils in history. Miguel Nicolelis presented his exoskeleton at the World Cup, the Brazilian satellite CBERS-4, built in partnership with China, was successfully launched into space, and a Brazilian brought home the top medal in mathematics.

But what will we see in 2015? We bet on five things that may appear this year.

Reusable rockets

If we want to colonize Mars, a one-way ticket is not enough. These rockets, capable of going and coming back, promise to transform the future of space travel. We’ll see whether SpaceX, which is already working on it, succeeds.

Robots at home

In February, Japan’s Softbank will begin selling a humanoid robot called Pepper. It uses artificial intelligence to recognize its owner’s mood and speaks four languages. Though it is more a helper than a doer, it will soon learn new functions.

Invisible universe

The Large Hadron Collider will restart in March with twice as much particle-smashing power. One possibility is that it will help discover new superparticles that may make up dark matter. It would be the first new state of matter discovered in a century.

A cure for Ebola

After the 2014 crisis, Ebola vaccines may start to work and save many lives in Africa. The same goes for AIDS: HIV is cornered, and we hope science finally defeats it this year.

Climate talks

2014 was one of the hottest years on record and, the way things are going, 2015 will follow the same path. In December, in Paris, the world will discuss an agreement to try to curb gas emissions, with measures to be implemented starting in 2020. May our leaders be sensible.

Citizen science network produces accurate maps of atmospheric dust (Science Daily)

Date: October 27, 2014

Source: Leiden University

Summary: Measurements by thousands of citizen scientists in the Netherlands using their smartphones and the iSPEX add-on are delivering accurate data on dust particles in the atmosphere that add valuable information to professional measurements. The research team analyzed all measurements from three days in 2013 and combined them into unique maps of dust particles above the Netherlands. The results match and sometimes even exceed those of ground-based measurement networks and satellite instruments.

iSPEX map compiled from all iSPEX measurements performed in the Netherlands on July 8, 2013, between 14:00 and 21:00. Each blue dot represents one of the 6007 measurements that were submitted on that day. At each location on the map, the 50 nearest iSPEX measurements were averaged and converted to Aerosol Optical Thickness, a measure for the total amount of atmospheric particles. This map can be compared to the AOT data from the MODIS Aqua satellite, which flew over the Netherlands at 16:12 local time. The relatively high AOT values were caused by smoke clouds from forest fires in North America, which were blown over the Netherlands at an altitude of 2-4 km. In the course of the day, winds from the North brought clearer air to the northern provinces. Credit: Image courtesy of Universiteit Leiden

Measurements by thousands of citizen scientists in the Netherlands using their smartphones and the iSPEX add-on are delivering accurate data on dust particles in the atmosphere that add valuable information to professional measurements. The iSPEX team, led by Frans Snik of Leiden University, analyzed all measurements from three days in 2013 and combined them into unique maps of dust particles above the Netherlands. The results match and sometimes even exceed those of ground-based measurement networks and satellite instruments.

The iSPEX maps achieve a spatial resolution as small as 2 kilometers, whereas satellite data are much coarser. They also fill in blind spots of established ground-based atmospheric measurement networks. The scientific article presenting these first results of the iSPEX project is being published in Geophysical Research Letters.

The iSPEX team developed a new atmospheric measurement method in the form of a low-cost add-on for smartphone cameras. The iSPEX app instructs participants to scan the blue sky while the phone’s built-in camera takes pictures through the add-on. The photos record both the spectrum and the linear polarization of the sunlight that is scattered by suspended dust particles, and thus contain information about the properties of these particles. While such properties are difficult to measure, much better knowledge on atmospheric particles is needed to understand their effects on health, climate and air traffic.

Thousands of participants performed iSPEX measurements throughout the Netherlands on three cloud-free days in 2013. This large-scale citizen science experiment allowed the iSPEX team to verify the reliability of this new measurement method.

After a rigorous quality assessment of each submitted data point, measurements recorded in specific areas within a limited amount of time are averaged to obtain sufficient accuracy. Subsequently the data are converted to Aerosol Optical Thickness (AOT), which is a standardized quantity related to the total amount of atmospheric particles. The iSPEX AOT data match comparable data from satellites and the AERONET ground station at Cabauw, the Netherlands. In areas with sufficiently high measurement densities, the iSPEX maps can even discern smaller details than satellite data.
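As a rough illustration of the averaging step described above: the 50-nearest-measurement scheme comes from the article, but the data layout, distance metric, and toy values in this sketch are assumptions for illustration, not the iSPEX team’s actual pipeline.

```python
import math

def average_nearest(measurements, lat, lon, k=50):
    """Average the k measurements closest to (lat, lon).

    measurements: list of (lat, lon, value) tuples, where `value` stands in
    for a quality-checked iSPEX reading (hypothetical data layout).
    Uses an equirectangular distance approximation, adequate at country scale.
    """
    def dist(m):
        dlat = m[0] - lat
        dlon = (m[1] - lon) * math.cos(math.radians(lat))
        return math.hypot(dlat, dlon)

    nearest = sorted(measurements, key=dist)[:k]
    return sum(m[2] for m in nearest) / len(nearest)

# Toy readings: two near Leiden (~52.16 N, 4.49 E), one far away in Groningen.
data = [(52.16, 4.49, 0.30), (52.17, 4.50, 0.34), (53.22, 6.57, 0.90)]
print(average_nearest(data, 52.16, 4.49, k=2))  # averages the two Leiden readings
```

Averaging over neighbors trades spatial resolution for accuracy, which is why the maps sharpen wherever the measurement density is high.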

Team leader Snik: “This proves that our new measurement method works. But the great strength of iSPEX is the measurement philosophy: the establishment of a citizen science network of thousands of enthusiastic volunteers who actively carry out outdoor measurements. In this way, we can collect valuable information about atmospheric particles on locations and/or at times that are not covered by professional equipment. These results are even more accurate than we had hoped, and give rise to further research and development. We are currently investigating to what extent we can extract more information about atmospheric particles from the iSPEX data, like their sizes and compositions. And of course, we want to organize many more measurement days.”

With the help of a grant that supports public activities in Europe during the International Year of Light 2015, the iSPEX team is now preparing for the international expansion of the project. This expansion provides opportunities for national and international parties to join the project. Snik: “Our final goal is to establish a global network of citizen scientists who all contribute measurements to study the sources and societal effects of polluting atmospheric particles.”

Journal Reference:

  1. Frans Snik, Jeroen H. H. Rietjens, Arnoud Apituley, Hester Volten, Bas Mijling, Antonio Di Noia, Stephanie Heikamp, Ritse C. Heinsbroek, Otto P. Hasekamp, J. Martijn Smit, Jan Vonk, Daphne M. Stam, Gerard van Harten, Jozua de Boer, Christoph U. Keller. Mapping atmospheric aerosols with a citizen science network of smartphone spectropolarimeters. Geophysical Research Letters, 2014; DOI: 10.1002/2014GL061462

Why Do the Anarcho-Primitivists Want to Abolish Civilization? (io9)

George Dvorsky

Sept 12, 2014 11:28am

Why Do the Anarcho-Primitivists Want to Abolish Civilization?

Anarcho-primitivists are the ultimate Luddites — ideologues who favor complete technological relinquishment and a return to a hunter-gatherer lifestyle. We spoke to a leading proponent to learn more about this idea and why he believes civilization was our worst mistake.

Philosopher John Zerzan wants you to get rid of all your technology — your car, your mobile phone, your computer, your appliances — the whole lot. In his perfect world, you’d be stripped of all your technological creature comforts, reduced to a lifestyle that harkens back to when our hunter-gatherer ancestors romped around the African plains.


Photo via Cast/John Zerzan/CC

You see, Zerzan is an outspoken advocate of anarcho-primitivism, a philosophical and political movement predicated on the assumption that the move from hunter-gatherer to agricultural subsistence was a stupendously awful mistake — an existential paradigm shift that subsequently gave rise to social stratification, coercion, alienation, and unchecked population growth. It’s only through the abandonment of technology, and a return to “non-civilized” ways of being — a process anarcho-primitivists call “wilding” — that we can eliminate the host of social ills that now plagues the human species.

As an anarchist, Zerzan is opposed to the state, along with all forms of hierarchical and authoritarian relations. The crux of his argument, one inspired by Karl Marx and Ivan Illich, is that the advent of technologies irrevocably altered the way humans interact with each other. There’s a huge difference, he argues, between simple tools that stay under the control of the user, and those technological systems that draw the user under the control of those who produce the tools. Zerzan says that technology has come under the control of an elite class, thus giving rise to alienation, domestication, and symbolic thought.


Zerzan is not alone in his views. When the radical Luddite Ted “the Unabomber” Kaczynski was on trial for killing three people and injuring 23, Zerzan became his confidant, offering support for his ideas but condemning his actions (Zerzan recently stated that he and Kaczynski are “not on terms anymore”). Radicalized groups have also sprung up promoting similar views, including a Mexican group called the Individualists Tending Toward the Wild — a group with the objective “to injure or kill scientists and researchers (by the means of whatever violent act) who ensure the Technoindustrial System continues its course.” Back in 2011, this group sent several mail bombs to nanotechnology labs and researchers in Latin America, killing two people.

Looking ahead to the future, and considering the scary potential for advanced technologies such as artificial superintelligence and robotics, there’s the very real possibility that these sorts of groups will start to become more common — and more radicalized (similar to the radical anti-technology terrorist group Revolutionary Independence From Technology (RIFT) that was portrayed in the recent Hollywood film, Transcendence).


But Zerzan does not promote or condone violence. He’d rather see the rise of the “Future Primitive” come about voluntarily. To that end, he uses technology — like computers and phones — to get his particular message across (he considers it a necessary evil). That’s how I was able to conduct this interview with him, which we did over email.

io9: Anarcho-primitivism is as much a critique of modernity as is it a prescription for our perceived ills. Can you describe the kind of future you’re envisioning?

Zerzan: I want to see mass society radically decentralized into face-to-face communities. Only then can the individual be both responsible and autonomous. As Paul Shepard said, “Back to the Pleistocene!”

As an ideology, primitivism is fairly self-explanatory. But why add the ‘anarcho’ part to it? How can you be so sure there’s a link between more primitive states of being and the diminishment of power relations and hierarchies among complex primates?

The anarcho part refers to the fact that this question, this approach, arose mainly within an anarchist or anti-civilization milieu. Everyone I know in this context is an anarchist. There are no guarantees for the future, but we do know that egalitarian and anti-hierarchical relations were the norm with Homo for 1-2 million years. This is indisputable in the anthropological literature.

Then how do you distinguish between tools that are acceptable for use and those that give rise to hierarchical relations?

Those tools that involve the least division of labor or specialization involve or imply qualities such as intimacy, equality, flexibility. With increased division of labor we moved away from tools to systems of technology, where the dominant qualities or values are distancing, reliance on experts, inflexibility.

But tool use and symbolic language are indelible attributes of Homo sapiens — these are our distinguishing features. Aren’t you just advocating for biological primitivism — a kind of devolution of neurological characteristics?

Anthropologists (e.g. Thomas Wynn) seem to think that Homo had an intelligence equal to ours at least a million years ago. Thus neurology doesn’t enter into it. Tool use, of course, has been around since before the beginning of Homo some 3 million years ago. As for language, it’s quite debatable when it emerged.

Early humans had a workable, non-destructive approach that, generally speaking, did not involve much work, did not objectify women, and was anti-hierarchical. Does this sound backward to you?

You’ve got some provocative ideas about language and how it demeans or diminishes experience.

Every symbolic dimension — time, language, art, number — is a mediation between ourselves and reality. We lived more directly, immediately before these dimensions arrived, fairly recently. Freud, the arch-rationalist, thought that we once communicated telepathically, though I concede that my critique of language is the most speculative of my forays into the symbolic.

You argue that a hunter-gatherer lifestyle is as close to the ideal state of being as is possible. The Amish, on the other hand, have drawn the line at industrialization, and they’ve subsequently adopted an agrarian lifestyle. What is it about the advent of agriculture and domestication that’s so problematic?

In the 1980s Jared Diamond called the move to domestication or agriculture “the worst mistake humans ever made.” A fundamental shift away from taking what nature gives to the domination of nature. The inner logic of domestication of animals and plants is an unbroken progression, which always deepens and extends the ethos of control. Now of course control has reached the molecular level with nanotechnology, and the sphere of what I think is the very unhealthy fantasies of transhumanist neuroscience and AI.

In which ways can anarcho-primitivism be seen as the ultimate green movement? Do you see it that way?

We are destroying the biosphere at a fearful rate. Anarcho-primitivism seeks the end of the primary institutions that drive the destruction: domestication/civilization and industrialization. To accept “green” and “sustainable” illusions ignores the causes of the all-enveloping undermining of nature, including our inner nature. Anarcho-primitivism insists on a deeper questioning and helps identify the reasons for the overall crisis.

Tell us about the anarcho-primitivist position on science.

The reigning notion of what is science is an objectifying method, which magnifies the subject-object split. “Science” for hunter-gatherers is very basically different. It is based on participation with living nature, intimacy with it. Science in modernity mainly breaks reality down into now dead, inert fragments to “unlock” its “secrets.” Is that superior to a forager who knows a number of things from the way a blade of grass is bent?

Well, being trapped in an endless cycle of Darwinian processes doesn’t seem like the most enlightened or moral path for our species to take. Civilization and industrialization have most certainly introduced innumerable problems, but our ability to remove ourselves from the merciless “survival of the fittest” paradigm is a no-brainer. How could you ever convince people to relinquish the gifts of modernity — things like shelter, food on demand, vaccines, pain relief, anesthesia, and ambulances at our beck and call?

It is reality that will “convince” people — or not. Conceivably, denial will continue to rule the day. But maybe only up to a point. If and when it can be seen that their reality is worsening qualitatively in every sphere, a new perspective may emerge, one that questions the deep un-health of mass society and its foundations. Again, non-robust, de-skilled folks may keep going through the motions, stupefied by techno-consumerism and drugs of all kinds. Do you think that can last?

Most futurists would answer that things are getting better — and that through responsible foresight and planning we’ll be able to create the future we imagine.

“Things are getting better”? I find this astounding. The immiseration surrounds us: anxiety, depression, stress, insomnia, etc. on a mass scale, the rampage shootings now commonplace. The progressive ruin of the natural world. I wonder how anyone who even occasionally picks up a newspaper can be so in the dark. Of course I haven’t scratched the surface of how bad it is becoming. It is deeply irresponsible to promote such ignorance and projections.

That’s a very presentist view. Some left-leaning futurists argue, for example, that ongoing technological progress (both in robotics and artificial intelligence) will lead to an automation revolution — one that will free us from dangerous and demeaning work. It’s very possible that we’ll be able to invent our way out of the current labor model that you’re so opposed to.

Technological advances have only meant MORE work. That is the record. In light of this it is not quite cogent to promise that a more technological mass society will mean less work. Again, reality anyone??

Transhumanists advocate for the iterative improvement of the human species, things like enhanced intelligence and memory, the elimination of psychological disorders (including depression), radical life extension, and greater physical capacities. Tell us why you’re so opposed to these things.

Why I am opposed to these things? Let’s take them in order:

Enhanced intelligence and memory? I think it is now quite clear that advancing technology in fact makes people stupider and reduces memory. Attention span is lessened by Tweet-type modes, abbreviated, illiterate means of communicating. People are being trained to stare at screens at all times, a techno-haze that displaces life around them. I see zombies, not sharper, more tuned in people.

Elimination of psychological disorders? But narcissism, autism and all manner of such disabilities are on the rise in a more and more tech-oriented world.

Radical life extension? One achievement of modernity is increased longevity, granted. This has begun to slip a bit, however, in some categories. And one can ponder what is the quality of life? Chronic conditions are on the rise though people can often be kept alive longer. There’s no evidence favoring a radical life extension.

Greater physical capacities? Our senses were once acute and we were far more robust than we are now under the sign of technology. Look at all the flaccid, sedentary computer jockeys and extend that forward. It is not that I don’t want these things; rather, looking at the techno project, the results are negative, eh?

Do you foresee the day when a state of anarcho-primitivism can be achieved (even partially by a few enthusiasts)?

A few people cannot achieve such a future in isolation. The totality infects everything. It all must go and perhaps it will. Do you think people are happy with it?

Final Thoughts

Zerzan’s critique of civilization is certainly interesting and worthy of discussion. There’s no doubt that technology has taken humanity along a path that’s resulted in massive destruction and suffering, both to ourselves and to our planet and its animal inhabitants.

But there’s something deeply unsatisfying about the anarcho-primitivist prescription — that of erasing our technological achievements and returning to a state of nature. It’s fed by a cynical and defeatist worldview that buys into the notion that everything will be okay once we regress to a state where our ecological and sociological footprints are reduced to practically nil. It’s a way of eliminating our ability to make an impact on the world — and on ourselves.

It’s also an ideological view that fetishizes our ancestral past. Despite Zerzan’s cocksure proclamations to the contrary, our paleolithic forebears were almost certainly hierarchical and socially stratified. There isn’t a single social species on this planet — whether primates or elephants or cetaceans — that doesn’t organize its individuals according to capability, influence, or level of reproductive fitness. Feeling “alienated,” “frustrated,” and “controlled” is an indelible part of the human condition, regardless of whether we live in tribal arrangements or in the information age. The anarcho-primitivist fantasy of the free and unhindered noble savage is just that — a fantasy. Hunter-gatherers were far from free, coerced by the demands of biology and nature to eke out an existence under the harshest of circumstances.

Technology One Step Ahead of War Laws (Science Daily)

Jan. 6, 2014 — Today’s emerging military technologies — including unmanned aerial vehicles, directed-energy weapons, lethal autonomous robots, and cyber weapons like Stuxnet — raise the prospect of upheavals in military practices so fundamental that they challenge long-established laws of war. Weapons that make their own decisions about targeting and killing humans, for example, have ethical and legal implications obvious and frightening enough to have entered popular culture (for example, in the Terminator films).

The current international laws of war were developed over many centuries and long before the current era of fast-paced technological change. Military ethics and technology expert Braden Allenby says the proper response to the growing mismatch between long-established international law and emerging military technology “is neither the wholesale rejection of the laws of war nor the comfortable assumption that only minor tweaks to them are necessary.” Rather, he argues, the rules of engagement should be reconsidered through deliberate and focused international discussion that includes a wide range of cultural and institutional perspectives.

Allenby’s article anchors a special issue on the threat of emerging military technologies in the latest Bulletin of the Atomic Scientists (BOS), published by SAGE.

History is replete with paradigm shifts in warfare technology, from the introduction of gunpowder, which arguably gave rise to nation states, to the air-land-battle technologies used during the Desert Storm offensive in Kuwait and Iraq in 1991, which caused 20,000 to 30,000 Iraqi casualties and left only 200 US coalition troops dead. But today’s accelerating advances across the technological frontier and dramatic increases in the numbers of social institutions at play around the world are blurring boundaries between military and civil entities and state and non-state actors. And because the United States has an acknowledged primacy in terms of conventional forces, the nations and groups that compete with it increasingly think in terms of asymmetric warfare, raising issues that lie beyond established norms of military conduct and may require new legal thinking and institutions to address.

“The impact of emerging technologies on the laws of war might be viewed as a case study and an important learning opportunity for humankind as it struggles to adapt to the complexity that it has already wrought, but has yet to learn to manage,” Allenby writes.

Other articles in the Bulletin’s January/February special issue on emerging military technologies include “The enhanced warfighter” by Ken Ford, which looks at the ethics and practicalities of performance enhancement for military personnel, and Michael C. Horowitz’s overview of the near-term future of US war-fighting technology, “Coming next in military tech.” The issue also offers two views of the use of advanced robotics: “Stopping killer robots,” Mark Gubrud’s argument in favor of an international ban on lethal autonomous weapons, and “Robot to the rescue,” Gill Pratt’s account of a US Defense Department initiative aiming to develop robots that will improve response to disasters, like the Fukushima nuclear catastrophe, that involve highly toxic environments.

Journal Reference:

  1. Braden R. Allenby. Are new technologies undermining the laws of war? Bulletin of the Atomic Scientists, January/February 2014

Solar Cells Made Thin, Efficient and Flexible (Science Daily)

Dec. 9, 2013 — Converting sunshine into electricity is not difficult, but doing so efficiently and on a large scale is one of the reasons why people still rely on the electric grid and not a national solar cell network.

Debashis Chanda helped create large sheets of nanotextured, silicon micro-cell arrays that hold the promise of making solar cells lightweight, more efficient, bendable and easy to mass produce. (Credit: UCF)

But a team of researchers from the University of Illinois at Urbana-Champaign and the University of Central Florida in Orlando may be one step closer to tapping into the full potential of solar cells. The team found a way to create large sheets of nanotextured, silicon micro-cell arrays that hold the promise of making solar cells lightweight, more efficient, bendable and easy to mass produce.

The team used a light-trapping scheme based on a nanoimprinting technique in which a polymeric stamp mechanically embosses the nanoscale pattern onto the solar cell without further complex lithographic steps. This approach provides the flexibility researchers have been searching for, making the design ideal for mass manufacturing, said UCF assistant professor Debashis Chanda, lead researcher of the study.

The study’s findings are the subject of the November cover story of the journal Advanced Energy Materials.

Previously, scientists had suggested designs that showed greater absorption rates of sunlight, but how efficiently that sunlight was converted into electrical energy was unclear, Chanda said. This study demonstrates that the light-trapping scheme offers higher electrical efficiency in a lightweight, flexible module.

The team believes this technology could someday lead to solar-powered homes fueled by cells that are reliable and provide stored energy for hours without interruption.

Journal Reference:

  1. Ki Jun Yu, Li Gao, Jae Suk Park, Yu Ri Lee, Christopher J. Corcoran, Ralph G. Nuzzo, Debashis Chanda, John A. Rogers. Light Trapping in Ultrathin Monocrystalline Silicon Solar Cells. Advanced Energy Materials, 2013; 3 (11): 1528. DOI: 10.1002/aenm.201370046

Bonobo genius makes stone tools like early humans did (New Scientist)

13:09 21 August 2012 by Hannah Krakauer

Kanzi the bonobo continues to impress. Not content with learning sign language or making up “words” for things like banana or juice, he now seems capable of making stone tools on a par with the efforts of early humans.

Even a human could manage this (Image: Elizabeth Rubert-Pugh (Great Ape Trust of Iowa/Bonobo Hope Sanctuary))

Eviatar Nevo of the University of Haifa in Israel and his colleagues sealed food inside a log to mimic marrow locked inside long bones, and watched Kanzi, a 30-year-old male bonobo chimp, try to extract it. While a companion bonobo attempted the problem a handful of times, and succeeded only by smashing the log on the ground, Kanzi took a longer and arguably more sophisticated approach.

Both had been taught to knap flint flakes in the 1990s, holding a stone core in one hand and striking it with another stone used as a hammer. Kanzi used the tools he created to come at the log in a variety of ways: inserting sticks into seams in the log, throwing projectiles at it, and employing stone flints as choppers, drills, and scrapers. In the end, he got food out of 24 logs, while his companion managed just two.

Perhaps most remarkable about the tools Kanzi created is their resemblance to early hominid tools. Both bonobos made and used tools to obtain food – either by extracting it from logs or by digging it out of the ground. But only Kanzi’s met the criteria for both tool groups made by early Homo: wedges and choppers, and scrapers and drills.

Do Kanzi’s skills translate to all bonobos? It’s hard to say. The abilities of animals like Alex the parrot, who could purportedly count to six, and Betty the crow, who crafted a hook out of wire, sometimes prompt claims about the intelligence of an entire species. But since these animals are raised in unusual environments where they frequently interact with humans, their cases may be too singular to extrapolate their talents to their brethren.

The findings will fuel the ongoing debate over whether stone tools mark the beginning of modern human culture, or predate our Homo genus. They appear to suggest the latter – though critics will point out that Kanzi and his companion were taught how to make the tools. Whether the behaviour could arise in nature is unclear.

Journal reference: Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.1212855109