Monthly archive: June 2021

Indigenous People Advance a Dramatic Goal: Reversing Colonialism (New York Times)

nytimes.com

Max Fisher


The Interpreter

Fifty years of patient advocacy, including the shocking discovery of mass graves at Kamloops, have secured once-unthinkable gains.

A makeshift memorial to honor the 215 children whose remains have been discovered near the Kamloops Indian Residential School in British Columbia, earlier this month.
Credit: Darryl Dyck/The Canadian Press, via Associated Press

June 17, 2021

When an Indigenous community in Canada announced recently that it had discovered a mass burial site with the remains of 215 children, the location rang with significance.

Not just because it was on the grounds of a now-shuttered Indian Residential School, whose forcible assimilation of Indigenous children a 2015 truth and reconciliation report called “a key component of a Canadian government policy of cultural genocide.”

That school is in Kamloops, a city in British Columbia from which, 52 years ago, Indigenous leaders started a global campaign to reverse centuries of colonial eradication and reclaim their status as sovereign nations.

Their effort, waged predominantly in courts and international institutions, has accumulated steady gains ever since, coming further than many realize.

It has brought together groups from the Arctic to Australia. Those from British Columbia, in Canada’s mountainous west, have been at the forefront throughout.

Only two years ago, the provincial government there became the world’s first to adopt into law United Nations guidelines for heightened Indigenous sovereignty. On Wednesday, Canada’s Parliament passed a law, now awaiting a final rubber stamp, to extend those measures nationwide.

It was a stunning victory, decades in the making, that activists are working to repeat in New Zealand — and, perhaps one day, in more recalcitrant Australia, Latin America and even the United States.

“There’s been a lot of movement in the field. It’s happening with different layers of courts, with different legislatures,” said John Borrows, a prominent Canadian legal scholar and a member of the Chippewa of the Nawash Unceded First Nation.

The decades-long push for sovereignty has come with a rise in activism, legal campaigning and historical reckonings like the discovery at Kamloops. All serve the movement’s ultimate aim, which is nothing less than overturning colonial conquests that the world has long accepted as foregone.

A classroom at All Saints Residential School in Lac la Ronge, Saskatchewan, circa 1950.
Credit: Shingwauk Residential Schools Center, via Reuters

No one is sure precisely what that will look like or how long it might take. But advances once considered impossible “are happening now,” Dr. Borrows said, “and in an accelerating way.”

The Indigenous leaders who gathered in 1969 had been galvanized by an array of global changes.

The harshest assimilation policies were rolled back in most countries, but their effects remained visible in everyday life. Extractive and infrastructure megaprojects were provoking whole communities into opposition. The civil rights era was energizing a generation.

But two of the greatest motivators were gestures of ostensible reconciliation.

In 1960, world governments near-unanimously backed a United Nations declaration calling for colonialism to be rolled back. European nations began withdrawing overseas, often under pressure from the Cold War powers.

But the declaration excluded the Americas, Australia and New Zealand, where colonization was seen as too deep-rooted to reverse. It was taken as effectively announcing that there would be no place in the modern world for Indigenous peoples.

Then, at the end of the decade, Canada’s progressive government issued a fateful “white paper” announcing that it would dissolve colonial-era policies, including reserves, and integrate Indigenous peoples as equal citizens. It was offered as emancipation.

A statue in Toronto of Egerton Ryerson, considered an architect of Canada’s Indigenous residential school system, was toppled and defaced during a protest this month.
Credit: Chris Helgren/Reuters

Other countries were pursuing similar measures, with the United States’ inauspiciously named “termination policy.”

To the government’s shock, Indigenous groups angrily rejected the proposal. Like the United Nations declaration, it implied that colonial-era conquests were to be accepted as foregone.

Indigenous leaders gathered in Kamloops to organize a response. British Columbia was a logical choice. Colonial governments had never signed treaties with its original inhabitants, unlike in other parts of Canada, giving special weight to their claim to live under illegal foreign occupation.

“It’s really Quebec and British Columbia that have been the two epicenters, going back to the ’70s,” said Jérémie Gilbert, a human rights lawyer who works with Indigenous groups. Traditions of civil resistance run deep in both.

The Kamloops group began what became a campaign to impress upon the world that they were sovereign peoples with the rights of any nation, often by working through the law.

They linked up with others around the world, holding the first meeting of The World Council of Indigenous Peoples on Vancouver Island. Its first leader, George Manuel, had passed through the Kamloops residential school as a child.

The council’s charter implicitly treated countries like Canada and Australia as foreign powers. It began lobbying the United Nations to recognize Indigenous rights.

It was nearly a decade before the United Nations so much as established a working group. Court systems were little faster. But the group’s ambitions were sweeping.

Legal principles like terra nullius — “nobody’s land” — had long served to justify colonialism. The activists sought to overturn these while, in parallel, establishing a body of Indigenous law.

“The courts are very important because it’s part of trying to develop our jurisprudence,” Dr. Borrows said.

The movement secured a series of court victories that, over decades, stitched together a legal claim to the land, not just as its owners but as sovereign nations. One, in Canada, established that the government had an obligation to settle Indigenous claims to territory. In Australia, the high court backed a man who argued that his family’s centuries-long use of their land superseded the government’s colonial-era conquest.

Activists focused especially on Canada, Australia and New Zealand, which each draw on a legal system inherited from Britain. Laws and rulings in one can become precedent in the others, making them easier to present to the broader world as a global norm.

Irene Watson, an Australian scholar of international Indigenous law and First Nations member, described this effort, in a 2016 book, as “the development of international standards” that would pressure governments to address “the intergenerational impact of colonialism, which is a phenomenon that has never ended.”

It might even establish a legal claim to nationhood. But it is the international arena that ultimately confers acceptance on any sovereign state.

By the mid-1990s, the campaign was building momentum.

The United Nations began drafting a declaration of Indigenous rights. Several countries formally apologized, often alongside promises to settle old claims.

This period of truth and reconciliation was meant to address the past and, by educating the broader public, create support for further advances.

A sweeping 1996 report, chronicling many of Canada’s darkest moments, was followed by a second investigation, focused on residential schools. Completed 19 years after the first, the Truth and Reconciliation Commission spurred yet more federal policy recommendations and activism, including last month’s discovery at Kamloops.

Prime Minister Justin Trudeau visited a makeshift memorial near Canada’s Parliament honoring the children whose remains were found near the school in Kamloops.
Credit: Dave Chan/Agence France-Presse — Getty Images

Judicial advances have followed a similar process: yearslong efforts that bring incremental gains. But these add up. Governments face growing legal obligations to defer to Indigenous autonomy.

The United States has lagged. Major court rulings have been fewer. The government apologized only in 2010 for “past ill-conceived policies” against Indigenous people and did not acknowledge direct responsibility. Public pressure for reconciliation has been lighter.

Still, efforts are growing. In 2016, activists physically impeded construction of a North Dakota pipeline whose environmental impact, they said, would infringe on Sioux sovereignty. They later persuaded a federal judge to pause the project.

Native Americans marching against the Dakota Access oil pipeline near Cannon Ball, North Dakota, in 2017.
Credit: Terray Sylvester/Reuters

Latin America has often lagged as well, despite growing activism. Militaries in several countries have targeted Indigenous communities in living memory, leaving governments reluctant to self-incriminate.

In 2007, after 40 years of maneuvering, the United Nations adopted the declaration on Indigenous rights. Only the United States, Australia, New Zealand and Canada opposed, saying it elevated some Indigenous claims above those of other citizens. All four later reversed their positions.

“The Declaration’s right to self-determination is not a unilateral right to secede,” Dr. Claire Charters, a New Zealand Māori legal expert, wrote in a legal journal. However, its recognition of “Indigenous peoples’ collective land rights” could be “persuasive” in court systems, which often treat such documents as proof of an international legal principle.

Few have sought formal independence. But an Australian group’s 2013 declaration, brought to the United Nations and the International Court of Justice, inspired several others to follow. All failed. But, by demonstrating widening legal precedent and grassroots support, they highlighted that full nationhood is not as unthinkable as it once was.

It may not have seemed like a step in that direction when, in 2019, British Columbia enshrined the U.N. declaration’s terms into provincial law.

But Dr. Borrows called its provisions “quite significant,” including one requiring that the government win affirmative consent from Indigenous communities for policies that affect them. Conservatives and legal scholars have argued it would amount to an Indigenous veto, though Justin Trudeau, Canada’s prime minister, and his Liberal government dispute this.

Mr. Trudeau promised to pass a similar law nationally in 2015, but faced objections from energy and resource industries that it would allow Indigenous communities to block projects. He continued trying, and Wednesday’s passage in Parliament all but ensures that Canada will fully adopt the U.N. terms.

Mr. Gilbert said that activists’ current focus is “getting this into the national systems.” Though hardly Indigenous independence, it would bring them closer than any step in generations.

Near the grounds of the former Kamloops Indian Residential School.
Credit: Jennifer Gauthier/Reuters

As the past 50 years show, this could help pressure others to follow (New Zealand is considered a prime candidate), paving the way for the next round of gradual but quietly historic advances.

It is why, Mr. Gilbert said, “All the eyes are on Canada.”

Greater than the sum of our parts: The evolution of collective intelligence (EurekaAlert!)

News Release 15-Jun-2021

University of Cambridge

Research News

The period preceding the emergence of behaviourally modern humans was characterised by dramatic climatic and environmental variability – it is these pressures, occurring over hundreds of thousands of years, that shaped human evolution.

New research published today in the Cambridge Archaeological Journal proposes a new theory of human cognitive evolution entitled ‘Complementary Cognition’, which suggests that, in adapting to dramatic environmental and climatic variability, our ancestors evolved to specialise in different, but complementary, ways of thinking.

Lead author Dr Helen Taylor, Research Associate at the University of Strathclyde and Affiliated Scholar at the McDonald Institute for Archaeological Research, University of Cambridge, explained: “This system of complementary cognition functions in a way that is similar to evolution at the genetic level but, instead of underlying physical adaptation, may underlie our species’ immense ability to create behavioural, cultural and technological adaptations. It provides insights into the evolution of uniquely human adaptations like language, suggesting that this evolved in concert with specialisation in human cognition.”

The theory of complementary cognition proposes that our species cooperatively adapts and evolves culturally through a system of collective cognitive search. This operates alongside genetic search, which enables phenotypic adaptation (Darwin’s theory of evolution through natural selection can be interpreted as a ‘search’ process), and individual cognitive search, which enables behavioural adaptation.

Dr Taylor continued, “Each of these search systems is essentially a way of adapting using a mixture of building on and exploiting past solutions and exploring to update them; as a consequence, we see evolution in those solutions over time. This is the first study to explore the notion that individual members of our species are neurocognitively specialised in complementary cognitive search strategies.”
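
One generic way to picture a “search” that mixes exploiting known solutions with exploring new ones is an epsilon-greedy routine. The Python sketch below is purely illustrative: it is not the model from the Complementary Cognition paper, and the options and payoffs are invented.

import random

random.seed(1)

def search(payoff, options, steps=200, explore_rate=0.1):
    # Keep running estimates of how well each known solution works.
    estimates = {o: 0.0 for o in options}
    counts = {o: 0 for o in options}
    for _ in range(steps):
        if random.random() < explore_rate:
            choice = random.choice(options)                    # explore: try something new
        else:
            choice = max(options, key=lambda o: estimates[o])  # exploit: reuse the best known solution
        reward = payoff(choice)
        counts[choice] += 1
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
    return max(options, key=lambda o: estimates[o])

# Toy environment: option "b" pays off best on average, so most runs settle on it.
payoffs = {"a": 0.3, "b": 0.7, "c": 0.5}
best = search(lambda o: 1.0 if random.random() < payoffs[o] else 0.0, list(payoffs))
print(best)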

Complementary cognition could lie at the core of explaining the exceptional level of cultural adaptation in our species and provides an explanatory framework for the emergence of language. Language can be viewed as evolving both as a means of facilitating cooperative search and as an inheritance mechanism for sharing the more complex results of complementary cognitive search. Language is viewed as an integral part of the system of complementary cognition.

The theory of complementary cognition brings together observations from disparate disciplines, showing that they can be viewed as various faces of the same underlying phenomenon.

Dr Taylor continued: “For example, a form of cognition currently viewed as a disorder, dyslexia, is shown to be a neurocognitive specialisation whose nature in turn predicts that our species evolved in a highly variable environment. This concurs with the conclusions of many other disciplines including palaeoarchaeological evidence confirming that the crucible of our species’ evolution was highly variable.”

Nick Posford, CEO, British Dyslexia Association said, “As the leading charity for dyslexia, we welcome Dr Helen Taylor’s ground-breaking research on the evolution of complementary cognition. Whilst our current education and work environments are often not designed to make the most of dyslexia-associated thinking, we hope this research provides a starting point for further exploration of the economic, cultural and social benefits the whole of society can gain from the unique abilities of people with dyslexia.”

At the same time, this may also provide insights into understanding the kind of cumulative cultural evolution seen in our species. Specialisation in complementary search strategies and cooperatively adapting would have vastly increased the ability of human groups to produce adaptive knowledge, enabling us to continually adapt to highly variable conditions. But in periods of greater stability and abundance when adaptive knowledge did not become obsolete at such a rate, it would have instead accumulated, and as such Complementary Cognition may also be a key factor in explaining cumulative cultural evolution.

Complementary cognition has enabled us to adapt to different environments, and may be at the heart of our species’ success, enabling us to adapt much faster and more effectively than any other highly complex organism. However, this may also be our species’ greatest vulnerability.

Dr Taylor concluded: “The impact of human activity on the environment is the most pressing and stark example of this. The challenge of collaborating and cooperatively adapting at scale creates many difficulties and we may have unwittingly put in place a number of cultural systems and practices, particularly in education, which are undermining our ability to adapt. These self-imposed limitations disrupt our complementary cognitive search capability and may restrict our capacity to find and act upon innovative and creative solutions.”

“Complementary cognition should be seen as a starting point in exploring a rich area of human evolution and as a valuable tool in helping to create an adaptive and sustainable society. Our species may owe our spectacular technological and cultural achievements to neurocognitive specialisation and cooperative cognitive search, but our adaptive success so far may belie the importance of attaining an equilibrium of approaches. If this system becomes maladjusted, it can quickly lead to equally spectacular failures to adapt – and to survive, it is critical that this system be explored and understood further.”

Humans Are Evolving Faster Than Ever. The Reason Is Not Genetic, Study Claims (Science Alert)

sciencealert.com

Cameron Duke, Live Science – 15 JUNE 2021


At the mercy of natural selection since the dawn of life, our ancestors adapted, mated and died, passing on tiny genetic mutations that eventually made humans what we are today. 

But evolution isn’t bound strictly to genes anymore, a new study suggests. Instead, human culture may be driving evolution faster than genetic mutations can work.

In this conception, evolution no longer requires that genetic mutations conferring a survival advantage be passed on and become widespread. Instead, learned behaviors passed on through culture are the “mutations” that provide survival advantages.

This so-called cultural evolution may now shape humanity’s fate more strongly than natural selection, the researchers argue.

“When a virus attacks a species, it typically becomes immune to that virus through genetic evolution,” study co-author Zach Wood, a postdoctoral researcher in the School of Biology and Ecology at the University of Maine, told Live Science.

Such evolution works slowly, as those who are more susceptible die off and only those who survive pass on their genes. 

But nowadays, humans mostly don’t need to adapt to such threats genetically. Instead, we adapt by developing vaccines and other medical interventions, which are not the results of one person’s work but rather of many people building on the accumulated “mutations” of cultural knowledge.

By developing vaccines, human culture improves its collective “immune system,” said study co-author Tim Waring, an associate professor of social-ecological systems modeling at the University of Maine.

And sometimes, cultural evolution can lead to genetic evolution. “The classic example is lactose tolerance,” Waring told Live Science. “Drinking cow’s milk began as a cultural trait that then drove the [genetic] evolution of a group of humans.”

In that case, cultural change preceded genetic change, not the other way around. 

The concept of cultural evolution began with the father of evolution himself, Waring said. Charles Darwin understood that behaviors could evolve and be passed to offspring just as physical traits are, but scientists in his day believed that changes in behaviors were inherited. For example, if a mother had a trait that inclined her to teach a daughter to forage for food, she would pass on this inherited trait to her daughter. In turn, her daughter might be more likely to survive, and as a result, that trait would become more common in the population. 

Waring and Wood argue in their new study, published June 2 in the journal Proceedings of the Royal Society B, that at some point in human history, culture began to wrest evolutionary control from our DNA. And now, they say, cultural change is allowing us to evolve in ways biological change alone could not.

Here’s why: Culture is group-oriented, and people in those groups talk to, learn from and imitate one another. These group behaviors allow people to pass on adaptations they learned through culture faster than genes can transmit similar survival benefits.

An individual can learn skills and information from a nearly unlimited number of people in a small amount of time and, in turn, spread that information to many others. And the more people available to learn from, the better. Large groups solve problems faster than smaller groups, and intergroup competition stimulates adaptations that might help those groups survive.

As ideas spread, cultures develop new traits.

In contrast, a person inherits genetic information from only two parents and racks up relatively few random mutations in their eggs or sperm, which take about 20 years to be passed on to their small handful of children. That’s just a much slower pace of change.
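
To make the pace difference concrete, here is a minimal toy comparison in Python. It is not a model from the study, and the numbers are invented: one function spreads an advantageous variant only from parents to children, once per roughly 20-year generation, while the other lets a useful practice be copied from anyone in the group.

def genetic_spread(p, advantage, generations):
    # One selection update per generation: carriers of the variant
    # leave (1 + advantage) times as many offspring as non-carriers.
    for _ in range(generations):
        p = p * (1 + advantage) / (1 + p * advantage)
    return p

def cultural_spread(p, copy_rate, rounds):
    # Each round, people without the practice can copy it from anyone who
    # already has it, so adoption grows with how common it already is.
    for _ in range(rounds):
        p = p + (1 - p) * copy_rate * p
    return p

p0 = 0.01  # the variant or practice starts out rare
print(genetic_spread(p0, advantage=0.05, generations=10))  # ~0.016 after 10 generations (~200 years)
print(cultural_spread(p0, copy_rate=0.5, rounds=10))       # ~0.4 after just 10 rounds of copying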

“This theory has been a long time coming,” said Paul Smaldino, an associate professor of cognitive and information sciences at the University of California, Merced who was not affiliated with this study. “People have been working for a long time to describe how evolutionary biology interacts with culture.”

It’s possible, the researchers suggest, that the appearance of human culture represents a key evolutionary milestone.

“Their big argument is that culture is the next evolutionary transition state,” Smaldino told Live Science.

Throughout the history of life, key transition states have had huge effects on the pace and direction of evolution. The evolution of cells with DNA was a big transitional state, and then when larger cells with organelles and complex internal structures arrived, it changed the game again. Cells coalescing into plants and animals was another big sea change, as was the evolution of sex, the transition to life on land and so on.

Each of these events changed the way evolution acted, and now humans might be in the midst of yet another evolutionary transformation. We might still evolve genetically, but that may not control human survival very much anymore.

“In the very long term, we suggest that humans are evolving from individual genetic organisms to cultural groups which function as superorganisms, similar to ant colonies and beehives,” Waring said in a statement.

But genetics drives bee colonies, while the human superorganism will exist in a category all its own. What that superorganism looks like in the distant future is unclear, but it will likely take a village to figure it out. 

Inpe supercomputer to be shut down, affecting climate forecasts (Tecmundo)

tecmundo.com.br

Giovanna Fantinato, 14/06/2021


In August, the National Institute for Space Research (Inpe) is expected to shut down its supercomputer, Tupã, which is responsible for forecasting the weather, issuing climate alerts, and collecting and monitoring data for research and scientific development.

According to the institute, the shutdown, the first in its history, will be carried out for lack of funds. This year Inpe received its smallest budget ever from the federal government, totaling R$ 44.7 million, out of the R$ 76 million originally earmarked. For comparison, the supercomputer alone consumes R$ 5 million a year in electricity.

In response, the Brazilian Institute for Environmental Protection (Proam) sent a document to the Public Prosecutor’s Office requesting that monitoring be maintained and that an urgent crisis-management plan be drawn up. The same document was also sent to the Federal Court of Accounts (TCU) and to the public defenders’ offices of the Southeast, South and Center-West regions.

Consequences

“It is unacceptable that at a moment like this, facing the water crisis expected in the second half of the year, with rising energy prices and the risk of water rationing, the supercomputer should be shut down on the grounds of a lack of funds,” says Carlos Bocuhy, president of Proam.

Yara Schaeffer-Novelli, a professor at the University of São Paulo, explains that the shutdown will be extremely damaging to climate studies, hampering even the monitoring of wildfires, droughts and climate change in Brazil.

A parliament to give Brazil’s Indigenous peoples a voice (Sete Margens)

setemargens.com


Demonstration in Brasília during the 2017 Acampamento Terra Livre (Free Land Camp). Photo © Guilherme Cavalli/Cimi.

An open Indigenous parliament, to give political voice and visibility to the country’s 305 original peoples, is the goal of Parlaíndio, founded this month in Brazil, announced this Wednesday, May 26, and set to hold monthly assemblies.

Parlaíndio brings together Brazilian Indigenous leaders and already has a web portal with photos of its leaders and news of assemblies and of events directly or indirectly related to Indigenous peoples.

Chief Raoni Metuktire, a prominent Brazilian Indigenous leader known around the world for his fight to preserve the Amazon and its native peoples, is its honorary president, while executive coordination is the responsibility of Chief Almir Narayamoga Suruí, the principal leader of the Paiter Suruí people of Rondônia, internationally recognized for his sustainability projects on Indigenous lands.

The first assembly of Parlaíndio Brasil, reports the Lusa news agency, cited by TSF, was held virtually last Thursday, May 20. On that occasion, the Indigenous leaders discussed the movement’s goals, as well as its structure and how the monthly assemblies will be run.

Among the main issues the movement will address, also according to Lusa, are deforestation and invasions of Indigenous lands, mining and hydroelectric projects on native peoples’ lands, illegal wildcat mining, mercury pollution of rivers, and the contamination of Indigenous and riverside populations.

Parlaíndio has already taken its first political decision: to file a lawsuit seeking the dismissal of the president of Funai (Fundação Nacional do Índio), the agency overseen by the Brazilian government whose mission should be to coordinate and implement policies to protect native peoples.

“It was approved unanimously that Parlaíndio Brasil will file a lawsuit seeking the dismissal of the president of Funai, police commissioner Marcelo Xavier, who at the head of the agency has not fulfilled its institutional mission of protecting and promoting the rights of the country’s Indigenous peoples,” the movement said in a statement.

At issue, according to the same source, is a request recently made by the president of Funai to the Federal Police (PF) to open an inquiry against Indigenous leaders, on the pretext of defamation of Jair Bolsonaro’s government.

“Funai is an agency that should provide assistance, protection and guarantees for the rights of Brazil’s Indigenous peoples and, at present, it does the opposite. The inquiry, ordered by the president of Funai, amounted to intimidation and criminalization,” explained Almir Suruí, executive coordinator of Parlaíndio Brasil.

Indigenous assembly. Photo from the Parlaíndio website.

He believes this structure will be important for building a policy to defend Indigenous peoples, after the 1988 Constitution enshrined a set of public policies and rights for Brazil’s Indigenous people. “One of our goals is to debate the construction of the present and the future based on a careful assessment of the past. We will also discuss public policies and provide input for the organizations that make up the Indigenous movement,” he added at the movement’s launch session.

The idea of creating the Indigenous Parliament of Brazil, as the Parlaíndio website explains, arose at a meeting of Indigenous leaders held in October 2017 at the Conselho Indigenista Missionário, a Catholic Church organization that supports Indigenous peoples.

According to the same information, there are currently more than 900,000 Indigenous people in Brazil, members of 305 distinct peoples who speak more than 180 languages, according to Parlaíndio data (about which Fernando Alves’s Outros Sinais commentary, broadcast on TSF this Thursday the 27th, can be heard).

More and more poor and Indigenous people in Manaus

Paolo Maria Braghini, a Franciscan in Manaus helping poor families. Photo © ACN Portugal.

This news comes at the same time as a complaint by a Franciscan Catholic friar, according to whom many Indigenous people and others from the interior of Amazonas are arriving in Manaus, the state capital, with nothing to live on.

“We have families in the outskirts who have nothing to live on. Many came from the interior of the country and arrived here hoping to find food in the city. But here they find only hunger and unemployment. To make matters worse, they now don’t even have a vegetable garden to tend or the river to fish in,” says Father Paolo Maria Braghini, an Italian Capuchin Franciscan, quoted by Ajuda à Igreja que Sofre (Aid to the Church in Need).

“Amid so much poverty, we chose certain locations on the outskirts and, with the help of local community leaders, identified the neediest families,” explains Friar Paolo, describing how the Franciscan community is trying to ease the situation.

Manaus, one of the main financial, industrial and economic centers of the entire northern region, has more than two million inhabitants and continues to attract people from across the region. The city, which already had many pockets of poverty, saw the situation worsen with the pandemic of the new coronavirus and the collapse of its health services.

The poor and Indigenous populations of Amazonas have been among the groups hardest hit by the lack of infrastructure. In January, at one of the peaks of the crisis, the bishop of Manaus went so far as to appeal for oxygen to be sent to the hospitals.

UMaine researchers: Culture drives human evolution more than genetics (Eureka Alert!)

News Release 2-Jun-2021

University of Maine

Research News

In a new study, University of Maine researchers found that culture helps humans adapt to their environment and overcome challenges better and faster than genetics.

After conducting an extensive review of the literature and evidence of long-term human evolution, scientists Tim Waring and Zach Wood concluded that humans are experiencing a “special evolutionary transition” in which the importance of culture, such as learned knowledge, practices and skills, is surpassing the value of genes as the primary driver of human evolution.

Culture is an under-appreciated factor in human evolution, Waring says. Like genes, culture helps people adjust to their environment and meet the challenges of survival and reproduction. Culture, however, does so more effectively than genes because the transfer of knowledge is faster and more flexible than the inheritance of genes, according to Waring and Wood.

Culture is a stronger mechanism of adaptation for a couple of reasons, Waring says. It’s faster: gene transfer occurs only once a generation, while cultural practices can be rapidly learned and frequently updated. Culture is also more flexible than genes: gene transfer is rigid and limited to the genetic information of two parents, while cultural transmission is based on flexible human learning and effectively unlimited with the ability to make use of information from peers and experts far beyond parents. As a result, cultural evolution is a stronger type of adaptation than old genetics.

Waring, an associate professor of social-ecological systems modeling, and Wood, a postdoctoral research associate with the School of Biology and Ecology, have just published their findings in a literature review in the Proceedings of the Royal Society B, the flagship biological research journal of The Royal Society in London.

“This research explains why humans are such a unique species. We evolve both genetically and culturally over time, but we are slowly becoming ever more cultural and ever less genetic,” Waring says.

Culture has influenced how humans survive and evolve for millennia. According to Waring and Wood, the combination of both culture and genes has fueled several key adaptations in humans such as reduced aggression, cooperative inclinations, collaborative abilities and the capacity for social learning. Increasingly, the researchers suggest, human adaptations are steered by culture, and require genes to accommodate.

Waring and Wood say culture is also special in one important way: it is strongly group-oriented. Factors like conformity, social identity and shared norms and institutions — factors that have no genetic equivalent — make cultural evolution very group-oriented, according to researchers. Therefore, competition between culturally organized groups propels adaptations such as new cooperative norms and social systems that help groups survive better together.

According to researchers, “culturally organized groups appear to solve adaptive problems more readily than individuals, through the compounding value of social learning and cultural transmission in groups.” Cultural adaptations may also occur faster in larger groups than in small ones.

With groups primarily driving culture and culture now fueling human evolution more than genetics, Waring and Wood found that evolution itself has become more group-oriented.

“In the very long term, we suggest that humans are evolving from individual genetic organisms to cultural groups which function as superorganisms, similar to ant colonies and beehives,” Waring says. “The ‘society as organism’ metaphor is not so metaphorical after all. This insight can help society better understand how individuals can fit into a well-organized and mutually beneficial system. Take the coronavirus pandemic, for example. An effective national epidemic response program is truly a national immune system, and we can therefore learn directly from how immune systems work to improve our COVID response.”

Waring is a member of the Cultural Evolution Society, an international research network that studies the evolution of culture in all species. He applies cultural evolution to the study of sustainability in social-ecological systems and cooperation in organizational evolution.

Wood works in the UMaine Evolutionary Applications Laboratory managed by Michael Kinnison, a professor of evolutionary applications. His research focuses on eco-evolutionary dynamics, particularly rapid evolution during trophic cascades.

The professionals who predict the future for a living (MIT Technology Review)

technologyreview.com

Everywhere from business to medicine to the climate, forecasting the future is a complex and absolutely critical job. So how do you do it—and what comes next?

Bobbie Johnson

February 26, 2020


Inez Fung

Professor of atmospheric science, University of California, Berkeley

Credit: Leah Fasten

Prediction for 2030: We’ll light up the world… safely

I’ve spoken to people who want climate model information, but they’re not really sure what they’re asking me for. So I say to them, “Suppose I tell you that some event will happen with a probability of 60% in 2030. Will that be good enough for you, or will you need 70%? Or would you need 90%? What level of information do you want out of climate model projections in order to be useful?”

I joined Jim Hansen’s group in 1979, and I was there for all the early climate projections. And the way we thought about it then, those things are all still totally there. What we’ve done since then is add richness and higher resolution, but the projections are really grounded in the same kind of data, physics, and observations.

Still, there are things we’re missing. We still don’t have a real theory of precipitation, for example. But there are two exciting things happening there. One is the availability of satellite observations: looking at the cloud is still not totally utilized. The other is that there used to be no way to get regional precipitation patterns through history—and now there is. Scientists found these caves in China and elsewhere, and they go in, look for a nice little chamber with stalagmites, and then they chop them up and send them back to the lab, where they do fantastic uranium-thorium dating and measure oxygen isotopes in calcium carbonate. From there they can interpret a record of historic rainfall. The data are incredible: we have got over half a million years of precipitation records all over Asia.

I don’t see us reducing fossil fuels by 2030. I don’t see us reducing CO2 or atmospheric methane. Some 1.2 billion people in the world right now have no access to electricity, so I’m looking forward to the growth in alternative energy going to parts of the world that have no electricity. That’s important because it’s education, health, everything associated with a Western standard of living. That’s where I’m putting my hopes.

Credit: Dvora Photography

Anne Lise Kjaer

Futurist, Kjaer Global, London

Prediction for 2030: Adults will learn to grasp new ideas

As a kid I wanted to become an archaeologist, and I did in a way. Archaeologists find artifacts from the past and try to connect the dots and tell a story about how the past might have been. We do the same thing as futurists; we use artifacts from the present and try to connect the dots into interesting narratives in the future.

When it comes to the future, you have two choices. You can sit back and think “It’s not happening to me” and build a great big wall to keep out all the bad news. Or you can build windmills and harness the winds of change.

A lot of companies come to us and think they want to hear about the future, but really it’s just an exercise for them—let’s just tick that box, do a report, and put it on our bookshelf.

So we have a little test for them. We do interviews, we ask them questions; then we use a model called a Trend Atlas that considers both the scientific dimensions of society and the social ones. We look at the trends in politics, economics, societal drivers, technology, environment, legislation—how does that fit with what we know currently? We look back maybe 10, 20 years: can we see a little bit of a trend and try to put that into the future?

What’s next? Obviously with technology we can educate much better than we could in the past. But it’s a huge opportunity to educate the parents of the next generation, not just the children. Kids are learning about sustainability goals, but what about the people who actually rule our world?

Credit: Courtesy photo

Philip Tetlock

Coauthor of Superforecasting and professor, University of Pennsylvania

Prediction for 2030: We’ll get better at being uncertain

At the Good Judgment Project, we try to track the accuracy of commentators and experts in domains in which it’s usually thought impossible to track accuracy. You take a big debate and break it down into a series of testable short-term indicators. So you could take a debate over whether strong forms of artificial intelligence are going to cause major dislocations in white-collar labor markets by 2035, 2040, 2050. A lot of discussion already occurs at that level of abstraction, but from our point of view, it’s more useful to break it down and to say: If we were on a long-term trajectory toward an outcome like that, what sorts of things would we expect to observe in the short term? So we started this off in 2015, and in 2016 AlphaGo defeated people in Go. But then other things didn’t happen: driverless Ubers weren’t picking people up for fares in any major American city at the end of 2017. Watson didn’t defeat the world’s best oncologists in a medical diagnosis tournament. So I don’t think we’re on a fast track toward the singularity, put it that way.

Forecasts have the potential to be either self-fulfilling or self-negating. Y2K was arguably a self-negating forecast. But it’s possible to build that into a forecasting tournament by asking conditional forecasting questions: i.e., How likely is X conditional on our doing this or doing that?

What I’ve seen over the last 10 years, and it’s a trend that I expect will continue, is an increasing openness to the quantification of uncertainty. I think there’s a grudging, halting, but cumulative movement toward thinking about uncertainty, and more granular and nuanced ways that permit keeping score.
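
Tournaments like the ones Tetlock describes commonly keep score with the Brier score, the mean squared error between a stated probability and what actually happened (lower is better). The short Python sketch below is only an illustration; the forecasts and outcomes are invented.

def brier(forecast_probs, outcomes):
    # forecast_probs: predicted probabilities that each event happens
    # outcomes: 1 if the event happened, 0 if it did not
    return sum((p - o) ** 2 for p, o in zip(forecast_probs, outcomes)) / len(outcomes)

sharp_forecaster = [0.9, 0.8, 0.95, 0.1]  # confident and mostly right
vague_forecaster = [0.6, 0.6, 0.6, 0.4]   # hedges everything near 50/50
what_happened = [1, 1, 1, 0]

print(brier(sharp_forecaster, what_happened))  # ~0.016
print(brier(vague_forecaster, what_happened))  # 0.16, roughly ten times worse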

Credit: Ryan Young

Keith Chen

Associate professor of economics, UCLA

Prediction for 2030: We’ll be more—and less—private

When I worked on Uber’s surge pricing algorithm, the problem it was built to solve was very coarse: we were trying to convince drivers to put in extra time when they were most needed. There were predictable times—like New Year’s—when we knew we were going to need a lot of people. The deeper problem was that this was a system with basically no control. It’s like trying to predict the weather. Yes, the amount of weather data that we collect today—temperature, wind speed, barometric pressure, humidity data—is 10,000 times greater than what we were collecting 20 years ago. But we still can’t predict the weather 10,000 times further out than we could back then. And social movements—even in a very specific setting, such as where riders want to go at any given point in time—are, if anything, even more chaotic than weather systems.

These days what I’m doing is a little bit more like forensic economics. We look to see what we can find and predict from people’s movement patterns. We’re just using simple cell-phone data like geolocation, but even just from movement patterns, we can infer salient information and build a psychological dimension of you. What terrifies me is I feel like I have much worse data than Facebook does. So what are they able to understand with their much better information?

I think the next big social tipping point is people actually starting to really care about their privacy. It’ll be like smoking in a restaurant: it will quickly go from causing outrage when people want to stop it to suddenly causing outrage if somebody does it. But at the same time, by 2030 almost every Chinese citizen will be completely genotyped. I don’t quite know how to reconcile the two.

Credit: Sarah Deragon

Annalee Newitz

Science fiction and nonfiction author, San Francisco

Prediction for 2030: We’re going to see a lot more humble technology

Every era has its own ideas about the future. Go back to the 1950s and you’ll see that people fantasized about flying cars. Now we imagine bicycles and green cities where cars are limited, or where cars are autonomous. We have really different priorities now, so that works its way into our understanding of the future.

Science fiction writers can’t actually make predictions. I think of science fiction as engaging with questions being raised in the present. But what we can do, even if we can’t say what’s definitely going to happen, is offer a range of scenarios informed by history.

There are a lot of myths about the future that people believe are going to come true right now. I think a lot of people—not just science fiction writers but people who are working on machine learning—believe that relatively soon we’re going to have a human-equivalent brain running on some kind of computing substrate. This is as much a reflection of our time as it is what might actually happen.

It seems unlikely that a human-equivalent brain in a computer is right around the corner. But we live in an era where a lot of us feel like we live inside computers already, for work and everything else. So of course we have fantasies about digitizing our brains and putting our consciousness inside a machine or a robot.

I’m not saying that those things could never happen. But they seem much more closely allied to our fantasies in the present than they do to a real technical breakthrough on the horizon.

We’re going to have to develop much better technologies around disaster relief and emergency response, because we’ll be seeing a lot more floods, fires, storms. So I think there is going to be a lot more work on really humble technologies that allow you to take your community off the grid, or purify your own water. And I don’t mean in a creepy survivalist way; I mean just in a this-is-how-we-are-living-now kind of way.

Credit: Noah Willman

Finale Doshi-Velez

Associate professor of computer science, Harvard

Prediction for 2030: Humans and machines will make decisions together

In my lab, we’re trying to answer questions like “How might this patient respond to this antidepressant?” or “How might this patient respond to this vasopressor?” So we get as much data as we can from the hospital. For a psychiatric patient, we might have everything about their heart disease, kidney disease, cancer; for a blood pressure management recommendation for the ICU, we have all their oxygen information, their lactate, and more.

Some of it might be relevant to making predictions about their illnesses, some not, and we don’t know which is which. That’s why we ask for the large data set with everything.

There’s been about a decade of work trying to get unsupervised machine-learning models to do a better job at making these predictions, and none worked really well. The breakthrough for us was when we found that all the previous approaches for doing this were wrong in the exact same way. Once we untangled all of this, we came up with a different method.

We also realized that even if our ability to predict what drug is going to work is not always that great, we can more reliably predict what drugs are not going to work, which is almost as valuable.

I’m excited about combining humans and AI to make predictions. Let’s say your AI has an error rate of 70% and your human is also only right 70% of the time. Combining the two is difficult, but if you can fuse their successes, then you should be able to do better than either system alone. How to do that is a really tough, exciting question.
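
One toy way to see why fusing two imperfect predictors can beat either alone: assume, purely for illustration (this is not Doshi-Velez’s method), a human and an AI that are each right 70% of the time, make their errors independently, and are trusted only when they agree. A short Python simulation:

import random

random.seed(0)

ACCURACY = 0.70   # assumed accuracy of both the human and the AI
TRIALS = 100_000

agreements = correct_agreements = 0
for _ in range(TRIALS):
    truth = random.random() < 0.5                           # a binary outcome
    human = truth if random.random() < ACCURACY else not truth
    ai = truth if random.random() < ACCURACY else not truth
    if human == ai:                                         # act only when they agree
        agreements += 1
        correct_agreements += (human == truth)

print(f"both agree on {agreements / TRIALS:.0%} of cases")                 # ~58%
print(f"and are right on {correct_agreements / agreements:.0%} of those")  # ~84%, better than 70% alone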

All these predictive models were built and deployed and people didn’t think enough about potential biases. I’m hopeful that we’re going to have a future where these human-machine teams are making decisions that are better than either alone.

Credit: Guillaume Simoneau

Abdoulaye Banire Diallo

Professor, director of the bioinformatics lab, University of Quebec at Montreal

Prediction for 2030: Machine-based forecasting will be regulated

When a farmer in Quebec decides whether to inseminate a cow or not, it might depend on the expectation of milk that will be produced every day for one year, two years, maybe three years after that. Farms have management systems that capture the data and the environment of the farm. I’m involved in projects that add a layer of genetic and genomic data to help forecastingto help decision makers like the farmer to have a full picture when they’re thinking about replacing cows, improving management, resilience, and animal welfare.

With the emergence of machine learning and AI, what we’re showing is that we can help tackle problems in a way that hasn’t been done before. We are adapting it to the dairy sector, where we’ve shown that some decisions can be anticipated 18 months in advance just by forecasting based on the integration of this genomic data. I think in some areas such as plant health we have only achieved 10% or 20% of our capacity to improve certain models.

Until now AI and machine learning have been associated with domain expertise. It’s not a public-wide thing. But less than 10 years from now they will need to be regulated. I think there are a lot of challenges for scientists like me to try to make those techniques more explainable, more transparent, and more auditable.

This story was part of our March 2020 issue.

If DNA is like software, can we just fix the code? (MIT Technology Review)

technologyreview.com

In a race to cure his daughter, a Google programmer enters the world of hyper-personalized drugs.

Erika Check Hayden

February 26, 2020


To create atipeksen, Yu borrowed from recent biotech successes like gene therapy. Some new drugs, including cancer therapies, treat disease by directly manipulating genetic information inside a patient’s cells. Now doctors like Yu find they can alter those treatments as if they were digital programs. Change the code, reprogram the drug, and there’s a chance of treating many genetic diseases, even those as unusual as Ipek’s.

The new strategy could in theory help millions of people living with rare diseases, the vast majority of which are caused by genetic typos and have no treatment. US regulators say last year they fielded more than 80 requests to allow genetic treatments for individuals or very small groups, and that they may take steps to make tailor-made medicines easier to try. New technologies, including custom gene-editing treatments using CRISPR, are coming next.

“I never thought we would be in a position to even contemplate trying to help these patients,” says Stanley Crooke, a biotechnology entrepreneur and founder of Ionis Pharmaceuticals, based in Carlsbad, California. “It’s an astonishing moment.”

Antisense drug

Right now, though, insurance companies won’t pay for individualized gene drugs, and no company is making them (though some plan to). Only a few patients have ever gotten them, usually after heroic feats of arm-twisting and fundraising. And it’s no mistake that programmers like Mehmet Kuzu, who works on data privacy, are among the first to pursue individualized drugs. “As computer scientists, they get it. This is all code,” says Ethan Perlstein, chief scientific officer at the Christopher and Dana Reeve Foundation.

A nonprofit, the A-T Children’s Project, funded most of the cost of designing and making Ipek’s drug. For Brad Margus, who created the foundation in 1993 after his two sons were diagnosed with A-T, the change between then and now couldn’t be more dramatic. “We’ve raised so much money, we’ve funded so much research, but it’s so frustrating that the biology just kept getting more and more complex,” he says. “Now, we’re suddenly presented with this opportunity to just fix the problem at its source.”

Ipek was only a few months old when her father began looking for a cure. A geneticist friend sent him a paper describing a possible treatment for her exact form of A-T, and Kuzu flew from Sunnyvale, California, to Los Angeles to meet the scientists behind the research. But they said no one had tried the drug in people: “We need many more years to make this happen,” they told him.

Timothy Yu, of Boston Children’s Hospital.
Credit: Courtesy photo

Kuzu didn’t have years. After he returned from Los Angeles, Margus handed him a thumb drive with a video of a talk by Yu, a doctor at Boston Children’s Hospital, who described how he planned to treat a young girl with Batten disease (a different neurodegenerative condition) in what press reports would later dub “a stunning illustration of personalized genomic medicine.” Kuzu realized Yu was using the very same gene technology the Los Angeles scientists had dismissed as a pipe dream.

That technology is called “antisense.” Inside a cell, DNA encodes information to make proteins. Between the DNA and the protein, though, come messenger molecules called RNA that ferry the gene information out of the nucleus. Think of antisense as mirror-image molecules that stick to specific RNA messages, letter for letter, blocking them from being made into proteins. It’s possible to silence a gene this way, and sometimes to overcome errors, too.
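
As a rough sketch of that “letter for letter” pairing (illustrative only: the sequence below is hypothetical, and real antisense drugs such as nusinersen and milasen use chemically modified backbones), the antisense sequence for a stretch of mRNA is its reverse complement. In Python:

PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(mrna_fragment):
    # Read the target 5'->3', pair every base (A-U, G-C), and reverse it so
    # the antisense strand is also written 5'->3'.
    return "".join(PAIR[base] for base in reversed(mrna_fragment))

target = "AUGGCUUACGAA"    # hypothetical mRNA stretch
print(antisense(target))   # -> UUCGUAAGCCAU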

Though the first antisense drugs appeared 20 years ago, the concept achieved its first blockbuster success only in 2016. That’s when a drug called nusinersen, made by Ionis, was approved to treat children with spinal muscular atrophy, a genetic disease that would otherwise kill them by their second birthday.

Yu, a specialist in gene sequencing, had not worked with antisense before, but once he’d identified the genetic error causing Batten disease in his young patient, Mila Makovec, it became apparent to him he didn’t have to stop there. If he knew the gene error, why not create a gene drug? “All of a sudden a lightbulb went off,” Yu says. “Couldn’t one try to reverse this? It was such an appealing idea, and such a simple idea, that we basically just found ourselves unable to let that go.”

Yu admits it was bold to suggest his idea to Mila’s mother, Julia Vitarello. But he was not starting from scratch. In a demonstration of how modular biotech drugs may become, he based milasen on the same chemistry backbone as the Ionis drug, except he made Mila’s particular mutation the genetic target. Where it had taken decades for Ionis to perfect a drug, Yu now set a record: it took only eight months for him to make milasen, try it on animals, and convince the US Food and Drug Administration to let him inject it into Mila’s spine.

“What’s different now is that someone like Tim Yu can develop a drug with no prior familiarity with this technology,” says Art Krieg, chief scientific officer at Checkmate Pharmaceuticals, based in Cambridge, Massachusetts.

Source code

As word got out about milasen, Yu heard from more than a hundred families asking for his help. That’s put the Boston doctor in a tough position. Yu has plans to try antisense to treat a dozen kids with different diseases, but he knows it’s not the right approach for everyone, and he’s still learning which diseases might be most amenable. And nothing is ever simple—or cheap. Each new version of a drug can behave differently and requires costly safety tests in animals.

Kuzu had the advantage that the Los Angeles researchers had already shown antisense might work. What’s more, Margus agreed that the A-T Children’s Project would help fund the research. But it wouldn’t be fair to make the treatment just for Ipek if the foundation was paying for it. So Margus and Yu decided to test antisense drugs in the cells of three young A-T patients, including Ipek. Whichever kid’s cells responded best would get picked.

Ipek at play. Ipek may not survive past her 20s without treatment.
Credit: Matthew Monteith

While he waited for the test results, Kuzu raised about $200,000 from friends and coworkers at Google. One day, an email landed in his in-box from another Google employee who was fundraising to help a sick child. As he read it, Kuzu felt a jolt of recognition: his coworker, Jennifer Seth, was also working with Yu.

Seth’s daughter Lydia was born in December 2018. The baby, with beautiful chubby cheeks, carries a mutation that causes seizures and may lead to severe disabilities. Seth’s husband Rohan, a well-connected Silicon Valley entrepreneur, refers to the problem as a “tiny random mutation” in her “source code.” The Seths have raised more than $2 million, much of it from co-workers.

Custom drug

By then, Yu was ready to give Kuzu the good news: Ipek’s cells had responded the best. So last September the family packed up and moved from California to Cambridge, Massachusetts, so Ipek could start getting atipeksen. The toddler got her first dose this January, under general anesthesia, through a lumbar puncture into her spine.

After a year, the Kuzus hope to learn whether or not the drug is helping. Doctors will track her brain volume and measure biomarkers in Ipek’s cerebrospinal fluid as a readout of how her disease is progressing. And a team at Johns Hopkins will help compare her movements with those of other kids, both with and without A-T, to observe whether the expected disease symptoms are delayed.

One serious challenge facing gene drugs for individuals is that short of a healing miracle, it may ultimately be impossible to be sure they really work. That’s because the speed with which diseases like A-T progress can vary widely from person to person. Proving a drug is effective, or revealing that it’s a dud, almost always requires collecting data from many patients, not just one. “It’s important for parents who are ready to pay anything, try anything, to appreciate that experimental treatments often don’t work,” says Holly Fernandez Lynch, a lawyer and ethicist at the University of Pennsylvania. “There are risks. Trying one could foreclose other options and even hasten death.”

Kuzu says his family weighed the risks and benefits. “Since this is the first time for this kind of drug, we were a little scared,” he says. But, he concluded, “there’s nothing else to do. This is the only thing that might give hope to us and the other families.”

Another obstacle to ultra-personal drugs is that insurance won’t pay for them. And so far, pharmaceutical companies aren’t interested either. They prioritize drugs that can be sold thousands of times, but as far as anyone knows, Ipek is the only person alive with her exact mutation. That leaves families facing extraordinary financial demands that only the wealthy, lucky, or well connected can meet. Developing Ipek’s treatment has already cost $1.9 million, Margus estimates.

Some scientists think agencies such as the US National Institutes of Health should help fund the research, and will press their case at a meeting in Bethesda, Maryland, in April. Help could also come from the Food and Drug Administration, which is developing guidelines that may speed the work of doctors like Yu. The agency will receive updates on Mila and other patients if any of them experience severe side effects.

The FDA is also considering giving doctors more leeway to modify genetic drugs to try in new patients without securing new permissions each time. Peter Marks, director of the FDA’s Center for Biologics Evaluation and Research, likens traditional drug manufacturing to factories that mass-produce identical T-shirts. But, he points out, it’s now possible to order an individual basic T-shirt embroidered with a company logo. So drug manufacturing could become more customized too, Marks believes.

Custom drugs carrying exactly the message a sick kid’s body needs? If we get there, credit will go to companies like Ionis that developed the new types of gene medicine. But it should also go to the Kuzus—and to Brad Margus, Rohan Seth, Julia Vitarello, and all the other parents who are trying to save their kids. In doing so, they are turning hyper-personalized medicine into reality.

Erika Check Hayden is director of the science communication program at the University of California, Santa Cruz.

This story was part of our March 2020 issue, the predictions issue.

An elegy for cash: the technology we might never replace (MIT Technology Review)

technologyreview.com

Cash is gradually dying out. Will we ever have a digital alternative that offers the same mix of convenience and freedom?

Mike Orcutt

January 3, 2020


If you’d rather keep all that to yourself, you’re in luck. The person in the store (or on the street corner) may remember your face, but as long as you didn’t reveal any identifying information, there is nothing that links you to the transaction.

This is a feature of physical cash that payment cards and apps do not have: freedom. Called “bearer instruments,” banknotes and coins are presumed to be owned by whoever holds them. We can use them to transact with another person without a third party getting in the way. Companies cannot build advertising profiles or credit ratings out of our data, and governments cannot track our spending or our movements. And while a credit card can be declined and a check mislaid, handing over money works every time, instantly.

We shouldn’t take this freedom for granted. Much of our commerce now happens online. It relies on banks and financial technology companies to serve as middlemen. Transactions are going digital in the physical world, too: electronic payment tools, from debit cards to Apple Pay to Alipay, are increasingly replacing cash. While notes and coins remain popular in many countries, including the US, Japan, and Germany, in others they are nearing obsolescence.

This trend has civil liberties groups worried. Without cash, there is “no chance for the kind of dignity-preserving privacy that undergirds an open society,” writes Jerry Brito, executive director of Coin Center, a policy advocacy group based in Washington, DC. In a recent report, Brito contends that we must “develop and foster electronic cash” that is as private as physical cash and doesn’t require permission to use.

The central question is who will develop and control the electronic payment systems of the future. Most of the existing ones, like Alipay, Zelle, PayPal, Venmo, and Kenya’s M-Pesa, are run by private firms. Afraid of leaving payments solely in their hands, many governments are looking to develop some sort of electronic stand-in for notes and coins. Meanwhile, advocates of stateless, ownerless cryptocurrencies like Bitcoin say they’re the only solution as surveillance-proof as cash—but can they work at large scales?

We tend to take it for granted that new technologies work better than old ones—safer, faster, more accurate, more efficient, more convenient. Purists may extol the virtues of vinyl records, but nobody can dispute that a digital music collection is easier to carry and sounds almost exactly as good. Cash is a paradox—a technology thousands of years old that may just prove impossible to re-create in a more advanced form.

In (government) money we trust?

We call banknotes and coins “cash,” but the term really refers to something more abstract: cash is essentially money that your government owes you. In the old days this was a literal debt. “I promise to pay the bearer on demand the sum of …” still appears on British banknotes, a notional guarantee that the Bank of England will hand over the same value in gold in exchange for your note. Today it represents the more abstract guarantee that you will always be able to use that note to pay for things.

The digits in your bank account, on the other hand, refer to what your bank owes you. When you go to an ATM, you are effectively converting the bank’s promise to pay into a government promise.

Most people would say they trust the government’s promise more, says Gabriel Söderberg, an economist at the Riksbank, the central bank of Sweden. Their bet—correct, in most countries—is that their government is much less likely to go bust.

That’s why it would be a problem if Sweden were to go completely “cashless,” Söderberg says. He and his colleagues fear that if people lose the option to convert their bank money to government money at will and use it to pay for whatever they need, they might start to lose trust in the whole money system. A further worry is that if the private sector is left to dominate digital payments, people who can’t or won’t use these systems could be shut out of the economy.

This is fast becoming more than just a thought experiment in Sweden. Nearly everyone there uses a mobile app called Swish to pay for things. Economists have estimated that retailers in Sweden could completely stop accepting cash by 2023.

Creating an electronic version of Sweden’s sovereign currency—an “e-krona”—could mitigate these problems, Söderberg says. If the central bank were to issue digital money, it would design it to be a public good, not a profit-making product for a corporation. “Easily accessible, simple and user-friendly versions could be developed for those who currently have difficulty with digital technology,” the bank asserted in a November report covering Sweden’s payment landscape.

The Riksbank plans to develop and test an e-krona prototype. It has examined a number of technologies that might underlie it, including cryptocurrency systems like Bitcoin. But the central bank has also called on the Swedish government to lead a broad public inquiry into whether such a system should ever go live. “In the end, this decision is too big for a central bank alone, at least in the Swedish context,” Söderberg says.

The death of financial privacy

China, meanwhile, appears to have made its decision: the digital renminbi is coming. Mu Changchun, head of the People’s Bank of China’s digital currency research institute, said in September that the currency, which the bank has been working on for years, is “close to being out.” In December, a local news report suggested that the PBOC is nearly ready to start tests in the cities of Shenzhen and Suzhou. And the bank has been explicit about its intention to use it to replace banknotes and coins.

Cash is already dying out on its own in China, thanks to Alipay and WeChat Pay, the QR-code-based apps that have become ubiquitous in just a few years. It’s been estimated that mobile payments made up more than 80% of all payments in China in 2018, up from less than 20% in 2013.

A street musician takes WeChat Pay. (AP Images)

It’s not clear how much access the government currently has to transaction data from WeChat Pay and Alipay. Once it issues a sovereign digital currency—which officials say will be compatible with those two services—it will likely have access to a lot more. Martin Chorzempa, a research fellow at the Peterson Institute for International Economics in Washington, DC, told the New York Times in October that the system will give the PBOC “extraordinary power and visibility into the financial system, more than any central bank has today.”

We don’t know for sure what technology the PBOC plans to use as the basis for its digital renminbi, but we have at least two revealing clues. First, the bank has been researching blockchain technology since 2014, and the government has called the development of this technology a priority. Second, Mu said in September that China’s system will bear similarities to Libra, the electronic currency Facebook announced last June. Indeed, PBOC officials have implied in public statements that the unveiling of Libra inspired them to accelerate the development of the digital renminbi, which has been in the works for years.

As currently envisioned, Libra will run on a blockchain, a type of accounting ledger that can be maintained by a network of computers instead of a single central authority. However, it will operate very differently from Bitcoin, the original blockchain system.

The computers in Bitcoin’s network use open-source software to automatically verify and record every single transaction. In the process, they generate a permanent public record of the currency’s entire transaction history: the blockchain. As envisioned, Libra’s network will do something similar. But whereas anyone with a computer and an internet connection can participate anonymously in Bitcoin’s network, the “nodes” that make up Libra’s network will be companies that have been vetted and given membership in a nonprofit association.
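To make that “permanent public record” concrete, here is a minimal Python sketch of a hash-chained ledger. It is only a toy, not Bitcoin’s actual protocol; real networks add digital signatures, proof-of-work, and peer-to-peer consensus, and the addresses and amounts below are invented.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class ToyLedger:
    """An append-only record in which each block commits to the previous one.

    Altering any earlier entry changes every hash that follows, so anyone
    holding a copy of the chain can detect the tampering.
    """
    def __init__(self):
        self.chain = [{"index": 0, "transactions": [], "prev_hash": "0" * 64}]

    def add_block(self, transactions):
        prev = self.chain[-1]
        self.chain.append({
            "index": prev["index"] + 1,
            "transactions": transactions,
            "prev_hash": block_hash(prev),
        })

    def verify(self) -> bool:
        """Recheck the whole history against the chained hashes."""
        return all(
            self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = ToyLedger()
ledger.add_block([{"from": "addr_a91f", "to": "addr_03bc", "amount": 0.5}])
ledger.add_block([{"from": "addr_03bc", "to": "addr_77de", "amount": 0.2}])
print(ledger.verify())   # True
ledger.chain[1]["transactions"][0]["amount"] = 100   # tamper with history
print(ledger.verify())   # False: the chain of hashes no longer matches
```

The invented addresses also hint at the pseudonymity discussed below: the ledger records strings, not identities.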

Unlike Bitcoin, which is notoriously volatile, Libra will be designed to maintain a stable value. To pull this off, the so-called Libra Association will be responsible for maintaining a reserve of government-issued currencies (the latest plan is for it to be half US dollars, with the other half composed of British pounds, euros, Japanese yen, and Singapore dollars). This reserve is supposed to serve as backing for the digital units of value.

Both Libra and the digital renminbi, however, face serious questions about privacy. To start with, it’s not clear if people will be able to use them anonymously.

With Bitcoin, although transactions are public, users don’t have to reveal who they really are; each person’s “address” on the public blockchain is just a random string of letters and numbers. But in recent years, law enforcement officials have grown skilled at combining public blockchain data with other clues to unmask people using cryptocurrencies for illicit purposes. Indeed, in a July blog post, Libra project head David Marcus argued that the currency would be a boon for law enforcement, since it would help “move more cash transactions—where a lot of illicit activities happen—to a digital network.”

As for the Chinese digital currency, Mu has said it will feature some level of anonymity. “We know the demand from the general public is to keep anonymity by using paper money and coins … we will give those people who demand it anonymity,” he said at a November conference in Singapore. “But at the same time we will keep the balance between ‘controllable anonymity’ and anti-money-laundering, CTF [counter-terrorist financing], and also tax issues, online gambling, and any electronic criminal activities,” he added. He did not, however, explain how that “balance” would work.

Sweden and China are leading the charge to issue consumer-focused electronic money, but according to John Kiff, an expert on financial stability for the International Monetary Fund, more than 30 countries have explored or are exploring the idea.  In some, the rationale is similar to Sweden’s: dwindling cash and a growing private-sector payments ecosystem. Others are countries where commercial banks have decided not to set up shop. Many see an opportunity to better monitor for illicit transactions. All will have to wrestle with the same thorny privacy issues that Libra and the digital renminbi are raising.

Robleh Ali, a research scientist at MIT’s Digital Currency Initiative, says digital currency systems from central banks may need to be designed so that the government can “consciously blind itself” to the information. Something like that might be technically possible thanks to cutting-edge cryptographic tools like zero-knowledge proofs, which are used in systems like Zcash to shield blockchain transaction information from public view.

However, there’s no evidence that any governments are even thinking about deploying tools like this. And regardless, can any government—even Sweden’s—really be trusted to blind itself?

Cryptocurrency: A workaround for freedom

That’s wishful thinking, says Alex Gladstein, chief strategy officer for the Human Rights Foundation. While you may trust your government or think you’ve got nothing to hide, that might not always remain true. Politics evolves, governments get pushed out by elections or other events, what constitutes a “crime” changes, and civil liberties are not guaranteed. “Financial privacy is not going to be gifted to you by your government, regardless of how ‘free’ they are,” Gladstein says. He’s convinced that it has to come in the form of a stateless, decentralized digital currency like Bitcoin.

In fact, “electronic cash” was what Bitcoin’s still-unknown inventor, the pseudonymous Satoshi Nakamoto, claimed to be trying to create (before disappearing). Eleven years into its life, Nakamoto’s technology still lacks some of the signature features of cash. It is difficult to use, transactions can take more than an hour to process, and the currency’s value can fluctuate wildly. And as already noted, the supposedly anonymous transactions it enables can sometimes be traced.

But in some places people just need something that works, however imperfectly. Take Venezuela. Cash in the crisis-ridden country is scarce, and the Venezuelan bolivar is constantly losing value to hyperinflation. Many Venezuelans seek refuge in US dollars, storing them under the proverbial (and literal) mattress, but that also makes them vulnerable to thieves.

What many people want is access to stable cash in digital form, and there’s no easy way to get that, says Alejandro Machado, cofounder of the Open Money Initiative. Owing to government-imposed capital controls, Venezuelan banks have largely been cut off from foreign banks. And due to restrictions by US financial institutions, digital money services like PayPal and Zelle are inaccessible to most people.  So a small number of tech-savvy Venezuelans have turned to a service called LocalBitcoins.

It’s like Craigslist, except that the only things for sale are bitcoins and bolivars. On Venezuela’s LocalBitcoins site, people advertise varying quantities of currency for sale at varying exchange rates. The site holds the money in escrow until trades are complete, and tracks the sellers’ reputations.

It’s not for the masses, but it’s “very effective” for people who can make it work, says Machado. For instance, he and his colleagues met a young woman who mines Bitcoin and keeps her savings in the currency. She doesn’t have a foreign bank account, so she’s willing to deal with the constant fluctuations in Bitcoin’s price. Using LocalBitcoins, she can cash out into bolivars whenever she needs them—to buy groceries, for example. “Niche power users” like this are “leveraging the best features of Bitcoin, which is to be an asset that is permissionless and that is very easy to trade electronically,” Machado says.

However, this is possible only because there are enough people using LocalBitcoins to create what finance people call “local liquidity,” meaning you can easily find a buyer for your bitcoins or bolivars. Bitcoin is the only cryptocurrency that has achieved this in Venezuela, says Machado, and it’s mostly thanks to LocalBitcoins.

This is a long way from the dream of cryptocurrency as a widely used substitute for stable, government-issued money. Most Venezuelans can’t use Bitcoin, and few merchants there even know what it is, much less how to accept it.

Still, it’s a glimpse of what a cryptocurrency can offer—a functional financial system that anyone can join and that offers the kind of freedom cash provides in most other places.

Decentralize this

Could something like Bitcoin ever be as easy to use and reliable as today’s cash is for everyone else? The answer is philosophical as well as technical.

To begin with, what does it even mean for something to be like Bitcoin? Central banks and corporations will adapt certain aspects of Bitcoin and apply them to their own ends. Will those be cryptocurrencies? Not according to purists, who say that though Libra or some future central bank-issued digital currency may run on blockchain technology, they won’t be cryptocurrencies because they will be under centralized control.

True cryptocurrencies are “decentralized”—they have no one entity in charge and no single points of failure, no weak spots that an adversary (including a government) could attack. With no middleman like a bank attesting that a transaction took place, each transaction has to be validated by the nodes in a cryptocurrency’s network, which can number many thousands. But this requires an immense expenditure of computing power, and it’s the reason Bitcoin transactions can take more than an hour to settle.

A currency like Libra wouldn’t have this problem, because only a few authorized entities would be able to operate nodes. The trade-off is that its users wouldn’t be able to trust those entities to guarantee their privacy, any more than they can trust a bank, a government, or Facebook.

Is it technically possible to achieve Bitcoin’s level of decentralization and the speed, scale, privacy, and ease of use that we’ve come to expect from traditional payment methods? That’s a problem many talented researchers are still trying to crack. But some would argue that shouldn’t necessarily be the goal.  

In a recent essay, Jill Carlson, cofounder of the Open Money Initiative, argued that perhaps decentralized cryptocurrency systems were “never supposed to go mainstream.” Rather, they were created explicitly for “censored transactions,” from paying for drugs or sex to supporting political dissidents or getting money out of countries with restrictive currency controls. Their slowness is inherent, not a design flaw; they “forsake scale, speed, and cost in favor of one key feature: censorship resistance.” A world in which they went mainstream would be “a very scary place indeed,” she wrote.

In summary, we have three avenues for the future of digital money, none of which offers the same mix of freedom and ease of use that characterizes cash. Private companies have an obvious incentive to monetize our data and pursue profits over public interest. Digital government money may still be used to track us, even by well-intentioned governments, and for less benign ones it’s a fantastic tool for surveillance. And cryptocurrency can prove useful when freedoms are at risk, but it likely won’t work at scale anytime soon, if ever.

How big a problem is this? That depends on where you live, how much you trust your government and your fellow citizens, and why you wish to use cash. And if you’d rather keep that to yourself, you’re in luck. For now.

What AI still can’t do (MIT Technology Review)

technologyreview.com

Brian Bergstein

February 19, 2020


Machine-learning systems can be duped or confounded by situations they haven’t seen before. A self-driving car gets flummoxed by a scenario that a human driver could handle easily. An AI system laboriously trained to carry out one task (identifying cats, say) has to be taught all over again to do something else (identifying dogs). In the process, it’s liable to lose some of the expertise it had in the original task. Computer scientists call this problem “catastrophic forgetting.”

These shortcomings have something in common: they exist because AI systems don’t understand causation. They see that some events are associated with other events, but they don’t ascertain which things directly make other things happen. It’s as if you knew that the presence of clouds made rain likelier, but you didn’t know clouds caused rain.

Elias Bareinboim: AI systems are clueless when it comes to causation.

Understanding cause and effect is a big aspect of what we call common sense, and it’s an area in which AI systems today “are clueless,” says Elias Bareinboim. He should know: as the director of the new Causal Artificial Intelligence Lab at Columbia University, he’s at the forefront of efforts to fix this problem.

His idea is to infuse artificial-intelligence research with insights from the relatively new science of causality, a field shaped to a huge extent by Judea Pearl, a Turing Award–winning scholar who considers Bareinboim his protégé.

As Bareinboim and Pearl describe it, AI’s ability to spot correlations—e.g., that clouds make rain more likely—is merely the simplest level of causal reasoning. It’s good enough to have driven the boom in the AI technique known as deep learning over the past decade. Given a great deal of data about familiar situations, this method can lead to very good predictions. A computer can calculate the probability that a patient with certain symptoms has a certain disease, because it has learned just how often thousands or even millions of other people with the same symptoms had that disease.
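That kind of prediction is, at bottom, a conditional probability read off observed frequencies. A minimal sketch, with invented patient counts used purely for illustration:

```python
# Toy patient records: did the patient have the symptom, and the disease?
# The counts are invented purely to illustrate the calculation.
records = [
    {"symptom": True,  "disease": True},
    {"symptom": True,  "disease": False},
    {"symptom": True,  "disease": True},
    {"symptom": False, "disease": False},
    {"symptom": False, "disease": False},
    {"symptom": True,  "disease": True},
]

with_symptom = [r for r in records if r["symptom"]]
p_disease_given_symptom = sum(r["disease"] for r in with_symptom) / len(with_symptom)
print("P(disease | symptom) =", p_disease_given_symptom)   # 0.75
```

No causal claim is made anywhere in that calculation: it simply counts how often the two things have appeared together.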

But there’s a growing consensus that progress in AI will stall if computers don’t get better at wrestling with causation. If machines could grasp that certain things lead to other things, they wouldn’t have to learn everything anew all the time—they could take what they had learned in one domain and apply it to another. And if machines could use common sense we’d be able to put more trust in them to take actions on their own, knowing that they aren’t likely to make dumb errors.

Today’s AI has only a limited ability to infer what will result from a given action. In reinforcement learning, a technique that has allowed machines to master games like chess and Go, a system uses extensive trial and error to discern which moves will essentially cause it to win. But this approach doesn’t work in messier settings in the real world. It doesn’t even leave a machine with a general understanding of how it might play other games.
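As a rough sketch of that trial-and-error loop, the snippet below runs the simplest possible reinforcement-learning setup, a two-armed bandit, with an epsilon-greedy rule. Real game-playing systems are vastly more elaborate, and the win probabilities here are invented.

```python
import random

# Two possible "moves", each with an unknown chance of winning (assumed here).
TRUE_WIN_PROB = {"move_a": 0.4, "move_b": 0.6}

values = {"move_a": 0.0, "move_b": 0.0}   # estimated value of each move
counts = {"move_a": 0, "move_b": 0}
epsilon = 0.1                             # fraction of purely exploratory tries

random.seed(0)
for _ in range(10_000):
    # Explore occasionally, otherwise exploit the current best estimate.
    if random.random() < epsilon:
        move = random.choice(list(values))
    else:
        move = max(values, key=values.get)
    reward = 1.0 if random.random() < TRUE_WIN_PROB[move] else 0.0
    counts[move] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    values[move] += (reward - values[move]) / counts[move]

print(values)   # estimates end up near the true win rates, roughly 0.4 and 0.6
```

The system learns which move pays off in this one game, but the estimates say nothing about why, and they transfer to no other game.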

An even higher level of causal thinking would be the ability to reason about why things happened and ask “what if” questions. A patient dies while in a clinical trial; was it the fault of the experimental medicine or something else? School test scores are falling; what policy changes would most improve them? This kind of reasoning is far beyond the current capability of artificial intelligence.

Performing miracles

The dream of endowing computers with causal reasoning drew Bareinboim from Brazil to the United States in 2008, after he completed a master’s in computer science at the Federal University of Rio de Janeiro. He jumped at an opportunity to study under Judea Pearl, a computer scientist and statistician at UCLA. Pearl, 83, is a giant—the giant—of causal inference, and his career helps illustrate why it’s hard to create AI that understands causality.

Even well-trained scientists are apt to misinterpret correlations as signs of causation—or to err in the opposite direction, hesitating to call out causation even when it’s justified. In the 1950s, for example, a few prominent statisticians muddied the waters around whether tobacco caused cancer. They argued that without an experiment randomly assigning people to be smokers or nonsmokers, no one could rule out the possibility that some unknown—stress, perhaps, or some gene—caused people both to smoke and to get lung cancer.

Eventually, the fact that smoking causes cancer was definitively established, but it needn’t have taken so long. Since then, Pearl and other statisticians have devised a mathematical approach to identifying what facts would be required to support a causal claim. Pearl’s method shows that, given the prevalence of smoking and lung cancer, an independent factor causing both would be extremely unlikely.

Conversely, Pearl’s formulas also help identify when correlations can’t be used to determine causation. Bernhard Schölkopf, who researches causal AI techniques as a director at Germany’s Max Planck Institute for Intelligent Systems, points out that you can predict a country’s birth rate if you know its population of storks. That isn’t because storks deliver babies or because babies attract storks, but probably because economic development leads to more babies and more storks. Pearl has helped give statisticians and computer scientists ways of attacking such problems, Schölkopf says.
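Schölkopf’s stork example is easy to reproduce with simulated data: if an unmeasured factor (the article’s “economic development”) drives both stork counts and birth rates, the two correlate strongly even though neither causes the other. A minimal sketch, with made-up numbers:

```python
import random

random.seed(1)

def correlation(xs, ys):
    """Pearson correlation, computed from scratch for transparency."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

storks, births = [], []
for _ in range(1_000):
    development = random.gauss(0, 1)                      # hidden common cause
    storks.append(2 * development + random.gauss(0, 1))   # development brings more storks
    births.append(3 * development + random.gauss(0, 1))   # ...and more births

print(round(correlation(storks, births), 2))   # strongly positive, with no causal link between the two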

Judea Pearl: His theory of causal reasoning has transformed science.

Pearl’s work has also led to the development of causal Bayesian networks—software that sifts through large amounts of data to detect which variables appear to have the most influence on other variables. For example, GNS Healthcare, a company in Cambridge, Massachusetts, uses these techniques to advise researchers about experiments that look promising.

In one project, GNS worked with researchers who study multiple myeloma, a kind of blood cancer. The researchers wanted to know why some patients with the disease live longer than others after getting stem-cell transplants, a common form of treatment. The software churned through data with 30,000 variables and pointed to a few that seemed especially likely to be causal. Biostatisticians and experts in the disease zeroed in on one in particular: the level of a certain protein in patients’ bodies. Researchers could then run a targeted clinical trial to see whether patients with the protein did indeed benefit more from the treatment. “It’s way faster than poking here and there in the lab,” says GNS cofounder Iya Khalil.

Nonetheless, the improvements that Pearl and other scholars have achieved in causal theory haven’t yet made many inroads in deep learning, which identifies correlations without too much worry about causation. Bareinboim is working to take the next step: making computers more useful tools for human causal explorations.


One of his systems, which is still in beta, can help scientists determine whether they have sufficient data to answer a causal question. Richard McElreath, an anthropologist at the Max Planck Institute for Evolutionary Anthropology, is using the software to guide research into why humans go through menopause (we are the only apes that do).

The hypothesis is that the decline of fertility in older women benefited early human societies because women who put more effort into caring for grandchildren ultimately had more descendants. But what evidence might exist today to support the claim that children do better with grandparents around? Anthropologists can’t just compare the educational or medical outcomes of children who have lived with grandparents and those who haven’t. There are what statisticians call confounding factors: grandmothers might be likelier to live with grandchildren who need the most help. Bareinboim’s software can help McElreath discern which studies about kids who grew up with their grandparents are least riddled with confounding factors and could be valuable in answering his causal query. “It’s a huge step forward,” McElreath says.

The last mile

Bareinboim talks fast and often gestures with two hands in the air, as if he’s trying to balance two sides of a mental equation. It was halfway through the semester when I visited him at Columbia in October, but it seemed as if he had barely moved into his office—hardly anything on the walls, no books on the shelves, only a sleek Mac computer and a whiteboard so dense with equations and diagrams that it looked like a detail from a cartoon about a mad professor.

He shrugged off the provisional state of the room, saying he had been very busy giving talks about both sides of the causal revolution. Bareinboim believes work like his offers the opportunity not just to incorporate causal thinking into machines, but also to improve it in humans.

Getting people to think more carefully about causation isn’t necessarily much easier than teaching it to machines, he says. Researchers in a wide range of disciplines, from molecular biology to public policy, are sometimes content to unearth correlations that are not actually rooted in causal relationships. For instance, some studies suggest drinking alcohol will kill you early, while others indicate that moderate consumption is fine and even beneficial, and still other research has found that heavy drinkers outlive nondrinkers. This phenomenon, known as the “reproducibility crisis,” crops up not only in medicine and nutrition but also in psychology and economics. “You can see the fragility of all these inferences,” says Bareinboim. “We’re flipping results every couple of years.”

He argues that anyone asking “what if”—medical researchers setting up clinical trials, social scientists developing pilot programs, even web publishers preparing A/B tests—should start not merely by gathering data but by using Pearl’s causal logic and software like Bareinboim’s to determine whether the available data could possibly answer a causal hypothesis. Eventually, he envisions this leading to “automated scientist” software: a human could dream up a causal question to go after, and the software would combine causal inference theory with machine-learning techniques to rule out experiments that wouldn’t answer the question. That might save scientists from a huge number of costly dead ends.

Bareinboim described this vision while we were sitting in the lobby of MIT’s Sloan School of Management, after a talk he gave last fall. “We have a building here at MIT with, I don’t know, 200 people,” he said. How do those social scientists, or any scientists anywhere, decide which experiments to pursue and which data points to gather? By following their intuition: “They are trying to see where things will lead, based on their current understanding.”

That’s an inherently limited approach, he said, because human scientists designing an experiment can consider only a handful of variables in their minds at once. A computer, on the other hand, can see the interplay of hundreds or thousands of variables. Encoded with “the basic principles” of Pearl’s causal calculus and able to calculate what might happen with new sets of variables, an automated scientist could suggest exactly which experiments the human researchers should spend their time on. Maybe some public policy that has been shown to work only in Texas could be made to work in California if a few causally relevant factors were better appreciated. Scientists would no longer be “doing experiments in the darkness,” Bareinboim said.

He also doesn’t think it’s that far off: “This is the last mile before the victory.”

What if?

Finishing that mile will probably require techniques that are just beginning to be developed. For example, Yoshua Bengio, a computer scientist at the University of Montreal who shared the 2018 Turing Award for his work on deep learning, is trying to get neural networks—the software at the heart of deep learning—to do “meta-learning” and notice the causes of things.

As things stand now, if you wanted a neural network to detect when people are dancing, you’d show it many, many images of dancers. If you wanted it to identify when people are running, you’d show it many, many images of runners. The system would learn to distinguish runners from dancers by identifying features that tend to be different in the images, such as the positions of a person’s hands and arms. But Bengio points out that fundamental knowledge about the world can be gleaned by analyzing the things that are similar or “invariant” across data sets. Maybe a neural network could learn that movements of the legs physically cause both running and dancing. Maybe after seeing these examples and many others that show people only a few feet off the ground, a machine would eventually understand something about gravity and how it limits human movement. Over time, with enough meta-learning about variables that are consistent across data sets, a computer could gain causal knowledge that would be reusable in many domains.

For his part, Pearl says AI can’t be truly intelligent until it has a rich understanding of cause and effect. Although causal reasoning wouldn’t be sufficient for an artificial general intelligence, it’s necessary, he says, because it would enable the introspection that is at the core of cognition. “What if” questions “are the building blocks of science, of moral attitudes, of free will, of consciousness,” Pearl told me.

You can’t draw Pearl into predicting how long it will take for computers to get powerful causal reasoning abilities. “I am not a futurist,” he says. But in any case, he thinks the first move should be to develop machine-learning tools that combine data with available scientific knowledge: “We have a lot of knowledge that resides in the human skull which is not utilized.”

Brian Bergstein, a former editor at MIT Technology Review, is deputy opinion editor at the Boston Globe.

This story was part of our March 2020 issue, the predictions issue.

We’re not prepared for the end of Moore’s Law (MIT Technology Review)

technologyreview.com

David Rotman


February 24, 2020

Moore’s argument was an economic one. Integrated circuits, with multiple transistors and other electronic devices interconnected with aluminum metal lines on a tiny square of silicon wafer, had been invented a few years earlier by Robert Noyce at Fairchild Semiconductor. Moore, the company’s R&D director, realized, as he wrote in 1965, that with these new integrated circuits, “the cost per component is nearly inversely proportional to the number of components.” It was a beautiful bargain—in theory, the more transistors you added, the cheaper each one got. Moore also saw that there was plenty of room for engineering advances to increase the number of transistors you could affordably and reliably put on a chip.
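A rough way to see Moore’s economic point: spread a fixed chip cost over more components and the cost of each one falls almost in inverse proportion, until defects start ruining too many chips. The toy model below uses invented figures purely to show the shape of that trade-off, not historical data.

```python
# Toy model of Moore's 1965 argument. All figures are illustrative.
CHIP_COST = 100.0      # fixed cost of fabricating one chip, arbitrary units
DEFECT_RATE = 0.0005   # chance that any one component ruins the chip

def cost_per_component(n_components: int) -> float:
    yield_fraction = (1 - DEFECT_RATE) ** n_components   # fraction of chips that work
    return CHIP_COST / (n_components * yield_fraction)

for n in (10, 100, 1_000, 5_000, 20_000):
    print(f"{n:>6} components -> {cost_per_component(n):8.4f} per component")
# Cost per component falls roughly as 1/n at first, then rises once defects
# dominate. Better manufacturing (higher yield) kept shifting that sweet spot
# toward ever more components, which is the engineering room Moore saw.
```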

Soon these cheaper, more powerful chips would become what economists like to call a general purpose technology—one so fundamental that it spawns all sorts of other innovations and advances in multiple industries. A few years ago, leading economists credited the information technology made possible by integrated circuits with a third of US productivity growth since 1974. Almost every technology we care about, from smartphones to cheap laptops to GPS, is a direct reflection of Moore’s prediction. It has also fueled today’s breakthroughs in artificial intelligence and genetic medicine, by giving machine-learning techniques the ability to chew through massive amounts of data to find answers.

But how did a simple prediction, based on extrapolating from a graph of the number of transistors by year—a graph that at the time had only a few data points—come to define a half-century of progress? In part, at least, because the semiconductor industry decided it would.

The April 1965 issue of Electronics, in which Moore’s article appeared. (Wikimedia)

Moore wrote that “cramming more components onto integrated circuits,” the title of his 1965 article, would “lead to such wonders as home computers—or at least terminals connected to a central computer—automatic controls for automobiles, and personal portable communications equipment.” In other words, stick to his road map of squeezing ever more transistors onto chips and it would lead you to the promised land. And for the following decades, a booming industry, the government, and armies of academic and industrial researchers poured money and time into upholding Moore’s Law, creating a self-fulfilling prophecy that kept progress on track with uncanny accuracy. Though the pace of progress has slipped in recent years, the most advanced chips today have nearly 50 billion transistors.

Every year since 2001, MIT Technology Review has chosen the 10 most important breakthrough technologies of the year. It’s a list of technologies that, almost without exception, are possible only because of the computation advances described by Moore’s Law.

For some of the items on this year’s list the connection is obvious: consumer devices, including watches and phones, infused with AI; climate-change attribution made possible by improved computer modeling and data gathered from worldwide atmospheric monitoring systems; and cheap, pint-size satellites. Others on the list, including quantum supremacy, molecules discovered using AI, and even anti-aging treatments and hyper-personalized drugs, are due largely to the computational power available to researchers.

But what happens when Moore’s Law inevitably ends? Or what if, as some suspect, it has already died, and we are already running on the fumes of the greatest technology engine of our time?

RIP

“It’s over. This year that became really clear,” says Charles Leiserson, a computer scientist at MIT and a pioneer of parallel computing, in which multiple calculations are performed simultaneously. The newest Intel fabrication plant, meant to build chips with minimum feature sizes of 10 nanometers, was much delayed, delivering chips in 2019, five years after the previous generation of chips with 14-nanometer features. Moore’s Law, Leiserson says, was always about the rate of progress, and “we’re no longer on that rate.” Numerous other prominent computer scientists have also declared Moore’s Law dead in recent years. In early 2019, the CEO of the large chipmaker Nvidia agreed.

In truth, it’s been more a gradual decline than a sudden death. Over the decades, some, including Moore himself at times, fretted that they could see the end in sight, as it got harder to make smaller and smaller transistors. In 1999, an Intel researcher worried that the industry’s goal of making transistors smaller than 100 nanometers by 2005 faced fundamental physical problems with “no known solutions,” like the quantum effects of electrons wandering where they shouldn’t be.

For years the chip industry managed to evade these physical roadblocks. New transistor designs were introduced to better corral the electrons. New lithography methods using extreme ultraviolet radiation were invented when the wavelengths of visible light were too thick to precisely carve out silicon features of only a few tens of nanometers. But progress grew ever more expensive. Economists at Stanford and MIT have calculated that the research effort going into upholding Moore’s Law has risen by a factor of 18 since 1971.

Likewise, the fabs that make the most advanced chips are becoming prohibitively pricey. The cost of a fab is rising at around 13% a year, and is expected to reach $16 billion or more by 2022. Not coincidentally, the number of companies with plans to make the next generation of chips has now shrunk to only three, down from eight in 2010 and 25 in 2002.


Nonetheless, Intel—one of those three chipmakers—isn’t expecting a funeral for Moore’s Law anytime soon. Jim Keller, who took over as Intel’s head of silicon engineering in 2018, is the man with the job of keeping it alive. He leads a team of some 8,000 hardware engineers and chip designers at Intel. When he joined the company, he says, many were anticipating the end of Moore’s Law. If they were right, he recalls thinking, “that’s a drag” and maybe he had made “a really bad career move.”

But Keller found ample technical opportunities for advances. He points out that there are probably more than a hundred variables involved in keeping Moore’s Law going, each of which provides different benefits and faces its own limits. It means there are many ways to keep doubling the number of devices on a chip—innovations such as 3D architectures and new transistor designs.

These days Keller sounds optimistic. He says he has been hearing about the end of Moore’s Law for his entire career. After a while, he “decided not to worry about it.” He says Intel is on pace for the next 10 years, and he will happily do the math for you: 65 billion (number of transistors) times 32 (if chip density doubles every two years) is 2 trillion transistors. “That’s a 30 times improvement in performance,” he says, adding that if software developers are clever, we could get chips that are a hundred times faster in 10 years.
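Keller’s back-of-the-envelope math is easy to check: density doubling every two years compounds to a factor of 2^5 = 32 over a decade.

```python
transistors_today = 65e9                    # Keller's starting figure
doublings = 10 / 2                          # one doubling every two years, for ten years
print(2 ** doublings)                       # 32.0
print(transistors_today * 2 ** doublings)   # about 2.1e12, i.e. roughly 2 trillion transistors
```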

Still, even if Intel and the other remaining chipmakers can squeeze out a few more generations of even more advanced microchips, the days when you could reliably count on faster, cheaper chips every couple of years are clearly over. That doesn’t, however, mean the end of computational progress.

Time to panic

Neil Thompson is an economist, but his office is at CSAIL, MIT’s sprawling AI and computer center, surrounded by roboticists and computer scientists, including his collaborator Leiserson. In a new paper, the two document ample room for improving computational performance through better software, algorithms, and specialized chip architecture.

One opportunity is in slimming down so-called software bloat to wring the most out of existing chips. When chips could always be counted on to get faster and more powerful, programmers didn’t need to worry much about writing more efficient code. And they often failed to take full advantage of changes in hardware architecture, such as the multiple cores, or processors, seen in chips used today.

Thompson and his colleagues showed that they could get a computationally intensive calculation to run some 47 times faster just by switching from Python, a popular general-purpose programming language, to the more efficient C. That’s because C, while it requires more work from the programmer, greatly reduces the required number of operations, making a program run much faster. Further tailoring the code to take full advantage of a chip with 18 processing cores sped things up even more. In just 0.41 seconds, the researchers got a result that took seven hours with Python code.
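The article doesn’t spell out the calculation, but the general effect is easy to reproduce: time a naive pure-Python matrix multiplication against NumPy, whose routines run in compiled code. This is only an illustration of the idea, not the researchers’ benchmark, and the exact speedup will vary by machine.

```python
import time
import numpy as np

n = 200
a = np.random.rand(n, n)
b = np.random.rand(n, n)

def matmul_pure_python(x, y):
    """Naive triple loop: every multiply-add is interpreted one at a time."""
    size = len(x)
    out = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            s = 0.0
            for k in range(size):
                s += x[i][k] * y[k][j]
            out[i][j] = s
    return out

t0 = time.perf_counter()
matmul_pure_python(a.tolist(), b.tolist())
t_python = time.perf_counter() - t0

t0 = time.perf_counter()
_ = a @ b                      # delegates to optimized, compiled BLAS routines
t_numpy = time.perf_counter() - t0

print(f"pure Python: {t_python:.3f}s, NumPy: {t_numpy:.5f}s, "
      f"roughly {t_python / t_numpy:.0f}x faster")
```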

That sounds like good news for continuing progress, but Thompson worries it also signals the decline of computers as a general purpose technology. Rather than “lifting all boats,” as Moore’s Law has, by offering ever faster and cheaper chips that were universally available, advances in software and specialized architecture will now start to selectively target specific problems and business opportunities, favoring those with sufficient money and resources.

Indeed, the move to chips designed for specific applications, particularly in AI, is well under way. Deep learning and other AI applications increasingly rely on graphics processing units (GPUs) adapted from gaming, which can handle parallel operations, while companies like Google, Microsoft, and Baidu are designing AI chips for their own particular needs. AI, particularly deep learning, has a huge appetite for computer power, and specialized chips can greatly speed up its performance, says Thompson.

But the trade-off is that specialized chips are less versatile than traditional CPUs. Thompson is concerned that chips for more general computing are becoming a backwater, slowing “the overall pace of computer improvement,” as he writes in an upcoming paper, “The Decline of Computers as a General Purpose Technology.”

At some point, says Erica Fuchs, a professor of engineering and public policy at Carnegie Mellon, those developing AI and other applications will miss the decreases in cost and increases in performance delivered by Moore’s Law. “Maybe in 10 years or 30 years—no one really knows when—you’re going to need a device with that additional computation power,” she says.

The problem, says Fuchs, is that the successors to today’s general purpose chips are unknown and will take years of basic research and development to create. If you’re worried about what will replace Moore’s Law, she suggests, “the moment to panic is now.” There are, she says, “really smart people in AI who aren’t aware of the hardware constraints facing long-term advances in computing.” What’s more, she says, because application-specific chips are proving hugely profitable, there are few incentives to invest in new logic devices and ways of doing computing.

Wanted: A Marshall Plan for chips

In 2018, Fuchs and her CMU colleagues Hassan Khan and David Hounshell wrote a paper tracing the history of Moore’s Law and identifying the changes behind today’s lack of the industry and government collaboration that fostered so much progress in earlier decades. They argued that “the splintering of the technology trajectories and the short-term private profitability of many of these new splinters” means we need to greatly boost public investment in finding the next great computer technologies.

If economists are right, and much of the growth in the 1990s and early 2000s was a result of microchips—and if, as some suggest, the sluggish productivity growth that began in the mid-2000s reflects the slowdown in computational progress—then, says Thompson, “it follows you should invest enormous amounts of money to find the successor technology. We’re not doing it. And it’s a public policy failure.”

There’s no guarantee that such investments will pay off. Quantum computing, carbon nanotube transistors, even spintronics, are enticing possibilities—but none are obvious replacements for the promise that Gordon Moore first saw in a simple integrated circuit. We need the research investments now to find out, though. Because one prediction is pretty much certain to come true: we’re always going to want more computing power.

This story was part of our March 2020 issue, the predictions issue.

In Brazil’s Amazon, rivers rise to record levels (Associated Press)

apnews.com

By FERNANDO CRISPIM and DIANE JEANTET

June 1, 2021


MANAUS, Brazil (AP) — Rivers around the biggest city in Brazil’s Amazon rainforest have swelled to levels unseen in over a century of record-keeping, according to data published Tuesday by Manaus’ port authorities, straining a society that has grown weary of increasingly frequent flooding.

The Rio Negro was at its highest level since records began in 1902, with a depth of 29.98 meters (98 feet) at the port’s measuring station. The nearby Solimoes and Amazon rivers were also nearing all-time highs, flooding streets and houses in dozens of municipalities and affecting some 450,000 people in the region.

Higher-than-usual precipitation is associated with the La Nina phenomenon, when currents in the central and eastern Pacific Ocean affect global climate patterns. Environmental experts and organizations including the U.S. Environmental Protection Agency and the National Oceanic and Atmospheric Administration say there is strong evidence that human activity and global warming are altering the frequency and intensity of extreme weather events, including La Nina.

Seven of the 10 biggest floods in the Amazon basin have occurred in the past 13 years, data from Brazil’s state-owned Geological Survey shows.

“If we continue to destroy the Amazon the way we do, the climatic anomalies will become more and more accentuated,” said Virgílio Viana, director of the Sustainable Amazon Foundation, a nonprofit. “Greater floods on the one hand, greater droughts on the other.”

Large swaths of Brazil are currently drying up in a severe drought, with a possible shortfall in power generation from the nation’s hydroelectric plants and increased electricity prices, government authorities have warned.

But in Manaus, 66-year-old Julia Simas has water ankle-deep in her home. Simas has lived in the working-class neighborhood of Sao Jorge since 1974 and is used to seeing the river rise and fall with the seasons. Simas likes her neighborhood because it is safe and clean. But the quickening pace of the floods in the last decade has her worried.

“From 1974 until recently, many years passed and we wouldn’t see any water. It was a normal place,” she said.

Aerial view of streets flooded by the Negro River in downtown Manaus. (AP Photos/Nelson Antoine)
A man pushes a shopping cart loaded with bananas on a street flooded by the Negro River, in downtown Manaus. (AP Photo/Edmar Barros)

When the river does overflow its banks and flood her street, she and other residents use boards and beams to build rudimentary scaffolding within their homes to raise their floors above the water.

“I think human beings have contributed a lot (to this situation),” she said. “Nature doesn’t forgive. She comes and doesn’t want to know whether you’re ready to face her or not.”

Flooding also has a significant impact on local industries such as farming and cattle ranching. Many family-run operations have seen their production vanish under water. Others have been unable to reach their shops, offices and market stalls or clients.

“With these floods, we’re out of work,” said Elias Gomes, a 38-year-old electrician in Cacau Pirera, on the other side of the Rio Negro, though he noted he’s been able to earn a bit by transporting neighbors in his small wooden boat.

Gomes is now looking to move to a more densely populated area where floods won’t threaten his livelihood.

A man rides his motorcycle through a street flooded by the Negro River, in downtown Manaus. (AP Photo/Edmar Barros)

Limited access to banking in remote parts of the Amazon can make things worse for residents, who are often unable to get loans or financial compensation for lost production, said Viana, of the Sustainable Amazon Foundation. “This is a clear case of climate injustice: Those who least contributed to global warming and climate change are the most affected.”

Meteorologists say Amazon water levels could continue to rise slightly until late June or July, when floods usually peak.

People walk on a wooden footbridge set up over a street flooded by the Negro River, in downtown Manaus. (AP Photo/Edmar Barros)

___

Diane Jeantet reported from Rio de Janeiro.