Todos os posts de renzotaddei

Sobre renzotaddei

Anthropologist, professor at the Federal University of São Paulo

Indigenous People Advance a Dramatic Goal: Reversing Colonialism (New York Times)

nytimes.com

Max Fisher


The Interpreter

Fifty years of patient advocacy, including the shocking discovery of mass graves at Kamloops, have secured once-unthinkable gains.

A makeshift memorial to honor the 215 children whose remains have been discovered near the Kamloops Indian Residential School in British Columbia, earlier this month.
Credit: Darryl Dyck/The Canadian Press, via Associated Press

June 17, 2021

When an Indigenous community in Canada announced recently that it had discovered a mass burial site with the remains of 215 children, the location rang with significance.

Not just because it was on the grounds of a now-shuttered Indian Residential School, whose forcible assimilation of Indigenous children a 2015 truth and reconciliation report called “a key component of a Canadian government policy of cultural genocide.”

That school is in Kamloops, a city in British Columbia from which, 52 years ago, Indigenous leaders started a global campaign to reverse centuries of colonial eradication and reclaim their status as sovereign nations.

Their effort, waged predominantly in courts and international institutions, has accumulated steady gains ever since, coming further than many realize.

It has brought together groups from the Arctic to Australia. Those from British Columbia, in Canada’s mountainous west, have been at the forefront throughout.

Only two years ago, the provincial government there became the world’s first to adopt into law United Nations guidelines for heightened Indigenous sovereignty. On Wednesday, Canada’s Parliament passed a law, now awaiting a final rubber stamp, to extend those measures nationwide.

It was a stunning victory, decades in the making, that activists are working to repeat in New Zealand — and, perhaps one day, in more recalcitrant Australia, Latin America and even the United States.

“There’s been a lot of movement in the field. It’s happening with different layers of courts, with different legislatures,” said John Borrows, a prominent Canadian legal scholar and a member of the Chippewa of the Nawash Unceded First Nation.

The decades-long push for sovereignty has come with a rise in activism, legal campaigning and historical reckonings like the discovery at Kamloops. All serve the movement’s ultimate aim, which is nothing less than overturning colonial conquests that the world has long accepted as foregone.

A classroom at All Saints Residential School in Lac la Ronge, Saskatchewan, circa 1950.
Credit: Shingwauk Residential Schools Center, via Reuters

No one is sure precisely what that will look like or how long it might take. But advances once considered impossible “are happening now,” Dr. Borrows said, “and in an accelerating way.”

The Indigenous leaders who gathered in 1969 had been galvanized by an array of global changes.

The harshest assimilation policies were rolled back in most countries, but their effects remained visible in everyday life. Extractive and infrastructure megaprojects were provoking whole communities into opposition. The civil rights era was energizing a generation.

But two of the greatest motivators were gestures of ostensible reconciliation.

In 1960, world governments near-unanimously backed a United Nations declaration calling for the rollback of colonialism. European nations began withdrawing from their overseas colonies, often under pressure from the Cold War powers.

But the declaration excluded the Americas, Australia and New Zealand, where colonization was seen as too deep-rooted to reverse. It was taken as effectively announcing that there would be no place in the modern world for Indigenous peoples.

Then, at the end of the decade, Canada’s progressive government issued a fateful “white paper” announcing that it would dissolve colonial-era policies, including reserves, and integrate Indigenous peoples as equal citizens. It was offered as emancipation.

A statue in Toronto of Egerton Ryerson, considered an architect of Canada’s residential school system for Indigenous children, was toppled and defaced during a protest this month.
Credit: Chris Helgren/Reuters

Other countries were pursuing similar measures, such as the United States’ inauspiciously named “termination policy.”

To the government’s shock, Indigenous groups angrily rejected the proposal. Like the United Nations declaration, it implied that colonial-era conquests were to be accepted as foregone.

Indigenous leaders gathered in Kamloops to organize a response. British Columbia was a logical choice. Colonial governments had never signed treaties with its original inhabitants, unlike in other parts of Canada, giving special weight to their claim to live under illegal foreign occupation.

“It’s really Quebec and British Columbia that have been the two epicenters, going back to the ’70s,” said Jérémie Gilbert, a human rights lawyer who works with Indigenous groups. Traditions of civil resistance run deep in both.

The Kamloops group began what became a campaign to impress upon the world that they were sovereign peoples with the rights of any nation, often by working through the law.

They linked up with others around the world, holding the first meeting of the World Council of Indigenous Peoples on Vancouver Island. Its first leader, George Manuel, had passed through the Kamloops residential school as a child.

The council’s charter implicitly treated countries like Canada and Australia as foreign powers. It began lobbying the United Nations to recognize Indigenous rights.

It was nearly a decade before the United Nations so much as established a working group. Court systems were little faster. But the group’s ambitions were sweeping.

Legal principles like terra nullius — “nobody’s land” — had long served to justify colonialism. The activists sought to overturn these while, in parallel, establishing a body of Indigenous law.

“The courts are very important because it’s part of trying to develop our jurisprudence,” Dr. Borrows said.

The movement secured a series of court victories that, over decades, stitched together a legal claim to the land, not just as its owners but as sovereign nations. One, in Canada, established that the government had an obligation to settle Indigenous claims to territory. In Australia, the high court backed a man who argued that his family’s centuries-long use of their land superseded the government’s colonial-era conquest.

Activists focused especially on Canada, Australia and New Zealand, which each draw on a legal system inherited from Britain. Laws and rulings in one can become precedent in the others, making them easier to present to the broader world as a global norm.

Irene Watson, an Australian scholar of international Indigenous law and First Nations member, described this effort, in a 2016 book, as “the development of international standards” that would pressure governments to address “the intergenerational impact of colonialism, which is a phenomenon that has never ended.”

It might even establish a legal claim to nationhood. But it is the international arena that ultimately confers acceptance on any sovereign state.

By the mid-1990s, the campaign was building momentum.

The United Nations began drafting a declaration of Indigenous rights. Several countries formally apologized, often alongside promises to settle old claims.

This period of truth and reconciliation was meant to address the past and, by educating the broader public, create support for further advances.

A sweeping 1996 report, chronicling many of Canada’s darkest moments, was followed by a second investigation, focused on residential schools. Completed 19 years after the first, the Truth and Reconciliation Commission spurred yet more federal policy recommendations and activism, including last month’s discovery at Kamloops.

Prime Minister Justin Trudeau visited a makeshift memorial near Canada’s Parliament honoring the children whose remains were found near the school in Kamloops.
Credit: Dave Chan/Agence France-Presse — Getty Images

Judicial advances have followed a similar process: yearslong efforts that bring incremental gains. But these add up. Governments face growing legal obligations to defer to Indigenous autonomy.

The United States has lagged. Major court rulings have been fewer. The government apologized only in 2010 for “past ill-conceived policies” against Indigenous people and did not acknowledge direct responsibility. Public pressure for reconciliation has been lighter.

Still, efforts are growing. In 2016, activists physically impeded construction of a North Dakota pipeline whose environmental impact, they said, would infringe on Sioux sovereignty. They later persuaded a federal judge to pause the project.

Native Americans marching against the Dakota Access oil pipeline near Cannon Ball, North Dakota, in 2017.
Credit: Terray Sylvester/Reuters

Latin America has often lagged as well, despite growing activism. Militaries in several countries have targeted Indigenous communities in living memory, leaving governments reluctant to self-incriminate.

In 2007, after 40 years of maneuvering, the United Nations adopted the declaration on Indigenous rights. Only the United States, Australia, New Zealand and Canada opposed, saying it elevated some Indigenous claims above those of other citizens. All four later reversed their positions.

“The Declaration’s right to self-determination is not a unilateral right to secede,” Dr. Claire Charters, a New Zealand Māori legal expert, wrote in a legal journal. However, its recognition of “Indigenous peoples’ collective land rights” could be “persuasive” in court systems, which often treat such documents as proof of an international legal principle.

Few have sought formal independence. But an Australian group’s 2013 declaration, brought to the United Nations and the International Court of Justice, inspired several others to follow. All failed. But, by demonstrating widening legal precedent and grassroots support, they highlighted that full nationhood is not as unthinkable as it once was.

It may not have seemed like a step in that direction when, in 2019, British Columbia enshrined the U.N. declaration’s terms into provincial law.

But Dr. Borrows called its provisions “quite significant,” including one requiring that the government win affirmative consent from Indigenous communities for policies that affect them. Conservatives and legal scholars have argued it would amount to an Indigenous veto, though Justin Trudeau, Canada’s prime minister, and his Liberal government dispute this.

Mr. Trudeau promised to pass a similar law nationally in 2015, but faced objections from energy and resource industries that it would allow Indigenous communities to block projects. He continued trying, and Wednesday’s passage in Parliament all but ensures that Canada will fully adopt the U.N. terms.

Mr. Gilbert said that activists’ current focus is “getting this into the national systems.” Though hardly Indigenous independence, it would bring them closer than any step in generations.

Near the grounds of the former Kamloops Indian Residential School.
Credit: Jennifer Gauthier/Reuters

As the past 50 years show, this could help pressure others to follow (New Zealand is considered a prime candidate), paving the way for the next round of gradual but quietly historic advances.

It is why, Mr. Gilbert said, “All the eyes are on Canada.”

Greater than the sum of our parts: The evolution of collective intelligence (EurekAlert!)

News Release 15-Jun-2021

University of Cambridge

Research News

The period preceding the emergence of behaviourally modern humans was characterised by dramatic climatic and environmental variability – it is these pressures, occurring over hundreds of thousands of years, that shaped human evolution.

New research published today in the Cambridge Archaeological Journal proposes a new theory of human cognitive evolution, entitled ‘Complementary Cognition’, which suggests that in adapting to dramatic environmental and climatic variability our ancestors evolved to specialise in different, but complementary, ways of thinking.

Lead author Dr Helen Taylor, Research Associate at the University of Strathclyde and Affiliated Scholar at the McDonald Institute for Archaeological Research, University of Cambridge, explained: “This system of complementary cognition functions in a way that is similar to evolution at the genetic level but, instead of underlying physical adaptation, may underlie our species’ immense ability to create behavioural, cultural and technological adaptations. It provides insights into the evolution of uniquely human adaptations like language, suggesting that this evolved in concert with specialisation in human cognition.”

The theory of complementary cognition proposes that our species adapts and evolves culturally, and cooperatively, through a system of collective cognitive search operating alongside genetic search. Genetic search enables phenotypic adaptation (Darwin’s theory of evolution through natural selection can be interpreted as a ‘search’ process), while cognitive search enables behavioural adaptation.

Dr Taylor continued, “Each of these search systems is essentially a way of adapting using a mixture of building on and exploiting past solutions and exploring to update them; as a consequence, we see evolution in those solutions over time. This is the first study to explore the notion that individual members of our species are neurocognitively specialised in complementary cognitive search strategies.”
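
The release offers no formal model, but the mixture Dr Taylor describes, exploiting past solutions while occasionally exploring new ones, is easy to caricature in code. The sketch below is purely illustrative: the fitness function, the 20% exploration rate and the hill-climbing update are invented assumptions, not anything taken from the study.

```python
import random

# Toy "search" in the explore/exploit sense described above. A solution is a
# single number; fitness peaks at x = 3. Exploitation perturbs the best past
# solution slightly; exploration jumps somewhere new. All parameters are
# invented for illustration.

def fitness(x: float) -> float:
    return -(x - 3.0) ** 2

best = 0.0  # the inherited "past solution"
for _ in range(1000):
    if random.random() < 0.2:                    # explore: try something new
        candidate = random.uniform(-10.0, 10.0)
    else:                                        # exploit: refine what works
        candidate = best + random.gauss(0.0, 0.1)
    if fitness(candidate) > fitness(best):
        best = candidate                         # the solution "evolves"

print(round(best, 2))  # typically converges near 3.0
```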

Complementary cognition could lie at the core of explaining the exceptional level of cultural adaptation in our species and provides an explanatory framework for the emergence of language. Language can be viewed as evolving both as a means of facilitating cooperative search and as an inheritance mechanism for sharing the more complex results of complementary cognitive search. Language is viewed as an integral part of the system of complementary cognition.

The theory of complementary cognition brings together observations from disparate disciplines, showing that they can be viewed as various faces of the same underlying phenomenon.

Dr Taylor continued: “For example, a form of cognition currently viewed as a disorder, dyslexia, is shown to be a neurocognitive specialisation whose nature in turn predicts that our species evolved in a highly variable environment. This concurs with the conclusions of many other disciplines including palaeoarchaeological evidence confirming that the crucible of our species’ evolution was highly variable.”

Nick Posford, CEO, British Dyslexia Association said, “As the leading charity for dyslexia, we welcome Dr Helen Taylor’s ground-breaking research on the evolution of complementary cognition. Whilst our current education and work environments are often not designed to make the most of dyslexia-associated thinking, we hope this research provides a starting point for further exploration of the economic, cultural and social benefits the whole of society can gain from the unique abilities of people with dyslexia.”

At the same time, this may also provide insights into understanding the kind of cumulative cultural evolution seen in our species. Specialisation in complementary search strategies, and cooperative adaptation, would have vastly increased the ability of human groups to produce adaptive knowledge, enabling us to continually adapt to highly variable conditions. In periods of greater stability and abundance, when adaptive knowledge did not become obsolete at such a rate, it would instead have accumulated; as such, complementary cognition may also be a key factor in explaining cumulative cultural evolution.

Complementary cognition has enabled us to adapt to different environments, and may be at the heart of our species’ success, enabling us to adapt much faster and more effectively than any other highly complex organism. However, this may also be our species’ greatest vulnerability.

Dr Taylor concluded: “The impact of human activity on the environment is the most pressing and stark example of this. The challenge of collaborating and cooperatively adapting at scale creates many difficulties and we may have unwittingly put in place a number of cultural systems and practices, particularly in education, which are undermining our ability to adapt. These self-imposed limitations disrupt our complementary cognitive search capability and may restrict our capacity to find and act upon innovative and creative solutions.”

“Complementary cognition should be seen as a starting point in exploring a rich area of human evolution and as a valuable tool in helping to create an adaptive and sustainable society. Our species may owe our spectacular technological and cultural achievements to neurocognitive specialisation and cooperative cognitive search, but our adaptive success so far may belie the importance of attaining an equilibrium of approaches. If this system becomes maladjusted, it can quickly lead to equally spectacular failures to adapt – and to survive, it is critical that this system be explored and understood further.”

Humans Are Evolving Faster Than Ever. The Reason Is Not Genetic, Study Claims (Science Alert)

sciencealert.com

Cameron Duke, Live Science – 15 JUNE 2021


At the mercy of natural selection since the dawn of life, our ancestors adapted, mated and died, passing on tiny genetic mutations that eventually made humans what we are today. 

But evolution isn’t bound strictly to genes anymore, a new study suggests. Instead, human culture may be driving evolution faster than genetic mutations can work.

In this conception, evolution no longer requires that genetic mutations conferring a survival advantage be passed on and become widespread. Instead, learned behaviors passed on through culture are the “mutations” that provide survival advantages.

This so-called cultural evolution may now shape humanity’s fate more strongly than natural selection, the researchers argue.

“When a virus attacks a species, it typically becomes immune to that virus through genetic evolution,” study co-author Zach Wood, a postdoctoral researcher in the School of Biology and Ecology at the University of Maine, told Live Science.

Such evolution works slowly, as those who are more susceptible die off and only those who survive pass on their genes. 

But nowadays, humans mostly don’t need to adapt to such threats genetically. Instead, we adapt by developing vaccines and other medical interventions, which are not the results of one person’s work but rather of many people building on the accumulated “mutations” of cultural knowledge.

By developing vaccines, human culture improves its collective “immune system,” said study co-author Tim Waring, an associate professor of social-ecological systems modeling at the University of Maine.

And sometimes, cultural evolution can lead to genetic evolution. “The classic example is lactose tolerance,” Waring told Live Science. “Drinking cow’s milk began as a cultural trait that then drove the [genetic] evolution of a group of humans.”

In that case, cultural change preceded genetic change, not the other way around. 

The concept of cultural evolution began with the father of evolution himself, Waring said. Charles Darwin understood that behaviors could evolve and be passed to offspring just as physical traits are, but scientists in his day believed that changes in behaviors were inherited. For example, if a mother had a trait that inclined her to teach a daughter to forage for food, she would pass on this inherited trait to her daughter. In turn, her daughter might be more likely to survive, and as a result, that trait would become more common in the population. 

Waring and Wood argue in their new study, published June 2 in the journal Proceedings of the Royal Society B, that at some point in human history, culture began to wrest evolutionary control from our DNA. And now, they say, cultural change is allowing us to evolve in ways biological change alone could not.

Here’s why: Culture is group-oriented, and people in those groups talk to, learn from and imitate one another. These group behaviors allow people to pass on adaptations they learned through culture faster than genes can transmit similar survival benefits.

An individual can learn skills and information from a nearly unlimited number of people in a small amount of time and, in turn, spread that information to many others. And the more people available to learn from, the better. Large groups solve problems faster than smaller groups, and intergroup competition stimulates adaptations that might help those groups survive.

As ideas spread, cultures develop new traits.

In contrast, a person inherits genetic information from only two parents and racks up relatively few random mutations in their eggs or sperm, and it takes about 20 years for that information to be passed on to their small handful of children. That’s just a much slower pace of change.

“This theory has been a long time coming,” said Paul Smaldino, an associate professor of cognitive and information sciences at the University of California, Merced, who was not affiliated with this study. “People have been working for a long time to describe how evolutionary biology interacts with culture.”

It’s possible, the researchers suggest, that the appearance of human culture represents a key evolutionary milestone.

“Their big argument is that culture is the next evolutionary transition state,” Smaldino told Live Science.

Throughout the history of life, key transition states have had huge effects on the pace and direction of evolution. The evolution of cells with DNA was a big transitional state, and then when larger cells with organelles and complex internal structures arrived, it changed the game again. Cells coalescing into plants and animals was another big sea change, as was the evolution of sex, the transition to life on land and so on.

Each of these events changed the way evolution acted, and now humans might be in the midst of yet another evolutionary transformation. We might still evolve genetically, but that may not control human survival very much anymore.

“In the very long term, we suggest that humans are evolving from individual genetic organisms to cultural groups which function as superorganisms, similar to ant colonies and beehives,” Waring said in a statement.

But genetics drives bee colonies, while the human superorganism will exist in a category all its own. What that superorganism looks like in the distant future is unclear, but it will likely take a village to figure it out. 

Inpe’s supercomputer will be shut down, affecting climate forecasts (Tecmundo)

tecmundo.com

Giovanna Fantinato, 14/06/2021


In August, the Instituto Nacional de Pesquisas Espaciais (Inpe) is expected to shut down its supercomputer, named Tupã, which is responsible for weather forecasting, issuing climate alerts, and collecting and monitoring data for research and scientific development.

According to the Institute, the shutdown — the first in its history — will be carried out for lack of funds. This year Inpe received its smallest budget yet from the Federal Government, totaling R$ 44.7 million, against the R$ 76 million originally planned. For comparison, the supercomputer alone consumes R$ 5 million a year in electricity.

In response, the Instituto Brasileiro de Proteção Ambiental (Proam) sent a document to the Public Prosecutor’s Office requesting that the monitoring be maintained and that an urgent crisis-management plan be drawn up. The same document was also sent to the Federal Court of Accounts (TCU) and to the public defenders’ offices of the Southeast, South and Center-West regions.

Consequences

“It is unacceptable that at a moment like this, facing the water crisis expected in the second half of the year, with rising energy prices and the risk of water rationing, the supercomputer should be shut down on the grounds of a lack of funds,” says Carlos Bocuhy, president of Proam.

Yara Schaeffer-Novelli, a professor at the Universidade de São Paulo, explains that the shutdown will be extremely damaging to climate studies, hampering, among other things, the monitoring of wildfires, droughts and climate change in Brazil.

A parliament to give voice to Brazil’s Indigenous peoples (Sete Margens)

setemargens.com


Demonstration in Brasília during the 2017 Acampamento Terra Livre. Photo © Guilherme Cavalli/Cimi.

An open Indigenous parliament, to give voice and political visibility to the country’s 305 original peoples: that is the goal of Parlaíndio, founded this month in Brazil, announced this Wednesday, 26 May, and due to hold monthly assemblies.

Parlaíndio brings together Brazilian Indigenous leaders and already has a web portal with photos of its leaders and news of assemblies and of events directly or indirectly related to Indigenous peoples.

Chief Raoni Metuktire, a major Brazilian Indigenous leader known around the world for his fight to preserve the Amazon and its native peoples, is its honorary president, while executive coordination falls to chief Almir Narayamoga Suruí, principal leader of the Paiter Suruí people of Rondônia, internationally recognized for his sustainability projects on Indigenous lands.

The first assembly of Parlaíndio Brasil, reports the Lusa news agency, cited by TSF, took place virtually last Thursday, 20 May. On that occasion the Indigenous leaders discussed the movement’s objectives, as well as its structure and how the monthly assemblies will be conducted.

Among the main issues the movement will address, also according to Lusa, are deforestation and invasions of Indigenous lands, mining and hydroelectric projects on native peoples’ lands, illegal gold prospecting, mercury pollution of rivers, and the contamination of Indigenous and riverside populations.

Parlaíndio has already taken its first political decision: to file a lawsuit seeking the removal of the president of Funai (Fundação Nacional do Índio), the state agency whose mission should be to coordinate and implement policies protecting native peoples.

“It was unanimously approved that Parlaíndio Brasil will file a lawsuit seeking the removal of the president of Funai, police commissioner Marcelo Xavier, who at the head of the agency has failed to fulfil its institutional mission of protecting and promoting the rights of the country’s Indigenous peoples,” the movement said in a statement.

At issue, according to the same source, is a recent request by the president of Funai to the Federal Police (PF) to open an inquiry against Indigenous leaders, on the pretext that they had defamed Jair Bolsonaro’s government.

“Funai is an agency that should provide assistance, protection and guarantees of the rights of Brazil’s Indigenous peoples and currently does the opposite. The inquiry, ordered by the president of Funai, was an act of intimidation and criminalization,” explained Almir Suruí, executive coordinator of Parlaíndio Brasil.

Assembly of Indigenous people. Photo from the Parlaíndio website.

Suruí also believes the new structure will be important for building a policy in defense of Indigenous peoples, after the 1988 Constitution enshrined a set of public policies and rights for Brazil’s Indigenous people. “One of our objectives is to debate the construction of the present and the future based on a careful assessment of the past. We will also discuss public policies and provide input to the organizations that make up the Indigenous movement,” he added at the movement’s launch session.

The idea of creating an Indigenous Parliament of Brazil, as the Parlaíndio website explains, arose at a meeting of Indigenous leaders held in October 2017 at the Conselho Indigenista Missionário, a Catholic Church organization that supports Indigenous peoples.

According to the same source, there are currently more than 900,000 Indigenous people in Brazil, members of 305 distinct peoples who speak more than 180 languages, according to Parlaíndio data (the subject of Fernando Alves’s Outros Sinais commentary on TSF this Thursday, the 27th).

Ever more poor and Indigenous people in Manaus

Paolo Maria Braghini, a Franciscan in Manaus helping poor families. Photo © ACN Portugal.

This news comes at the same time as a complaint by a Catholic Franciscan friar, according to whom many Indigenous people and others from the interior of Amazonas are arriving in Manaus, the state capital, with nothing to live on.

“We have families in the suburbs who have nothing to live on. Many came from the interior of the country and arrived here hoping to find food in the city. But here they find only hunger and unemployment. To make matters worse, they now don’t even have a vegetable garden to tend or the river to fish in,” says Father Paolo Maria Braghini, an Italian Capuchin Franciscan, quoted by Aid to the Church in Need.

“Amid so much poverty, we chose certain localities on the periphery and, with the help of local community leaders, identified the families most in need,” explains Friar Paolo, describing how the Franciscan community is trying to alleviate the situation.

Manaus, one of the main financial, industrial and economic centers of Brazil’s entire northern region, has more than two million inhabitants and continues to attract the region’s populations. The city, which already had many pockets of poverty, saw the situation worsen with the pandemic of the novel coronavirus and the collapse of its health services.

The poor and Indigenous populations of Amazonas were among the sectors hardest hit by the lack of infrastructure. In January, at one of the peaks of the crisis, the bishop of Manaus went so far as to appeal for oxygen to be sent to the hospitals.

UMaine researchers: Culture drives human evolution more than genetics (EurekAlert!)

News Release 2-Jun-2021

University of Maine

Research News

In a new study, University of Maine researchers found that culture helps humans adapt to their environment and overcome challenges better and faster than genetics.

After conducting an extensive review of the literature and evidence of long-term human evolution, scientists Tim Waring and Zach Wood concluded that humans are experiencing a “special evolutionary transition” in which the importance of culture, such as learned knowledge, practices and skills, is surpassing the value of genes as the primary driver of human evolution.

Culture is an under-appreciated factor in human evolution, Waring says. Like genes, culture helps people adjust to their environment and meet the challenges of survival and reproduction. Culture, however, does so more effectively than genes because the transfer of knowledge is faster and more flexible than the inheritance of genes, according to Waring and Wood.

Culture is a stronger mechanism of adaptation for a couple of reasons, Waring says. It’s faster: gene transfer occurs only once a generation, while cultural practices can be rapidly learned and frequently updated. Culture is also more flexible than genes: gene transfer is rigid and limited to the genetic information of two parents, while cultural transmission is based on flexible human learning and effectively unlimited with the ability to make use of information from peers and experts far beyond parents. As a result, cultural evolution is a stronger type of adaptation than old genetics.
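
The release gives no formal model, but the two claimed advantages (update frequency and number of sources) can be shown in a deliberately crude simulation of my own. Everything below, from the 20-year generation time to the number of peers sampled, is an invented illustration, not Waring and Wood’s model.

```python
import random

# Crude toy contrast between the two channels described above: the genetic
# channel updates once per 20-year generation with a small random step kept
# only if it helps, while the cultural channel updates every year by learning
# from the best of ten peers. All numbers are invented.

random.seed(1)
YEARS, GENERATION = 100, 20
peers = [random.uniform(0.0, 1.0) for _ in range(100)]  # others' know-how

genetic, cultural = 0.0, 0.0
for year in range(1, YEARS + 1):
    cultural = max(cultural, max(random.sample(peers, 10)))  # yearly learning
    if year % GENERATION == 0:           # one inherited update per generation
        genetic = max(genetic, genetic + random.gauss(0.0, 0.05))

print(f"cultural: {cultural:.2f}  genetic: {genetic:.2f}")
```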

Waring, an associate professor of social-ecological systems modeling, and Wood, a postdoctoral research associate with the School of Biology and Ecology, have just published their findings in a literature review in the Proceedings of the Royal Society B, the flagship biological research journal of The Royal Society in London.

“This research explains why humans are such a unique species. We evolve both genetically and culturally over time, but we are slowly becoming ever more cultural and ever less genetic,” Waring says.

Culture has influenced how humans survive and evolve for millennia. According to Waring and Wood, the combination of both culture and genes has fueled several key adaptations in humans such as reduced aggression, cooperative inclinations, collaborative abilities and the capacity for social learning. Increasingly, the researchers suggest, human adaptations are steered by culture and require genes to accommodate.

Waring and Wood say culture is also special in one important way: it is strongly group-oriented. Factors like conformity, social identity and shared norms and institutions — factors that have no genetic equivalent — make cultural evolution very group-oriented, according to researchers. Therefore, competition between culturally organized groups propels adaptations such as new cooperative norms and social systems that help groups survive better together.

According to researchers, “culturally organized groups appear to solve adaptive problems more readily than individuals, through the compounding value of social learning and cultural transmission in groups.” Cultural adaptations may also occur faster in larger groups than in small ones.

With groups primarily driving culture and culture now fueling human evolution more than genetics, Waring and Wood found that evolution itself has become more group-oriented.

“In the very long term, we suggest that humans are evolving from individual genetic organisms to cultural groups which function as superorganisms, similar to ant colonies and beehives,” Waring says. “The ‘society as organism’ metaphor is not so metaphorical after all. This insight can help society better understand how individuals can fit into a well-organized and mutually beneficial system. Take the coronavirus pandemic, for example. An effective national epidemic response program is truly a national immune system, and we can therefore learn directly from how immune systems work to improve our COVID response.”

###

Waring is a member of the Cultural Evolution Society, an international research network that studies the evolution of culture in all species. He applies cultural evolution to the study of sustainability in social-ecological systems and cooperation in organizational evolution.

Wood works in the UMaine Evolutionary Applications Laboratory managed by Michael Kinnison, a professor of evolutionary applications. His research focuses on eco-evolutionary dynamics, particularly rapid evolution during trophic cascades.

The professionals who predict the future for a living (MIT Technology Review)

technologyreview.com

Everywhere from business to medicine to the climate, forecasting the future is a complex and absolutely critical job. So how do you do it—and what comes next?

Bobbie Johnson

February 26, 2020


Inez Fung

Professor of atmospheric science, University of California, Berkeley

Credit: Leah Fasten

Prediction for 2030: We’ll light up the world… safely

I’ve spoken to people who want climate model information, but they’re not really sure what they’re asking me for. So I say to them, “Suppose I tell you that some event will happen with a probability of 60% in 2030. Will that be good enough for you, or will you need 70%? Or would you need 90%? What level of information do you want out of climate model projections in order to be useful?”

I joined Jim Hansen’s group in 1979, and I was there for all the early climate projections. And the way we thought about it then, those things are all still totally there. What we’ve done since then is add richness and higher resolution, but the projections are really grounded in the same kind of data, physics, and observations.

Still, there are things we’re missing. We still don’t have a real theory of precipitation, for example. But there are two exciting things happening there. One is the availability of satellite observations: cloud imagery is still not fully utilized. The other is that there used to be no way to get regional precipitation patterns through history—and now there is. Scientists found these caves in China and elsewhere, and they go in, look for a nice little chamber with stalagmites, and then they chop them up and send them back to the lab, where they do fantastic uranium-thorium dating and measure oxygen isotopes in calcium carbonate. From there they can interpret a record of historic rainfall. The data are incredible: we have got over half a million years of precipitation records all over Asia.

I don’t see us reducing fossil fuels by 2030. I don’t see us reducing CO2 or atmospheric methane. Some 1.2 billion people in the world right now have no access to electricity, so I’m looking forward to the growth in alternative energy going to parts of the world that have no electricity. That’s important because it’s education, health, everything associated with a Western standard of living. That’s where I’m putting my hopes.

Credit: Dvora Photography

Anne Lise Kjaer

Futurist, Kjaer Global, London

Prediction for 2030: Adults will learn to grasp new ideas

As a kid I wanted to become an archaeologist, and I did in a way. Archaeologists find artifacts from the past and try to connect the dots and tell a story about how the past might have been. We do the same thing as futurists; we use artifacts from the present and try to connect the dots into interesting narratives in the future.

When it comes to the future, you have two choices. You can sit back and think “It’s not happening to me” and build a great big wall to keep out all the bad news. Or you can build windmills and harness the winds of change.

A lot of companies come to us and think they want to hear about the future, but really it’s just an exercise for them—let’s just tick that box, do a report, and put it on our bookshelf.

So we have a little test for them. We do interviews, we ask them questions; then we use a model called a Trend Atlas that considers both the scientific dimensions of society and the social ones. We look at the trends in politics, economics, societal drivers, technology, environment, legislation—how does that fit with what we know currently? We look back maybe 10, 20 years: can we see a little bit of a trend and try to put that into the future?

What’s next? Obviously with technology we can educate much better than we could in the past. But it’s a huge opportunity to educate the parents of the next generation, not just the children. Kids are learning about sustainability goals, but what about the people who actually rule our world?

Credit: Courtesy photo

Philip Tetlock

Coauthor of Superforecasting and professor, University of Pennsylvania

Prediction for 2030: We’ll get better at being uncertain

At the Good Judgment Project, we try to track the accuracy of commentators and experts in domains in which it’s usually thought impossible to track accuracy. You take a big debate and break it down into a series of testable short-term indicators. So you could take a debate over whether strong forms of artificial intelligence are going to cause major dislocations in white-collar labor markets by 2035, 2040, 2050. A lot of discussion already occurs at that level of abstraction, but from our point of view, it’s more useful to break it down and to say: If we were on a long-term trajectory toward an outcome like that, what sorts of things would we expect to observe in the short term? So we started this off in 2015, and in 2016 AlphaGo defeated people in Go. But then other things didn’t happen: driverless Ubers weren’t picking people up for fares in any major American city at the end of 2017. Watson didn’t defeat the world’s best oncologists in a medical diagnosis tournament. So I don’t think we’re on a fast track toward the singularity, put it that way.

Forecasts have the potential to be either self-fulfilling or self-negating. Y2K was arguably a self-negating forecast. But it’s possible to build that into a forecasting tournament by asking conditional forecasting questions: i.e., How likely is X, conditional on our doing this or doing that?

What I’ve seen over the last 10 years, and it’s a trend that I expect will continue, is an increasing openness to the quantification of uncertainty. I think there’s a grudging, halting, but cumulative movement toward thinking about uncertainty in more granular and nuanced ways that permit keeping score.
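
Tetlock’s tournaments keep score with proper scoring rules, most famously the Brier score: the mean squared difference between the probability a forecaster stated and the 0-or-1 outcome, where lower is better. The sketch below uses invented forecasts to show how a hedged, well-calibrated forecaster outscores a confident but often-wrong one.

```python
# Brier score: mean squared difference between stated probability and the
# 0/1 outcome; lower is better. The forecasts below are invented examples.

def brier(forecasts: list[tuple[float, int]]) -> float:
    return sum((p - y) ** 2 for p, y in forecasts) / len(forecasts)

# (probability assigned to "event happens", what actually happened)
pundit     = [(0.90, 0), (0.80, 1), (0.95, 0)]  # confident, often wrong
forecaster = [(0.40, 0), (0.70, 1), (0.30, 0)]  # hedged, well calibrated

print(f"pundit:     {brier(pundit):.3f}")      # 0.584
print(f"forecaster: {brier(forecaster):.3f}")  # 0.113
```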

Credit: Ryan Young

Keith Chen

Associate professor of economics, UCLA

Prediction for 2030: We’ll be more—and less—private

When I worked on Uber’s surge pricing algorithm, the problem it was built to solve was very coarse: we were trying to convince drivers to put in extra time when they were most needed. There were predictable times—like New Year’s—when we knew we were going to need a lot of people. The deeper problem was that this was a system with basically no control. It’s like trying to predict the weather. Yes, the amount of weather data that we collect today—temperature, wind speed, barometric pressure, humidity data—is 10,000 times greater than what we were collecting 20 years ago. But we still can’t predict the weather 10,000 times further out than we could back then. And social movements—even in a very specific setting, such as where riders want to go at any given point in time—are, if anything, even more chaotic than weather systems.

These days what I’m doing is a little bit more like forensic economics. We look to see what we can find and predict from people’s movement patterns. We’re just using simple cell-phone data like geolocation, but even just from movement patterns, we can infer salient information and build a psychological dimension of you. What terrifies me is I feel like I have much worse data than Facebook does. So what are they able to understand with their much better information?

I think the next big social tipping point is people actually starting to really care about their privacy. It’ll be like smoking in a restaurant: it will quickly go from causing outrage when people want to stop it to suddenly causing outrage if somebody does it. But at the same time, by 2030 almost every Chinese citizen will be completely genotyped. I don’t quite know how to reconcile the two.

Credit: Sarah Deragon

Annalee Newitz

Science fiction and nonfiction author, San Francisco

Prediction for 2030: We’re going to see a lot more humble technology

Every era has its own ideas about the future. Go back to the 1950s and you’ll see that people fantasized about flying cars. Now we imagine bicycles and green cities where cars are limited, or where cars are autonomous. We have really different priorities now, so that works its way into our understanding of the future.

Science fiction writers can’t actually make predictions. I think of science fiction as engaging with questions being raised in the present. But what we can do, even if we can’t say what’s definitely going to happen, is offer a range of scenarios informed by history.

There are a lot of myths about the future that people believe are going to come true right now. I think a lot of people—not just science fiction writers but people who are working on machine learning—believe that relatively soon we’re going to have a human-equivalent brain running on some kind of computing substrate. This is as much a reflection of our time as it is what might actually happen.

It seems unlikely that a human-equivalent brain in a computer is right around the corner. But we live in an era where a lot of us feel like we live inside computers already, for work and everything else. So of course we have fantasies about digitizing our brains and putting our consciousness inside a machine or a robot.

I’m not saying that those things could never happen. But they seem much more closely allied to our fantasies in the present than they do to a real technical breakthrough on the horizon.

We’re going to have to develop much better technologies around disaster relief and emergency response, because we’ll be seeing a lot more floods, fires, storms. So I think there is going to be a lot more work on really humble technologies that allow you to take your community off the grid, or purify your own water. And I don’t mean in a creepy survivalist way; I mean just in a this-is-how-we-are-living-now kind of way.

Credit: Noah Willman

Finale Doshi-Velez

Associate professor of computer science, Harvard

Prediction for 2030: Humans and machines will make decisions together

In my lab, we’re trying to answer questions like “How might this patient respond to this antidepressant?” or “How might this patient respond to this vasopressor?” So we get as much data as we can from the hospital. For a psychiatric patient, we might have everything about their heart disease, kidney disease, cancer; for a blood pressure management recommendation for the ICU, we have all their oxygen information, their lactate, and more.

Some of it might be relevant to making predictions about their illnesses, some not, and we don’t know which is which. That’s why we ask for the large data set with everything.

There’s been about a decade of work trying to get unsupervised machine-­learning models to do a better job at making these predictions, and none worked really well. The breakthrough for us was when we found that all the previous approaches for doing this were wrong in the exact same way. Once we untangled all of this, we came up with a different method.

We also realized that even if our ability to predict what drug is going to work is not always that great, we can more reliably predict what drugs are not going to work, which is almost as valuable.

I’m excited about combining humans and AI to make predictions. Let’s say your AI is right only 70% of the time, and your human is also right only 70% of the time. Combining the two is difficult, but if you can fuse their successes, then you should be able to do better than either system alone. How to do that is a really tough, exciting question.
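
The lab’s actual fusion methods aren’t described here, but a toy model of my own shows why combining two predictors that are each right about 70% of the time can beat either alone, provided their errors are independent and their confidences are calibrated: summing log-odds (a naive-Bayes combination) weighs each vote by how sure it is. Every detail below is an invented illustration.

```python
import math
import random

# Two predictors each see an independent noisy signal of a binary truth and
# report a calibrated probability. Fusing them by adding log-odds (valid when
# their errors are independent) beats either predictor alone. All numbers are
# invented for illustration.

NOISE = 1.9  # chosen so each predictor alone is right about 70% of the time

def predict(truth: bool) -> float:
    """Calibrated posterior P(truth) from one noisy observation."""
    x = (1.0 if truth else -1.0) + random.gauss(0.0, NOISE)
    return 1.0 / (1.0 + math.exp(-2.0 * x / NOISE**2))

def logit(p: float) -> float:
    return math.log(p / (1.0 - p))

random.seed(0)
trials, solo, fused = 20000, 0, 0
for _ in range(trials):
    truth = random.random() < 0.5
    p1, p2 = predict(truth), predict(truth)
    p12 = 1.0 / (1.0 + math.exp(-(logit(p1) + logit(p2))))
    solo += (p1 > 0.5) == truth
    fused += (p12 > 0.5) == truth

print(f"single: {solo/trials:.2f}  fused: {fused/trials:.2f}")  # ~0.70 vs ~0.77
```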

All these predictive models were built and deployed and people didn’t think enough about potential biases. I’m hopeful that we’re going to have a future where these human-machine teams are making decisions that are better than either alone.

Credit: Guillaume Simoneau

Abdoulaye Banire Diallo

Professor, director of the bioinformatics lab, University of Quebec at Montreal

Prediction for 2030: Machine-based forecasting will be regulated

When a farmer in Quebec decides whether to inseminate a cow or not, it might depend on the expectation of milk that will be produced every day for one year, two years, maybe three years after that. Farms have management systems that capture the data and the environment of the farm. I’m involved in projects that add a layer of genetic and genomic data to help forecasting, so that decision makers like the farmer have a full picture when they’re thinking about replacing cows, improving management, resilience, and animal welfare.

With the emergence of machine learning and AI, what we’re showing is that we can help tackle problems in a way that hasn’t been done before. We are adapting it to the dairy sector, where we’ve shown that some decisions can be anticipated 18 months in advance just by forecasting based on the integration of this genomic data. I think in some areas such as plant health we have only achieved 10% or 20% of our capacity to improve certain models.

Until now AI and machine learning have been associated with domain expertise. It’s not a public-wide thing. But less than 10 years from now they will need to be regulated. I think there are a lot of challenges for scientists like me to try to make those techniques more explainable, more transparent, and more auditable.

If DNA is like software, can we just fix the code? (MIT Technology Review)

technologyreview.com

In a race to cure his daughter, a Google programmer enters the world of hyper-personalized drugs.

Erika Check Hayden

February 26, 2020


To create atipeksen, Yu borrowed from recent biotech successes like gene therapy. Some new drugs, including cancer therapies, treat disease by directly manipulating genetic information inside a patient’s cells. Now doctors like Yu find they can alter those treatments as if they were digital programs. Change the code, reprogram the drug, and there’s a chance of treating many genetic diseases, even those as unusual as Ipek’s.

The new strategy could in theory help millions of people living with rare diseases, the vast majority of which are caused by genetic typos and have no treatment. US regulators say last year they fielded more than 80 requests to allow genetic treatments for individuals or very small groups, and that they may take steps to make tailor-made medicines easier to try. New technologies, including custom gene-editing treatments using CRISPR, are coming next.

“I never thought we would be in a position to even contemplate trying to help these patients,” says Stanley Crooke, a biotechnology entrepreneur and founder of Ionis Pharmaceuticals, based in Carlsbad, California. “It’s an astonishing moment.”

Antisense drug

Right now, though, insurance companies won’t pay for individualized gene drugs, and no company is making them (though some plan to). Only a few patients have ever gotten them, usually after heroic feats of arm-twisting and fundraising. And it’s no mistake that programmers like Mehmet Kuzu, who works on data privacy, are among the first to pursue individualized drugs. “As computer scientists, they get it. This is all code,” says Ethan Perlstein, chief scientific officer at the Christopher and Dana Reeve Foundation.

A nonprofit, the A-T Children’s Project, funded most of the cost of designing and making Ipek’s drug. For Brad Margus, who created the foundation in 1993 after his two sons were diagnosed with A-T, the change between then and now couldn’t be more dramatic. “We’ve raised so much money, we’ve funded so much research, but it’s so frustrating that the biology just kept getting more and more complex,” he says. “Now, we’re suddenly presented with this opportunity to just fix the problem at its source.”

Ipek was only a few months old when her father began looking for a cure. A geneticist friend sent him a paper describing a possible treatment for her exact form of A-T, and Kuzu flew from Sunnyvale, California, to Los Angeles to meet the scientists behind the research. But they said no one had tried the drug in people: “We need many more years to make this happen,” they told him.

Timothy Yu, of Boston Children’s Hospital.
Credit: Courtesy photo

Kuzu didn’t have years. After he returned from Los Angeles, Margus handed him a thumb drive with a video of a talk by Yu, a doctor at Boston Children’s Hospital, who described how he planned to treat a young girl with Batten disease (a different neurodegenerative condition) in what press reports would later dub “a stunning illustration of personalized genomic medicine.” Kuzu realized Yu was using the very same gene technology the Los Angeles scientists had dismissed as a pipe dream.

That technology is called “antisense.” Inside a cell, DNA encodes information to make proteins. Between the DNA and the protein, though, come messenger molecules called RNA that ferry the gene information out of the nucleus. Think of antisense as mirror-image molecules that stick to specific RNA messages, letter for letter, blocking them from being made into proteins. It’s possible to silence a gene this way, and sometimes to overcome errors, too.
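
To make the letter-for-letter idea concrete: an antisense strand is essentially the reverse complement of its RNA target, pairing A with U and G with C. Here is a minimal sketch; the target fragment is invented, and real antisense drugs also depend on chemical modifications that no string operation captures.

```python
# Watson-Crick pairing for RNA: A-U and G-C. An antisense strand is the
# reverse complement of its target message, binding it letter for letter.
# The target sequence here is invented purely for illustration.

PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(mrna: str) -> str:
    """Return the reverse complement of an mRNA fragment."""
    return "".join(PAIR[base] for base in reversed(mrna.upper()))

target = "AUGGCUUACGGA"   # hypothetical mRNA fragment
print(antisense(target))  # UCCGUAAGCCAU
```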

Though the first antisense drugs appeared 20 years ago, the concept achieved its first blockbuster success only in 2016. That’s when a drug called nusinersen, made by Ionis, was approved to treat children with spinal muscular atrophy, a genetic disease that would otherwise kill them by their second birthday.

Yu, a specialist in gene sequencing, had not worked with antisense before, but once he’d identified the genetic error causing Batten disease in his young patient, Mila Makovec, it became apparent to him he didn’t have to stop there. If he knew the gene error, why not create a gene drug? “All of a sudden a lightbulb went off,” Yu says. “Couldn’t one try to reverse this? It was such an appealing idea, and such a simple idea, that we basically just found ourselves unable to let that go.”

Yu admits it was bold to suggest his idea to Mila’s mother, Julia Vitarello. But he was not starting from scratch. In a demonstration of how modular biotech drugs may become, he based milasen on the same chemistry backbone as the Ionis drug, except he made Mila’s particular mutation the genetic target. Where it had taken decades for Ionis to perfect a drug, Yu now set a record: it took only eight months for him to make milasen, try it on animals, and convince the US Food and Drug Administration to let him inject it into Mila’s spine.

“What’s different now is that someone like Tim Yu can develop a drug with no prior familiarity with this technology,” says Art Krieg, chief scientific officer at Checkmate Pharmaceuticals, based in Cambridge, Massachusetts.

Source code

As word got out about milasen, Yu heard from more than a hundred families asking for his help. That’s put the Boston doctor in a tough position. Yu has plans to try antisense to treat a dozen kids with different diseases, but he knows it’s not the right approach for everyone, and he’s still learning which diseases might be most amenable. And nothing is ever simple—or cheap. Each new version of a drug can behave differently and requires costly safety tests in animals.

Kuzu had the advantage that the Los Angeles researchers had already shown antisense might work. What’s more, Margus agreed that the A-T Children’s Project would help fund the research. But it wouldn’t be fair to make the treatment just for Ipek if the foundation was paying for it. So Margus and Yu decided to test antisense drugs in the cells of three young A-T patients, including Ipek. Whichever kid’s cells responded best would get picked.

Ipek at play
Ipek may not survive past her 20s without treatment.
Credit: Matthew Monteith

While he waited for the test results, Kuzu raised about $200,000 from friends and coworkers at Google. One day, an email landed in his in-box from another Google employee who was fundraising to help a sick child. As he read it, Kuzu felt a jolt of recognition: his coworker, Jennifer Seth, was also working with Yu.

Seth’s daughter Lydia was born in December 2018. The baby, with beautiful chubby cheeks, carries a mutation that causes seizures and may lead to severe disabilities. Seth’s husband Rohan, a well-connected Silicon Valley entrepreneur, refers to the problem as a “tiny random mutation” in her “source code.” The Seths have raised more than $2 million, much of it from co-workers.

Custom drug

By then, Yu was ready to give Kuzu the good news: Ipek’s cells had responded the best. So last September the family packed up and moved from California to Cambridge, Massachusetts, so Ipek could start getting atipeksen. The toddler got her first dose this January, under general anesthesia, through a lumbar puncture into her spine.

After a year, the Kuzus hope to learn whether or not the drug is helping. Doctors will track her brain volume and measure biomarkers in Ipek’s cerebrospinal fluid as a readout of how her disease is progressing. And a team at Johns Hopkins will help compare her movements with those of other kids, both with and without A-T, to observe whether the expected disease symptoms are delayed.

One serious challenge facing gene drugs for individuals is that short of a healing miracle, it may ultimately be impossible to be sure they really work. That’s because the speed with which diseases like A-T progress can vary widely from person to person. Proving a drug is effective, or revealing that it’s a dud, almost always requires collecting data from many patients, not just one. “It’s important for parents who are ready to pay anything, try anything, to appreciate that experimental treatments often don’t work,” says Holly Fernandez Lynch, a lawyer and ethicist at the University of Pennsylvania. “There are risks. Trying one could foreclose other options and even hasten death.”

Kuzu says his family weighed the risks and benefits. “Since this is the first time for this kind of drug, we were a little scared,” he says. But, he concluded, “there’s nothing else to do. This is the only thing that might give hope to us and the other families.”

Another obstacle to ultra-personal drugs is that insurance won’t pay for them. And so far, pharmaceutical companies aren’t interested either. They prioritize drugs that can be sold thousands of times, but as far as anyone knows, Ipek is the only person alive with her exact mutation. That leaves families facing extraordinary financial demands that only the wealthy, lucky, or well connected can meet. Developing Ipek’s treatment has already cost $1.9 million, Margus estimates.

Some scientists think agencies such as the US National Institutes of Health should help fund the research, and will press their case at a meeting in Bethesda, Maryland, in April. Help could also come from the Food and Drug Administration, which is developing guidelines that may speed the work of doctors like Yu. The agency will receive updates on Mila and other patients if any of them experience severe side effects.

The FDA is also considering giving doctors more leeway to modify genetic drugs to try in new patients without securing new permissions each time. Peter Marks, director of the FDA’s Center for Biologics Evaluation and Research, likens traditional drug manufacturing to factories that mass-produce identical T-shirts. But, he points out, it’s now possible to order an individual basic T-shirt embroidered with a company logo. So drug manufacturing could become more customized too, Marks believes.

Custom drugs carrying exactly the message a sick kid’s body needs? If we get there, credit will go to companies like Ionis that developed the new types of gene medicine. But it should also go to the Kuzus—and to Brad Margus, Rohan Seth, Julia Vitarello, and all the other parents who are trying to save their kids. In doing so, they are turning hyper-personalized medicine into reality.

Erika Check Hayden is director of the science communication program at the University of California, Santa Cruz.

This story was part of our March 2020 issue.


An elegy for cash: the technology we might never replace (MIT Technology Review)

technologyreview.com

Cash is gradually dying out. Will we ever have a digital alternative that offers the same mix of convenience and freedom?

Mike Orcutt

January 3, 2020


If you’d rather keep all that to yourself, you’re in luck. The person in the store (or on the street corner) may remember your face, but as long as you didn’t reveal any identifying information, there is nothing that links you to the transaction.

This is a feature of physical cash that payment cards and apps do not have: freedom. Called “bearer instruments,” banknotes and coins are presumed to be owned by whoever holds them. We can use them to transact with another person without a third party getting in the way. Companies cannot build advertising profiles or credit ratings out of our data, and governments cannot track our spending or our movements. And while a credit card can be declined and a check mislaid, handing over money works every time, instantly.

We shouldn’t take this freedom for granted. Much of our commerce now happens online. It relies on banks and financial technology companies to serve as middlemen. Transactions are going digital in the physical world, too: electronic payment tools, from debit cards to Apple Pay to Alipay, are increasingly replacing cash. While notes and coins remain popular in many countries, including the US, Japan, and Germany, in others they are nearing obsolescence.

This trend has civil liberties groups worried. Without cash, there is “no chance for the kind of dignity-preserving privacy that undergirds an open society,” writes Jerry Brito, executive director of Coin Center, a policy advocacy group based in Washington, DC. In a recent report, Brito contends that we must “develop and foster electronic cash” that is as private as physical cash and doesn’t require permission to use.

The central question is who will develop and control the electronic payment systems of the future. Most of the existing ones, like Alipay, Zelle, PayPal, Venmo, and Kenya’s M-Pesa, are run by private firms. Afraid of leaving payments solely in their hands, many governments are looking to develop some sort of electronic stand-in for notes and coins. Meanwhile, advocates of stateless, ownerless cryptocurrencies like Bitcoin say they’re the only solution as surveillance-proof as cash—but can they work at scale?

We tend to take it for granted that new technologies work better than old ones—safer, faster, more accurate, more efficient, more convenient. Purists may extol the virtues of vinyl records, but nobody can dispute that a digital music collection is easier to carry and sounds almost exactly as good. Cash is a paradox—a technology thousands of years old that may just prove impossible to re-create in a more advanced form.

In (government) money we trust?

We call banknotes and coins “cash,” but the term really refers to something more abstract: cash is essentially money that your government owes you. In the old days this was a literal debt. “I promise to pay the bearer on demand the sum of …” still appears on British banknotes, a notional guarantee that the Bank of England will hand over the same value in gold in exchange for your note. Today it represents the more abstract guarantee that you will always be able to use that note to pay for things.

The digits in your bank account, on the other hand, refer to what your bank owes you. When you go to an ATM, you are effectively converting the bank’s promise to pay into a government promise.

Most people would say they trust the government’s promise more, says Gabriel Söderberg, an economist at the Riksbank, the central bank of Sweden. Their bet—correct, in most countries—is that their government is much less likely to go bust.

That’s why it would be a problem if Sweden were to go completely “cashless,” Söderberg says. He and his colleagues fear that if people lose the option to convert their bank money to government money at will and use it to pay for whatever they need, they might start to lose trust in the whole money system. A further worry is that if the private sector is left to dominate digital payments, people who can’t or won’t use these systems could be shut out of the economy.

This is fast becoming more than just a thought experiment in Sweden. Nearly everyone there uses a mobile app called Swish to pay for things. Economists have estimated that retailers in Sweden could completely stop accepting cash by 2023.

Creating an electronic version of Sweden’s sovereign currency—an “e-krona”—could mitigate these problems, Söderberg says. If the central bank were to issue digital money, it would design it to be a public good, not a profit-making product for a corporation. “Easily accessible, simple and user-friendly versions could be developed for those who currently have difficulty with digital technology,” the bank asserted in a November report covering Sweden’s payment landscape.

The Riksbank plans to develop and test an e-krona prototype. It has examined a number of technologies that might underlie it, including cryptocurrency systems like Bitcoin. But the central bank has also called on the Swedish government to lead a broad public inquiry into whether such a system should ever go live. “In the end, this decision is too big for a central bank alone, at least in the Swedish context,” Söderberg says.

The death of financial privacy

China, meanwhile, appears to have made its decision: the digital renminbi is coming. Mu Changchun, head of the People’s Bank of China’s digital currency research institute, said in September that the currency, which the bank has been working on for years, is “close to being out.” In December, a local news report suggested that the PBOC is nearly ready to start tests in the cities of Shenzhen and Suzhou. And the bank has been explicit about its intention to use it to replace banknotes and coins.

Cash is already dying out on its own in China, thanks to Alipay and WeChat Pay, the QR-code-based apps that have become ubiquitous in just a few years. It’s been estimated that mobile payments made up more than 80% of all payments in China in 2018, up from less than 20% in 2013.

A street musician accepts payment via WeChat Pay.
Credit: AP Images

It’s not clear how much access the government currently has to transaction data from WeChat Pay and Alipay. Once it issues a sovereign digital currency—which officials say will be compatible with those two services—it will likely have access to a lot more. Martin Chorzempa, a research fellow at the Peterson Institute for International Economics in Washington, DC, told the New York Times in October that the system will give the PBOC “extraordinary power and visibility into the financial system, more than any central bank has today.”

We don’t know for sure what technology the PBOC plans to use as the basis for its digital renminbi, but we have at least two revealing clues. First, the bank has been researching blockchain technology since 2014, and the government has called the development of this technology a priority. Second, Mu said in September that China’s system will bear similarities to Libra, the electronic currency Facebook announced last June. Indeed, PBOC officials have implied in public statements that the unveiling of Libra inspired them to accelerate the digital renminbi’s development.

As currently envisioned, Libra will run on a blockchain, a type of accounting ledger that can be maintained by a network of computers instead of a single central authority. However, it will operate very differently from Bitcoin, the original blockchain system.

The computers in Bitcoin’s network use open-source software to automatically verify and record every single transaction. In the process, they generate a permanent public record of the currency’s entire transaction history: the blockchain. As envisioned, Libra’s network will do something similar. But whereas anyone with a computer and an internet connection can participate anonymously in Bitcoin’s network, the “nodes” that make up Libra’s network will be companies that have been vetted and given membership in a nonprofit association.
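A toy hash chain shows the mechanism in miniature. This is only a sketch of the idea (real Bitcoin blocks add proof-of-work, Merkle trees, and a peer-to-peer protocol), but it captures why a shared transaction history is tamper-evident:

```python
import hashlib
import json

# Toy hash chain: each block commits to the hash of the one before it,
# so the whole history is tamper-evident. A drastic simplification of
# what Bitcoin's network actually maintains.

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev = "0" * 64  # genesis placeholder
for tx in ["alice->bob:5", "bob->carol:2", "carol->dan:1"]:
    block = {"prev": prev, "tx": tx}
    chain.append(block)
    prev = block_hash(block)

# Tampering with an early transaction changes its hash, which breaks
# every later block's "prev" link.
chain[0]["tx"] = "alice->bob:500"
ok = all(chain[i + 1]["prev"] == block_hash(chain[i]) for i in range(len(chain) - 1))
print("chain still consistent?", ok)  # -> False
```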

Unlike Bitcoin, which is notoriously volatile, Libra will be designed to maintain a stable value. To pull this off, the so-called Libra Association will be responsible for maintaining a reserve of government-issued currencies (the latest plan is for it to be half US dollars, with the other half composed of British pounds, euros, Japanese yen, and Singapore dollars). This reserve is supposed to serve as backing for the digital units of value.
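Mechanically, a reserve basket works like the sketch below. The half-dollar share comes from the plan described above; the remaining split and all exchange rates are placeholder assumptions, not Libra’s published figures:

```python
# A basket unit is defined as fixed *amounts* of each currency; its
# dollar value then floats with exchange rates. Amounts are chosen so
# the unit starts near $1 with a 50% dollar share (the other splits and
# all rates are illustrative assumptions, not Libra's figures).
amounts = {"USD": 0.50, "GBP": 0.096, "EUR": 0.114, "JPY": 13.9, "SGD": 0.169}
usd_rate = {"USD": 1.00, "GBP": 1.30, "EUR": 1.10, "JPY": 0.009, "SGD": 0.74}

value = sum(amounts[c] * usd_rate[c] for c in amounts)
print(f"basket unit value: ${value:.3f}")

# If the pound drops 10%, the unit's dollar value dips only about 1.2%,
# which is the point of spreading the reserve across currencies.
usd_rate["GBP"] = 1.17
value = sum(amounts[c] * usd_rate[c] for c in amounts)
print(f"after a 10% GBP fall: ${value:.3f}")
```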

Both Libra and the digital renminbi, however, face serious questions about privacy. To start with, it’s not clear if people will be able to use them anonymously.

With Bitcoin, although transactions are public, users don’t have to reveal who they really are; each person’s “address” on the public blockchain is just a random string of letters and numbers. But in recent years, law enforcement officials have grown skilled at combining public blockchain data with other clues to unmask people using cryptocurrencies for illicit purposes. Indeed, in a July blog post, Libra project head David Marcus argued that the currency would be a boon for law enforcement, since it would help “move more cash transactions—where a lot of illicit activities happen—to a digital network.”
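The pseudonymity is simple to picture: an address is derived from a cryptographic key by hashing, and by itself carries no identity. The sketch below is deliberately simplified (real Bitcoin addresses use elliptic-curve key pairs, RIPEMD-160, and Base58Check encoding):

```python
import hashlib
import secrets

# Simplified "address": a hash of a public key, with no name attached.
# (Random stand-in key; real Bitcoin uses elliptic-curve keys and a
# different hashing/encoding pipeline, but the principle is the same.)
public_key = secrets.token_bytes(33)
address = hashlib.sha256(public_key).hexdigest()[:40]
print("address:", address)

# The ledger records only strings like this one. Linking an address to
# a person requires outside clues, which is what investigators combine
# with the public transaction history.
```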

As for the Chinese digital currency, Mu has said it will feature some level of anonymity. “We know the demand from the general public is to keep anonymity by using paper money and coins … we will give those people who demand it anonymity,” he said at a November conference in Singapore. “But at the same time we will keep the balance between ‘controllable anonymity’ and anti-money-laundering, CTF [counter-terrorist financing], and also tax issues, online gambling, and any electronic criminal activities,” he added. He did not, however, explain how that “balance” would work.

Sweden and China are leading the charge to issue consumer-focused electronic money, but according to John Kiff, an expert on financial stability for the International Monetary Fund, more than 30 countries have explored or are exploring the idea.  In some, the rationale is similar to Sweden’s: dwindling cash and a growing private-sector payments ecosystem. Others are countries where commercial banks have decided not to set up shop. Many see an opportunity to better monitor for illicit transactions. All will have to wrestle with the same thorny privacy issues that Libra and the digital renminbi are raising.

Robleh Ali, a research scientist at MIT’s Digital Currency Initiative, says digital currency systems from central banks may need to be designed so that the government can “consciously blind itself” to the information. Something like that might be technically possible thanks to cutting-edge cryptographic tools like zero-knowledge proofs, which are used in systems like Zcash to shield blockchain transaction information from public view.

However, there’s no evidence that any governments are even thinking about deploying tools like this. And regardless, can any government—even Sweden’s—really be trusted to blind itself?

Cryptocurrency: A workaround for freedom

That’s wishful thinking, says Alex Gladstein, chief strategy officer for the Human Rights Foundation. While you may trust your government or think you’ve got nothing to hide, that might not always remain true. Politics evolves, governments get pushed out by elections or other events, what constitutes a “crime” changes, and civil liberties are not guaranteed. “Financial privacy is not going to be gifted to you by your government, regardless of how ‘free’ they are,” Gladstein says. He’s convinced that it has to come in the form of a stateless, decentralized digital currency like Bitcoin.

In fact, “electronic cash” was what Bitcoin’s still-unknown inventor, the pseudonymous Satoshi Nakamoto, claimed to be trying to create (before disappearing). Eleven years into its life, Nakamoto’s technology still lacks some of the signature features of cash. It is difficult to use, transactions can take more than an hour to process, and the currency’s value can fluctuate wildly. And as already noted, the supposedly anonymous transactions it enables can sometimes be traced.

But in some places people just need something that works, however imperfectly. Take Venezuela. Cash in the crisis-ridden country is scarce, and the Venezuelan bolivar is constantly losing value to hyperinflation. Many Venezuelans seek refuge in US dollars, storing them under the proverbial (and literal) mattress, but that also makes them vulnerable to thieves.

What many people want is access to stable cash in digital form, and there’s no easy way to get that, says Alejandro Machado, cofounder of the Open Money Initiative. Owing to government-imposed capital controls, Venezuelan banks have largely been cut off from foreign banks. And due to restrictions by US financial institutions, digital money services like PayPal and Zelle are inaccessible to most people.  So a small number of tech-savvy Venezuelans have turned to a service called LocalBitcoins.

It’s like Craigslist, except that the only things for sale are bitcoins and bolivars. On Venezuela’s LocalBitcoins site, people advertise varying quantities of currency for sale at varying exchange rates. The site holds the money in escrow until trades are complete, and tracks the sellers’ reputations.
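The escrow flow itself is simple. Here is a minimal sketch of the state machine described above (the class and method names are hypothetical, not LocalBitcoins’ actual code):

```python
# Hypothetical sketch of an escrow trade; not LocalBitcoins' actual code.
class EscrowTrade:
    def __init__(self, seller, buyer, btc_amount):
        self.seller, self.buyer = seller, buyer
        self.btc_amount = btc_amount
        self.state = "OPEN"

    def lock_coins(self):
        # The site takes custody of the seller's coins up front,
        # so the seller can't vanish after being paid in bolivars.
        self.state = "ESCROWED"

    def confirm_payment_received(self):
        # Seller confirms the bolivar transfer arrived; only then
        # are the coins released to the buyer.
        if self.state == "ESCROWED":
            self.state = "RELEASED"

trade = EscrowTrade("seller_1", "buyer_1", btc_amount=0.01)
trade.lock_coins()
trade.confirm_payment_received()
print(trade.state)  # RELEASED: the buyer gets the coins only at the end
```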

It’s not for the masses, but it’s “very effective” for people who can make it work, says Machado. For instance, he and his colleagues met a young woman who mines Bitcoin and keeps her savings in the currency. She doesn’t have a foreign bank account, so she’s willing to deal with the constant fluctuations in Bitcoin’s price. Using LocalBitcoins, she can cash out into bolivars whenever she needs them—to buy groceries, for example. “Niche power users” like this are “leveraging the best features of Bitcoin, which is to be an asset that is permissionless and that is very easy to trade electronically,” Machado says.

However, this is possible only because there are enough people using LocalBitcoins to create what finance people call “local liquidity,” meaning you can easily find a buyer for your bitcoins or bolivars. Bitcoin is the only cryptocurrency that has achieved this in Venezuela, says Machado, and it’s mostly thanks to LocalBitcoins.

This is a long way from the dream of cryptocurrency as a widely used substitute for stable, government-issued money. Most Venezuelans can’t use Bitcoin, and few merchants there even know what it is, much less how to accept it.

Still, it’s a glimpse of what a cryptocurrency can offer—a functional financial system that anyone can join and that offers the kind of freedom cash provides in most other places.

Decentralize this

Could something like Bitcoin ever be as easy to use and reliable as today’s cash is for everyone else? The answer is philosophical as well as technical.

To begin with, what does it even mean for something to be like Bitcoin? Central banks and corporations will adapt certain aspects of Bitcoin and apply them to their own ends. Will those be cryptocurrencies? Not according to purists, who say that though Libra or some future central bank-issued digital currency may run on blockchain technology, they won’t be cryptocurrencies because they will be under centralized control.

True cryptocurrencies are “decentralized”—they have no one entity in charge and no single points of failure, no weak spots that an adversary (including a government) could attack. With no middleman like a bank attesting that a transaction took place, each transaction has to be validated by the nodes in a cryptocurrency’s network, which can number many thousands. But this requires an immense expenditure of computing power, and it’s the reason Bitcoin transactions can take more than an hour to settle.
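The expense comes from proof-of-work-style validation. A toy version, with the difficulty lowered drastically (real Bitcoin double-hashes block headers against a target the network retunes), shows the brute-force character of the process:

```python
import hashlib

# Toy proof-of-work: grind nonces until the block's hash starts with
# `difficulty` zeros. Real Bitcoin double-hashes block headers against
# a network-adjusted target, but the brute force is the same.
def mine(block_data: str, difficulty: int = 4):
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("alice->bob:5")
print(f"nonce={nonce}  hash={digest}")
# Each extra leading zero multiplies the expected work by 16, which is
# what makes rewriting an established chain prohibitively expensive.
```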

A currency like Libra wouldn’t have this problem, because only a few authorized entities would be able to operate nodes. The trade-off is that its users wouldn’t be able to trust those entities to guarantee their privacy, any more than they can trust a bank, a government, or Facebook.

Is it technically possible to achieve Bitcoin’s level of decentralization and the speed, scale, privacy, and ease of use that we’ve come to expect from traditional payment methods? That’s a problem many talented researchers are still trying to crack. But some would argue that shouldn’t necessarily be the goal.  

In a recent essay, Jill Carlson, cofounder of the Open Money Initiative, argued that perhaps decentralized cryptocurrency systems were “never supposed to go mainstream.” Rather, they were created explicitly for “censored transactions,” from paying for drugs or sex to supporting political dissidents or getting money out of countries with restrictive currency controls. Their slowness is inherent, not a design flaw; they “forsake scale, speed, and cost in favor of one key feature: censorship resistance.” A world in which they went mainstream would be “a very scary place indeed,” she wrote.

In summary, we have three avenues for the future of digital money, none of which offers the same mix of freedom and ease of use that characterizes cash. Private companies have an obvious incentive to monetize our data and pursue profits over public interest. Digital government money may still be used to track us, even by well-intentioned governments, and for less benign ones it’s a fantastic tool for surveillance. And cryptocurrency can prove useful when freedoms are at risk, but it likely won’t work at scale anytime soon, if ever.

How big a problem is this? That depends on where you live, how much you trust your government and your fellow citizens, and why you wish to use cash. And if you’d rather keep that to yourself, you’re in luck. For now.

What AI still can’t do (MIT Technology Review)

technologyreview.com

Brian Bergstein

February 19, 2020


Machine-learning systems can be duped or confounded by situations they haven’t seen before. A self-driving car gets flummoxed by a scenario that a human driver could handle easily. An AI system laboriously trained to carry out one task (identifying cats, say) has to be taught all over again to do something else (identifying dogs). In the process, it’s liable to lose some of the expertise it had in the original task. Computer scientists call this problem “catastrophic forgetting.”
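Catastrophic forgetting is easy to reproduce at toy scale. The sketch below (synthetic data, invented tasks) trains one small network on task A, then on task B, and watches its task-A accuracy collapse:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Illustrative only: two toy tasks over the same kind of input. Task A
# labels points by the sign of feature 0, task B by the sign of feature 1.
rng = np.random.default_rng(0)
X_a, X_b = rng.normal(size=(2000, 2)), rng.normal(size=(2000, 2))
y_a = (X_a[:, 0] > 0).astype(int)
y_b = (X_b[:, 1] > 0).astype(int)

net = MLPClassifier(hidden_layer_sizes=(32,), random_state=0)
for _ in range(50):                       # train on task A only
    net.partial_fit(X_a, y_a, classes=[0, 1])
print("task A accuracy after training on A:", round(net.score(X_a, y_a), 2))

for _ in range(50):                       # then train on task B only
    net.partial_fit(X_b, y_b)
print("task A accuracy after training on B:", round(net.score(X_a, y_a), 2))
# The second number typically collapses toward chance (~0.5): learning
# task B overwrote the weights that encoded task A.
```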

These shortcomings have something in common: they exist because AI systems don’t understand causation. They see that some events are associated with other events, but they don’t ascertain which things directly make other things happen. It’s as if you knew that the presence of clouds made rain likelier, but you didn’t know clouds caused rain.
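In the real world clouds do cause rain, but correlation alone can’t tell you that. A short simulation (toy numbers) builds a world where a common cause produces both, so the clouds-rain correlation is strong even though one has no effect on the other:

```python
import numpy as np

# Toy world: Z (say, a weather front) drives both X ("clouds") and
# Y ("rain"); X has no effect whatsoever on Y.
rng = np.random.default_rng(1)
z = rng.normal(size=100_000)
x = z + rng.normal(scale=0.5, size=z.size)
y = z + rng.normal(scale=0.5, size=z.size)
print("observed corr(X, Y):", round(float(np.corrcoef(x, y)[0, 1]), 2))  # ~0.8

# Intervening on X (Pearl's do-operator: set X ourselves, ignoring Z)
# leaves Y untouched; the observed correlation was never causal.
x_do = rng.normal(size=z.size)
print("corr(do(X), Y):", round(float(np.corrcoef(x_do, y)[0, 1]), 2))    # ~0.0
```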

Elias Bareinboim: AI systems are clueless when it comes to causation.

Understanding cause and effect is a big aspect of what we call common sense, and it’s an area in which AI systems today “are clueless,” says Elias Bareinboim. He should know: as the director of the new Causal Artificial Intelligence Lab at Columbia University, he’s at the forefront of efforts to fix this problem.

His idea is to infuse artificial-intelligence research with insights from the relatively new science of causality, a field shaped to a huge extent by Judea Pearl, a Turing Award–winning scholar who considers Bareinboim his protégé.

As Bareinboim and Pearl describe it, AI’s ability to spot correlations—e.g., that clouds make rain more likely—is merely the simplest level of causal reasoning. It’s good enough to have driven the boom in the AI technique known as deep learning over the past decade. Given a great deal of data about familiar situations, this method can lead to very good predictions. A computer can calculate the probability that a patient with certain symptoms has a certain disease, because it has learned just how often thousands or even millions of other people with the same symptoms had that disease.

But there’s a growing consensus that progress in AI will stall if computers don’t get better at wrestling with causation. If machines could grasp that certain things lead to other things, they wouldn’t have to learn everything anew all the time—they could take what they had learned in one domain and apply it to another. And if machines could use common sense, we’d be able to put more trust in them to take actions on their own, knowing that they aren’t likely to make dumb errors.

Today’s AI has only a limited ability to infer what will result from a given action. In reinforcement learning, a technique that has allowed machines to master games like chess and Go, a system uses extensive trial and error to discern which moves will essentially cause it to win. But this approach doesn’t work in messier settings in the real world. It doesn’t even leave a machine with a general understanding of how it might play other games.
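A miniature version of that trial-and-error process, assuming a five-cell corridor “game” where moving right eventually pays off, shows both the strength and the narrowness: the learned values solve this game and nothing else.

```python
import random

# Tabular Q-learning on a 5-cell corridor: reward 1 for reaching the
# rightmost cell. Pure trial and error; no model of the game's rules.
N, ACTIONS = 5, (-1, +1)                  # cells 0..4; move left / right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma = 0.5, 0.9

random.seed(0)
for _ in range(200):                      # 200 episodes of random exploration
    s = 0
    while s != N - 1:
        a = random.choice(ACTIONS)
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)}
print(policy)  # every state prefers +1: the moves that "cause" winning
```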

An even higher level of causal thinking would be the ability to reason about why things happened and ask “what if” questions. A patient dies while in a clinical trial; was it the fault of the experimental medicine or something else? School test scores are falling; what policy changes would most improve them? This kind of reasoning is far beyond the current capability of artificial intelligence.

Performing miracles

The dream of endowing computers with causal reasoning drew Bareinboim from Brazil to the United States in 2008, after he completed a master’s in computer science at the Federal University of Rio de Janeiro. He jumped at an opportunity to study under Judea Pearl, a computer scientist and statistician at UCLA. Pearl, 83, is a giant—the giant—of causal inference, and his career helps illustrate why it’s hard to create AI that understands causality.

Even well-trained scientists are apt to misinterpret correlations as signs of causation—or to err in the opposite direction, hesitating to call out causation even when it’s justified. In the 1950s, for example, a few prominent statisticians muddied the waters around whether tobacco caused cancer. They argued that without an experiment randomly assigning people to be smokers or nonsmokers, no one could rule out the possibility that some unknown—stress, perhaps, or some gene—caused people both to smoke and to get lung cancer.

Eventually, the fact that smoking causes cancer was definitively established, but it needn’t have taken so long. Since then, Pearl and other statisticians have devised a mathematical approach to identifying what facts would be required to support a causal claim. Pearl’s method shows that, given the prevalence of smoking and lung cancer, an independent factor causing both would be extremely unlikely.
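One of the tools Pearl formalized is the “backdoor adjustment”: if you can measure the confounder, you can recover the causal effect from observational data by averaging within its levels, P(Y | do(X)) = Σ_z P(Y | X, z)P(z). A sketch on synthetic data (all probabilities invented) shows the naive comparison overstating an effect that the adjustment recovers:

```python
import numpy as np

# Synthetic data where Z confounds X -> Y. Ground truth (by construction):
# X raises P(Y) by exactly 0.10; Z raises it by 0.30 and also makes X
# more likely, so the naive comparison is biased upward.
rng = np.random.default_rng(2)
n = 1_000_000
z = rng.random(n) < 0.5
x = rng.random(n) < np.where(z, 0.8, 0.2)
y = rng.random(n) < 0.05 + 0.10 * x + 0.30 * z

naive = y[x].mean() - y[~x].mean()

# Backdoor adjustment: average the X contrast within each level of Z,
# weighted by how common that level is.
adjusted = sum(
    (y[x & (z == v)].mean() - y[~x & (z == v)].mean()) * (z == v).mean()
    for v in (True, False)
)
print(f"naive difference:    {naive:.3f}")    # ~0.28, inflated by confounding
print(f"adjusted difference: {adjusted:.3f}") # ~0.10, the true effect
```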

Conversely, Pearl’s formulas also help identify when correlations can’t be used to determine causation. Bernhard Schölkopf, who researches causal AI techniques as a director at Germany’s Max Planck Institute for Intelligent Systems, points out that you can predict a country’s birth rate if you know its population of storks. That isn’t because storks deliver babies or because babies attract storks, but probably because economic development leads to more babies and more storks. Pearl has helped give statisticians and computer scientists ways of attacking such problems, Schölkopf says.

Judea Pearl: His theory of causal reasoning has transformed science.

Pearl’s work has also led to the development of causal Bayesian networks—software that sifts through large amounts of data to detect which variables appear to have the most influence on other variables. For example, GNS Healthcare, a company in Cambridge, Massachusetts, uses these techniques to advise researchers about experiments that look promising.

In one project, GNS worked with researchers who study multiple myeloma, a kind of blood cancer. The researchers wanted to know why some patients with the disease live longer than others after getting stem-cell transplants, a common form of treatment. The software churned through data with 30,000 variables and pointed to a few that seemed especially likely to be causal. Biostatisticians and experts in the disease zeroed in on one in particular: the level of a certain protein in patients’ bodies. Researchers could then run a targeted clinical trial to see whether patients with the protein did indeed benefit more from the treatment. “It’s way faster than poking here and there in the lab,” says GNS cofounder Iya Khalil.

Nonetheless, the improvements that Pearl and other scholars have achieved in causal theory haven’t yet made many inroads in deep learning, which identifies correlations without too much worry about causation. Bareinboim is working to take the next step: making computers more useful tools for human causal explorations.


One of his systems, which is still in beta, can help scientists determine whether they have sufficient data to answer a causal question. Richard McElreath, an anthropologist at the Max Planck Institute for Evolutionary Anthropology, is using the software to guide research into why humans go through menopause (we are the only apes that do).

The hypothesis is that the decline of fertility in older women benefited early human societies because women who put more effort into caring for grandchildren ultimately had more descendants. But what evidence might exist today to support the claim that children do better with grandparents around? Anthropologists can’t just compare the educational or medical outcomes of children who have lived with grandparents and those who haven’t. There are what statisticians call confounding factors: grandmothers might be likelier to live with grandchildren who need the most help. Bareinboim’s software can help McElreath discern which studies about kids who grew up with their grandparents are least riddled with confounding factors and could be valuable in answering his causal query. “It’s a huge step forward,” McElreath says.

The last mile

Bareinboim talks fast and often gestures with two hands in the air, as if he’s trying to balance two sides of a mental equation. It was halfway through the semester when I visited him at Columbia in October, but it seemed as if he had barely moved into his office—hardly anything on the walls, no books on the shelves, only a sleek Mac computer and a whiteboard so dense with equations and diagrams that it looked like a detail from a cartoon about a mad professor.

He shrugged off the provisional state of the room, saying he had been very busy giving talks about both sides of the causal revolution. Bareinboim believes work like his offers the opportunity not just to incorporate causal thinking into machines, but also to improve it in humans.

Getting people to think more carefully about causation isn’t necessarily much easier than teaching it to machines, he says. Researchers in a wide range of disciplines, from molecular biology to public policy, are sometimes content to unearth correlations that are not actually rooted in causal relationships. For instance, some studies suggest drinking alcohol will kill you early, while others indicate that moderate consumption is fine and even beneficial, and still other research has found that heavy drinkers outlive nondrinkers. This phenomenon, known as the “reproducibility crisis,” crops up not only in medicine and nutrition but also in psychology and economics. “You can see the fragility of all these inferences,” says Bareinboim. “We’re flipping results every couple of years.”

He argues that anyone asking “what if”—medical researchers setting up clinical trials, social scientists developing pilot programs, even web publishers preparing A/B tests—should start not merely by gathering data but by using Pearl’s causal logic and software like Bareinboim’s to determine whether the available data could possibly answer a causal hypothesis. Eventually, he envisions this leading to “automated scientist” software: a human could dream up a causal question to go after, and the software would combine causal inference theory with machine-learning techniques to rule out experiments that wouldn’t answer the question. That might save scientists from a huge number of costly dead ends.

Bareinboim described this vision while we were sitting in the lobby of MIT’s Sloan School of Management, after a talk he gave last fall. “We have a building here at MIT with, I don’t know, 200 people,” he said. How do those social scientists, or any scientists anywhere, decide which experiments to pursue and which data points to gather? By following their intuition: “They are trying to see where things will lead, based on their current understanding.”

That’s an inherently limited approach, he said, because human scientists designing an experiment can consider only a handful of variables in their minds at once. A computer, on the other hand, can see the interplay of hundreds or thousands of variables. Encoded with “the basic principles” of Pearl’s causal calculus and able to calculate what might happen with new sets of variables, an automated scientist could suggest exactly which experiments the human researchers should spend their time on. Maybe some public policy that has been shown to work only in Texas could be made to work in California if a few causally relevant factors were better appreciated. Scientists would no longer be “doing experiments in the darkness,” Bareinboim said.

He also doesn’t think it’s that far off: “This is the last mile before the victory.”

What if?

Finishing that mile will probably require techniques that are just beginning to be developed. For example, Yoshua Bengio, a computer scientist at the University of Montreal who shared the 2018 Turing Award for his work on deep learning, is trying to get neural networks—the software at the heart of deep learning—to do “meta-learning” and notice the causes of things.

As things stand now, if you wanted a neural network to detect when people are dancing, you’d show it many, many images of dancers. If you wanted it to identify when people are running, you’d show it many, many images of runners. The system would learn to distinguish runners from dancers by identifying features that tend to be different in the images, such as the positions of a person’s hands and arms. But Bengio points out that fundamental knowledge about the world can be gleaned by analyzing the things that are similar or “invariant” across data sets. Maybe a neural network could learn that movements of the legs physically cause both running and dancing. Maybe after seeing these examples and many others that show people only a few feet off the ground, a machine would eventually understand something about gravity and how it limits human movement. Over time, with enough meta-learning about variables that are consistent across data sets, a computer could gain causal knowledge that would be reusable in many domains.

For his part, Pearl says AI can’t be truly intelligent until it has a rich understanding of cause and effect. Although causal reasoning wouldn’t be sufficient for an artificial general intelligence, it’s necessary, he says, because it would enable the introspection that is at the core of cognition. “What if” questions “are the building blocks of science, of moral attitudes, of free will, of consciousness,” Pearl told me.

You can’t draw Pearl into predicting how long it will take for computers to get powerful causal reasoning abilities. “I am not a futurist,” he says. But in any case, he thinks the first move should be to develop machine-learning tools that combine data with available scientific knowledge: “We have a lot of knowledge that resides in the human skull which is not utilized.”

Brian Bergstein, a former editor at MIT Technology Review, is deputy opinion editor at the Boston Globe.

This story was part of our March 2020 issue.


We’re not prepared for the end of Moore’s Law (MIT Technology Review)

technologyreview.com

David Rotman


February 24, 2020

Moore’s argument was an economic one. Integrated circuits, with multiple transistors and other electronic devices interconnected with aluminum metal lines on a tiny square of silicon wafer, had been invented a few years earlier by Robert Noyce at Fairchild Semiconductor. Moore, the company’s R&D director, realized, as he wrote in 1965, that with these new integrated circuits, “the cost per component is nearly inversely proportional to the number of components.” It was a beautiful bargain—in theory, the more transistors you added, the cheaper each one got. Moore also saw that there was plenty of room for engineering advances to increase the number of transistors you could affordably and reliably put on a chip.
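The economics can be written down in a few lines. A sketch with an invented $5 chip cost (Moore’s 1965 analysis also modeled yield losses at very high component counts, omitted here):

```python
# Moore's economic point in miniature: if a finished chip costs roughly
# the same however many transistors are crammed onto it, the cost per
# transistor falls almost inversely with the count. The $5 chip cost is
# an invented placeholder.
chip_cost = 5.0
for n_transistors in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{n_transistors:>9,} transistors -> ${chip_cost / n_transistors:.6f} each")
```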

Soon these cheaper, more powerful chips would become what economists like to call a general purpose technology—one so fundamental that it spawns all sorts of other innovations and advances in multiple industries. A few years ago, leading economists credited the information technology made possible by integrated circuits with a third of US productivity growth since 1974. Almost every technology we care about, from smartphones to cheap laptops to GPS, is a direct reflection of Moore’s prediction. It has also fueled today’s breakthroughs in artificial intelligence and genetic medicine, by giving machine-learning techniques the ability to chew through massive amounts of data to find answers.

But how did a simple prediction, based on extrapolating from a graph of the number of transistors by year—a graph that at the time had only a few data points—come to define a half-century of progress? In part, at least, because the semiconductor industry decided it would.

The April 1965 issue of Electronics, in which Moore’s article appeared.
Credit: Wikimedia

Moore wrote that “cramming more components onto integrated circuits,” the title of his 1965 article, would “lead to such wonders as home computers—or at least terminals connected to a central computer—automatic controls for automobiles, and personal portable communications equipment.” In other words, stick to his road map of squeezing ever more transistors onto chips and it would lead you to the promised land. And for the following decades, a booming industry, the government, and armies of academic and industrial researchers poured money and time into upholding Moore’s Law, creating a self-fulfilling prophecy that kept progress on track with uncanny accuracy. Though the pace of progress has slipped in recent years, the most advanced chips today have nearly 50 billion transistors.

Every year since 2001, MIT Technology Review has chosen the 10 most important breakthrough technologies of the year. It’s a list of technologies that, almost without exception, are possible only because of the computation advances described by Moore’s Law.

For some of the items on this year’s list the connection is obvious: consumer devices, including watches and phones, infused with AI; climate-change attribution made possible by improved computer modeling and data gathered from worldwide atmospheric monitoring systems; and cheap, pint-size satellites. Others on the list, including quantum supremacy, molecules discovered using AI, and even anti-aging treatments and hyper-personalized drugs, are due largely to the computational power available to researchers.

But what happens when Moore’s Law inevitably ends? Or what if, as some suspect, it has already died, and we are already running on the fumes of the greatest technology engine of our time?

RIP

“It’s over. This year that became really clear,” says Charles Leiserson, a computer scientist at MIT and a pioneer of parallel computing, in which multiple calculations are performed simultaneously. The newest Intel fabrication plant, meant to build chips with minimum feature sizes of 10 nanometers, was much delayed, delivering chips in 2019, five years after the previous generation of chips with 14-nanometer features. Moore’s Law, Leiserson says, was always about the rate of progress, and “we’re no longer on that rate.” Numerous other prominent computer scientists have also declared Moore’s Law dead in recent years. In early 2019, the CEO of the large chipmaker Nvidia agreed.

In truth, it’s been more a gradual decline than a sudden death. Over the decades, some, including Moore himself at times, fretted that they could see the end in sight, as it got harder to make smaller and smaller transistors. In 1999, an Intel researcher worried that the industry’s goal of making transistors smaller than 100 nanometers by 2005 faced fundamental physical problems with “no known solutions,” like the quantum effects of electrons wandering where they shouldn’t be.

For years the chip industry managed to evade these physical roadblocks. New transistor designs were introduced to better corral the electrons. New lithography methods using extreme ultraviolet radiation were invented when the wavelengths of visible light were too thick to precisely carve out silicon features of only a few tens of nanometers. But progress grew ever more expensive. Economists at Stanford and MIT have calculated that the research effort going into upholding Moore’s Law has risen by a factor of 18 since 1971.

Likewise, the fabs that make the most advanced chips are becoming prohibitively pricey. The cost of a fab is rising at around 13% a year, and is expected to reach $16 billion or more by 2022. Not coincidentally, the number of companies with plans to make the next generation of chips has now shrunk to only three, down from eight in 2010 and 25 in 2002.


Nonetheless, Intel—one of those three chipmakers—isn’t expecting a funeral for Moore’s Law anytime soon. Jim Keller, who took over as Intel’s head of silicon engineering in 2018, is the man with the job of keeping it alive. He leads a team of some 8,000 hardware engineers and chip designers at Intel. When he joined the company, he says, many were anticipating the end of Moore’s Law. If they were right, he recalls thinking, “that’s a drag” and maybe he had made “a really bad career move.”

But Keller found ample technical opportunities for advances. He points out that there are probably more than a hundred variables involved in keeping Moore’s Law going, each of which provides different benefits and faces its own limits. It means there are many ways to keep doubling the number of devices on a chip—innovations such as 3D architectures and new transistor designs.

These days Keller sounds optimistic. He says he has been hearing about the end of Moore’s Law for his entire career. After a while, he “decided not to worry about it.” He says Intel is on pace for the next 10 years, and he will happily do the math for you: 65 billion (number of transistors) times 32 (if chip density doubles every two years) is 2 trillion transistors. “That’s a 30 times improvement in performance,” he says, adding that if software developers are clever, we could get chips that are a hundred times faster in 10 years.
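Keller’s numbers check out as straightforward compounding, as the short calculation below shows (the 65 billion starting count is his figure):

```python
# Keller's arithmetic: one doubling every two years for ten years
# is five doublings, i.e. a factor of 2**5 = 32.
start = 65e9                     # transistors today, per Keller's figure
factor = 2 ** (10 / 2)           # 32.0
print(f"{start * factor:.2e}")   # ~2.08e12, i.e. roughly 2 trillion
```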

Still, even if Intel and the other remaining chipmakers can squeeze out a few more generations of even more advanced microchips, the days when you could reliably count on faster, cheaper chips every couple of years are clearly over. That doesn’t, however, mean the end of computational progress.

Time to panic

Neil Thompson is an economist, but his office is at CSAIL, MIT’s sprawling AI and computer center, surrounded by roboticists and computer scientists, including his collaborator Leiserson. In a new paper, the two document ample room for improving computational performance through better software, algorithms, and specialized chip architecture.

One opportunity is in slimming down so-called software bloat to wring the most out of existing chips. When chips could always be counted on to get faster and more powerful, programmers didn’t need to worry much about writing more efficient code. And they often failed to take full advantage of changes in hardware architecture, such as the multiple cores, or processors, seen in chips used today.

Thompson and his colleagues showed that they could get a computationally intensive calculation to run some 47 times faster just by switching from Python, a popular general-purpose programming language, to the more efficient C. That’s because C, while it requires more work from the programmer, greatly reduces the required number of operations, making a program run much faster. Further tailoring the code to take full advantage of a chip with 18 processing cores sped things up even more. In just 0.41 seconds, the researchers got a result that took seven hours with Python code.
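You don’t need the researchers’ exact setup to feel that gap. Below is a small stand-in experiment in the same spirit, comparing interpreted Python loops against numpy’s optimized, C-backed routine for matrix multiplication (sizes and ratios here are illustrative and machine-dependent, not the paper’s numbers):

```python
import time
import numpy as np

# Multiply two n x n matrices with interpreted Python loops, then with
# numpy's C-backed routine. The paper's 47x (plain C) and beyond came
# from C, parallelism, and processor-specific tuning on larger matrices.
n = 150
a, b = np.random.rand(n, n), np.random.rand(n, n)

t0 = time.perf_counter()
c_slow = [[sum(a[i, k] * b[k, j] for k in range(n)) for j in range(n)]
          for i in range(n)]
t_loops = time.perf_counter() - t0

t0 = time.perf_counter()
c_fast = a @ b
t_numpy = time.perf_counter() - t0

print(f"python loops: {t_loops:.2f}s  numpy: {t_numpy:.5f}s  "
      f"ratio: {t_loops / t_numpy:.0f}x")
```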

That sounds like good news for continuing progress, but Thompson worries it also signals the decline of computers as a general purpose technology. Rather than “lifting all boats,” as Moore’s Law has, by offering ever faster and cheaper chips that were universally available, advances in software and specialized architecture will now start to selectively target specific problems and business opportunities, favoring those with sufficient money and resources.

Indeed, the move to chips designed for specific applications, particularly in AI, is well under way. Deep learning and other AI applications increasingly rely on graphics processing units (GPUs) adapted from gaming, which can handle parallel operations, while companies like Google, Microsoft, and Baidu are designing AI chips for their own particular needs. AI, particularly deep learning, has a huge appetite for computer power, and specialized chips can greatly speed up its performance, says Thompson.

But the trade-off is that specialized chips are less versatile than traditional CPUs. Thompson is concerned that chips for more general computing are becoming a backwater, slowing “the overall pace of computer improvement,” as he writes in an upcoming paper, “The Decline of Computers as a General Purpose Technology.”

At some point, says Erica Fuchs, a professor of engineering and public policy at Carnegie Mellon, those developing AI and other applications will miss the decreases in cost and increases in performance delivered by Moore’s Law. “Maybe in 10 years or 30 years—no one really knows when—you’re going to need a device with that additional computation power,” she says.

The problem, says Fuchs, is that the successors to today’s general purpose chips are unknown and will take years of basic research and development to create. If you’re worried about what will replace Moore’s Law, she suggests, “the moment to panic is now.” There are, she says, “really smart people in AI who aren’t aware of the hardware constraints facing long-term advances in computing.” What’s more, she says, because application-specific chips are proving hugely profitable, there are few incentives to invest in new logic devices and ways of doing computing.

Wanted: A Marshall Plan for chips

In 2018, Fuchs and her CMU colleagues Hassan Khan and David Hounshell wrote a paper tracing the history of Moore’s Law and identifying the changes behind the decline of the industry and government collaboration that fostered so much progress in earlier decades. They argued that “the splintering of the technology trajectories and the short-term private profitability of many of these new splinters” means we need to greatly boost public investment in finding the next great computer technologies.

If economists are right, and much of the growth in the 1990s and early 2000s was a result of microchips—and if, as some suggest, the sluggish productivity growth that began in the mid-2000s reflects the slowdown in computational progress—then, says Thompson, “it follows you should invest enormous amounts of money to find the successor technology. We’re not doing it. And it’s a public policy failure.”

There’s no guarantee that such investments will pay off. Quantum computing, carbon nanotube transistors, even spintronics, are enticing possibilities—but none are obvious replacements for the promise that Gordon Moore first saw in a simple integrated circuit. We need the research investments now to find out, though. Because one prediction is pretty much certain to come true: we’re always going to want more computing power.

This story was part of our March 2020 issue.


In Brazil’s Amazon, rivers rise to record levels (Associated Press)

apnews.com

By FERNANDO CRISPIM and DIANE JEANTET

June 1, 2021


MANAUS, Brazil (AP) — Rivers around the biggest city in Brazil’s Amazon rainforest have swelled to levels unseen in over a century of record-keeping, according to data published Tuesday by Manaus’ port authorities, straining a society that has grown weary of increasingly frequent flooding.

The Rio Negro was at its highest level since records began in 1902, with a depth of 29.98 meters (98 feet) at the port’s measuring station. The nearby Solimoes and Amazon rivers were also nearing all-time highs, flooding streets and houses in dozens of municipalities and affecting some 450,000 people in the region.

Higher-than-usual precipitation is associated with the La Niña phenomenon, when currents in the central and eastern Pacific Ocean affect global climate patterns. Environmental experts and organizations including the U.S. Environmental Protection Agency and the National Oceanic and Atmospheric Administration say there is strong evidence that human activity and global warming are altering the frequency and intensity of extreme weather events, including La Niña.

Seven of the 10 biggest floods in the Amazon basin have occurred in the past 13 years, data from Brazil’s state-owned Geological Survey shows.

“If we continue to destroy the Amazon the way we do, the climatic anomalies will become more and more accentuated,” said Virgílio Viana, director of the Sustainable Amazon Foundation, a nonprofit. “Greater floods on the one hand, greater droughts on the other.”

Large swaths of Brazil are currently drying up in a severe drought, with a possible shortfall in power generation from the nation’s hydroelectric plants and increased electricity prices, government authorities have warned.

But in Manaus, 66-year-old Julia Simas has water ankle-deep in her home. Simas has lived in the working-class neighborhood of Sao Jorge since 1974 and is used to seeing the river rise and fall with the seasons. Simas likes her neighborhood because it is safe and clean. But the quickening pace of the floods in the last decade has her worried.

“From 1974 until recently, many years passed and we wouldn’t see any water. It was a normal place,” she said.

Aerial view of streets flooded by the Negro River in downtown Manaus. (AP Photos/Nelson Antoine)
A man pushes a shopping cart loaded with bananas on a street flooded by the Negro River, in downtown Manaus. (AP Photo/Edmar Barros)

When the river does overflow its banks and flood her street, she and other residents use boards and beams to build rudimentary scaffolding within their homes to raise their floors above the water.

“I think human beings have contributed a lot (to this situation),” she said. “Nature doesn’t forgive. She comes and doesn’t want to know whether you’re ready to face her or not.”

Flooding also has a significant impact on local industries such as farming and cattle ranching. Many family-run operations have seen their production vanish under water. Others have been unable to reach their shops, offices and market stalls or clients.

“With these floods, we’re out of work,” said Elias Gomes, a 38-year-old electrician in Cacau Pirera, on the other side of the Rio Negro, though he noted he’s been able to earn a bit by transporting neighbors in his small wooden boat.

Gomes is now looking to move to a more densely populated area where floods won’t threaten his livelihood.

A man rides his motorcycle through a street flooded by the Negro River, in downtown Manaus. (AP Photo/Edmar Barros)

Limited access to banking in remote parts of the Amazon can make things worse for residents, who are often unable to get loans or financial compensation for lost production, said Viana, of the Sustainable Amazon Foundation. “This is a clear case of climate injustice: Those who least contributed to global warming and climate change are the most affected.”

Meteorologists say Amazon water levels could continue to rise slightly until late June or July, when floods usually peak.

People walk on a wooden footbridge set up over a street flooded by the Negro River, in downtown Manaus. (AP Photo/Edmar Barros)

___

Diane Jeantet reported from Rio de Janeiro.

‘Belonging Is Stronger Than Facts’: The Age of Misinformation (The New York Times)

nytimes.com

Max Fisher


The Interpreter

Social and psychological forces are combining to make the sharing and believing of misinformation an endemic problem with no easy solution.

An installation of protest art outside the Capitol in Washington.
Credit: Jonathan Ernst/Reuters

Published May 7, 2021; Updated May 13, 2021

There’s a decent chance you’ve had at least one of these rumors, all false, relayed to you as fact recently: that President Biden plans to force Americans to eat less meat; that Virginia is eliminating advanced math in schools to advance racial equality; and that border officials are mass-purchasing copies of Vice President Kamala Harris’s book to hand out to refugee children.

All were amplified by partisan actors. But you’re just as likely, if not more so, to have heard them relayed by someone you know. And you may have noticed that these cycles of falsehood-fueled outrage keep recurring.

We are in an era of endemic misinformation — and outright disinformation. Plenty of bad actors are helping the trend along. But the real drivers, some experts believe, are social and psychological forces that make people prone to sharing and believing misinformation in the first place. And those forces are on the rise.

“Why are misperceptions about contentious issues in politics and science seemingly so persistent and difficult to correct?” Brendan Nyhan, a Dartmouth College political scientist, posed in a new paper in Proceedings of the National Academy of Sciences.

It’s not for want of good information, which is ubiquitous. Exposure to good information does not reliably instill accurate beliefs anyway. Rather, Dr. Nyhan writes, a growing body of evidence suggests that the ultimate culprits are “cognitive and memory limitations, directional motivations to defend or support some group identity or existing belief, and messages from other people and political elites.”

Put more simply, people become more prone to misinformation when three things happen. First, and perhaps most important, is when conditions in society make people feel a greater need for what social scientists call ingrouping — a belief that their social identity is a source of strength and superiority, and that other groups can be blamed for their problems.

As much as we like to think of ourselves as rational beings who put truth-seeking above all else, we are social animals wired for survival. In times of perceived conflict or social change, we seek security in groups. And that makes us eager to consume information, true or not, that lets us see the world as a conflict putting our righteous ingroup against a nefarious outgroup.

This need can emerge especially out of a sense of social destabilization. As a result, misinformation is often prevalent among communities that feel destabilized by unwanted change or, in the case of some minorities, powerless in the face of dominant forces.

Framing everything as a grand conflict against scheming enemies can feel enormously reassuring. And that’s why perhaps the greatest culprit of our era of misinformation may be, more than any one particular misinformer, the era-defining rise in social polarization.

“At the mass level, greater partisan divisions in social identity are generating intense hostility toward opposition partisans,” which has “seemingly increased the political system’s vulnerability to partisan misinformation,” Dr. Nyhan wrote in an earlier paper.

Growing hostility between the two halves of America feeds social distrust, which makes people more prone to rumor and falsehood. It also makes people cling much more tightly to their partisan identities. And once our brains switch into “identity-based conflict” mode, we become desperately hungry for information that will affirm that sense of us versus them, and much less concerned about things like truth or accuracy.

Border officials are not mass-purchasing copies of Vice President Kamala Harris’s book, though the false rumor drew attention.
Credit: Gabriela Bhaskar for The New York Times

In an email, Dr. Nyhan said it could be methodologically difficult to nail down the precise relationship between overall polarization in society and overall misinformation, but there is abundant evidence that an individual with more polarized views becomes more prone to believing falsehoods.

The second driver of the misinformation era is the emergence of high-profile political figures who encourage their followers to indulge their desire for identity-affirming misinformation. After all, an atmosphere of all-out political conflict often benefits those leaders, at least in the short term, by rallying people behind them.

Then there is the third factor — a shift to social media, which is a powerful outlet for composers of disinformation, a pervasive vector for misinformation itself and a multiplier of the other risk factors.

“Media has changed, the environment has changed, and that has a potentially big impact on our natural behavior,” said William J. Brady, a Yale University social psychologist.

“When you post things, you’re highly aware of the feedback that you get, the social feedback in terms of likes and shares,” Dr. Brady said. So when misinformation appeals to social impulses more than the truth does, it gets more attention online, which means people feel rewarded and encouraged for spreading it.

“Depending on the platform, especially, humans are very sensitive to social reward,” he said. Research demonstrates that people who get positive feedback for posting inflammatory or false statements become much more likely to do so again in the future. “You are affected by that.”

In 2016, the media scholars Jieun Shin and Kjerstin Thorson analyzed a data set of 300 million tweets from the 2012 election. Twitter users, they found, “selectively share fact-checking messages that cheerlead their own candidate and denigrate the opposing party’s candidate.” And when users encountered a fact-check that revealed their candidate had gotten something wrong, their response wasn’t to get mad at the politician for lying. It was to attack the fact checkers.

“We have found that Twitter users tend to retweet to show approval, argue, gain attention and entertain,” researcher Jon-Patrick Allem wrote last year, summarizing a study he had co-authored. “Truthfulness of a post or accuracy of a claim was not an identified motivation for retweeting.”

In another study, published last month in Nature, a team of psychologists tracked thousands of users interacting with false information. Republican test subjects who were shown a false headline about migrants trying to enter the United States (“Over 500 ‘Migrant Caravaners’ Arrested With Suicide Vests”) mostly identified it as false; only 16 percent called it accurate. But if the experimenters instead asked the subjects to decide whether to share the headline, 51 percent said they would.

“Most people do not want to spread misinformation,” the study’s authors wrote. “But the social media context focuses their attention on factors other than truth and accuracy.”

In a highly polarized society like today’s United States — or, for that matter, India or parts of Europe — those incentives pull heavily toward ingroup solidarity and outgroup derogation. They do not much favor consensus reality or abstract ideals of accuracy.

As people become more prone to misinformation, opportunists and charlatans are also getting better at exploiting this. That can mean tear-it-all-down populists who rise on promises to smash the establishment and control minorities. It can also mean government agencies or freelance hacker groups stirring up social divisions abroad for their benefit. But the roots of the crisis go deeper.

“The problem is that when we encounter opposing views in the age and context of social media, it’s not like reading them in a newspaper while sitting alone,” the sociologist Zeynep Tufekci wrote in a much-circulated MIT Technology Review article. “It’s like hearing them from the opposing team while sitting with our fellow fans in a football stadium. Online, we’re connected with our communities, and we seek approval from our like-minded peers. We bond with our team by yelling at the fans of the other one.”

In an ecosystem where that sense of identity conflict is all-consuming, she wrote, “belonging is stronger than facts.”

Study finds humans are directly influencing wind and weather over North Atlantic (EurekAlert!)

News Release 17-Apr-2021

The findings suggest that winters in Europe and in eastern US may get warmer and wetter

University of Miami Rosenstiel School of Marine & Atmospheric Science

Research News

IMAGE: The positive NAO index phase shows a stronger than usual subtropical high pressure center and a deeper than normal Icelandic low. The increased pressure difference results in more and stronger… Credit: Columbia University Lamont-Doherty Earth Observatory.

MIAMI–A new study led by scientists at the University of Miami (UM) Rosenstiel School of Marine and Atmospheric Science provides evidence that humans are influencing wind and weather patterns across the eastern United States and western Europe by releasing CO2 and other pollutants into Earth’s atmosphere.

In the new paper, published in the journal npj Climate and Atmospheric Science, the research team found that changes in the last 50 years to an important weather phenomenon in the North Atlantic–known as the North Atlantic Oscillation–can be traced back to human activities that impact the climate system.

“Scientists have long understood that human actions are warming the planet,” said the study’s lead author Jeremy Klavans, a UM Rosenstiel School alumnus. “However, this human-induced signal on weather patterns is much harder to identify.”

“In this study, we show that humans are influencing patterns of weather and climate over the Atlantic and that we may be able to use this information to predict changes in weather and climate up to a decade in advance,” said Klavans.

The North Atlantic Oscillation, the result of fluctuations in air pressure across the Atlantic, affects weather by influencing the intensity and location of the jet stream. This oscillation has a strong effect on winter weather in Europe, Greenland, the northeastern U.S. and North Africa and the quality of crop yields and productivity of fisheries in the North Atlantic.

The researchers used multiple large climate model ensembles, compiled by researchers at the National Center for Atmospheric Research, to predict the North Atlantic Oscillation. The analysis consisted of 269 model runs, which is over 14,000 simulated model years.

The study, titled “NAO Predictability from External Forcing in the Late Twentieth Century,” was published on March 25 in the journal npj Climate and Atmospheric Science. The study’s authors include: Klavans, Amy Clement and Lisa Murphy from the UM Rosenstiel School, and Mark Cane from Columbia University’s Lamont-Doherty Earth Observatory.

The study was supported by the National Science Foundation (NSF) Climate and Large-Scale Dynamics program (grant # AGS 1735245 and AGS 1650209), NSF Paleo Perspectives on Climate Change program (grant # AGS 1703076) and NOAA’s Climate Variability and Predictability Program.

Human Brain Limit of ‘150 Friends’ Doesn’t Check Out, New Study Claims (Science Alert)

Peter Dockrill – 5 MAY 2021


It’s called Dunbar’s number: an influential and oft-repeated theory suggesting the average person can only maintain about 150 stable social relationships with other people.

Proposed by British anthropologist and evolutionary psychologist Robin Dunbar in the early 1990s, Dunbar’s number, extrapolated from research into primate brain sizes and their social groups, has since become a ubiquitous part of the discourse on human social networks.

But just how legitimate is the science behind Dunbar’s number anyway? According to a new analysis by researchers from Stockholm University in Sweden, Dunbar’s famous figure doesn’t add up.

“The theoretical foundation of Dunbar’s number is shaky,” says zoologist and cultural evolution researcher Patrik Lindenfors.

“Other primates’ brains do not handle information exactly as human brains do, and primate sociality is primarily explained by other factors than the brain, such as what they eat and who their predators are.”

Dunbar’s number was originally predicated on the idea that the volume of the neocortex in primate brains functions as a constraint on the size of the social groups they circulate amongst.

“It is suggested that the number of neocortical neurons limits the organism’s information-processing capacity and that this then limits the number of relationships that an individual can monitor simultaneously,” Dunbar explained in his foundational 1992 study.

“When a group’s size exceeds this limit, it becomes unstable and begins to fragment. This then places an upper limit on the size of groups which any given species can maintain as cohesive social units through time.”

Dunbar began extrapolating the theory to human networks in 1993, and in the decades since has authored and co-authored copious related research output examining the behavioral and cognitive mechanisms underpinning sociality in both humans and other primates.

But as to the original question of whether neocortex size serves as a valid constraint on group size beyond non-human primates, Lindenfors and his team aren’t so sure.

While a number of studies have offered support for Dunbar’s ideas, the new study debunks the claim that neocortex size in primates is equally pertinent to human socialization parameters.

“It is not possible to make an estimate for humans with any precision using available methods and data,” says evolutionary biologist Andreas Wartel.

In their study, the researchers used modern statistical methods including Bayesian and generalized least-squares (GLS) analyses to take another look at the relationship between group size and brain/neocortex sizes in primate brains, with the advantage of updated datasets on primate brains.

The results suggested that stable human group sizes might ultimately be much smaller than 150 individuals – one analysis suggested an average limit of up to 42 individuals, while another produced estimates ranging from 70 to 107.

Ultimately, however, the enormous imprecision in the statistics suggests that any method like this – trying to compute an average number of stable relationships for any human individual based on brain volume considerations – is unreliable at best.

“Specifying any one number is futile,” the researchers write in their study. “A cognitive limit on human group size cannot be derived in this manner.”

Despite the mainstream attention Dunbar’s number enjoys, the researchers say the majority of primate social evolution research focuses on socio-ecological factors, including foraging and predation, infanticide, and sexual selection – not so much calculations dependent on brain or neocortex volume.

Further, the researchers argue that Dunbar’s number ignores other significant differences in brain physiology between human and non-human primate brains – including that humans develop cultural mechanisms and social structures that can counter socially limiting cognitive factors that might otherwise apply to non-human primates.

“Ecological research on primate sociality, the uniqueness of human thinking, and empirical observations all indicate that there is no hard cognitive limit on human sociality,” the team explains.

“It is our hope, though perhaps futile, that this study will put an end to the use of ‘Dunbar’s number’ within science and in popular media.”

The findings are reported in Biology Letters.

Your Immune System Could Be Hurting You as a Way of Signalling to Others (Science Alert)

sciencealert.com

Jonathan R Goodman, The Conversation – 13 May 2021


A major debate during the pandemic, and in infectious disease research more broadly, is why infected people die. No virus “wants” to kill anyone, as an epidemiologist once said to me. Like any other form of life, a virus’s goal is only to survive and reproduce.

A growing body of evidence instead suggests that the human immune system – which the science writer Ed Yong says is “where intuition goes to die” – may itself be responsible for many people’s deaths.

In an effort to find and kill the invading virus, the body can harm major organs, including the lungs and heart. This has led some doctors to focus on attenuating an infected patient’s immune response to help save them.

This brings up an evolutionary puzzle: what’s the point of the immune system if its overzealousness can kill the same people it evolved to defend?

The answer may lie in humanity’s evolutionary history: immunity may be as much about communication and behavior as it is about cellular biology. And to the degree that researchers can understand these broad origins of the immune system, they may be better positioned to improve responses to it.

The concept of the behavioral immune system is not new. Almost all humans sometimes feel disgust or revulsion – usually because whatever has made us feel that way poses a threat to our health.

And we aren’t alone in these reactions. Research shows that some animals avoid others that are showing symptoms of illness.

Eliciting care

However, more recent theoretical research suggests something more: humans, in particular, are likely to show compassion to those showing symptoms of illness or injury.

There’s a reason, this thinking goes, why people tend to exclaim when in pain, rather than just silently pull away from whatever is hurting them, and why fevers are linked to sluggish behavior.

Some psychologists argue that this is because immune responses are as much about communication as they are about self-maintenance. People who received care, over humanity’s history, probably tended to do better than those who tried to survive on their own.

In the broader evolutionary literature, researchers refer to these kinds of displays as “signals”. And like many of the innumerable signals we see across the natural world, immune-related signals can be used – or faked – to exploit the world around us, and each other.

Some birds, for example, feign injury to distract predators from their nests; rats suppress disease symptoms so that potential mates won’t ignore them.

We also see many illustrations of immune-signal use and misuse in human cultures. In The Adventure of the Dying Detective (1913), for example, Sherlock Holmes starves himself for three days to elicit a confession from a murder suspect. The suspect confesses only when he is convinced that his attempt to infect Holmes with a rare disease has been successful, misreading Holmes’s signs of illness.

This is an extreme example, but people feign signals of pain or illness all the time to avoid obligations, to elicit support from others, or even to avoid submitting an article by an agreed deadline. And this is an essential element of any signalling system.

Once a signal, be it a wince or a jaundiced complexion, elicits a response from whoever sees it, that response will start to drive how and why the signal is used.

Even germs use – and abuse – immune signals for their own gain. In fact, some viruses actually hijack our own immune responses, such as coughs and sneezes, to pass themselves on to new hosts, using our own evolved functions to further their interests.

Other germs, like SARS-CoV-2 (the virus that causes COVID-19) and Yersinia pestis (the bacterium that causes plague), can prevent our signalling to others when we are sick and pass themselves on without anyone realizing.

This perspective of immunity – one that takes into account biology, behavior and the social effects of illness – paints a starkly different picture from the more traditional view of the immune system as a collection of biological and chemical defenses against sickness.

Germs use different strategies, just as animals do, to exploit immune signals for their own purposes. And perhaps that’s what has made asymptomatically transmitted COVID-19 so damaging: people can’t rely on reading other people’s immune signals to protect themselves.

Insofar as doctors can predict how a particular infection – whether SARS-CoV-2, influenza, malaria or the next pathogen with pandemic potential – will interact with a patient’s immune system, they’ll be better positioned to tailor treatments for it. Future research will help us sort through the germs that hijack our immune signals – or suppress them – for their own purposes.

Viewing immunity not just as biological, but as a broader signalling system, may help us to understand our complex relationships with pathogens more effectively.

Jonathan R Goodman, PhD Candidate, Human Evolutionary Studies, University of Cambridge.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Ten million reasons to vaccinate the world (The Economist)

economist.com

Our model reveals the true course of the pandemic. Here is what to do next

May 15th 2021


THIS WEEK we publish our estimate of the true death toll from covid-19. It tells the real story of the pandemic. But it also contains an urgent warning. Unless vaccine supplies reach poorer countries, the tragic scenes now unfolding in India risk being repeated elsewhere. Millions more will die.

Using known data on 121 variables, from recorded deaths to demography, we have built a pattern of correlations that lets us fill in gaps where numbers are lacking. Our model suggests that covid-19 has already claimed 7.1m-12.7m lives. Our central estimate is that 10m people have died who would otherwise be living. This tally of “excess deaths” is over three times the official count, which nevertheless is the basis for most statistics on the disease, including fatality rates and cross-country comparisons.

The most important insight from our work is that covid-19 has been harder on the poor than anyone knew. Official figures suggest that the pandemic has struck in waves, and that the United States and Europe have been hit hard. Although South America has been ravaged, the rest of the developing world seemed to get off lightly.

Our modelling tells another story. When you count all the bodies, you see that the pandemic has spread remorselessly from the rich, connected world to poorer, more isolated places. As it has done so, the global daily death rate has climbed steeply.

Death rates have been very high in some rich countries, but the overwhelming majority of the 6.7m or so deaths that nobody counted were in poor and middle-income ones. In Romania and Iran excess deaths are more than double the number officially put down to covid-19. In Egypt they are 13 times as big. In America the difference is 7.1%.

India, where about 20,000 are dying every day, is not an outlier. Our figures suggest that, in terms of deaths as a share of population, Peru’s pandemic has been 2.5 times worse than India’s. The disease is working its way through Nepal and Pakistan. Infectious variants spread faster and, because of the tyranny of exponential growth, overwhelm health-care systems and fill mortuaries even if the virus is no more lethal.

Ultimately the way to stop this is vaccination. As an example of collaboration and pioneering science, covid-19 vaccines rank with the Apollo space programme. Within just a year of the virus being discovered, people could be protected from severe disease and death. Hundreds of millions of them have benefited.

However, in the short run vaccines will fuel the divide between rich and poor. Soon, the only people to die from covid-19 in rich countries will be exceptionally frail or exceptionally unlucky, as well as those who have spurned the chance to be vaccinated. In poorer countries, by contrast, most people will have no choice. They will remain unprotected for many months or years.

The world cannot rest while people perish for want of a jab costing as little as $4 for a two-dose course. It is hard to think of a better use of resources than vaccination. Economists’ central estimate for the direct value of a course is $2,900—if you include factors like long covid and the effect of impaired education, the total is much bigger. The benefit from an extra 1bn doses supplied by July would be worth hundreds of billions of dollars. Less circulating virus means less mutation, and so a lower chance of a new variant that reinfects the vaccinated.

Supplies of vaccines are already growing. By the end of April, according to Airfinity, an analytics firm, vaccine-makers had produced 1.7bn doses, 700m more than at the end of March and ten times more than in January. Before the pandemic, annual global vaccine capacity was roughly 3.5bn doses. The latest estimates are that total output in 2021 will be almost 11bn. Some in the industry predict a global surplus in 2022.

And yet the world is right to strive to get more doses in more arms sooner. Hence President Joe Biden has proposed waiving intellectual-property claims on covid-19 vaccines. Many experts argue that, because some manufacturing capacity is going begging, millions more doses might become available if patent-owners shared their secrets, including in countries that today are at the back of the queue. World-trade rules allow for a waiver. When to invoke them, if not in the throes of a pandemic?

We believe that Mr Biden is wrong. A waiver may signal that his administration cares about the world, but it is at best an empty gesture and at worst a cynical one.

A waiver will do nothing to fill the urgent shortfall of doses in 2021. The head of the World Trade Organisation, the forum where it will be thrashed out, warns there may be no vote until December. Technology transfer would take six months or so to complete even if it started today. With the new mRNA vaccines made by Pfizer and Moderna, it may take longer. Supposing the tech transfer was faster than that, experienced vaccine-makers would be unavailable for hire and makers could not obtain inputs from suppliers whose order books are already bursting. Pfizer’s vaccine requires 280 inputs from suppliers in 19 countries. No firm can recreate that in a hurry.

In any case, vaccine-makers do not appear to be hoarding their technology—otherwise output would not be increasing so fast. They have struck 214 technology-transfer agreements, an unprecedented number. They are not price-gouging: money is not the constraint on vaccination. Poor countries are not being priced out of the market: their vaccines are coming through COVAX, a global distribution scheme funded by donors.

In the longer term, the effect of a waiver is unpredictable. Perhaps it will indeed lead to technology being transferred to poor countries; more likely, though, it will cause harm by disrupting supply chains, wasting resources and, ultimately, deterring innovation. Whatever the case, if vaccines are nearing a surplus in 2022, the cavalry will arrive too late.

A needle in time

If Mr Biden really wants to make a difference, he can donate vaccine right now through COVAX. Rich countries over-ordered because they did not know which vaccines would work. Britain has ordered more than nine doses for each adult, Canada more than 13. These will be urgently needed elsewhere. It is wrong to put teenagers, who have a minuscule risk of dying from covid-19, before the elderly and health-care workers in poor countries. The rich world should not stockpile boosters to cover the population many times over on the off-chance that they may be needed. In the next six months, this could yield billions of doses of vaccine.

Countries can also improve supply chains. The Serum Institute, an Indian vaccine-maker, has struggled to get parts such as filters from America because exports were gummed up by the Defence Production Act (DPA), which puts suppliers on a war-footing. Mr Biden authorised a one-off release, but he should be focusing the DPA on supplying the world instead. And better use needs to be made of finished vaccine. In some poor countries, vaccine languishes unused because of hesitancy and chaotic organisation. It makes sense to prioritise getting one shot into every vulnerable arm, before setting about the second.

Our model is not predictive. However, it does suggest that some parts of the world are particularly vulnerable—one example is South-East Asia, home to over 650m people, which has so far been spared mass fatalities for no obvious reason. Covid-19 has not yet run its course. But vaccines have created the chance to save millions of lives. The world must not squander it.

This article appeared in the Leaders section of the print edition under the headline “Vaccinating the world”

Neanderthals carb loaded, helping grow their big brains (Science)

sciencemag.org

By Ann Gibbons – May 10, 2021, 3:00 PM


A reconstruction of Neanderthal mealtime. Credit: Mauricio Anton/Science Source

Here’s another blow to the popular image of Neanderthals as brutish meat eaters: A new study of bacteria collected from Neanderthal teeth shows that our close cousins ate so many roots, nuts, or other starchy foods that they dramatically altered the type of bacteria in their mouths. The finding suggests our ancestors had adapted to eating lots of starch by at least 600,000 years ago—about the same time as they needed more sugars to fuel a big expansion of their brains.

The study is “groundbreaking,” says Harvard University evolutionary biologist Rachel Carmody, who was not part of the research. The work suggests the ancestors of both humans and Neanderthals were cooking lots of starchy foods at least 600,000 years ago. And they had already adapted to eating more starchy plants long before the invention of agriculture 10,000 years ago, she says.

The brains of our ancestors doubled in size between 2 million and 700,000 years ago. Researchers have long credited better stone tools and cooperative hunting: As early humans got better at killing animals and processing meat, they ate a higher quality diet, which gave them more energy more rapidly to fuel the growth of their hungrier brains.

Still, researchers have puzzled over how meat did the job. “For human ancestors to efficiently grow a bigger brain, they needed energy dense foods containing glucose”—a type of sugar—says molecular archaeologist Christina Warinner of Harvard and the Max Planck Institute for the Science of Human History. “Meat is not a good source of glucose.”

Researchers analyzed the bacterial DNA preserved in dental plaque of fossilized teeth, such as this one from a prehistoric human. Credit: Werner Siemens Foundation/Felix Wey

The starchy plants gathered by many living hunter-gatherers are an excellent source of glucose, however. To figure out whether oral bacteria track changes in diet or the environment, Warinner, Max Planck graduate student James Fellows Yates, and a large international team looked at the oral bacteria stuck to the teeth of Neanderthals, preagricultural modern humans who lived more than 10,000 years ago, chimps, gorillas, and howler monkeys. The researchers analyzed billions of DNA fragments from long-dead bacteria still preserved on the teeth of 124 individuals. One was a Neanderthal who lived 100,000 years ago at Pešturina Cave in Serbia, which produced the oldest oral microbiome genome reconstructed to date.

The communities of bacteria in the mouths of preagricultural humans and Neanderthals strongly resembled each other, the team reports today in the Proceedings of the National Academy of Sciences. In particular, humans and Neanderthals harbored an unusual group of Streptococcus bacteria in their mouths. These microbes had a special ability to bind to an abundant enzyme in human saliva called amylase, which frees sugars from starchy foods. The presence of the strep bacteria that consume sugar on the teeth of Neanderthals and ancient modern humans, but not chimps, shows they were eating more starchy foods, the researchers conclude.

Finding the streptococci on the teeth of both ancient humans and Neanderthals also suggests they inherited these microbes from their common ancestor, who lived more than 600,000 years ago. Although earlier studies found evidence that Neanderthals ate grasses and tubers and cooked barley, the new study indicates they ate so much starch that it dramatically altered the composition of their oral microbiomes.

“This pushes the importance of starch in the diet further back in time,” to when human brains were still expanding, Warinner says. Because the amylase enzyme is much more efficient at digesting cooked rather than raw starch, the finding also suggests cooking, too, was common by 600,000 years ago, Carmody says. Researchers have debated whether cooking became common when the big brain began to expand almost 2 million years ago, or whether it spread later, during a second surge of growth.

The study offers a new way to detect major shifts in diet, says geneticist Ran Blekhman of the University of Minnesota, Twin Cities. In the case of Neanderthals, it reveals how much they depended on plants.

“We sometimes have given short shrift to the plant components of the diet,” says anthropological geneticist Anne Stone of Arizona State University, Tempe. “As we know from modern hunter-gatherers, it’s often the gathering that ends up providing a substantial portion of the calories.”

Photo of child exposes crisis in Yanomami health care (Folha de S.Paulo)

The territory suffers from rising malaria and from chronic child malnutrition

May 9, 2021, 12:00 p.m. Updated: May 9, 2021, 8:02 p.m.

Fabiano Maisonnave

MANAUS – In the Maimasi village, in Roraima, a Yanomami child lies in a hammock. With her ribs visible from malnutrition, she was diagnosed with malaria and intestinal worms. But the first medical team to visit the site in six months did not have enough medicine to treat the whole village.

The photo of this child, and the story behind it, were obtained by the Catholic missionary Carlo Zacquini, 84, who has worked among the Yanomami since 1968. He is a co-founder of the Comissão pela Criação do Parque Yanomami (CCPY), which brought visibility to the problems caused by whites, promoted health care and fought for the demarcation of the territory, completed in 1992.

The Yanomami territory suffers from rising malaria and from chronic child malnutrition, which affects 80% of children under 5, according to a recent study funded by Unicef and carried out in partnership with Fiocruz and the Ministry of Health.

The Indigenous people also face a large invasion of wildcat gold miners, encouraged by President Jair Bolsonaro's promises to legalize them and by the high price of ore. Roughly 20,000 non-Indigenous people are living illegally in the Yanomami Indigenous Territory, contaminating the rivers with mercury and helping to spread Covid-19 and malaria, along with alcohol and prostitution.

Contacted for comment, the Yanomami Special Indigenous Health District (Dsei), part of the Ministry of Health, said that the child, a girl, was transferred to Boa Vista (Roraima) two days after the medical visit, accompanied by her parents and siblings.

She is 8 years old and weighs 12.5 kg. Hospitalized since April 23, she is being treated for pneumonia, anemia and severe malnutrition; the malaria has been cured. She is stable and is being monitored by social services. According to the agency, this is an isolated case.

The Dsei denied any shortage of medicines and says the quantities are set according to the demand forecast for each epidemiological week. The agency did not say how the treatment of other sick Yanomami in the same region is going, but claims that health care is made difficult by the constant movement of the Indigenous people, and it attributed the high incidence of malaria to the presence of illegal mining.

Below is Zacquini's account:

She is a child from the Maimasi village, two days on foot from the Catrimani Mission. She has gone without care for a long time, with malaria and intestinal worms.

The photograph was taken around April 17. Health team staff are afraid to denounce this situation, because they can be punished, posted to harsher locations or fired. Several health posts are abandoned. There is no stock of deworming medicine at the headquarters of the Dsei (Yanomami Special Indigenous Health District) in Boa Vista. Even for malaria the quantity is limited.

The health post has great difficulty obtaining medicines. There are not enough professionals to rotate shifts, and there is no gasoline for travel. For three months they have been using the Yanomami's own canoe with a rabeta [outboard motor].

Suffering from intestinal worms and malaria, a Yanomami child, so thin that her bones are visible under her skin, sleeps in a hammock in the Maimasi village, near the Catrimani Mission, in the Yanomami Indigenous Territory, in Roraima. Credit: Handout

Reaching Maimasi would take eight minutes by helicopter, but, as a rule, that happens only in emergencies. Obviously, this child is an emergency!

To bring medicine to the base health post, a plane with a medical team was dispatched, but they waited in vain for the helicopter to arrive.

No one had visited the village for six months. This time, malaria medicines were delivered, but there was not enough to repeat the dose. A team from Sesai (the Special Secretariat for Indigenous Health, part of the Ministry of Health), including a doctor, flew to the Catrimani Mission to deliver those medicines.

Health staff administer drug treatments, but the treatment has no continuity when teams rotate. So, when possible, they give the first dose of a treatment, but after a while the sick have to start over from the first dose.

I am outraged and my blood is boiling. It is a situation that seems to be becoming widespread across the Yanomami Indigenous Territory.

The coming and going of miners is constant, by plane, boat, helicopter and on foot. The invaders of the Yanomami Indigenous Territory number in the thousands, and the president of the Republic announces that he will personally speak with the soldiers stationed there, and with the miners too. He makes a point of saying that he will not arrest the latter, only talk.

Even for malaria the medicines are rationed, including chloroquine. There is chloroquine for Covid, but not for malaria. The malnourished child is in a village eight minutes by helicopter from a health post, but a day away on foot. And beyond that village there are others, whose residents at the time had gathered for a funerary ceremony in another, more distant village.

The base post team traveled on foot to the village and found a large group of Yanomami holding a funerary ritual for a child who had died without care. They administered deworming medicine to everyone, but the medicine ran out and they could not give a second dose, as is standard practice.

In fact, those villages had gone more than a year without deworming treatment. The child in the photo and 16 other Indigenous people present had malaria, most of them with falciparum, the most aggressive variety. The remaining 84 all had flu-like symptoms and fever.

A Diversity of Wildlife Is Good for Our Health: To Prevent Future Pandemics, We Must Restore and Protect Nature (SciTechDaily)

scitechdaily.com

By Cary Institute of Ecosystem Studies on May 08, 2021

Ecosystems with a diversity of mammals, including larger-bodied and longer-lived creatures like foxes, are better for our health. Credit: Ali Rajabali / Flickr

A growing body of evidence suggests that biodiversity loss increases our exposure to both new and established zoonotic pathogens. Restoring and protecting nature is essential to preventing future pandemics. So reports a new Proceedings of the National Academy of Sciences (PNAS) paper that synthesizes current understanding about how biodiversity affects human health and provides recommendations for future research to guide management.

Lead author Felicia Keesing is a professor at Bard College and a Visiting Scientist at Cary Institute of Ecosystem Studies. She explains, “There’s a persistent myth that wild areas with high levels of biodiversity are hotspots for disease. More animal diversity must equal more dangerous pathogens. But this turns out to be wrong. Biodiversity isn’t a threat to us, it’s actually protecting us from the species most likely to make us sick.”

Zoonotic diseases like COVID-19, SARS, and Ebola are caused by pathogens that are shared between humans and other vertebrate animals. But animal species differ in their ability to pass along pathogens that make us sick.

Rick Ostfeld is a disease ecologist at Cary Institute and a co-author on the paper. He explains, “Research is mounting that species that thrive in developed and degraded landscapes are often much more efficient at harboring pathogens and transmitting them to people. In less-disturbed landscapes with more animal diversity, these risky reservoirs are less abundant and biodiversity has a protective effect.”

Rodents, bats, primates, cloven-hooved mammals like sheep and deer, and carnivores have been flagged as the mammal taxa most likely to transmit pathogens to humans. Keesing and Ostfeld note, “The next emerging pathogen is far more likely to come from a rat than a rhino.”

This is because animals with fast life histories tend to be more efficient at transmitting pathogens. Keesing explains, “Animals that live fast, die young, and have early sexual maturity with lots of offspring tend to invest less in their adaptive immune responses. They are often better at transmitting diseases, compared to longer-lived animals with stronger adaptive immunity.”

When biodiversity is lost from ecological communities, long-lived, larger-bodied species tend to disappear first, while smaller-bodied species with fast life histories tend to proliferate. Research has found that mammal hosts of zoonotic viruses are less likely to be species of conservation concern (i.e. they are more common), and that for both mammals and birds, human development tends to increase the abundance of zoonotic host species, bringing people and risky animals closer together.

“When we erode biodiversity, we favor species that are more likely to be zoonotic hosts, increasing our risk of spillover events,” Ostfeld notes, adding that “managing this risk will require a better understanding of how things like habitat conversion, climate change, and overharvesting affect zoonotic hosts, and how restoring biodiversity to degraded areas might reduce their abundance.”

To predict and prevent spillover, Keesing and Ostfeld highlight the need to focus on host attributes associated with disease transmission rather than continuing to debate the prime importance of one taxon or another. Ostfeld explains, “We should stop assuming that there is a single animal source for each emerging pathogen. The pathogens that jump from animals to people tend to be found in many animal species, not just one. They’re jumpers, after all, and they typically move between species readily.”

Disentangling the characteristics of effective zoonotic hosts – such as their immune strategies, resilience to disturbance, and habitat preferences – is key to protecting public health. Forecasting the locations where these species thrive, and where pathogen transmission and emergence are likely, can guide targeted interventions.

Keesing notes, “Restoration of biodiversity is an important frontier in the management of zoonotic disease risk. Those pathogens that do spill over to infect humans–zoonotic pathogens–often proliferate as a result of human impacts.” She concludes, “As we rebuild our communities after COVID-19, we need to have firmly in mind that one of our best strategies to prevent future pandemics is to protect, preserve, and restore biodiversity.”

Reference: “Impacts of biodiversity and biodiversity loss on zoonotic diseases” by Felicia Keesing and Richard S. Ostfeld, 5 April 2021, Proceedings of the National Academy of Sciences.
DOI: 10.1073/pnas.2023540118

This research was supported by a National Science Foundation Grant OPUS 1948419 to Keesing.

Cary Institute of Ecosystem Studies is an independent nonprofit center for environmental research. Since 1983, our scientists have been investigating the complex interactions that govern the natural world and the impacts of climate change on these systems. Our findings lead to more effective management and policy actions and increased environmental literacy. Staff are global experts in the ecology of: cities, disease, forests, and freshwater.

Inventing the Universe (The New Atlantis)

Winter 2020

David Kordahl

Two new books on quantum theory could not, at first glance, seem more different. The first, Something Deeply Hidden, is by Sean Carroll, a physicist at the California Institute of Technology, who writes, “As far as we currently know, quantum mechanics isn’t just an approximation of the truth; it is the truth.” The second, Einstein’s Unfinished Revolution, is by Lee Smolin of the Perimeter Institute for Theoretical Physics in Ontario, who insists that “the conceptual problems and raging disagreements that have bedeviled quantum mechanics since its inception are unsolved and unsolvable, for the simple reason that the theory is wrong.”

Given this contrast, one might expect Carroll and Smolin to emphasize very different things in their books. Yet the books mirror each other, down to chapters that present the same quantum demonstrations and the same quantum parables. Carroll and Smolin both agree on the facts of quantum theory, and both gesture toward the same historical signposts. Both consider themselves realists, in the tradition of Albert Einstein. They want to finish his work of unifying physical theory, making it offer one coherent description of the entire world, without ad hoc exceptions to cover experimental findings that don’t fit. By the end, both suggest that the completion of this project might force us to abandon the idea of three-dimensional space as a fundamental structure of the universe.

But with Carroll claiming quantum mechanics as literally true and Smolin claiming it as literally false, there must be some underlying disagreement. And of course there is. Traditional quantum theory describes things like electrons as smeary waves whose measurable properties only become definite in the act of measurement. Sean Carroll is a supporter of the “Many Worlds” interpretation of this theory, which claims that the multiple measurement possibilities all simultaneously exist. Some proponents of Many Worlds describe the existence of a “multiverse” that contains many parallel universes, but Carroll prefers to describe a single, radically enlarged universe that contains all the possible outcomes running alongside each other as separate “worlds.” But the trouble, says Lee Smolin, is that in the real world as we observe it, these multiple possibilities never appear — each measurement has a single outcome. Smolin takes this fact as evidence that quantum theory must be wrong, and argues that any theory that supersedes quantum mechanics must do away with these multiple possibilities.

So how can such similar books, informed by the same evidence and drawing upon the same history, reach such divergent conclusions? Well, anyone who cares about politics knows that this type of informed disagreement happens all the time, especially, as with Carroll and Smolin, when the disagreements go well beyond questions that experiments could possibly resolve.

But there is another problem here. The question that both physicists gloss over is that of just how much we should expect to get out of our best physical theories. This question pokes through the foundation of quantum mechanics like rusted rebar, often luring scientists into arguments over parables meant to illuminate the obscure.

With this in mind, let’s try a parable of our own, a cartoon of the quantum predicament. In the tradition of such parables, it’s a story about knowing and not knowing.

We fade in on a scientist interviewing for a job. Let’s give this scientist a name, Bobby Alice, that telegraphs his helplessness to our didactic whims. During the part of the interview where the Reality Industries rep asks him if he has any questions, none of them are answered, except the one about his starting salary. This number is high enough to convince Bobby the job is right for him.

Knowing so little about Reality Industries, everything Bobby sees on his first day comes as a surprise, starting with the campus’s extensive security apparatus of long gated driveways, high tree-lined fences, and all the other standard X-Files elements. Most striking of all is his assigned building, a structure whose paradoxical design merits a special section of the morning orientation. After Bobby is given his project details (irrelevant for us), black-suited Mr. Smith–types tell him the bad news: So long as he works at Reality Industries, he may visit only the building’s fourth floor. This, they assure him, is standard, for all employees but the top executives. Each project team has its own floor, and the teams are never allowed to intermix.

The instructors follow this with what they claim is the good news. Yes, they admit, this tightly tiered approach led to worker distress in the old days, back on the old campus, where the building designs were brutalist and the depression rates were high. But the new building is designed to subvert such pressures. The trainers lead Bobby up to the fourth floor, up to his assignment, through a construction unlike any research facility he has ever seen. The walls are translucent and glow on all sides. So do the floor and ceiling. He is guided to look up, where he can see dark footprints roving about, shadows from the project team on the next floor. “The goal here,” his guide remarks, “is to encourage a sort of cultural continuity, even if we can’t all communicate.”

Over the next weeks, Bobby Alice becomes accustomed to the silent figures floating above him. Eventually, he comes to enjoy the fourth floor’s communal tracking of their fifth-floor counterparts, complete with invented names, invented personalities, invented purposes. He makes peace with the possibility that he is himself a fantasy figure for the third floor.

Then, one day, strange lights appear in a corner of the ceiling.

Naturally phlegmatic, Bobby Alice simply takes notes. But others on the fourth floor are noticeably less calm. The lights seem not to follow any known standard of the physics of footfalls, with lights of different colors blinking on and off seemingly at random, yet still giving the impression not merely of a constructed display but of some solid fixture in the fifth-floor commons. Some team members, formerly of the same anti-philosophical bent as most hires, now spend their coffee breaks discussing increasingly esoteric metaphysics. Productivity declines.

Meanwhile, Bobby has set up a camera to record data. As a work-related extracurricular, he is able in the following weeks to develop a general mathematical description that captures an unexpected order in the flashing lights. This description does not predict exactly which lights will blink when, but, by telling a story about what’s going on between the frames captured by the camera, he can predict what sorts of patterns are allowed, how often, and in what order.

Does this solve the mystery? Apparently it does. Conspiratorial voices on the fourth floor go quiet. The “Alice formalism” immediately finds other applications, and Reality Industries gives Dr. Alice a raise. They give him everything he could want — everything except access to the fifth floor.

In time, Bobby Alice becomes a fourth-floor legend. Yet as the years pass — and pass with the corner lights as an apparently permanent fixture — new employees occasionally massage the Alice formalism to unexpected ends. One worker discovers that he can rid the lights of their randomness if he imagines them as the reflections from a tank of iridescent fish, with the illusion of randomness arising in part because it’s a 3-D projection on a 2-D ceiling, and in part because the fish swim funny. The Alice formalism offers a series of color maps showing the different possible light patterns that might appear at any given moment, and another prominent interpreter argues, with supposed sincerity (although it’s hard to tell), that actually not one but all of the maps occur at once — each in parallel branching universes generated by that spooky alien light source up on the fifth floor.

As the interpretations proliferate, Reality Industries management occasionally finds these side quests to be a drain on corporate resources. But during the Alice decades, the fourth floor has somehow become the company’s most productive. Why? Who knows. Why fight it?

The history of quantum mechanics, being a matter of record, obviously has more twists than any illustrative cartoon can capture. Readers interested in that history are encouraged to read Adam Becker’s recent retelling, What Is Real?, which was reviewed in these pages (“Make Physics Real Again,” Winter 2019). But the above sketch is one attempt to capture the unusual flavor of this history.

Like the fourth-floor scientists in our story who, sight unseen, invented personas for all their fifth-floor counterparts, nineteenth-century physicists are often caricatured as having oversold their grasp on nature’s secrets. But longstanding puzzles — puzzles involving chemical spectra and atomic structure rather than blinking ceiling lights — led twentieth-century pioneers like Niels Bohr, Wolfgang Pauli, and Werner Heisenberg to invent a new style of physical theory. As with the formalism of Bobby Alice, mature quantum theories in this tradition were abstract, offering probabilistic predictions for the outcomes of real-world measurements, while remaining agnostic about what it all meant, about what fundamental reality undergirded the description.

From the very beginning, a counter-tradition associated with names like Albert Einstein, Louis de Broglie, and Erwin Schrödinger insisted that quantum models must ultimately capture something (but probably not everything) about the real stuff moving around us. This tradition gave us visions of subatomic entities as lumps of matter vibrating in space, with the sorts of orbital visualizations one first sees in high school chemistry.

But once the various quantum ideas were codified and physicists realized that they worked remarkably well, most research efforts turned away from philosophical agonizing and toward applications. The second generation of quantum theorists, unburdened by revolutionary angst, replaced every part of classical physics with a quantum version. As Max Planck famously wrote, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die.” Since this inherited framework works well enough to get new researchers started, the question of what it all means is usually left alone.

Of course, this question is exactly what most non-experts want answered. For past generations, books with titles like The Tao of Physics and Quantum Reality met this demand, with discussions that wildly mixed conventions of scientific reportage with wisdom literature. Even once quantum theories themselves became familiar, interpretations of them were still new enough to be exciting.

Today, even this thrill is gone. We are now in the part of the story where no one can remember what it was like not to have the blinking lights on the ceiling. Despite the origins of quantum theory as an empirical framework — a container flexible enough to wrap around whatever surprises experiments might uncover — its success has led today’s theorists to regard it as fundamental, a base upon which further speculations might be built.

Regaining that old feeling of disorientation now requires some extra steps.

As interlopers in an ongoing turf war, modern explainers of quantum theory must reckon both with arguments like Niels Bohr’s, which emphasize the theory’s limits on knowledge, and with criticisms like Albert Einstein’s, which demand that the theory represent the real world. Sean Carroll’s Something Deeply Hidden pitches itself to both camps. The title stems from an Einstein anecdote. As “a child of four or five years,” Einstein was fascinated by his father’s compass. He concluded, “Something deeply hidden had to be behind things.” Carroll agrees with this, but argues that the world at its roots is quantum. We only need courage to apply that old Einsteinian realism to our quantum universe.

Carroll is a prolific popularizer — alongside his books, his blog, and his Twitter account, he has also recorded three courses of lectures for general audiences, and for the last year has released a weekly podcast. His new book is appealingly didactic, providing a sustained defense of the Many Worlds interpretation of quantum mechanics, first offered by Hugh Everett III as a graduate student in the 1950s. Carroll maintains that Many Worlds is just quantum mechanics, and he works hard to convince us that supporters aren’t merely perverse. In the early days of electrical research, followers of James Clerk Maxwell were called Maxwellians, but today all physicists are Maxwellians. If Carroll’s project pans out, someday we’ll all be Everettians.

Standard applications of quantum theory follow a standard logic. A physical system is prepared in some initial condition, and modeled using a mathematical representation called a “wave function.” Then the system changes in time, and these changes, governed by the Schrödinger equation, are tracked in the system’s wave function. But when we interpret the wave function in order to generate a prediction of what we will observe, we get only probabilities of possible experimental outcomes.

Carroll insists that this quantum recipe isn’t good enough. It may be sufficient if we care only to predict the likelihood of various outcomes for a given experiment, but it gives us no sense of what the world is like. “Quantum mechanics, in the form in which it is currently presented in physics textbooks,” he writes, “represents an oracle, not a true understanding.”

Most of the quantum mysteries live in the process of measurement. Questions of exactly how measurements force determinate outcomes, and of exactly what we sweep under the rug with that bland word “measurement,” are known collectively in quantum lore as the “measurement problem.” Quantum interpretations are distinguished by how they solve this problem. Usually, solutions involve rejecting some key element of common belief. In the Many Worlds interpretation, the key belief we are asked to reject is that of one single world, with one single future.

The version of the Many Worlds solution given to us in Something Deeply Hidden sidesteps the history of the theory in favor of a logical reconstruction. What Carroll enunciates here is something like a quantum minimalism: “There is only one wave function, which describes the entire system we care about, all the way up to the ‘wave function of the universe’ if we’re talking about the whole shebang.”

Putting this another way, Carroll is a realist about the quantum wave function, and suggests that this mathematical object simply is the deep-down thing, while everything else, from particles to planets to people, are merely its downstream effects. (Sorry, people!) The world of our experience, in this picture, is just a tiny sliver of the real one, where all possible outcomes — all outcomes for which the usual quantum recipe assigns a non-zero probability — continue to exist, buried somewhere out of view in the universal wave function. Hence the “Many Worlds” moniker. What we experience as a single world, chock-full of foreclosed opportunities, Many Worlders understand as but one swirl of mist foaming off an ever-breaking wave.

The position of Many Worlds may not yet be common, but neither is it new. Carroll, for his part, is familiar enough with it to be blasé, presenting it in the breezy tone of a man with all the answers. The virtue of his presentation is that whether or not you agree with him, he gives you plenty to consider, including expert glosses on ongoing debates in cosmology and field theory. But Something Deeply Hidden still fails where it matters. “If we train ourselves to discard our classical prejudices, and take the lessons of quantum mechanics at face value,” Carroll writes near the end, “we may eventually learn how to extract our universe from the wave function.”

But shouldn’t it be the other way around? Why should we have to work so hard to “extract our universe from the wave function,” when the wave function itself is an invention of physicists, not the inerrant revelation of some transcendental truth? Interpretations of quantum theory live or die on how well they are able to explain its success, and the most damning criticism of the Many Worlds interpretation is that it’s hard to see how it improves on the standard idea that probabilities in quantum theory are just a way to quantify our expectations about various measurement outcomes.

Carroll argues that, in Many Worlds, probabilities arise from self-locating uncertainty: “You know everything there is to know about the universe, except where you are within it.” During a measurement, “a single world splits into two, and there are now two people where I used to be just one.” “For a brief while, then, there are two copies of you, and those two copies are precisely identical. Each of them lives on a distinct branch of the wave function, but neither of them knows which one it is on.” The job of the physicist is then to calculate the chance that he has ended up on one branch or another — which produces the probabilities of the various measurement outcomes.
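
In standard notation (again an illustrative gloss, not Carroll’s own equations), a spin measurement entangles the observer with the system, splitting one branch into two:

\[
\big( \alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle \big) \otimes |\text{ready}\rangle
\;\longrightarrow\;
\alpha\,|{\uparrow}\rangle\,|\text{saw }{\uparrow}\rangle \;+\; \beta\,|{\downarrow}\rangle\,|\text{saw }{\downarrow}\rangle .
\]

Each copy of the observer then assigns itself chance |α|² of sitting on the up branch and |β|² of sitting on the down branch, with |α|² + |β|² = 1, recovering the usual Born-rule statistics.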

If, alongside Carroll, you convince yourself that it is reasonable to suppose that these worlds exist outside our imaginations, you still might conclude, as he does, that “at the end of the day it doesn’t really change how we should go through our lives.” This conclusion comes in a chapter called “The Human Side,” where Carroll also dismisses the possibility that humans might have a role in branching the wave function, or indeed that we have any ultimate agency: “While you might be personally unsure what choice you will eventually make, the outcome is encoded in your brain.” These views are rewarmed arguments from his previous book, The Big Picture, which I reviewed in these pages (“Pop Goes the Physics,” Spring 2017) and won’t revisit here.

Although this book is unlikely to turn doubters of Many Worlds into converts, it is a credit to Carroll that he leaves one with the impression that the doctrine is probably consistent, whether or not it is true. But internal consistency has little power against an idea that feels unacceptable. For doctrines like Many Worlds, with key claims that are in principle unobservable, some of us will always want a way out.

Lee Smolin is one such seeker for whom Many Worlds realism — or “magical realism,” as he likes to call it — is not real enough. In his new book, Einstein’s Unfinished Revolution, Smolin assures us that “however weird the quantum world may be, it need not threaten anyone’s belief in commonsense realism. It is possible to be a realist while living in the quantum universe.” But if you expect “commonsense realism” by the end of his book, prepare for a surprise.

Smolin is less congenial than Carroll, with a brooding vision of his fellow scientists less as fellow travelers and more as members of an “orthodoxy of the unreal,” as he stirringly puts it. Smolin is best known as a doomsayer about string theory; his 2006 book The Trouble with Physics functioned as an entertaining jeremiad. His books all court drama and are never boring, but the drama often comes at the expense of argumentative care.

Einstein’s Unfinished Revolution can be summarized briefly. Smolin states early on that quantum theory is wrong: It gives probabilities for many and various measurement outcomes, whereas the world of our observation is solid and singular. Nevertheless, quantum theory can still teach us important lessons about nature. For instance, Smolin takes at face value the claim that entangled particles far apart in the universe can affect each other instantaneously, unbounded by the speed of light. This ability of quantum entities to be correlated while separated in space is technically called “nonlocality,” which Smolin enshrines as a fundamental principle. And while he takes inspiration from an existing nonlocal quantum theory, he rejects it for violating other favorite physical principles. Instead, he elects to redo physics from scratch, proposing partial theories that would allow his favored ideals to survive.
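
The standard example behind such claims, for readers who want one (my addition, not Smolin’s text), is a maximally entangled pair shared between distant locations A and B:

\[
|\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}} \big( |0\rangle_{A}|0\rangle_{B} + |1\rangle_{A}|1\rangle_{B} \big) .
\]

A measurement at A is perfectly correlated with one at B no matter how far apart the two are, and Bell’s theorem shows that no local hidden-variable account can reproduce the full pattern of such correlations. This is the nonlocality Smolin enshrines, even though the no-signaling theorem guarantees it cannot be used to send a message.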

This is, of course, an insane act of hubris. But no red line separates the crackpot from the visionary in theoretical physics. Because Smolin presents himself as a man up against the status quo, his books are as much autobiography as popular science, with personality bleeding into intellectual commitments. Smolin’s last popular book, Time Reborn (2013), showed him changing his mind about the nature of time after doing bedtime with his son. This time around, Smolin tells us in the preface about how he came to view the universe as nonlocal:

I vividly recall that when I understood the proof of the theorem, I went outside in the warm afternoon and sat on the steps of the college library, stunned. I pulled out a notebook and immediately wrote a poem to a girl I had a crush on, in which I told her that each time we touched there were electrons in our hands which from then on would be entangled with each other. I no longer recall who she was or what she made of my poem, or if I even showed it to her. But my obsession with penetrating the mystery of nonlocal entanglement, which began that day, has never since left me.

The book never seriously questions whether the arguments for nonlocality should convince us; Smolin’s experience of conviction must stand in for our own. These personal detours are fascinating, but do little to convince skeptics.

Once you start turning the pages of Einstein’s Unfinished Revolution, ideas fly by fast. First, Smolin gives us a tour of the quantum fundamentals — entanglement, nonlocality, and all that. Then he provides a thoughtful overview of solutions to the measurement problem, particularly those of David Bohm, whose complex legacy he lingers over admiringly. But by the end, Smolin abandons the plodding corporate truth of the scientist for the hope of a private perfection.

Many physicists have never heard of Bohm’s theory, and some who have still conclude that it’s worthless. Bohm attempted to salvage something like the old classical determinism, offering a way to understand measurement outcomes as caused by the motion of particles, which in turn are guided by waves. This conceptual simplicity comes at the cost of brazen nonlocality, and an explicit dualism of particles and waves. Einstein called the theory a “physical fairy-tale for children”; Robert Oppenheimer declared about Bohm that “we must agree to ignore him.”
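
For the curious, the one-particle version of Bohm’s guidance equation looks like this (the textbook form, supplied here for concreteness): the wave function ψ steers the particle’s position Q according to

\[
\frac{d\mathbf{Q}}{dt} \;=\; \frac{\hbar}{m}\, \operatorname{Im}\!\left( \frac{\nabla \psi}{\psi} \right) \Bigg|_{\mathbf{x} = \mathbf{Q}(t)} .
\]

For many particles, the velocity of each depends on the instantaneous positions of all the others through the joint wave function, which is exactly where the brazen nonlocality enters.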

Bohm’s theory is important to Smolin mainly as a prototype, to demonstrate that it’s possible to situate quantum mechanics within a single world — unlike Many Worlds, which Smolin seems to dislike less for physical than for ethical reasons: “It seems to me that the Many Worlds Interpretation offers a profound challenge to our moral thinking because it erases the distinction between the possible and the actual.” In his survey, Smolin sniffs each interpretation as he passes it, looking for a whiff of the real quantum story, which will preserve our single universe while also maintaining the virtues of all the partial successes.

When Smolin finally explains his own idiosyncratic efforts, his methods — at least in the version he has dramatized here — resemble some wild descendant of Cartesian rationalism. From his survey, Smolin lists the principles he would expect from an acceptable alternative to quantum theory. He then reports back to us on the incomplete models he has found that will support these principles.

Smolin’s tour leads us all over the place, from a review of Leibniz’s Monadology (“shockingly modern”), to a new law of physics he proposes (the “principle of precedence”), to a solution to the measurement problem involving nonlocal interactions among all similar systems everywhere in the universe. Smolin concludes with the grand claim that “the universe consists of nothing but views of itself, each from an event in its history.” Fine. Maybe there’s more to these ideas than a casual reader might glean, but after a few pages of sentences like, “An event is something that happens,” hope wanes.

For all their differences, Carroll and Smolin similarly insist that, once the basic rules governing quantum systems are properly understood, the rest should fall into place. “Once we understand what’s going on for two particles, the generalization to 10⁸⁸ particles is just math,” Carroll assures us. Smolin is far less certain that physics is on the right track, but he, too, believes that progress will come with theoretical breakthroughs. “I have no better answer than to face the blank notebook,” Smolin writes. This was the path of Bohr, Einstein, Bohm and others. “Ask yourself which of the fundamental principles of the present canon must survive the coming revolution. That’s the first page. Then turn again to a blank page and start thinking.”
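
A back-of-the-envelope illustration of what “just math” involves (mine, not Carroll’s): composite systems live in tensor-product spaces, so dimensions multiply rather than add,

\[
\mathcal{H}_{1 \ldots N} = \mathcal{H}_{1} \otimes \cdots \otimes \mathcal{H}_{N},
\qquad
\dim \mathcal{H}_{1 \ldots N} = 2^{N} \ \text{for } N \text{ two-level systems},
\]

and the same Schrödinger equation applies throughout. The math is indeed the same for N = 2 as for N = 10⁸⁸, but the space it acts on becomes unimaginably vast.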

Physicists are always tempted to suppose that successful predictions prove that a theory describes how the world really is. And why not? Denying that quantum theory captures something essential about the character of those entities outside our heads that we label with words like “atoms” and “molecules” and “photons” seems far more perverse, as an interpretive strategy, than any of the mainstream interpretations we’ve already discussed. Yet one can admit that something is captured by quantum theory without jumping immediately to the assertion that everything must flow from it. An invented language doesn’t need to be universal to be useful, and it’s smart to keep on honing tools for thinking that have historically worked well.

As an old mentor of mine, John P. Ralston, wrote in his book How to Understand Quantum Mechanics, “We don’t know what nature is, and it is not clear whether quantum theory fully describes it. However, it’s not the worst thing. It has not failed yet.” This seems like the right attitude to take. Quantum theory is a fabulously rich subject, but the fact that it has not failed yet does not allow us to generalize its results indefinitely.

There is value in the exercises that Carroll and Smolin perform, in their attempts to imagine principled and orderly universes, to see just how far one can get with a straitjacketed imagination. But by assuming that everything is captured by the current version of quantum theory, Carroll risks credulity, foreclosing genuinely new possibilities. And by assuming that everything is up for grabs, Smolin risks paranoia, ignoring what is already understood.

Perhaps the agnostics among us are right to settle in as permanent occupants of Reality Industries’ fourth floor. We can accept that scientists have a role in creating stories that make sense, while also appreciating the possibility that the world might not be made of these stories. To the big, unresolved questions — questions about where randomness enters in the measurement process, or about how much of the world our physical theories might capture — we can offer only a laconic who knows? The world is filled with flashing lights, and we should try to find some order in them. Scientific success often involves inventing a language that makes the strange sensible, warping intuitions along the way. And while this process has allowed us to make progress, we should never let our intuitions get so strong that we stop scanning the ceiling for unexpected dazzlements.

David Kordahl is a graduate student in physics at Arizona State University.

David Kordahl, “Inventing the Universe,” The New Atlantis, Number 61, Winter 2020, pp. 114–124.

Judith Butler: To Save the Earth, Dismantle Individuality (Time)

time.com

Judith Butler, April 21, 2021


However differently we register this pandemic, we understand it as global; it brings home the fact that we are implicated in a shared world. The capacity of living human creatures to affect one another can be a matter of life or death. Because so many resources are not equitably shared, and so many have only a small or vanished share of the world, we cannot recognize the pandemic as global without facing those inequalities.

Some people work for the common world, keep it going, but are not, for that reason, of it. They might lack property or papers, be sidelined by racism or even disdained as refuse—those who are poor, Black or brown, those with unpayable debts that preclude a sense of an open future.

The shared world is not equally shared. The French philosopher Jacques Rancière refers to “the part of those who have no part”—those for whom participation in the commons is not possible, never was, or no longer is. For it is not just resources and companies in which a share is to be had, but a sense of the common, a sense of belonging to a world equally, a trust that the world is organized to support everyone’s flourishing.

The pandemic has illuminated and intensified racial and economic inequalities at the same time that it heightens the global sense of our obligations to one another and the earth. There is movement in a global direction, one based on a new sense of mortality and interdependency. The experience of finitude is coupled with a keen sense of inequalities: Who dies early and why, and for whom is there no infrastructural or social promise of life’s continuity?

This sense of the interdependency of the world, strengthened by a common immunological predicament, challenges the notion of ourselves as isolated individuals encased in discrete bodies, bound by established borders. Who now could deny that to be a body at all is to be bound up with other living creatures, with surfaces, and the elements, including the air that belongs to no one and everyone?

Within these pandemic times, air, water, shelter, clothing and access to health care are sites of individual and collective anxiety. But all these were already imperiled by climate change. Whether or not one is living a livable life is not only a private existential question, but an urgent economic one, incited by the life-and-death consequences of social inequality: Are there health services and shelters and clean enough water for all those who should have an equal share of this world? The question is made more urgent by conditions of heightened economic precarity during the pandemic, exposing as well the ongoing climate catastrophe for the threat to livable life that it is.

Pandemic is etymologically pandemos, all the people, or perhaps more precisely, the people everywhere, or something that spreads over or through the people. The “demos” is all the people despite the legal barriers that seek to separate them. A pandemic, then, links all the people through the potentials of infection and recovery, suffering and hope, immunity and fatality. No border stops the virus from traveling if humans travel; no social category secures absolute immunity for those it includes.

“The political in our time must start from the imperative to reconstruct the world in common,” argues Cameroonian philosopher Achille Mbembe. If we consider the plundering of the earth’s resources for the purposes of corporate profit, privatization and colonization itself as planetary project or enterprise, then it makes sense to devise a movement that does not send us back to our egos and identities, our cut-off lives.

Such a movement will be, for Mbembe, “a decolonization [which] is by definition a planetary enterprise, a radical openness of and to the world, a deep breathing for the world as opposed to insulation.” The planetary opposition to extraction and systemic racism ought to then deliver us back to the world, or let the world arrive, as if for the first time, a shared place for “deep breathing”—a desire we all now know.

And yet, an inhabitable world for humans depends on a flourishing earth that does not have humans at its center. We oppose environmental toxins not only so that we humans can live and breathe without fear of being poisoned, but also because the water and the air must have lives that are not centered on our own.

As we dismantle the rigid forms of individuality in these interconnected times, we can imagine the smaller part that human worlds must play on this earth whose regeneration we depend upon—and which, in turn, depends upon our smaller and more mindful role.

Bolsonarismo as an ecosystem, Hamilton Carvalho explains (Poder360)

poder360.com.br

The phenomenon is more than a movement

The production of certainties is a relief

The system groups together distinct segments

The covid dead become a detail

The president with supporters at the Palácio da Alvorada: bolsonarismo is best understood as a political-social system

Hamilton Carvalho, April 24, 2021 (Saturday), 5:50 a.m.; updated April 24, 2021 (Saturday), 7:10 a.m.


Google, Nespresso, Amazon, and Magalu. In the so-called attention economy, competition today increasingly takes place between ecosystems, usually captained by one large company and sheltering multiple organizations in a network of dependence and complementarity.

The winner is whoever manages to satisfy more consumer needs within the same system. To use the jargon, whoever manages to offer a superior value proposition.

The idea itself is not all that new. The push came with the digital economy, but ecosystems can be identified in the most varied contexts, from the worlds of soccer and crime to the social systems of education and health. That includes the conglomerate of organizations that has devoted itself to fighting the pandemic, which includes private-sector actors (as in the recent purchase of intubation kits) and which should have been properly captained by the federal government.

But here we are, heading toward half a million dead. Bolsonaro could have come out of the whole thing a hero, like Bibi in Israel, but, living by a bunker logic, he preferred to throw sand in those gears from the start, while Brazil visibly regresses institutionally.

Curiously, that has not been enough to erode the support the president retains in Brazil’s conservative quarters, which have rationalized, without much difficulty, the sea of sludge produced by covid.

Treating bolsonarismo as an ecosystem, rather than merely a social movement backed by a digital army, helps in understanding the phenomenon. First because, as we know, people’s attention has become hyper-fragmented, and the world is not easy to make sense of these days.

Political-social ecosystems gain an advantage when they manage to satisfy a basic human need: the comfort of grand certainties. A good, solid certainty works like an irresistible barbiturate, Nelson Rodrigues used to say. In a country with low educational attainment, such certainties can afford to tap-dance in the face of reality.

Bolsonarismo also hands its followers, on a platter, an identity drawn in heavy moral ink, and again there is nothing new here: just recall nearby examples such as chavismo and lulopetismo. In other words, a follower gets to feel superior and gains a tribe to call his own.

That is the current value proposition of the ecosystem built around the president. It is no small thing, even if the ensemble had more force back when it brandished the anti-corruption discourse and a liberal sales pitch.

Around this value proposition, various segments cluster. There is what a report in El País called a homegrown QAnon: people producing fake news and deploying bots to influence discourse on social networks.

There is the hardline business segment, loggers in the Amazon, for example, not to mention the big companies that, like the Centrão, are almost always on hand for a standing ovation, no matter what.

There are the politicians, the niche supporters (such as gun enthusiasts), the producers of inflammatory content, the media channels, and, I presume, part of the military and the police. And if the whole tangerine lost the Lava Jato enthusiasts, it received as a gift a juicy segment that has been crucial to its resilience: the chloroquine doctors and influencers.

Each of these segments has resources and competencies that it deploys for the cause. For example, a radio station’s captive audience, or the otherworldly credibility Brazilians grant to doctors, even those who are laymen in evidence-based medicine.

Each performs diverse but complementary activities, reinforcing the value proposition (recall: grand certainties and a superior moral identity). The list is long and includes organizing protests, airing opinion programs on the radio, and the business gatherings that polish the government’s legitimacy with the pomade of crony capitalism.

Critically, each segment appropriates a share of the value generated by the whole. Politicians appropriate electoral capital. Broadcasters get exclusives with the president, and ratings. Chloroquine doctors gain showers of patients. Influencers and content manipulators gain followers or, as the CPMI on fake news suspects, jobs in government offices. Business associations keep their channels to Brasília open. The dead are just an awkward detail in the landscape.

My sense is that the 2022 contest is likely to play out at this amplified level. Competitors need to start building their own ecosystems now, preferably around more rational and less divisive values. It will not be easy.

Humanity Now Lives in The Anthropocene. But What Does That Actually Mean? (Science Alert)

sciencealert.com

Carly Cassella, 24 April 2021


Credit: Robert Landau/Getty Images

In the last two decades, the Anthropocene has become an informal buzzword to describe the numerous and unprecedented ways humans have come to modify the planet. 

As the concept has become more widely adopted, however, definitions have begun to blur. Today, the very meaning of the Anthropocene and its timeline differs considerably depending on who is doing the talking.

To geologists and Earth system scientists, the Industrial Revolution is often considered the dawn of the Anthropocene – when human influence on Earth’s systems became predominant worldwide. 

Many anthropologists, historians, and archaeologists, however, consider the 18th century as more of a sunrise, when the era of humans truly began to heat up in some regions. Before that, there were already glimmers of human domination.

Since the Late Pleistocene, right through to the Holocene (our current epoch), humans have been producing “distinct, detectable and unprecedented transformations of Earth’s environments,” states a new paper on the subject.

And while these changes might not be enough to be technically defined as a new geological epoch, we need terms to describe this earlier influence, too. Because right now, people from various disciplines are using the term with subtly different meanings.

“Dissecting the many interpretations of the Anthropocene suggests that a range of quite distinct, but variably overlapping, concepts are in play,” says geologist Colin Waters from the University of Leicester in the UK.

Thousands of years before the boom of industrialization, globalization, nuclear bombs, and modern climate change, humans were already in the first stages of becoming a dominant planetary force.

The rise of crop domestication and hunting, the spread of livestock and mining, and the move to urbanization, for instance, have all caused great changes to Earth’s soil signature and its fossil record, setting us on a course to the modern day. 

As far back as 3400 BCE, for instance, people in China were already smelting copper, and 3,000 years ago, most of the planet was already transformed by hunter-gatherers and farmers. 

While these smaller and slower regional changes did not destabilize Earth’s entire system as more modern actions have, some researchers think we are underestimating the climate effects of these earlier land-use changes.

As such, some have considered using the terms “pre-Anthropocene” or “proto‐Anthropocene” to describe significant human impacts before the mid‐twentieth century.

Others argue a capitalized “Anthropocene” should represent the tightly defined geological concept of an epoch, while the uncapitalized version should be used for broader interpretations.

Even after the Industrial Revolution, when human influence is clear to see, some argue we need to define further advances of the Anthropocene.

The “Great Acceleration” of the mid-twentieth century, for instance, has been proposed as a “second stage” to the Anthropocene, when human enterprise and influence began growing exponentially. 

This second stage encompasses not only rapid geological changes but also the socioeconomic factors and modern biophysical processes that human activity has begun to alter.

“This shows an exemplar of ways in which ideas and terms move between disciplines, as is true for the Anthropocene,” researchers write.

It’s unclear what the next stage of the Anthropocene will look like, but many of the changes we have made are currently irreversible and may continue long after our species is gone. 

Still, the authors argue, one thing is clear. The exceptionally rapid transformations humans have made to our planet since the Great Acceleration “vastly outweigh” earlier climatic events of the Holocene.

“Given both the rate and scale of change marking the onset of the chronostratigraphic Anthropocene, it would be difficult to justify a rank lower than series/epoch,” the authors conclude.

The study was published in Earth’s Future.