
Who is Beto Ricardo, the man who changed the image of the Indian in Brazil (Folha de S.Paulo)

MARCELO LEITE – 14 APRIL 2024

Original article

[SUMMARY] The book “Uma Enciclopédia nos Trópicos – Memórias de um Socioambientalista” tells the story of Beto Ricardo of the Instituto Socioambiental, the NGO that Ailton Krenak describes as a civil fortress against disinformation about Indigenous peoples in Brazil, animating public debate with high-quality maps and data. A discreet activist, supportive but firm, Beto played a prominent part in decisive moments, such as the mobilization that yielded an entire chapter on the Indigenous question in the 1988 Constitution.

In the second half of the 1980s, probably in 1987, a suggestion from the journalist Leão Serva led me to climb, for the first time, the staircase of the annex of the Colégio Sion at 983 Avenida Higienópolis. It was an encounter with two forces of nature: Beto and Fany Ricardo, in the first class of an intensive course in indigenism that would not end anytime soon.

There, at the Centro Ecumênico de Documentação e Informação (Cedi), a compendium had been published since 1980 that would be required reading for the four decades that followed: “Povos Indígenas no Brasil”. Nicknamed the PIBão, for its page count and its ambition, the work now runs to 13 editions, with 6,000 pages of analysis and news, plus 2,500 maps, 1,700 videos and 100,000 photos in its digital archive.

The Tucumã Rupitã community, in the Alto Rio Negro region of Amazonas state, in 1999 – Pedro Martinelli/ISA

The PIBão sits at the epicenter of Beto Ricardo’s autobiographical volume, “Uma Enciclopédia nos Trópicos – Memórias de um Socioambientalista”, written with Ricardo Arnt and also on sale at the Instituto Socioambiental’s store. As Serva puts it in the afterword, “the Povos Indígenas no Brasil program secured the information needed to keep the racist state from erasing the existence of the Indians once and for all”.

Averse to the spotlight, Beto, a patient, supportive and determined organizer, firm to the point of sounding inconvenient, was the behind-the-scenes protagonist (forgive the oxymoron) of landmark advances on the Indigenous question in Brazil, as well as on the environmental agenda. Not without bitter setbacks, at least from 2007 onward, but with an upward trend all the same, as the book makes clear.

The turning point came with the 1988 Constitution and its eighth chapter, “Dos Índios” (On the Indians). In its very first article (231), this part of the Charter recognizes Indians’ “social organization, customs, languages, beliefs and traditions, and the original rights over the lands they traditionally occupy, it being incumbent upon the Union to demarcate them, protect them and ensure respect for all their property”.

This was the period in which what had been considered an irreversible trend toward the extinction of Indigenous peoples, through their “acculturation” or “integration” into Brazilian society, was being reversed. The turnaround had begun years earlier, and Beto was there, working on the mobilization that would lead a young Ailton Krenak to the iconic gesture of painting his face black at the rostrum of the Constituent Congress.

In the first half of the 20th century alone, 83 ethnic groups had disappeared, as Darcy Ribeiro established in “Os Índios e a Civilização” (1970), drawing on data gathered by inspectorates of the Serviço de Proteção aos Índios (SPI), Funai’s precursor, from his time as director of its studies section, in 1957. Beto, by then a graduate in anthropology from USP and a member of Cedi since 1974, thought it crucial to resume the population surveys, but not with the old methodology.

Rather than relying on state agencies under the military dictatorship, the young disciple set his mind on gathering information through a broad civil-society network of anthropologists, Catholic priests, health workers and social workers. Darcy Ribeiro did not like it. “Refusing to work with Darcy was an agonizing choice. I admired him greatly,” Beto recounts in “Uma Enciclopédia nos Trópicos”.

By then a professor of anthropology at Unicamp, he counted on recruiting students to collect data in the villages. In three years at the São Paulo state university he consolidated friendships with researchers of the stature of Peter Fry and Manuela Carneiro da Cunha, and made contact with the anthropologist who would become his greatest intellectual partner, Eduardo Viveiros de Castro (even though he sat on the Campinas panel that passed over the Rio scholar in a competition for a professorship in ethnology).

In 1978, he became secretary-general of Cedi. Two years later the first edition of the PIBão appeared, still under the title Aconteceu and published annually, a cycle stretched from 1985 onward until it became five-yearly. The latest volume, covering 2017/22, came out last year with 828 pages of data and records on 252 peoples, speakers of more than 160 languages, occupying 13.7% of the national territory in protected lands.

The transformation of the bleak landscape of the 1970s was incubated at Cedi and in the organizations its group helped create. In 1979, the União das Nações Indígenas (UNI) had been born, with Xavante, Terena and Kadiwéu leaders, and a young publications editor named Ailton Krenak.

In the preface to “Uma Enciclopédia”, the now “immortal” of the Academia Brasileira de Letras describes Cedi as “a civil fortress against disinformation about Indigenous people in Brazil, with Beto Ricardo animating the public debates”.

Animation was never in short supply. In the 1980s, soon after the amnesty and the return of exiles in 1979, Brazilian Indigenous people and environmentalists began taking their demands to forums in the United States, the center of world capitalism, such as the 17 hearings in committees of the US Congress between 1983 and 1986, and meetings of the World Bank.

With help from Steve Schwartzman (Environmental Defense Fund), Barbara Bramble (National Wildlife Federation) and Jason Clay (Cultural Survival), Krenak, José Lutzenberger, Mary Allegretti, Chico Mendes and Paulo Paiakan made the trip. The international campaigns began to affect development-aid funding, infuriating the Brazilian military government, then in the death throes of the dictatorship, and the newly inaugurated Nova República under José Sarney, as in the suspension of disbursements for the Polonoroeste project that was devastating Rondônia.

Once some of the battles that culminated in article 231 of the Constitution had been won (with help from the old colonel Jarbas Passarinho, a former minister of the military dictatorship), the Indigenous struggle gained momentum. In 1988, a new organization emerged in Cedi’s orbit, this time in Brasília: the Núcleo de Direitos Indígenas (NDI), with Krenak, Marcos Terena, Paiakan, Manuela, Carlos Frederico Marés, Márcio Santilli (the latter two future presidents of Funai), André Villas-Bôas and José Carlos Libânio.

That same year, the North American ethnobiologist Darrell Posey was prosecuted under the Estatuto do Estrangeiro (the Foreigners’ Statute) for taking the leaders Paiakan and Kube-I to the US. The rubber-tapper leader Chico Mendes was murdered in December, drawing still more attention to the Amazon. In Europe, Paiakan called for the suspension of half a billion dollars that the World Bank was due to channel into Brazil’s electricity sector.

In February 1989, Beto worked with Márcio and André to organize the Encontro das Nações Indígenas do Xingu, called by Paiakan. Attendees included the rock star Sting; Anita Roddick, owner of the Body Shop; the actress Lucélia Santos; and the federal deputies Fabio Feldmann, Benedita da Silva, Haroldo Lima and Fernando Gabeira. At least 60 foreign news organizations made their way to Altamira (PA).

The central purpose of the gathering was to protest against the construction of the Kararaô and Babaquara hydroelectric dams on the Xingu river. A photograph taken there ran on the front page of the Jornal do Brasil and went around the world: a machete pressed by the Indigenous woman Tuíra against the cheek of José Muniz Lopes, representative of the power utility Eletronorte. Rebranded as Belo Monte, the damming of the Xingu would eventually be carried out under the PT governments of Lula and Dilma.

“Altamira was a first rehearsal in socio-environmental coalition-building,” Beto reflects in his memoirs. “Indigenous people, human rights activists and environmentalists drew closer, overcoming prejudices. The shared experience would later be replicated at the Global Forum of NGOs at Eco-92. The two events brewed a culture broth that would keep simmering until it consolidated one of the cornerstones of the founding of the Instituto Socioambiental [ISA], in 1994.”

The anthropologist-environmentalist, however, would not witness in person the largest multilateral gathering ever held on behalf of the planet’s health. His own health played a trick on him: shortly after returning from the US, where he received the Goldman Prize and met the then president George Bush (senior), he fell ill with amoebiasis, was admitted to the Oswaldo Cruz hospital, and lost 14 kilos.

After the murder of Chico Mendes, another Amazonian tragedy made headlines in 1993: the massacre of dozens of Yanomami at Haximu, on the border with Venezuela. It was the direct result of the invasion of the area by gold miners, tens of thousands of them, as described in the PIBão by the anthropologist Bruce Albert.

Albert is the interlocutor in the recordings with the Yanomami shaman Davi Kopenawa that would become the bestseller “A Queda do Céu” (The Falling Sky), in which Kopenawa warns that white people are destroying the planet through global warming. In 1992, on the eve of the Earth Summit, the then president Fernando Collor had ratified the Yanomami Indigenous Territory (Terra Indígena Yanomami), with 94,000 km2, larger than Portugal, but the gold miners still roam there.

The confluence of the indigenist and environmental agendas led to the founding of ISA as one of the four institutions into which Cedi split. The others were Ação Educativa; Koinonia Presença Ecumênica e Serviço; and Núcleo de Estudos Trabalho e Sociedade.

One of the organization’s mottoes would become: “Socioambiental se escreve junto” (“Socio-environmental is written as one word”). The wordplay, which relies on dropping the hyphen, signaled the interpenetration of activist movements that had usually been at loggerheads: conservationists who paid no attention to traditional populations, and indigenists with eyes only for the preservation of languages and rituals.

Among the 33 founding members were the Cedi anthropologists and leading figures from the organization SOS Mata Atlântica, such as Mário Mantovani and João Paulo Capobianco (currently executive secretary of the Ministry of the Environment/MMA, a post he had already held during Marina Silva’s first stint as minister). ISA would grow into a hyperactive factory of studies and data on Indigenous and environmental issues, feeding activists, politicians and journalists.

The Panará Indians, who in 1975 had been forcibly removed from their lands in the region to the Parque Indígena do Xingu, told André Villas-Bôas in 1991 of their wish to recover their old territory. To underpin the lawsuits that the Panará, with the NDI, would file against the federal government in 1994, ISA carried out a satellite survey of the stretches of forest not yet cleared by white settlers.

The Panará ended up being indemnified by the Brazilian state, creating unprecedented case law, and in 1996 had 4,900 km2 of land returned, in the municipalities of Guarantã do Norte (MT) and Altamira (PA). In March 1997, with ISA’s support, they moved to the new village of Nacypotire, on the Iriri river. The saga gave rise to the book “A Volta dos Índios Gigantes” (The Return of the Giant Indians), with texts by the journalists Lúcio Flávio Pinto, Raimundo Pinto and Ricardo Arnt, plus photos by Pedro Martinelli, another longtime companion.

From then on, the acronym ISA became synonymous with many successes (and a few failures). Among the triumphs, against all odds, is the establishment of production systems and trade routes that bring Baniwa basketry and jiquitaia pepper from the remote Cabeça do Cachorro region to the counters of the best shops in Brazil’s Southeast.

The Alto Rio Negro program was, in fact, Beto’s pet project as a militant anthropologist. ISA turned São Gabriel da Cachoeira (AM) into a dynamo of activity, visited by Indigenous people, researchers, military officers and celebrities such as Milton Nascimento, Gilberto Gil, Fernando Henrique Cardoso, Lula, Bela Gil, Alex Atala and Sebastião Salgado.

The institute’s headquarters in the town, the Curupirão, hosts legions of visitors interested in its dozens of projects in education, fish farming, agroforestry and ecotourism. It was from there that I set out on several reporting trips suggested by Beto.

Among the most memorable: “A exceção e a regra” (2010), on Indigenous secondary education among the Tuyuka of the upper Tiquié river, with the anthropologist Aloisio Cabalzar, and “Yaripo, a montanha sagrada dos ianomâmis” (2017), on an income program in which members of the Yanomami lead tourists to the summit of Pico da Neblina (2,995 m), Brazil’s highest peak, with Marcos Wesley Oliveira.

Many other stories came before and after: “Evento discute biodiversidade amazônica” (1999), on priority areas for conservation; “Plano ameaça 180 mil km2 de florestas” (2000), on FHC’s developmentalism; “Livro põe antropólogos em pé de guerra” (2000), on the controversy surrounding the work of Napoleon Chagnon; “Sementes da concórdia” (2009), about the Rede de Sementes do Xingu; and “Ianomâmis ensinam quais cogumelos podem ser comidos sem risco” (2016).

There were, of course, periods of distance and disagreement, the normal friction between journalists and activists driven by goals that cannot always be reconciled. So it was, for example, with the effort to keep a balance between defenders and opponents of infrastructure projects in sensitive regions, such as the BR-163 highway and Belo Monte. Nothing, however, that could shake confidence in the “civil fortress” commanded by Beto.

The anthropologist is, after all, unusually honest, personally and intellectually. Recounting in the book a clash with FHC over a rule that allowed third parties to contest the demarcation of Indigenous lands, he states bluntly: “In truth, we were wrong. We overdid it. Decree number 1,775 did not reduce a single Indigenous territory, and it ended up lending solidity to later demarcations.”

That same honesty is unstinting in its praise for the few well-intentioned military officers he crossed paths with in the Amazon. Nor does it fail to lament the environmental setbacks that began with the authorization of hydroelectric dams on the Madeira river (2007) and Marina Silva’s departure from the MMA (2008), during Lula’s first governments, and culminated in the licensing of Belo Monte (2010) and the approval of a new Forest Code (2011) that amnestied 470,000 km2 of irregularly cleared forest. To say nothing, of course, of the hecatomb that came with the ecocidal Jair Bolsonaro.

Despite the setbacks, Beto never abandoned his optimism. In 2007, he led a call to extend the Cedi/ISA thematic maps to cover the seven other countries with Amazon forest (Bolivia, Colombia, Ecuador, Guyana, Peru, Venezuela and Suriname).

It had every chance of failing, but Beto proved himself, once again, a patient, supportive, determined and firm organizer. “Maps express power” was the message he carried. With initial funding from the Norwegian Rainforest Foundation, he set up the Rede Amazônica de Informação Socioambiental Georreferenciada (Raisg). Five years after the call went out, the atlas “Amazônia sob Pressão” (Amazonia Under Pressure) was published.

It is one more monument in the style of the encyclopedic PIBão, erected by a discreet activist who heeds the Amazonian Indigenous wisdom that whoever shows off too much attracts sorcery. Today ISA, a collective work, as he likes to say, has two hundred employees in eight offices around the country, a thousand members and 450,000 followers on social media. And an entire life to admire.

Uma Enciclopédia nos Trópicos: Memórias de um Socioambientalista

  • Price R$109.90 (328 pages); R$44.90 (ebook)
  • Authors Beto Ricardo and Ricardo Arnt
  • Publisher Zahar

Why can’t the world’s greatest minds solve the mystery of consciousness? (Guardian)


Illustration by Pete Gamlen

Original article

The long read

Philosophers and scientists have been at war for decades over the question of what makes human beings more than complex robots

by Oliver Burkeman

Wed 21 Jan 2015 06.00 GMT

One spring morning in Tucson, Arizona, in 1994, an unknown philosopher named David Chalmers got up to give a talk on consciousness, by which he meant the feeling of being inside your head, looking out – or, to use the kind of language that might give a neuroscientist an aneurysm, of having a soul. Though he didn’t realise it at the time, the young Australian academic was about to ignite a war between philosophers and scientists, by drawing attention to a central mystery of human life – perhaps the central mystery of human life – and revealing how embarrassingly far they were from solving it.


The scholars gathered at the University of Arizona – for what would later go down as a landmark conference on the subject – knew they were doing something edgy: in many quarters, consciousness was still taboo, too weird and new agey to take seriously, and some of the scientists in the audience were risking their reputations by attending. Yet the first two talks that day, before Chalmers’s, hadn’t proved thrilling. “Quite honestly, they were totally unintelligible and boring – I had no idea what anyone was talking about,” recalled Stuart Hameroff, the Arizona professor responsible for the event. “As the organiser, I’m looking around, and people are falling asleep, or getting restless.” He grew worried. “But then the third talk, right before the coffee break – that was Dave.” With his long, straggly hair and fondness for all-body denim, the 27-year-old Chalmers looked like he’d got lost en route to a Metallica concert. “He comes on stage, hair down to his butt, he’s prancing around like Mick Jagger,” Hameroff said. “But then he speaks. And that’s when everyone wakes up.”

The brain, Chalmers began by pointing out, poses all sorts of problems to keep scientists busy. How do we learn, store memories, or perceive things? How do you know to jerk your hand away from scalding water, or hear your name spoken across the room at a noisy party? But these were all “easy problems”, in the scheme of things: given enough time and money, experts would figure them out. There was only one truly hard problem of consciousness, Chalmers said. It was a puzzle so bewildering that, in the months after his talk, people started dignifying it with capital letters – the Hard Problem of Consciousness – and it’s this: why on earth should all those complicated brain processes feel like anything from the inside? Why aren’t we just brilliant robots, capable of retaining information, of responding to noises and smells and hot saucepans, but dark inside, lacking an inner life? And how does the brain manage it? How could the 1.4kg lump of moist, pinkish-beige tissue inside your skull give rise to something as mysterious as the experience of being that pinkish-beige lump, and the body to which it is attached?

What jolted Chalmers’s audience from their torpor was how he had framed the question. “At the coffee break, I went around like a playwright on opening night, eavesdropping,” Hameroff said. “And everyone was like: ‘Oh! The Hard Problem! The Hard Problem! That’s why we’re here!’” Philosophers had pondered the so-called “mind-body problem” for centuries. But Chalmers’s particular manner of reviving it “reached outside philosophy and galvanised everyone. It defined the field. It made us ask: what the hell is this that we’re dealing with here?”

Two decades later, we know an astonishing amount about the brain: you can’t follow the news for a week without encountering at least one more tale about scientists discovering the brain region associated with gambling, or laziness, or love at first sight, or regret – and that’s only the research that makes the headlines. Meanwhile, the field of artificial intelligence – which focuses on recreating the abilities of the human brain, rather than on what it feels like to be one – has advanced stupendously. But like an obnoxious relative who invites himself to stay for a week and then won’t leave, the Hard Problem remains. When I stubbed my toe on the leg of the dining table this morning, as any student of the brain could tell you, nerve fibres called “C-fibres” shot a message to my spinal cord, sending neurotransmitters to the part of my brain called the thalamus, which activated (among other things) my limbic system. Fine. But how come all that was accompanied by an agonising flash of pain? And what is pain, anyway?

Questions like these, which straddle the border between science and philosophy, make some experts openly angry. They have caused others to argue that conscious sensations, such as pain, don’t really exist, no matter what I felt as I hopped in anguish around the kitchen; or, alternatively, that plants and trees must also be conscious. The Hard Problem has prompted arguments in serious journals about what is going on in the mind of a zombie, or – to quote the title of a famous 1974 paper by the philosopher Thomas Nagel – the question “What is it like to be a bat?” Some argue that the problem marks the boundary not just of what we currently know, but of what science could ever explain. On the other hand, in recent years, a handful of neuroscientists have come to believe that it may finally be about to be solved – but only if we are willing to accept the profoundly unsettling conclusion that computers or the internet might soon become conscious, too.

Next week, the conundrum will move further into public awareness with the opening of Tom Stoppard’s new play, The Hard Problem, at the National Theatre – the first play Stoppard has written for the National since 2006, and the last that the theatre’s head, Nicholas Hytner, will direct before leaving his post in March. The 77-year-old playwright has revealed little about the play’s contents, except that it concerns the question of “what consciousness is and why it exists”, considered from the perspective of a young researcher played by Olivia Vinall. Speaking to the Daily Mail, Stoppard also clarified a potential misinterpretation of the title. “It’s not about erectile dysfunction,” he said.

Stoppard’s work has long focused on grand, existential themes, so the subject is fitting: when conversation turns to the Hard Problem, even the most stubborn rationalists lapse quickly into musings on the meaning of life. Christof Koch, the chief scientific officer at the Allen Institute for Brain Science, and a key player in the Obama administration’s multibillion-dollar initiative to map the human brain, is about as credible as neuroscientists get. But, he told me in December: “I think the earliest desire that drove me to study consciousness was that I wanted, secretly, to show myself that it couldn’t be explained scientifically. I was raised Roman Catholic, and I wanted to find a place where I could say: OK, here, God has intervened. God created souls, and put them into people.” Koch assured me that he had long ago abandoned such improbable notions. Then, not much later, and in all seriousness, he said that on the basis of his recent research he thought it wasn’t impossible that his iPhone might have feelings.


By the time Chalmers delivered his speech in Tucson, science had been vigorously attempting to ignore the problem of consciousness for a long time. The source of the animosity dates back to the 1600s, when René Descartes identified the dilemma that would tie scholars in knots for years to come. On the one hand, Descartes realised, nothing is more obvious and undeniable than the fact that you’re conscious. In theory, everything else you think you know about the world could be an elaborate illusion cooked up to deceive you – at this point, present-day writers invariably invoke The Matrix – but your consciousness itself can’t be illusory. On the other hand, this most certain and familiar of phenomena obeys none of the usual rules of science. It doesn’t seem to be physical. It can’t be observed, except from within, by the conscious person. It can’t even really be described. The mind, Descartes concluded, must be made of some special, immaterial stuff that didn’t abide by the laws of nature; it had been bequeathed to us by God.

This religious and rather hand-wavy position, known as Cartesian dualism, remained the governing assumption into the 18th century and the early days of modern brain study. But it was always bound to grow unacceptable to an increasingly secular scientific establishment that took physicalism – the position that only physical things exist – as its most basic principle. And yet, even as neuroscience gathered pace in the 20th century, no convincing alternative explanation was forthcoming. So little by little, the topic became taboo. Few people doubted that the brain and mind were very closely linked: if you question this, try stabbing your brain repeatedly with a kitchen knife, and see what happens to your consciousness. But how they were linked – or if they were somehow exactly the same thing – seemed a mystery best left to philosophers in their armchairs. As late as 1989, writing in the International Dictionary of Psychology, the British psychologist Stuart Sutherland could irascibly declare of consciousness that “it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written on it.”

It was only in 1990 that Francis Crick, the joint discoverer of the double helix, used his position of eminence to break ranks. Neuroscience was far enough along by now, he declared in a slightly tetchy paper co-written with Christof Koch, that consciousness could no longer be ignored. “It is remarkable,” they began, “that most of the work in both cognitive science and the neurosciences makes no reference to consciousness” – partly, they suspected, “because most workers in these areas cannot see any useful way of approaching the problem”. They presented their own “sketch of a theory”, arguing that certain neurons, firing at certain frequencies, might somehow be the cause of our inner awareness – though it was not clear how.

Illustration by Pete Gamlen

“People thought I was crazy to be getting involved,” Koch recalled. “A senior colleague took me out to lunch and said, yes, he had the utmost respect for Francis, but Francis was a Nobel laureate and a half-god and he could do whatever he wanted, whereas I didn’t have tenure yet, so I should be incredibly careful. Stick to more mainstream science! These fringey things – why not leave them until retirement, when you’re coming close to death, and you can worry about the soul and stuff like that?”

It was around this time that David Chalmers started talking about zombies.


As a child, Chalmers was short-sighted in one eye, and he vividly recalls the day he was first fitted with glasses to rectify the problem. “Suddenly I had proper binocular vision,” he said. “And the world just popped out. It was three-dimensional to me in a way it hadn’t been.” He thought about that moment frequently as he grew older. Of course, you could tell a simple mechanical story about what was going on in the lens of his glasses, his eyeball, his retina, and his brain. “But how does that explain the way the world just pops out like that?” To a physicalist, the glasses-eyeball-retina story is the only story. But to a thinker of Chalmers’s persuasion, it was clear that it wasn’t enough: it told you what the machinery of the eye was doing, but it didn’t begin to explain that sudden, breathtaking experience of depth and clarity. Chalmers’s “zombie” thought experiment is his attempt to show why the mechanical account is not enough – why the mystery of conscious awareness goes deeper than a purely material science can explain.

“Look, I’m not a zombie, and I pray that you’re not a zombie,” Chalmers said, one Sunday before Christmas, “but the point is that evolution could have produced zombies instead of conscious creatures – and it didn’t!” We were drinking espressos in his faculty apartment at New York University, where he recently took up a full-time post at what is widely considered the leading philosophy department in the Anglophone world; boxes of his belongings, shipped over from Australia, lay unpacked around his living-room. Chalmers, now 48, recently cut his hair in a concession to academic respectability, and he wears less denim, but his ideas remain as heavy-metal as ever. The zombie scenario goes as follows: imagine that you have a doppelgänger. This person physically resembles you in every respect, and behaves identically to you; he or she holds conversations, eats and sleeps, looks happy or anxious precisely as you do. The sole difference is that the doppelgänger has no consciousness; this – as opposed to a groaning, blood-spattered walking corpse from a movie – is what philosophers mean by a “zombie”.

Such non-conscious humanoids don’t exist, of course. (Or perhaps it would be better to say that I know I’m not one, anyhow; I could never know for certain that you aren’t.) But the point is that, in principle, it feels as if they could. Evolution might have produced creatures that were atom-for-atom the same as humans, capable of everything humans can do, except with no spark of awareness inside. As Chalmers explained: “I’m talking to you now, and I can see how you’re behaving; I could do a brain scan, and find out exactly what’s going on in your brain – yet it seems it could be consistent with all that evidence that you have no consciousness at all.” If you were approached by me and my doppelgänger, not knowing which was which, not even the most powerful brain scanner in existence could tell us apart. And the fact that one can even imagine this scenario is sufficient to show that consciousness can’t just be made of ordinary physical atoms. So consciousness must, somehow, be something extra – an additional ingredient in nature.

It would be understating things a bit to say that this argument wasn’t universally well-received when Chalmers began to advance it, most prominently in his 1996 book The Conscious Mind. The withering tone of the philosopher Massimo Pigliucci sums up the thousands of words that have been written attacking the zombie notion: “Let’s relegate zombies to B-movies and try to be a little more serious about our philosophy, shall we?” Yes, it may be true that most of us, in our daily lives, think of consciousness as something over and above our physical being – as if your mind were “a chauffeur inside your own body”, to quote the spiritual author Alan Watts. But to accept this as a scientific principle would mean rewriting the laws of physics. Everything we know about the universe tells us that reality consists only of physical things: atoms and their component particles, busily colliding and combining. Above all, critics point out, if this non-physical mental stuff did exist, how could it cause physical things to happen – as when the feeling of pain causes me to jerk my fingers away from the saucepan’s edge?

Nonetheless, just occasionally, science has dropped tantalising hints that this spooky extra ingredient might be real. In the 1970s, at what was then the National Hospital for Nervous Diseases in London, the neurologist Lawrence Weiskrantz encountered a patient, known as “DB”, with a blind spot in his left visual field, caused by brain damage. Weiskrantz showed him patterns of striped lines, positioned so that they fell on his area of blindness, then asked him to say whether the stripes were vertical or horizontal. Naturally, DB protested that he could see no stripes at all. But Weiskrantz insisted that he guess the answers anyway – and DB got them right almost 90% of the time. Apparently, his brain was perceiving the stripes without his mind being conscious of them. One interpretation is that DB was a semi-zombie, with a brain like any other brain, but partially lacking the magical add-on of consciousness.

Chalmers knows how wildly improbable his ideas can seem, and takes this in his stride: at philosophy conferences, he is fond of clambering on stage to sing The Zombie Blues, a lament about the miseries of having no consciousness. (“I act like you act / I do what you do / But I don’t know / What it’s like to be you.”) “The conceit is: wouldn’t it be a drag to be a zombie? Consciousness is what makes life worth living, and I don’t even have that: I’ve got the zombie blues.” The song has improved since its debut more than a decade ago, when he used to try to hold a tune. “Now I’ve realised it sounds better if you just shout,” he said.


Illustration by Pete Gamlen

The consciousness debates have provoked more mudslinging and fury than most in modern philosophy, perhaps because of how baffling the problem is: opposing combatants tend not merely to disagree, but to find each other’s positions manifestly preposterous. An admittedly extreme example concerns the Canadian-born philosopher Ted Honderich, whose book On Consciousness was described, in an article by his fellow philosopher Colin McGinn in 2007, as “banal and pointless”, “excruciating”, “absurd”, running “the full gamut from the mediocre to the ludicrous to the merely bad”. McGinn added, in a footnote: “The review that appears here is not as I originally wrote it. The editors asked me to ‘soften the tone’ of the original [and] I have done so.” (The attack may have been partly motivated by a passage in Honderich’s autobiography, in which he mentions “my small colleague Colin McGinn”; at the time, Honderich told this newspaper he’d enraged McGinn by referring to a girlfriend of his as “not as plain as the old one”.)

McGinn, to be fair, has made a career from such hatchet jobs. But strong feelings, only slightly more politely expressed, are commonplace. Not everybody agrees there is a Hard Problem to begin with – making the whole debate kickstarted by Chalmers an exercise in pointlessness.

Daniel Dennett, the high-profile atheist and professor at Tufts University outside Boston, argues that consciousness, as we think of it, is an illusion: there just isn’t anything in addition to the spongy stuff of the brain, and that spongy stuff doesn’t actually give rise to something called consciousness. Common sense may tell us there’s a subjective world of inner experience – but then common sense told us that the sun orbits the Earth, and that the world was flat. Consciousness, according to Dennett’s theory, is like a conjuring trick: the normal functioning of the brain just makes it look as if there is something non-physical going on. To look for a real, substantive thing called consciousness, Dennett argues, is as silly as insisting that characters in novels, such as Sherlock Holmes or Harry Potter, must be made up of a peculiar substance named “fictoplasm”; the idea is absurd and unnecessary, since the characters do not exist to begin with.

This is the point at which the debate tends to collapse into incredulous laughter and head-shaking: neither camp can quite believe what the other is saying. To Dennett’s opponents, he is simply denying the existence of something everyone knows for certain: their inner experience of sights, smells, emotions and the rest. (Chalmers has speculated, largely in jest, that Dennett himself might be a zombie.) It’s like asserting that cancer doesn’t exist, then claiming you’ve cured cancer; more than one critic of Dennett’s most famous book, Consciousness Explained, has joked that its title ought to be Consciousness Explained Away.

Dennett’s reply is characteristically breezy: explaining things away, he insists, is exactly what scientists do. When physicists first concluded that the only difference between gold and silver was the number of subatomic particles in their atoms, he writes, people could have felt cheated, complaining that their special “goldness” and “silveriness” had been explained away. But everybody now accepts that goldness and silveriness are really just differences in atoms. However hard it feels to accept, we should concede that consciousness is just the physical brain, doing what brains do.

“The history of science is full of cases where people thought a phenomenon was utterly unique, that there couldn’t be any possible mechanism for it, that we might never solve it, that there was nothing in the universe like it,” said Patricia Churchland of the University of California, a self-described “neurophilosopher” and one of Chalmers’s most forthright critics. Churchland’s opinion of the Hard Problem, which she expresses in caustic vocal italics, is that it is nonsense, kept alive by philosophers who fear that science might be about to eliminate one of the puzzles that has kept them gainfully employed for years. Look at the precedents: in the 17th century, scholars were convinced that light couldn’t possibly be physical – that it had to be something occult, beyond the usual laws of nature. Or take life itself: early scientists were convinced that there had to be some magical spirit – the élan vital – that distinguished living beings from mere machines. But there wasn’t, of course. Light is electromagnetic radiation; life is just the label we give to certain kinds of objects that can grow and reproduce. Eventually, neuroscience will show that consciousness is just brain states. Churchland said: “The history of science really gives you perspective on how easy it is to talk ourselves into this sort of thinking – that if my big, wonderful brain can’t envisage the solution, then it must be a really, really hard problem!”

Solutions have regularly been floated: the literature is awash in references to “global workspace theory”, “ego tunnels”, “microtubules”, and speculation that quantum theory may provide a way forward. But the intractability of the arguments has caused some thinkers, such as Colin McGinn, to raise an intriguing if ultimately defeatist possibility: what if we’re just constitutionally incapable of ever solving the Hard Problem? After all, our brains evolved to help us solve down-to-earth problems of survival and reproduction; there is no particular reason to assume they should be capable of cracking every big philosophical puzzle we happen to throw at them. This stance has become known as “mysterianism” – after the 1960s Michigan rock’n’roll band ? and the Mysterians, who themselves borrowed the name from a work of Japanese sci-fi – but the essence of it is that there’s actually no mystery to why consciousness hasn’t been explained: it’s that humans aren’t up to the job. If we struggle to understand what it could possibly mean for the mind to be physical, maybe that’s because we are, to quote the American philosopher Josh Weisberg, in the position of “squirrels trying to understand quantum mechanics”. In other words: “It’s just not going to happen.”


Or maybe it is: in the last few years, several scientists and philosophers, Chalmers and Koch among them, have begun to look seriously again at a viewpoint so bizarre that it has been neglected for more than a century, except among followers of eastern spiritual traditions, or in the kookier corners of the new age. This is “panpsychism”, the dizzying notion that everything in the universe might be conscious, or at least potentially conscious, or conscious when put into certain configurations. Koch concedes that this sounds ridiculous: when he mentions panpsychism, he has written, “I often encounter blank stares of incomprehension.” But when it comes to grappling with the Hard Problem, crazy-sounding theories are an occupational hazard. Besides, panpsychism might help unravel an enigma that has attached to the study of consciousness from the start: if humans have it, and apes have it, and dogs and pigs probably have it, and maybe birds, too – well, where does it stop?

Illustration by Pete Gamlen

Growing up as the child of German-born Catholics, Koch had a dachshund named Purzel. According to the church, because he was a dog, that meant he didn’t have a soul. But he whined when anxious and yelped when injured – “he certainly gave every appearance of having a rich inner life”. These days we don’t much speak of souls, but it is widely assumed that many non-human brains are conscious – that a dog really does feel pain when he is hurt. The problem is that there seems to be no logical reason to draw the line at dogs, or sparrows or mice or insects, or, for that matter, trees or rocks. Since we don’t know how the brains of mammals create consciousness, we have no grounds for assuming it’s only the brains of mammals that do so – or even that consciousness requires a brain at all. Which is how Koch and Chalmers have both found themselves arguing, in the pages of the New York Review of Books, that an ordinary household thermostat or a photodiode, of the kind you might find in your smoke detector, might in principle be conscious.

The argument unfolds as follows: physicists have no problem accepting that certain fundamental aspects of reality – such as space, mass, or electrical charge – just do exist. They can’t be explained as being the result of anything else. Explanations have to stop somewhere. The panpsychist hunch is that consciousness could be like that, too – and that if it is, there is no particular reason to assume that it only occurs in certain kinds of matter.

Koch’s specific twist on this idea, developed with the neuroscientist and psychiatrist Giulio Tononi, is narrower and more precise than traditional panpsychism. It is the argument that anything at all could be conscious, providing that the information it contains is sufficiently interconnected and organised. The human brain certainly fits the bill; so do the brains of cats and dogs, though their consciousness probably doesn’t resemble ours. But in principle the same might apply to the internet, or a smartphone, or a thermostat. (The ethical implications are unsettling: might we owe the same care to conscious machines that we bestow on animals? Koch, for his part, tries to avoid stepping on insects as he walks.)

Unlike the vast majority of musings on the Hard Problem, moreover, Tononi and Koch’s “integrated information theory” has actually been tested. A team of researchers led by Tononi has designed a device that stimulates the brain with electrical voltage, to measure how interconnected and organised – how “integrated” – its neural circuits are. Sure enough, when people fall into a deep sleep, or receive an injection of anaesthetic, as they slip into unconsciousness, the device demonstrates that their brain integration declines, too. Among patients suffering “locked-in syndrome” – who are as conscious as the rest of us – levels of brain integration remain high; among patients in a coma – who aren’t – they don’t. Gather enough of this kind of evidence, Koch argues, and in theory you could take any device, measure the complexity of the information contained in it, then deduce whether or not it was conscious.

But even if one were willing to accept the perplexing claim that a smartphone could be conscious, could you ever know that it was true? Surely only the smartphone itself could ever know that? Koch shrugged. “It’s like black holes,” he said. “I’ve never been in a black hole. Personally, I have no experience of black holes. But the theory [that predicts black holes] seems always to be true, so I tend to accept it.”

Illustration by Pete Gamlen

It would be satisfying for multiple reasons if a theory like this were eventually to vanquish the Hard Problem. On the one hand, it wouldn’t require a belief in spooky mind-substances that reside inside brains; the laws of physics would escape largely unscathed. On the other hand, we wouldn’t need to accept the strange and soulless claim that consciousness doesn’t exist, when it’s so obvious that it does. On the contrary, panpsychism says, it’s everywhere. The universe is throbbing with it.

Last June, several of the most prominent combatants in the consciousness debates – including Chalmers, Churchland and Dennett – boarded a tall-masted yacht for a trip among the ice floes of Greenland. This conference-at-sea was funded by a Russian internet entrepreneur, Dmitry Volkov, the founder of the Moscow Centre for Consciousness Studies. About 30 academics and graduate students, plus crew, spent a week gliding through dark waters, past looming snow-topped mountains and glaciers, in a bracing chill conducive to focused thought, giving the problem of consciousness another shot. In the mornings, they visited islands to go hiking, or examine the ruins of ancient stone huts; in the afternoons, they held conference sessions on the boat. For Chalmers, the setting only sharpened the urgency of the mystery: how could you feel the Arctic wind on your face, take in the visual sweep of vivid greys and whites and greens, and still claim conscious experience was unreal, or that it was simply the result of ordinary physical stuff, behaving ordinarily?

The question was rhetorical. Dennett and Churchland were not converted; indeed, Chalmers has no particular confidence that a consensus will emerge in the next century. “Maybe there’ll be some amazing new development that leaves us all, now, looking like pre-Darwinians arguing about biology,” he said. “But it wouldn’t surprise me in the least if in 100 years, neuroscience is incredibly sophisticated, if we have a complete map of the brain – and yet some people are still saying, ‘Yes, but how does any of that give you consciousness?’ while others are saying ‘No, no, no – that just is the consciousness!’” The Greenland cruise concluded in collegial spirits, and mutual incomprehension.

It would be poetic – albeit deeply frustrating – were it ultimately to prove that the one thing the human mind is incapable of comprehending is itself. An answer must be out there somewhere. And finding it matters: indeed, one could argue that nothing else could ever matter more – since anything at all that matters, in life, only does so as a consequence of its impact on conscious brains. Yet there’s no reason to assume that our brains will be adequate vessels for the voyage towards that answer. Nor that, were we to stumble on a solution to the Hard Problem, on some distant shore where neuroscience meets philosophy, we would even recognise that we’d found it.


  • This article was amended on 21 January 2015. The conference-at-sea was funded by the Russian internet entrepreneur Dmitry Volkov, not Dmitry Itskov as was originally stated. This has been corrected.

The new science of death: ‘There’s something happening in the brain that makes no sense’ (Guardian)

A blurred figure in a tunnel, moving towards a light. Photograph: Gaia Moments/Alamy

Original article

The long read

New research into the dying brain suggests the line between life and death may be less distinct than previously thought

by Alex Blasdel

Tue 2 Apr 2024 05.00 BST

Patient One was 24 years old and pregnant with her third child when she was taken off life support. It was 2014. A couple of years earlier, she had been diagnosed with a disorder that caused an irregular heartbeat, and during her two previous pregnancies she had suffered seizures and faintings. Four weeks into her third pregnancy, she collapsed on the floor of her home. Her mother, who was with her, called 911. By the time an ambulance arrived, Patient One had been unconscious for more than 10 minutes. Paramedics found that her heart had stopped.

After being driven to a hospital where she couldn’t be treated, Patient One was taken to the emergency department at the University of Michigan. There, medical staff had to shock her chest three times with a defibrillator before they could restart her heart. She was placed on an external ventilator and pacemaker, and transferred to the neurointensive care unit, where doctors monitored her brain activity. She was unresponsive to external stimuli, and had a massive swelling in her brain. After she lay in a deep coma for three days, her family decided it was best to take her off life support. It was at that point – after her oxygen was turned off and nurses pulled the breathing tube from her throat – that Patient One became one of the most intriguing scientific subjects in recent history.

For several years, Jimo Borjigin, a professor of neurology at the University of Michigan, had been troubled by the question of what happens to us when we die. She had read about the near-death experiences of certain cardiac-arrest survivors who had undergone extraordinary psychic journeys before being resuscitated. Sometimes, these people reported travelling outside of their bodies towards overwhelming sources of light where they were greeted by dead relatives. Others spoke of coming to a new understanding of their lives, or encountering beings of profound goodness. Borjigin didn’t believe the content of those stories was true – she didn’t think the souls of dying people actually travelled to an afterworld – but she suspected something very real was happening in those patients’ brains. In her own laboratory, she had discovered that rats undergo a dramatic storm of many neurotransmitters, including serotonin and dopamine, after their hearts stop and their brains lose oxygen. She wondered if humans’ near-death experiences might spring from a similar phenomenon, and if it was occurring even in people who couldn’t be revived.

Dying seemed like such an important area of research – we all do it, after all – that Borjigin assumed other scientists had already developed a thorough understanding of what happens to the brain in the process of death. But when she looked at the scientific literature, she found little enlightenment. “To die is such an essential part of life,” she told me recently. “But we knew almost nothing about the dying brain.” So she decided to go back and figure out what had happened inside the brains of people who died at the University of Michigan neurointensive care unit. Among them was Patient One.

At the time Borjigin began her research into Patient One, the scientific understanding of death had reached an impasse. Since the 1960s, advances in resuscitation had helped to revive thousands of people who might otherwise have died. About 10% to 20% of those people brought with them stories of near-death experiences in which they felt their souls or selves departing from their bodies. A handful of those patients even claimed to witness, from above, doctors’ attempts to resuscitate them. According to several international surveys and studies, one in 10 people claims to have had a near-death experience involving cardiac arrest, or a similar experience in circumstances where they may have come close to death. That’s roughly 800 million souls worldwide who may have dipped a toe in the afterlife.

As remarkable as these near-death experiences sounded, they were consistent enough that some scientists began to believe there was truth to them: maybe people really did have minds or souls that existed separately from their living bodies. In the 1970s, a small network of cardiologists, psychiatrists, medical sociologists and social psychologists in North America and Europe began investigating whether near-death experiences proved that dying is not the end of being, and that consciousness can exist independently of the brain. The field of near-death studies was born.

Over the next 30 years, researchers collected thousands of case reports of people who had had near-death experiences. Meanwhile, new technologies and techniques were helping doctors revive more and more people who, in earlier periods of history, would have almost certainly been permanently deceased. “We are now at the point where we have both the tools and the means to scientifically answer the age-old question: What happens when we die?” wrote Sam Parnia, an accomplished resuscitation specialist and one of the world’s leading experts on near-death experiences, in 2006. Parnia himself was devising an international study to test whether patients could have conscious awareness even after they were found clinically dead.

But by 2015, experiments such as Parnia’s had yielded ambiguous results, and the field of near-death studies was not much closer to understanding death than it had been when it was founded four decades earlier. That’s when Borjigin, together with several colleagues, took the first close look at the record of electrical activity in the brain of Patient One after she was taken off life support. What they discovered – in results reported for the first time last year – was almost entirely unexpected, and has the potential to rewrite our understanding of death.

“I believe what we found is only the tip of a vast iceberg,” Borjigin told me. “What’s still beneath the surface is a full account of how dying actually takes place. Because there’s something happening in there, in the brain, that makes no sense.”


For all that science has learned about the workings of life, death remains among the most intractable of mysteries. “At times I have been tempted to believe that the creator has eternally intended this department of nature to remain baffling, to prompt our curiosities and hopes and suspicions all in equal measure,” the philosopher William James wrote in 1909.

The question Borjigin began asking in 2015 – what happens to the brain during death – was first posed a quarter of a millennium earlier. Around 1740, a French military physician reviewed the case of a famous apothecary who, after a “malign fever” and several blood-lettings, fell unconscious and thought he had travelled to the Kingdom of the Blessed. The physician speculated that the apothecary’s experience had been caused by a surge of blood to the brain. But between that early report and the mid-20th century, scientific interest in near-death experiences remained sporadic.

In 1892, the Swiss climber and geologist Albert Heim collected the first systematic accounts of near-death experiences from 30 fellow climbers who had suffered near-fatal falls. In many cases, the climbers underwent a sudden review of their entire past, heard beautiful music, and “fell in a superbly blue heaven containing roseate cloudlets”, Heim wrote. “Then consciousness was painlessly extinguished, usually at the moment of impact.” There were a few more attempts to do research in the early 20th century, but little progress was made in understanding near-death experiences scientifically. Then, in 1975, an American medical student named Raymond Moody published a book called Life After Life.

Sunbeams behind clouds in a vivid sunset sky, reflecting in ocean water. Photograph: Getty Images/Blend Images

In his book, Moody distilled the reports of 150 people who had had intense, life-altering experiences in the moments surrounding a cardiac arrest. Although the reports varied, he found that they often shared one or more common features or themes. The narrative arc of the most detailed of those reports – departing the body and travelling through a long tunnel, having an out-of-body experience, encountering spirits and a being of light, one’s whole life flashing before one’s eyes, and returning to the body from some outer limit – became so canonical that the art critic Robert Hughes could refer to it years later as “the familiar kitsch of near-death experience”. Moody’s book became an international bestseller.

In 1976, the New York Times reported on the burgeoning scientific interest in “life after death” and the “emerging field of thanatology”. The following year, Moody and several fellow thanatologists founded an organisation that became the International Association for Near-Death Studies. In 1981, they printed the inaugural issue of Vital Signs, a magazine for the general reader that was largely devoted to stories of near-death experiences. The following year they began producing the field’s first peer-reviewed journal, which became the Journal of Near-Death Studies. The field was growing, and taking on the trappings of scientific respectability. Reviewing its rise in 1988, the British Journal of Psychiatry captured the field’s animating spirit: “A grand hope has been expressed that, through NDE research, new insights can be gained into the ageless mystery of human mortality and its ultimate significance, and that, for the first time, empirical perspectives on the nature of death may be achieved.”

But near-death studies was already splitting into several schools of belief, whose tensions continue to this day. One influential camp was made up of spiritualists, some of them evangelical Christians, who were convinced that near-death experiences were genuine sojourns in the land of the dead and divine. As researchers, the spiritualists’ aim was to collect as many reports of near-death experience as possible, and to proselytise society about the reality of life after death. Moody was their most important spokesman; he eventually claimed to have had multiple past lives and built a “psychomanteum” in rural Alabama where people could attempt to summon the spirits of the dead by gazing into a dimly lit mirror.

The second, and largest, faction of near-death researchers were the parapsychologists, those interested in phenomena that seemed to undermine the scientific orthodoxy that the mind could not exist independently of the brain. These researchers, who were by and large trained scientists following well established research methods, tended to believe that near-death experiences offered evidence that consciousness could persist after the death of the individual. Many of them were physicians and psychiatrists who had been deeply affected after hearing the near-death stories of patients they had treated in the ICU. Their aim was to find ways to test their theories of consciousness empirically, and to turn near-death studies into a legitimate scientific endeavour.

Finally, there emerged the smallest contingent of near-death researchers, who could be labelled the physicalists. These were scientists, many of whom studied the brain, who were committed to a strictly biological account of near-death experiences. Like dreams, the physicalists argued, near-death experiences might reveal psychological truths, but they did so through hallucinatory fictions that emerged from the workings of the body and the brain. (Indeed, many of the states reported by near-death experiencers can apparently be achieved by taking a hero’s dose of ketamine.) Their basic premise was: no functioning brain means no consciousness, and certainly no life after death. Their task, which Borjigin took up in 2015, was to discover what was happening during near-death experiences on a fundamentally physical level.

Slowly, the spiritualists left the field of research for the loftier domains of Christian talk radio, and the parapsychologists and physicalists started bringing near-death studies closer to the scientific mainstream. Between 1975, when Moody published Life After Life, and 1984, only 17 articles in the PubMed database of scientific publications mentioned near-death experiences. In the following decade, there were 62. In the most recent 10-year span, there were 221. Those articles have appeared everywhere from the Canadian Urological Association Journal to the esteemed pages of The Lancet.
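
Counts of this kind are easy to audit, because PubMed exposes a public search API. The short Python sketch below is a reader’s back-of-envelope check rather than anything from the article: it queries NCBI’s E-utilities for record counts, and the search term and date windows are assumptions, so its output will only approximate the figures quoted above.

    # Sketch: decade-by-decade PubMed counts via NCBI E-utilities.
    # The query string and the date ranges are illustrative assumptions.
    import requests

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_count(term, start_year, end_year):
        """Number of PubMed records matching `term`, by publication date."""
        params = {
            "db": "pubmed",
            "term": term,
            "datetype": "pdat",        # filter on publication date
            "mindate": str(start_year),
            "maxdate": str(end_year),
            "rettype": "count",        # return only the record count
            "retmode": "json",
        }
        resp = requests.get(ESEARCH, params=params, timeout=30)
        resp.raise_for_status()
        return int(resp.json()["esearchresult"]["count"])

    for start, end in [(1975, 1984), (1985, 1994), (2014, 2023)]:
        print(start, end, pubmed_count('"near-death experience"', start, end))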

Today, there is a widespread sense throughout the community of near-death researchers that we are on the verge of great discoveries. Charlotte Martial, a neuroscientist at the University of Liège in Belgium who has done some of the best physicalist work on near-death experiences, hopes we will soon develop a new understanding of the relationship between the internal experience of consciousness and its outward manifestations, for example in coma patients. “We really are in a crucial moment where we have to disentangle consciousness from responsiveness, and maybe question every state that we consider unconscious,” she told me. Parnia, the resuscitation specialist, who studies the physical processes of dying but is also sympathetic to a parapsychological theory of consciousness, has a radically different take on what we are poised to find out. “I think in 50 or 100 years’ time we will have discovered the entity that is consciousness,” he told me. “It will be taken for granted that it wasn’t produced by the brain, and it doesn’t die when you die.”


If the field of near-death studies is at the threshold of new discoveries about consciousness and death, it is in large part because of a revolution in our ability to resuscitate people who have suffered cardiac arrest. Lance Becker has been a leader in resuscitation science for more than 30 years. When he was a young doctor attempting to revive people through CPR in the mid-1980s, senior physicians would often step in to declare his patients dead. “At a certain point, they would just say, ‘OK, that’s enough. Let’s stop. This is unsuccessful. Time of death: 1.37pm,’” he recalled recently. “And that would be the last thing. And one of the things running through my head as a young doctor was, ‘Well, what really happened at 1.37?’”

In a medical setting, “clinical death” is said to occur at the moment the heart stops pumping blood, and the pulse stops. This is widely known as cardiac arrest. (It is different from a heart attack, in which there is a blockage in a heart that’s still pumping.) Loss of oxygen to the brain and other organs generally follows within seconds or minutes, although the complete cessation of activity in the heart and brain – which is often called “flatlining” or, in the case of the latter, “brain death” – may not occur for many minutes or even hours.

For almost all people at all times in history, cardiac arrest was basically the end of the line. That began to change in 1960, when the combination of mouth-to-mouth ventilation, chest compressions and external defibrillation known as cardiopulmonary resuscitation, or CPR, was formalised. Shortly thereafter, a massive campaign was launched to educate clinicians and the public on CPR’s basic techniques, and soon people were being revived in previously unthinkable, if still modest, numbers.

As more and more people were resuscitated, scientists learned that, even in its acute final stages, death is not a point, but a process. After cardiac arrest, blood and oxygen stop circulating through the body, cells begin to break down, and normal electrical activity in the brain gets disrupted. But the organs don’t fail irreversibly right away, and the brain doesn’t necessarily cease functioning altogether. There is often still the possibility of a return to life. In some cases, cell death can be stopped or significantly slowed, the heart can be restarted, and brain function can be restored. In other words, the process of death can be reversed.

It is no longer unheard of for people to be revived even six hours after being declared clinically dead. In 2011, Japanese doctors reported the case of a young woman who was found in a forest one morning after an overdose stopped her heart the previous night; using advanced technology to circulate blood and oxygen through her body, the doctors were able to revive her more than six hours later, and she was able to walk out of the hospital after three weeks of care. In 2019, a British woman named Audrey Schoeman, who was caught in a snowstorm, spent six hours in cardiac arrest before doctors brought her back to life with no evident brain damage.

“I don’t think there’s ever been a more exciting time for the field,” Becker told me. “We’re discovering new drugs, we’re discovering new devices, and we’re discovering new things about the brain.”


The brain – that’s the tricky part. In January 2021, as the Covid-19 pandemic was surging toward what would become its deadliest week on record, Netflix released a documentary series called Surviving Death. In the first episode, some of near-death studies’ most prominent parapsychologists presented the core of their arguments for why they believe near-death experiences show that consciousness exists independently of the brain. “When the heart stops, within 20 seconds or so, you get flatlining, which means no brain activity,” Bruce Greyson, an emeritus professor of psychiatry at the University of Virginia and one of the founding members of the International Association for Near-Death Studies, says in the documentary. “And yet,” he goes on to claim, “people have near-death experiences when they’ve been (quote) ‘flatlined’ for longer than that.”

That is a key tenet of the parapsychologists’ arguments: if there is consciousness without brain activity, then consciousness must dwell somewhere beyond the brain. Some of the parapsychologists speculate that it is a “non-local” force that pervades the universe, like electromagnetism. This force is received by the brain, but is not generated by it, the way a television receives a broadcast.

In order for this argument to hold, something else has to be true: near-death experiences have to happen during death, after the brain shuts down. To prove this, parapsychologists point to a number of rare but astounding cases known as “veridical” near-death experiences, in which patients seem to report details from the operating room that they could only have known if they had been consciously aware during the time they were clinically dead. Dozens of such reports exist. One of the most famous is about a woman who apparently travelled so far outside her body that she was able to spot a shoe on a window ledge in another part of the hospital where she went into cardiac arrest; the shoe was later reportedly found by a nurse.

an antique illustration of an ‘out of body experience’
 Photograph: Chronicle/Alamy

At the very least, Parnia and his colleagues have written, such phenomena are “inexplicable through current neuroscientific models”. Unfortunately for the parapsychologists, however, none of the reports of post-death awareness holds up to strict scientific scrutiny. “There are many claims of this kind, but in my long decades of research into out-of-body and near-death experiences I never met any convincing evidence that this is true,” Sue Blackmore, a well-known researcher into parapsychology who had her own near-death experience as a young woman in 1970, has written.

The case of the shoe, Blackmore pointed out, relied solely on the report of the nurse who claimed to have found it. That is far from the standard of proof the scientific community would require before accepting a claim as radical as the idea that consciousness can travel beyond the body and exist after death. In other cases, there’s not enough evidence to prove that the experiences reported by cardiac arrest survivors happened when their brains were shut down, as opposed to in the period before or after they supposedly “flatlined”. “So far, there is no sufficiently rigorous, convincing empirical evidence that people can observe their surroundings during a near-death experience,” Charlotte Martial, the University of Liège neuroscientist, told me.

The parapsychologists tend to push back by arguing that even if each of the cases of veridical near-death experiences leaves room for scientific doubt, surely the accumulation of dozens of these reports must count for something. But that argument can be turned on its head: if there are so many genuine instances of consciousness surviving death, then why should it have so far proven impossible to catch one empirically?


Perhaps the story to be written about near-death experiences is not that they prove consciousness is radically different from what we thought it was. Instead, it is that the process of dying is far stranger than scientists ever suspected. The spiritualists and parapsychologists are right to insist that something deeply weird is happening to people when they die, but they are wrong to assume it is happening in the next life rather than this one. At least, that is the implication of what Jimo Borjigin found when she investigated the case of Patient One.

In the moments after Patient One was taken off oxygen, there was a surge of activity in her dying brain. Areas that had been nearly silent while she was on life support suddenly thrummed with high-frequency electrical signals called gamma waves. In particular, the parts of the brain that scientists consider a “hot zone” for consciousness became dramatically alive. In one section, the signals remained detectable for more than six minutes. In another, they were 11 to 12 times higher than they had been before Patient One’s ventilator was removed.

“As she died, Patient One’s brain was functioning in a kind of hyperdrive,” Borjigin told me. For about two minutes after her oxygen was cut off, there was an intense synchronisation of her brain waves, a state associated with many cognitive functions, including heightened attention and memory. The synchronisation dampened for about 18 seconds, then intensified again for more than four minutes. It faded for a minute, then came back for a third time.

In those same periods of dying, different parts of Patient One’s brain were suddenly in close communication with each other. The most intense connections started immediately after her oxygen stopped, and lasted for nearly four minutes. There was another burst of connectivity more than five minutes and 20 seconds after she was taken off life support. In particular, areas of her brain associated with processing conscious experience – areas that are active when we move through the waking world, and when we have vivid dreams – were communicating with those involved in memory formation. So were parts of the brain associated with empathy. Even as she slipped irrevocably deeper into death, something that looked astonishingly like life was taking place over several minutes in Patient One’s brain.
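
For readers curious what a claim like “11 to 12 times higher” means operationally: gamma activity is usually quantified as power in a frequency band (conventionally somewhere around 30 to 100 Hz), estimated from the EEG’s power spectrum. The Python sketch below shows a standard Welch-based calculation on synthetic data; the sampling rate, band edges, and toy signal are assumptions for illustration, not Patient One’s recordings or Borjigin’s actual pipeline.

    # Sketch: estimating gamma-band (~30-100 Hz) power from one EEG
    # channel with Welch's method. All signals and parameters below are
    # synthetic placeholders, not data from the study described above.
    import numpy as np
    from scipy.signal import welch

    def band_power(signal, fs, lo=30.0, hi=100.0):
        """Total power spectral density within [lo, hi] Hz."""
        freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
        mask = (freqs >= lo) & (freqs <= hi)
        df = freqs[1] - freqs[0]
        return float(psd[mask].sum() * df)   # rectangle-rule integral

    fs = 256.0                               # placeholder sampling rate (Hz)
    t = np.arange(int(60 * fs)) / fs         # 60 seconds of signal
    rng = np.random.default_rng(0)
    baseline = rng.normal(size=t.size)       # noise standing in for "before"
    surge = baseline + 3.0 * np.sin(2 * np.pi * 40.0 * t)  # add 40 Hz activity

    ratio = band_power(surge, fs) / band_power(baseline, fs)
    print(f"gamma-band power increased {ratio:.1f}x")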

The shadows of anonymous people are seen on a wall
 Photograph: Richard Baker/Corbis/Getty Images

Those glimmers and flashes of something like life contradict the expectations of almost everyone working in the field of resuscitation science and near-death studies. The predominant belief – expressed by Greyson, the psychiatrist and co-founder of the International Association for Near-Death Studies, in the Netflix series Surviving Death – was that as soon as oxygen stops going to the brain, neurological activity falls precipitously. Although a few earlier instances of brain waves had been reported in dying human brains, nothing as detailed and complex as what occurred in Patient One had ever been detected.

Given the levels of activity and connectivity in particular regions of her dying brain, Borjigin believes it’s likely that Patient One had a profound near-death experience with many of its major features: out-of-body sensations, visions of light, feelings of joy or serenity, and moral re-evaluations of one’s life. Of course, Patient One did not recover, so no one can prove that the extraordinary happenings in her dying brain had experiential counterparts. Greyson and one of the other grandees of near-death studies, a Dutch cardiologist named Pim van Lommel, have asserted that Patient One’s brain activity can shed no light on near-death experiences because her heart hadn’t fully flatlined, but that is a self-defeating argument: there is no rigorous empirical evidence that near-death experiences occur in people whose hearts have completely stopped.

At the very least, Patient One’s brain activity – and the activity in the dying brain of another patient Borjigin studied, a 77-year-old woman known as Patient Three – seems to close the door on the argument that the brain always and nearly immediately ceases to function in a coherent manner in the moments after clinical death. “The brain, contrary to everybody’s belief, is actually super active during cardiac arrest,” Borjigin said. Death may be far more alive than we ever thought possible.


Borjigin believes that understanding the dying brain is one of the “holy grails” of neuroscience. “The brain is so resilient, the heart is so resilient, that it takes years of abuse to kill them,” she pointed out. “Why then, without oxygen, can a perfectly healthy person die within 30 minutes, irreversibly?” Although most people would take that result for granted, Borjigin thinks that, on a physical level, it actually makes little sense.

Borjigin hopes that understanding the neurophysiology of death can help us to reverse it. She already has brain activity data from dozens of deceased patients that she is waiting to analyse. But because of the paranormal stigma associated with near-death studies, she says, few research agencies want to grant her funding. “Consciousness is almost a dirty word amongst funders,” she added. “Hardcore scientists think research into it should belong to maybe theology, philosophy, but not in hardcore science. Other people ask, ‘What’s the use? The patients are gonna die anyway, so why study that process? There’s nothing you can do about it.’”

Evidence is already emerging that even total brain death may someday be reversible. In 2019, scientists at Yale University harvested the brains of pigs that had been decapitated in a commercial slaughterhouse four hours earlier. Then they perfused the brains for six hours with a special cocktail of drugs and synthetic blood. Astoundingly, some of the cells in the brains began to show metabolic activity again, and some of the synapses even began firing. The pigs’ brain scans didn’t show the widespread electrical activity that we typically associate with sentience or consciousness. But the fact that there was any activity at all suggests the frontiers of life may one day extend much, much farther into the realms of death than most scientists currently imagine.

Other serious avenues of research into near-death experience are ongoing. Martial and her colleagues at the University of Liège are working on many issues relating to near-death experiences. One is whether people with a history of trauma, or with more creative minds, tend to have such experiences at higher rates than the general population. Another concerns the evolutionary biology of near-death experiences. Why, evolutionarily speaking, should we have such experiences at all? Martial and her colleagues speculate that it may be a form of the phenomenon known as thanatosis, in which creatures throughout the animal kingdom feign death to escape mortal dangers. Other researchers have proposed that the surge of electrical activity in the moments after cardiac arrest is just the final seizure of a dying brain, or have hypothesised that it’s a last-ditch attempt by the brain to restart itself, like jump-starting the engine on a car.

Meanwhile, in parts of the culture where enthusiasm is reserved not for scientific discovery in this world, but for absolution or benediction in the next, the spiritualists, along with sundry other kooks and grifters, are busily peddling their tales of the afterlife. Forget the proverbial tunnel of light: in America in particular, a pipeline of money has been discovered from death’s door, through Christian media, to the New York Times bestseller list and thence to the fawning, gullible armchairs of the nation’s daytime talk shows. First stop, paradise; next stop, Dr Oz.

But there is something that binds many of these people – the physicalists, the parapsychologists, the spiritualists – together. It is the hope that by transcending the current limits of science and of our bodies, we will achieve not a deeper understanding of death, but a longer and more profound experience of life. That, perhaps, is the real attraction of the near-death experience: it shows us what is possible not in the next world, but in this one.


The Terrible Costs of a Phone-Based Childhood (The Atlantic)

theatlantic.com

The environment in which kids grow up today is hostile to human development.

By Jonathan Haidt

Photographs by Maggie Shannon

MARCH 13, 2024


Two teens sit on a bed looking at their phones


Something went suddenly and horribly wrong for adolescents in the early 2010s. By now you’ve likely seen the statistics: Rates of depression and anxiety in the United States—fairly stable in the 2000s—rose by more than 50 percent in many studies from 2010 to 2019. The suicide rate rose 48 percent for adolescents ages 10 to 19. For girls ages 10 to 14, it rose 131 percent.

The problem was not limited to the U.S.: Similar patterns emerged around the same time in Canada, the U.K., Australia, New Zealand, the Nordic countries, and beyond. By a variety of measures and in a variety of countries, the members of Generation Z (born in and after 1996) are suffering from anxiety, depression, self-harm, and related disorders at levels higher than any other generation for which we have data.

The decline in mental health is just one of many signs that something went awry. Loneliness and friendlessness among American teens began to surge around 2012. Academic achievement went down, too. According to “The Nation’s Report Card,” scores in reading and math began to decline for U.S. students after 2012, reversing decades of slow but generally steady increase. PISA, the major international measure of educational trends, shows that declines in math, reading, and science happened globally, also beginning in the early 2010s.

As the oldest members of Gen Z reach their late 20s, their troubles are carrying over into adulthood. Young adults are dating less, having less sex, and showing less interest in ever having children than prior generations. They are more likely to live with their parents. They were less likely to get jobs as teens, and managers say they are harder to work with. Many of these trends began with earlier generations, but most of them accelerated with Gen Z.

Surveys show that members of Gen Z are shyer and more risk averse than previous generations, too, and risk aversion may make them less ambitious. In an interview last May, OpenAI co-founder Sam Altman and Stripe co-founder Patrick Collison noted that, for the first time since the 1970s, none of Silicon Valley’s preeminent entrepreneurs are under 30. “Something has really gone wrong,” Altman said. In a famously young industry, he was baffled by the sudden absence of great founders in their 20s.

Generations are not monolithic, of course. Many young people are flourishing. Taken as a whole, however, Gen Z is in poor mental health and is lagging behind previous generations on many important metrics. And if a generation is doing poorly––if it is more anxious and depressed and is starting families, careers, and important companies at a substantially lower rate than previous generations––then the sociological and economic consequences will be profound for the entire society.

graph showing rates of self-harm in children
Number of emergency-department visits for nonfatal self-harm per 100,000 children (source: Centers for Disease Control and Prevention)

What happened in the early 2010s that altered adolescent development and worsened mental health? Theories abound, but the fact that similar trends are found in many countries worldwide means that events and trends that are specific to the United States cannot be the main story.

I think the answer can be stated simply, although the underlying psychology is complex: Those were the years when adolescents in rich countries traded in their flip phones for smartphones and moved much more of their social lives online—particularly onto social-media platforms designed for virality and addiction. Once young people began carrying the entire internet in their pockets, available to them day and night, it altered their daily experiences and developmental pathways across the board. Friendship, dating, sexuality, exercise, sleep, academics, politics, family dynamics, identity—all were affected. Life changed rapidly for younger children, too, as they began to get access to their parents’ smartphones and, later, got their own iPads, laptops, and even smartphones during elementary school.


As a social psychologist who has long studied social and moral development, I have been involved in debates about the effects of digital technology for years. Typically, the scientific questions have been framed somewhat narrowly, to make them easier to address with data. For example, do adolescents who consume more social media have higher levels of depression? Does using a smartphone just before bedtime interfere with sleep? The answer to these questions is usually found to be yes, although the size of the relationship is often statistically small, which has led some researchers to conclude that these new technologies are not responsible for the gigantic increases in mental illness that began in the early 2010s.

But before we can evaluate the evidence on any one potential avenue of harm, we need to step back and ask a broader question: What is childhood––including adolescence––and how did it change when smartphones moved to the center of it? If we take a more holistic view of what childhood is and what young children, tweens, and teens need to do to mature into competent adults, the picture becomes much clearer. Smartphone-based life, it turns out, alters or interferes with a great number of developmental processes.

The intrusion of smartphones and social media is not the only change that has deformed childhood. There’s an important backstory, beginning as long ago as the 1980s, when we started systematically depriving children and adolescents of freedom, unsupervised play, responsibility, and opportunities for risk taking, all of which promote competence, maturity, and mental health. But the change in childhood accelerated in the early 2010s, when an already independence-deprived generation was lured into a new virtual universe that seemed safe to parents but in fact is more dangerous, in many respects, than the physical world.

My claim is that the new phone-based childhood that took shape roughly 12 years ago is making young people sick and blocking their progress to flourishing in adulthood. We need a dramatic cultural correction, and we need it now.

1. The Decline of Play and Independence

Human brains are extraordinarily large compared with those of other primates, and human childhoods are extraordinarily long, too, to give those large brains time to wire up within a particular culture. A child’s brain is already 90 percent of its adult size by about age 6. The next 10 or 15 years are about learning norms and mastering skills—physical, analytical, creative, and social. As children and adolescents seek out experiences and practice a wide variety of behaviors, the synapses and neurons that are used frequently are retained while those that are used less often disappear. Neurons that fire together wire together, as brain researchers say.
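
“Neurons that fire together wire together” is shorthand for Hebbian learning, and the use-it-or-lose-it retention of synapses is pruning. As a rough illustration of the idea (a toy model, not anything from the article or from actual neuroscience practice), the following Python sketch strengthens connections between units that are active together, lets unused connections decay, and then prunes the weak ones:

    # Toy Hebbian model: connections between units that are active
    # together get stronger; rarely co-active connections decay and
    # are eventually pruned. Purely illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    n, steps, lr, decay = 8, 5000, 0.01, 0.001
    w = np.zeros((n, n))                         # synaptic weights

    for _ in range(steps):
        x = (rng.random(n) < 0.1).astype(float)  # sparse background firing
        if rng.random() < 0.5:
            x[:4] = 1.0                          # units 0-3 habitually fire together
        w += lr * np.outer(x, x)                 # Hebbian strengthening
        w -= decay * w                           # slow decay of unused connections
        np.fill_diagonal(w, 0.0)

    pruned = np.where(w > w.mean(), w, 0.0)      # drop the weak connections
    print(np.round(pruned, 2))                   # the 0-3 block stays densely wired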

Brain development is sometimes said to be “experience-expectant,” because specific parts of the brain show increased plasticity during periods of life when an animal’s brain can “expect” to have certain kinds of experiences. You can see this with baby geese, who will imprint on whatever mother-sized object moves in their vicinity just after they hatch. You can see it with human children, who are able to learn languages quickly and take on the local accent, but only through early puberty; after that, it’s hard to learn a language and sound like a native speaker. There is also some evidence of a sensitive period for cultural learning more generally. Japanese children who spent a few years in California in the 1970s came to feel “American” in their identity and ways of interacting only if they attended American schools for a few years between ages 9 and 15. If they left before age 9, there was no lasting impact. If they didn’t arrive until they were 15, it was too late; they didn’t come to feel American.

Human childhood is an extended cultural apprenticeship with different tasks at different ages all the way through puberty. Once we see it this way, we can identify factors that promote or impede the right kinds of learning at each age. For children of all ages, one of the most powerful drivers of learning is the strong motivation to play. Play is the work of childhood, and all young mammals have the same job: to wire up their brains by playing vigorously and often, practicing the moves and skills they’ll need as adults. Kittens will play-pounce on anything that looks like a mouse tail. Human children will play games such as tag and sharks and minnows, which let them practice both their predator skills and their escaping-from-predator skills. Adolescents will play sports with greater intensity, and will incorporate playfulness into their social interactions—flirting, teasing, and developing inside jokes that bond friends together. Hundreds of studies on young rats, monkeys, and humans show that young mammals want to play, need to play, and end up socially, cognitively, and emotionally impaired when they are deprived of play.

One crucial aspect of play is physical risk taking. Children and adolescents must take risks and fail—often—in environments in which failure is not very costly. This is how they extend their abilities, overcome their fears, learn to estimate risk, and learn to cooperate in order to take on larger challenges later. The ever-present possibility of getting hurt while running around, exploring, play-fighting, or getting into a real conflict with another group adds an element of thrill, and thrilling play appears to be the most effective kind for overcoming childhood anxieties and building social, emotional, and physical competence. The desire for risk and thrill increases in the teen years, when failure might carry more serious consequences. Children of all ages need to choose the risk they are ready for at a given moment. Young people who are deprived of opportunities for risk taking and independent exploration will, on average, develop into more anxious and risk-averse adults.

Human childhood and adolescence evolved outdoors, in a physical world full of dangers and opportunities. Its central activities––play, exploration, and intense socializing––were largely unsupervised by adults, allowing children to make their own choices, resolve their own conflicts, and take care of one another. Shared adventures and shared adversity bound young people together into strong friendship clusters within which they mastered the social dynamics of small groups, which prepared them to master bigger challenges and larger groups later on.

And then we changed childhood.

The changes started slowly in the late 1970s and ’80s, before the arrival of the internet, as many parents in the U.S. grew fearful that their children would be harmed or abducted if left unsupervised. Such crimes have always been extremely rare, but they loomed larger in parents’ minds thanks in part to rising levels of street crime combined with the arrival of cable TV, which enabled round-the-clock coverage of missing-children cases. A general decline in social capital––the degree to which people knew and trusted their neighbors and institutions––exacerbated parental fears. Meanwhile, rising competition for college admissions encouraged more intensive forms of parenting. In the 1990s, American parents began pulling their children indoors or insisting that afternoons be spent in adult-run enrichment activities. Free play, independent exploration, and teen-hangout time declined.

In recent decades, seeing unchaperoned children outdoors has become so novel that when one is spotted in the wild, some adults feel it is their duty to call the police. In 2015, the Pew Research Center found that parents, on average, believed that children should be at least 10 years old to play unsupervised in front of their house, and that kids should be 14 before being allowed to go unsupervised to a public park. Most of these same parents had enjoyed joyous and unsupervised outdoor play by the age of 7 or 8.

But overprotection is only part of the story. The transition away from a more independent childhood was facilitated by steady improvements in digital technology, which made it easier and more inviting for young people to spend a lot more time at home, indoors, and alone in their rooms. Eventually, tech companies got access to children 24/7. They developed exciting virtual activities, engineered for “engagement,” that are nothing like the real-world experiences young brains evolved to expect.

Triptych: teens on their phones at the mall, park, and bedroom

2. The Virtual World Arrives in Two Waves

The internet, which now dominates the lives of young people, arrived in two waves of linked technologies. The first one did little harm to Millennials. The second one swallowed Gen Z whole.

The first wave came ashore in the 1990s with the arrival of dial-up internet access, which made personal computers good for something beyond word processing and basic games. By 2003, 55 percent of American households had a computer with (slow) internet access. Rates of adolescent depression, loneliness, and other measures of poor mental health did not rise in this first wave. If anything, they went down a bit. Millennial teens (born 1981 through 1995), who were the first to go through puberty with access to the internet, were psychologically healthier and happier, on average, than their older siblings or parents in Generation X (born 1965 through 1980).

The second wave began to rise in the 2000s, though its full force didn’t hit until the early 2010s. It began rather innocently with the introduction of social-media platforms that helped people connect with their friends. Posting and sharing content became much easier with sites such as Friendster (launched in 2003), Myspace (2003), and Facebook (2004).

Teens embraced social media soon after it came out, but the time they could spend on these sites was limited in those early years because the sites could only be accessed from a computer, often the family computer in the living room. Young people couldn’t access social media (and the rest of the internet) from the school bus, during class time, or while hanging out with friends outdoors. Many teens in the early-to-mid-2000s had cellphones, but these were basic phones (many of them flip phones) that had no internet access. Typing on them was difficult––they had only number keys. Basic phones were tools that helped Millennials meet up with one another in person or talk with each other one-on-one. I have seen no evidence to suggest that basic cellphones harmed the mental health of Millennials.

It was not until the introduction of the iPhone (2007), the App Store (2008), and high-speed internet (which reached 50 percent of American homes in 2007)—and the corresponding pivot to mobile made by many providers of social media, video games, and porn—that it became possible for adolescents to spend nearly every waking moment online. The extraordinary synergy among these innovations was what powered the second technological wave. In 2011, only 23 percent of teens had a smartphone. By 2015, that number had risen to 73 percent, and a quarter of teens said they were online “almost constantly.” Their younger siblings in elementary school didn’t usually have their own smartphones, but after its release in 2010, the iPad quickly became a staple of young children’s daily lives. It was in this brief period, from 2010 to 2015, that childhood in America (and many other countries) was rewired into a form that was more sedentary, solitary, virtual, and incompatible with healthy human development.

3. Techno-optimism and the Birth of the Phone-Based Childhood

The phone-based childhood created by that second wave—including not just smartphones themselves, but all manner of internet-connected devices, such as tablets, laptops, video-game consoles, and smartwatches—arrived near the end of a period of enormous optimism about digital technology. The internet came into our lives in the mid-1990s, soon after the fall of the Soviet Union. By the end of that decade, it was widely thought that the web would be an ally of democracy and a slayer of tyrants. When people are connected to each other, and to all the information in the world, how could any dictator keep them down?

In the 2000s, Silicon Valley and its world-changing inventions were a source of pride and excitement in America. Smart and ambitious young people around the world wanted to move to the West Coast to be part of the digital revolution. Tech-company founders such as Steve Jobs and Sergey Brin were lauded as gods, or at least as modern Prometheans, bringing humans godlike powers. The Arab Spring bloomed in 2011 with the help of decentralized social platforms, including Twitter and Facebook. When pundits and entrepreneurs talked about the power of social media to transform society, it didn’t sound like a dark prophecy.

You have to put yourself back in this heady time to understand why adults acquiesced so readily to the rapid transformation of childhood. Many parents had concerns, even then, about what their children were doing online, especially because of the internet’s ability to put children in contact with strangers. But there was also a lot of excitement about the upsides of this new digital world. If computers and the internet were the vanguards of progress, and if young people––widely referred to as “digital natives”––were going to live their lives entwined with these technologies, then why not give them a head start? I remember how exciting it was to see my 2-year-old son master the touch-and-swipe interface of my first iPhone in 2008. I thought I could see his neurons being woven together faster as a result of the stimulation it brought to his brain, compared to the passivity of watching television or the slowness of building a block tower. I thought I could see his future job prospects improving.

Touchscreen devices were also a godsend for harried parents. Many of us discovered that we could have peace at a restaurant, on a long car trip, or at home while making dinner or replying to emails if we just gave our children what they most wanted: our smartphones and tablets. We saw that everyone else was doing it and figured it must be okay.

It was the same for older children, desperate to join their friends on social-media platforms, where the minimum age to open an account was set by law at 13, even though no research had been done to establish the safety of these products for minors. Because the platforms did nothing (and still do nothing) to verify the stated age of new-account applicants, any 10-year-old could open multiple accounts without parental permission or knowledge, and many did. Facebook and later Instagram became places where many sixth and seventh graders were hanging out and socializing. If parents did find out about these accounts, it was too late. Nobody wanted their child to be isolated and alone, so parents rarely forced their children to shut down their accounts.

We had no idea what we were doing.

4. The High Cost of a Phone-Based Childhood

In Walden, his 1854 reflection on simple living, Henry David Thoreau wrote, “The cost of a thing is the amount of … life which is required to be exchanged for it, immediately or in the long run.” It’s an elegant formulation of what economists would later call the opportunity cost of any choice—all of the things you can no longer do with your money and time once you’ve committed them to something else. So it’s important that we grasp just how much of a young person’s day is now taken up by their devices.

The numbers are hard to believe. The most recent Gallup data show that American teens spend about five hours a day just on social-media platforms (including watching videos on TikTok and YouTube). Add in all the other phone- and screen-based activities, and the number rises to somewhere between seven and nine hours a day, on average. The numbers are even higher in single-parent and low-income families, and among Black, Hispanic, and Native American families.

These very high numbers do not include time spent in front of screens for school or homework, nor do they include all the time adolescents spend paying only partial attention to events in the real world while thinking about what they’re missing on social media or waiting for their phones to ping. Pew reports that in 2022, one-third of teens said they were on one of the major social-media sites “almost constantly,” and nearly half said the same of the internet in general. For these heavy users, nearly every waking hour is an hour absorbed, in full or in part, by their devices.
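
Back-of-envelope arithmetic makes Thoreau’s accounting concrete. The Python snippet below uses the seven-to-nine-hour range cited above and assumes roughly 16 waking hours a day; it is an illustration, not a formal estimate.

    # Rough opportunity-cost arithmetic for teen screen time, using the
    # 7-9 hour daily range quoted above and an assumed 16 waking hours.
    screen_hours_per_day = 8          # midpoint of the cited 7-9 hours
    waking_hours_per_day = 16         # assuming ~8 hours of sleep

    share_of_waking_life = screen_hours_per_day / waking_hours_per_day
    hours_per_year = screen_hours_per_day * 365

    print(f"{share_of_waking_life:.0%} of waking hours")  # -> 50%
    print(f"{hours_per_year:,} hours per year")           # -> 2,920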

overhead image of teens hands with phones

In Thoreau’s terms, how much of life is exchanged for all this screen time? Arguably, most of it. Everything else in an adolescent’s day must get squeezed down or eliminated entirely to make room for the vast amount of content that is consumed, and for the hundreds of “friends,” “followers,” and other network connections that must be serviced with texts, posts, comments, likes, snaps, and direct messages. I recently surveyed my students at NYU, and most of them reported that the very first thing they do when they open their eyes in the morning is check their texts, direct messages, and social-media feeds. It’s also the last thing they do before they close their eyes at night. And it’s a lot of what they do in between.

The amount of time that adolescents spend sleeping declined in the early 2010s, and many studies tie sleep loss directly to the use of devices around bedtime, particularly when they’re used to scroll through social media. Exercise declined, too, which is unfortunate because exercise, like sleep, improves both mental and physical health. Book reading has been declining for decades, pushed aside by digital alternatives, but the decline, like so much else, sped up in the early 2010s. With passive entertainment always available, adolescent minds likely wander less than they used to; contemplation and imagination might be placed on the list of things winnowed down or crowded out.

But perhaps the most devastating cost of the new phone-based childhood was the collapse of time spent interacting with other people face-to-face. A study of how Americans spend their time found that, before 2010, young people (ages 15 to 24) reported spending far more time with their friends (about two hours a day, on average, not counting time together at school) than did older people (who spent just 30 to 60 minutes with friends). Time with friends began decreasing for young people in the 2000s, but the drop accelerated in the 2010s, while it barely changed for older people. By 2019, young people’s time with friends had dropped to just 67 minutes a day. It turns out that Gen Z had been socially distancing for many years and had mostly completed the project by the time COVID-19 struck.

You might question the importance of this decline. After all, isn’t much of this online time spent interacting with friends through texting, social media, and multiplayer video games? Isn’t that just as good?

Some of it surely is, and virtual interactions offer unique benefits too, especially for young people who are geographically or socially isolated. But in general, the virtual world lacks many of the features that make human interactions in the real world nutritious, as we might say, for physical, social, and emotional development. In particular, real-world relationships and social interactions are characterized by four features—typical for hundreds of thousands of years—that online interactions either distort or erase.

First, real-world interactions are embodied, meaning that we use our hands and facial expressions to communicate, and we learn to respond to the body language of others. Virtual interactions, in contrast, mostly rely on language alone. No matter how many emojis are offered as compensation, the elimination of communication channels for which we have eons of evolutionary programming is likely to produce adults who are less comfortable and less skilled at interacting in person.

Second, real-world interactions are synchronous; they happen at the same time. As a result, we learn subtle cues about timing and conversational turn taking. Synchronous interactions make us feel closer to the other person because that’s what getting “in sync” does. Texts, posts, and many other virtual interactions lack synchrony. There is less real laughter, more room for misinterpretation, and more stress after a comment that gets no immediate response.

Third, real-world interactions primarily involve one-to-one communication, or sometimes one-to-several. But many virtual communications are broadcast to a potentially huge audience. Online, each person can engage in dozens of asynchronous interactions in parallel, which interferes with the depth achieved in all of them. The sender’s motivations are different, too: With a large audience, one’s reputation is always on the line; an error or poor performance can damage social standing with large numbers of peers. These communications thus tend to be more performative and anxiety-inducing than one-to-one conversations.

Finally, real-world interactions usually take place within communities that have a high bar for entry and exit, so people are strongly motivated to invest in relationships and repair rifts when they happen. But in many virtual networks, people can easily block others or quit when they are displeased. Relationships within such networks are usually more disposable.

These unsatisfying and anxiety-producing features of life online should be recognizable to most adults. Online interactions can bring out antisocial behavior that people would never display in their offline communities. But if life online takes a toll on adults, just imagine what it does to adolescents in the early years of puberty, when their “experience expectant” brains are rewiring based on feedback from their social interactions.

Kids going through puberty online are likely to experience far more social comparison, self-consciousness, public shaming, and chronic anxiety than adolescents in previous generations, which could potentially set developing brains into a habitual state of defensiveness. The brain contains systems that are specialized for approach (when opportunities beckon) and withdrawal (when threats appear or seem likely). People can be in what we might call “discover mode” or “defend mode” at any moment, but generally not both. The two systems together form a mechanism for quickly adapting to changing conditions, like a thermostat that can activate either a heating system or a cooling system as the temperature fluctuates. Some people’s internal thermostats are generally set to discover mode, and they flip into defend mode only when clear threats arise. These people tend to see the world as full of opportunities. They are happier and less anxious. Other people’s internal thermostats are generally set to defend mode, and they flip into discover mode only when they feel unusually safe. They tend to see the world as full of threats and are more prone to anxiety and depressive disorders.

graph showing rates of disabilities in US college freshman
Percentage of U.S. college freshmen reporting various kinds of disabilities and disorders (source: Higher Education Research Institute)

A simple way to understand the differences between Gen Z and previous generations is that people born in and after 1996 have internal thermostats that were shifted toward defend mode. This is why life on college campuses changed so suddenly when Gen Z arrived, beginning around 2014. Students began requesting “safe spaces” and trigger warnings. They were highly sensitive to “microaggressions” and sometimes claimed that words were “violence.” These trends mystified those of us in older generations at the time, but in hindsight, it all makes sense. Gen Z students found words, ideas, and ambiguous social encounters more threatening than had previous generations of students because we had fundamentally altered their psychological development.

5. So Many Harms

The debate around adolescents’ use of smartphones and social media typically revolves around mental health, and understandably so. But the harms that have resulted from transforming childhood so suddenly and heedlessly go far beyond mental health. I’ve touched on some of them—social awkwardness, reduced self-confidence, and a more sedentary childhood. Here are three additional harms.

Fragmented Attention, Disrupted Learning

Staying on task while sitting at a computer is hard enough for an adult with a fully developed prefrontal cortex. It is far more difficult for adolescents sitting in front of their laptops trying to do homework. They are probably less intrinsically motivated to stay on task. They’re certainly less able, given their undeveloped prefrontal cortex, and hence it’s easy for any company with an app to lure them away with an offer of social validation or entertainment. Their phones are pinging constantly—one study found that the typical adolescent now gets 237 notifications a day, roughly 15 every waking hour. Sustained attention is essential for doing almost anything big, creative, or valuable, yet young people find their attention chopped up into little bits by notifications offering the possibility of high-pleasure, low-effort digital experiences.

It even happens in the classroom. Studies confirm that when students have access to their phones during class time, they use them, especially for texting and checking social media, and their grades and learning suffer. This might explain why benchmark test scores began to decline in the U.S. and around the world in the early 2010s—well before the pandemic hit.

Addiction and Social Withdrawal

The neural basis of behavioral addiction to social media or video games is not exactly the same as chemical addiction to cocaine or opioids. Nonetheless, they all involve abnormally heavy and sustained activation of dopamine neurons and reward pathways. Over time, the brain adapts to these high levels of dopamine; when the child is not engaged in digital activity, their brain doesn’t have enough dopamine, and the child experiences withdrawal symptoms. These generally include anxiety, insomnia, and intense irritability. Kids with these kinds of behavioral addictions often become surly and aggressive, and withdraw from their families into their bedrooms and devices.

Social-media and gaming platforms were designed to hook users. How successful are they? How many kids suffer from digital addictions?

The main addiction risks for boys seem to be video games and porn. “Internet gaming disorder,” which was added to the main diagnosis manual of psychiatry in 2013 as a condition for further study, describes “significant impairment or distress” in several aspects of life, along with many hallmarks of addiction, including an inability to reduce usage despite attempts to do so. Estimates for the prevalence of IGD range from 7 to 15 percent among adolescent boys and young men. As for porn, a nationally representative survey of American adults published in 2019 found that 7 percent of American men agreed or strongly agreed with the statement “I am addicted to pornography”—and the rates were higher for the youngest men.

Girls have much lower rates of addiction to video games and porn, but they use social media more intensely than boys do. A study of teens in 29 nations found that between 5 and 15 percent of adolescents engage in what is called “problematic social media use,” which includes symptoms such as preoccupation, withdrawal symptoms, neglect of other areas of life, and lying to parents and friends about time spent on social media. That study did not break down results by gender, but many others have found that rates of “problematic use” are higher for girls.

I don’t want to overstate the risks: Most teens do not become addicted to their phones and video games. But across multiple studies and across genders, rates of problematic use come out in the ballpark of 5 to 15 percent. Is there any other consumer product that parents would let their children use relatively freely if they knew that something like one in 10 kids would end up with a pattern of habitual and compulsive use that disrupted various domains of life and looked a lot like an addiction?

The Decay of Wisdom and the Loss of Meaning

During that crucial sensitive period for cultural learning, from roughly ages 9 through 15, we should be especially thoughtful about who is socializing our children for adulthood. Instead, that’s when most kids get their first smartphone and sign themselves up (with or without parental permission) to consume rivers of content from random strangers. Much of that content is produced by other adolescents, in blocks of a few minutes or a few seconds.

This rerouting of enculturating content has created a generation that is largely cut off from older generations and, to some extent, from the accumulated wisdom of humankind, including knowledge about how to live a flourishing life. Adolescents spend less time steeped in their local or national culture. They are coming of age in a confusing, placeless, ahistorical maelstrom of 30-second stories curated by algorithms designed to mesmerize them. Without solid knowledge of the past and the filtering of good ideas from bad––a process that plays out over many generations––young people will be more prone to believe whatever terrible ideas become popular around them, which might explain why videos showing young people reacting positively to Osama bin Laden’s thoughts about America were trending on TikTok last fall.

All this is made worse by the fact that so much of digital public life is an unending supply of micro dramas about somebody somewhere in our country of 340 million people who did something that can fuel an outrage cycle, only to be pushed aside by the next. It doesn’t add up to anything and leaves behind only a distorted sense of human nature and affairs.

When our public life becomes fragmented, ephemeral, and incomprehensible, it is a recipe for anomie, or normlessness. The great French sociologist Émile Durkheim showed long ago that a society that fails to bind its people together with some shared sense of sacredness and common respect for rules and norms is not a society of great individual freedom; it is, rather, a place where disoriented individuals have difficulty setting goals and exerting themselves to achieve them. Durkheim argued that anomie was a major driver of suicide rates in European countries. Modern scholars continue to draw on his work to understand suicide rates today.

graph showing rates of young people who struggle with mental health
Percentage of U.S. high-school seniors who agreed with the statement “Life often seems meaningless.” (Source: Monitoring the Future)

Durkheim’s observations are crucial for understanding what happened in the early 2010s. A long-running survey of American teens found that, from 1990 to 2010, high-school seniors became slightly less likely to agree with statements such as “Life often seems meaningless.” But as soon as they adopted a phone-based life and many began to live in the whirlpool of social media, where no stability can be found, every measure of despair increased. From 2010 to 2019, the number who agreed that their lives felt “meaningless” increased by about 70 percent, to more than one in five.

6. Young People Don’t Like Their Phone-Based Lives

How can I be confident that the epidemic of adolescent mental illness was kicked off by the arrival of the phone-based childhood? Skeptics point to other events as possible culprits, including the 2008 global financial crisis, global warming, the 2012 Sandy Hook school shooting and the subsequent active-shooter drills, rising academic pressures, and the opioid epidemic. But while these events might have been contributing factors in some countries, none can explain both the timing and international scope of the disaster.

An additional source of evidence comes from Gen Z itself. With all the talk of regulating social media, raising age limits, and getting phones out of schools, you might expect to find many members of Gen Z writing and speaking out in opposition. I’ve looked for such arguments and found hardly any. In contrast, many young adults tell stories of devastation.

Freya India, a 24-year-old British essayist who writes about girls, explains how social-media sites carry girls off to unhealthy places: “It seems like your child is simply watching some makeup tutorials, following some mental health influencers, or experimenting with their identity. But let me tell you: they are on a conveyor belt to someplace bad. Whatever insecurity or vulnerability they are struggling with, they will be pushed further and further into it.” She continues:

Gen Z were the guinea pigs in this uncontrolled global social experiment. We were the first to have our vulnerabilities and insecurities fed into a machine that magnified and refracted them back at us, all the time, before we had any sense of who we were. We didn’t just grow up with algorithms. They raised us. They rearranged our faces. Shaped our identities. Convinced us we were sick.

Rikki Schlott, a 23-year-old American journalist and co-author of The Canceling of the American Mind, writes,

The day-to-day life of a typical teen or tween today would be unrecognizable to someone who came of age before the smartphone arrived. Zoomers are spending an average of 9 hours daily in this screen-time doom loop—desperate to forget the gaping holes they’re bleeding out of, even if just for … 9 hours a day. Uncomfortable silence could be time to ponder why they’re so miserable in the first place. Drowning it out with algorithmic white noise is far easier.

A 27-year-old man who spent his adolescent years addicted (his word) to video games and pornography sent me this reflection on what that did to him:

I missed out on a lot of stuff in life—a lot of socialization. I feel the effects now: meeting new people, talking to people. I feel that my interactions are not as smooth and fluid as I want. My knowledge of the world (geography, politics, etc.) is lacking. I didn’t spend time having conversations or learning about sports. I often feel like a hollow operating system.

Or consider what Facebook found in a research project involving focus groups of young people, revealed in 2021 by the whistleblower Frances Haugen: “Teens blame Instagram for increases in the rates of anxiety and depression among teens,” an internal document said. “This reaction was unprompted and consistent across all groups.”

How can it be that an entire generation is hooked on consumer products that so few praise and so many ultimately regret using? Because smartphones and especially social media have put members of Gen Z and their parents into a series of collective-action traps. Once you understand the dynamics of these traps, the escape routes become clear.

7. Collective-Action Problems

Social-media companies such as Meta, TikTok, and Snap are often compared to tobacco companies, but that’s not really fair to the tobacco industry. It’s true that companies in both industries marketed harmful products to children and tweaked their products for maximum customer retention (that is, addiction), but there’s a big difference: Teens could and did choose, in large numbers, not to smoke. Even at the peak of teen cigarette use, in 1997, nearly two-thirds of high-school students did not smoke.

Social media, in contrast, applies a lot more pressure on nonusers, at a much younger age and in a more insidious way. Once a few students in any middle school lie about their age and open accounts at age 11 or 12, they start posting photos and comments about themselves and other students. Drama ensues. The pressure on everyone else to join becomes intense. Even a girl who knows, consciously, that Instagram can foster beauty obsession, anxiety, and eating disorders might sooner take those risks than accept the seeming certainty of being out of the loop, clueless, and excluded. And indeed, if she resists while most of her classmates do not, she might, in fact, be marginalized, which puts her at risk for anxiety and depression, though via a different pathway than the one taken by those who use social media heavily. In this way, social media accomplishes a remarkable feat: It even harms adolescents who do not use it.

A recent study led by the University of Chicago economist Leonardo Bursztyn captured the dynamics of the social-media trap precisely. The researchers recruited more than 1,000 college students and asked them how much they’d need to be paid to deactivate their accounts on either Instagram or TikTok for four weeks. That’s a standard economist’s question to try to compute the net value of a product to society. On average, students said they’d need to be paid roughly $50 ($59 for TikTok, $47 for Instagram) to deactivate whichever platform they were asked about. Then the experimenters told the students that they were going to try to get most of the others in their school to deactivate that same platform, offering to pay them to do so as well, and asked, Now how much would you have to be paid to deactivate, if most others did so? The answer, on average, was less than zero. In each case, most students were willing to pay to have that happen.

Social media is all about network effects. Most students are only on it because everyone else is too. Most of them would prefer that nobody be on these platforms. Later in the study, students were asked directly, “Would you prefer to live in a world without Instagram [or TikTok]?” A majority of students said yes––58 percent for each app.

This is the textbook definition of what social scientists call a collective-action problem. It’s what happens when a group would be better off if everyone in the group took a particular action, but each actor is deterred from acting, because unless the others do the same, the personal cost outweighs the benefit. Fishermen considering limiting their catch to avoid wiping out the local fish population are caught in this same kind of trap. If no one else does it too, they just lose profit.
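
The logic of this trap can be made concrete with a tiny payoff sketch, below, in Python. Everything in it is a stand-in: the utility function and the INTRINSIC_COST and EXCLUSION_COST values are invented for illustration and are not taken from the Bursztyn study. The point is only the structure: when everyone is on the platform, quitting alone makes any one student worse off, even though everyone quitting together would leave each of them better off.

    # Toy model of the social-media collective-action trap described above.
    # All numbers are assumptions chosen for illustration, not measured values.

    def utility(on_platform: bool, peer_share: float) -> float:
        """Payoff for one student, given the share of peers on the platform."""
        INTRINSIC_COST = 20.0   # assumed harm from heavy use (time, anxiety)
        EXCLUSION_COST = 80.0   # assumed harm from being "out of the loop"
        if on_platform:
            return -INTRINSIC_COST
        return -EXCLUSION_COST * peer_share

    # Everyone is on (peer_share = 1.0): each student prefers to stay on.
    assert utility(True, 1.0) > utility(False, 1.0)    # -20 > -80

    # Everyone quits together (peer_share = 0.0): each is better off than before.
    assert utility(False, 0.0) > utility(True, 1.0)    # 0 > -20
    # Yet no individual can reach that outcome alone: that is the trap.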

Cigarettes trapped individual smokers with a biological addiction. Social media has trapped an entire generation in a collective-action problem. Early app developers deliberately and knowingly exploited the psychological weaknesses and insecurities of young people to pressure them to consume a product that, upon reflection, many wish they could use less, or not at all.

8. Four Norms to Break Four Traps

Young people and their parents are stuck in at least four collective-action traps. Each is hard to escape for an individual family, but escape becomes much easier if families, schools, and communities coordinate and act together. Here are four norms that would roll back the phone-based childhood. I believe that any community that adopts all four will see substantial improvements in youth mental health within two years.

No smartphones before high school 

The trap here is that each child thinks they need a smartphone because “everyone else” has one, and many parents give in because they don’t want their child to feel excluded. But if no one else had a smartphone—or even if, say, only half of the child’s sixth-grade class had one—parents would feel more comfortable providing a basic flip phone (or no phone at all). Delaying round-the-clock internet access until ninth grade (around age 14) as a national or community norm would help to protect adolescents during the very vulnerable first few years of puberty. According to a 2022 British study, these are the years when social-media use is most correlated with poor mental health. Family policies about tablets, laptops, and video-game consoles should be aligned with smartphone restrictions to prevent overuse of other screen activities.

No social media before 16

The trap here, as with smartphones, is that each adolescent feels a strong need to open accounts on TikTok, Instagram, Snapchat, and other platforms primarily because that’s where most of their peers are posting and gossiping. But if the majority of adolescents were not on these accounts until they were 16, families and adolescents could more easily resist the pressure to sign up. The delay would not mean that kids younger than 16 could never watch videos on TikTok or YouTube—only that they could not open accounts, give away their data, post their own content, and let algorithms get to know them and their preferences.

Phone-free schools

Most schools claim that they ban phones, but this usually just means that students aren’t supposed to take their phone out of their pocket during class. Research shows that most students do use their phones during class time. They also use them during lunchtime, free periods, and breaks between classes––times when students could and should be interacting with their classmates face-to-face. The only way to get students’ minds off their phones during the school day is to require all students to put their phones (and other devices that can send or receive texts) into a phone locker or locked pouch at the start of the day. Schools that have gone phone-free always seem to report that it has improved the culture, making students more attentive in class and more interactive with one another. Published studies back them up.

More independence, free play, and responsibility in the real world

Many parents are afraid to give their children the level of independence and responsibility they themselves enjoyed when they were young, even though rates of homicide, drunk driving, and other physical threats to children are way down in recent decades. Part of the fear comes from the fact that parents look at each other to determine what is normal and therefore safe, and they see few examples of families acting as if a 9-year-old can be trusted to walk to a store without a chaperone. But if many parents started sending their children out to play or run errands, then the norms of what is safe and accepted would change quickly. So would ideas about what constitutes “good parenting.” And if more parents trusted their children with more responsibility––for example, by asking their kids to do more to help out, or to care for others––then the pervasive sense of uselessness now found in surveys of high-school students might begin to dissipate.

It would be a mistake to overlook this fourth norm. If parents don’t replace screen time with real-world experiences involving friends and independent activity, then banning devices will feel like deprivation, not the opening up of a world of opportunities.

The main reason the phone-based childhood is so harmful is that it pushes aside everything else. Smartphones are experience blockers. Our ultimate goal should not be to remove screens entirely, nor should it be to return childhood to exactly the way it was in 1960. Rather, it should be to create a version of childhood and adolescence that keeps young people anchored in the real world while flourishing in the digital age.

9. What Are We Waiting For?

An essential function of government is to solve collective-action problems. Congress could solve or help solve the ones I’ve highlighted—for instance, by raising the age of “internet adulthood” to 16 and requiring tech companies to keep underage children off their sites.

In recent decades, however, Congress has not been good at addressing public concerns when the solutions would displease a powerful and deep-pocketed industry. Governors and state legislators have been much more effective, and their successes might let us evaluate how well various reforms work. But the bottom line is that to change norms, we’re going to need to do most of the work ourselves, in neighborhood groups, schools, and other communities.

There are now hundreds of organizations––most of them started by mothers who saw what smartphones had done to their children––that are working to roll back the phone-based childhood or promote a more independent, real-world childhood. (I have assembled a list of many of them.) One that I co-founded, at LetGrow.org, suggests a variety of simple programs for parents or schools, such as play club (schools keep the playground open at least one day a week before or after school, and kids sign up for phone-free, mixed-age, unstructured play as a regular weekly activity) and the Let Grow Experience (a series of homework assignments in which students––with their parents’ consent––choose something to do on their own that they’ve never done before, such as walk the dog, climb a tree, walk to a store, or cook dinner).

Even without the help of organizations, parents could break their families out of collective-action traps if they coordinated with the parents of their children’s friends. Together they could create common smartphone rules and organize unsupervised play sessions or encourage hangouts at a home, park, or shopping mall.

Parents are fed up with what childhood has become. Many are tired of having daily arguments about technologies that were designed to grab hold of their children’s attention and not let go. But the phone-based childhood is not inevitable.

The four norms I have proposed cost almost nothing to implement, they cause no clear harm to anyone, and while they could be supported by new legislation, they can be instilled even without it. We can begin implementing all of them right away, this year, especially in communities with good cooperation between schools and parents. A single memo from a principal asking parents to delay smartphones and social media, in support of the school’s effort to improve mental health by going phone free, would catalyze collective action and reset the community’s norms.

We didn’t know what we were doing in the early 2010s. Now we do. It’s time to end the phone-based childhood.


This article is adapted from Jonathan Haidt’s forthcoming book, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness.

‘Everybody has a breaking point’: how the climate crisis affects our brains (Guardian)

Researchers measuring the effect of Hurricane Sandy on children in utero at the time reported: ‘Our findings are extremely alarming.’ Illustration: Ngadi Smart/The Guardian

Are growing rates of anxiety, depression, ADHD, PTSD, Alzheimer’s and motor neurone disease related to rising temperatures and other extreme environmental changes?

Original article

Clayton Page Aldern

Wed 27 Mar 2024 05.00 GMT

In late October 2012, a category 3 hurricane howled into New York City with a force that would etch its name into the annals of history. Superstorm Sandy transformed the city, inflicting more than $60bn in damage, killing dozens, and forcing 6,500 patients to be evacuated from hospitals and nursing homes. Yet in the case of one cognitive neuroscientist, the storm presented, darkly, an opportunity.

Yoko Nomura had found herself at the centre of a natural experiment. Prior to the hurricane’s unexpected visit, Nomura – who teaches in the psychology department at Queens College, CUNY, as well as in the psychiatry department of the Icahn School of Medicine at Mount Sinai – had meticulously assembled a research cohort of hundreds of expectant New York mothers. Her investigation, the Stress in Pregnancy study, had aimed since 2009 to explore the potential imprint of prenatal stress on the unborn. Drawing on the evolving field of epigenetics, Nomura had sought to understand the ways in which environmental stressors could spur changes in gene expression, the likes of which were already known to influence the risk of specific childhood neurobehavioural outcomes such as autism, schizophrenia and attention deficit hyperactivity disorder (ADHD).

The storm, however, lent her research a new, urgent question. A subset of Nomura’s cohort of expectant women had been pregnant during Sandy. She wanted to know if the prenatal stress of living through a hurricane – of experiencing something so uniquely catastrophic – acted differentially on the children these mothers were carrying, relative to those children who were born before or conceived after the storm.

More than a decade later, she has her answer. The conclusions reveal a startling disparity: children who were in utero during Sandy bear an inordinately high risk of psychiatric conditions today. For example, girls who were exposed to Sandy prenatally experienced a 20-fold increase in anxiety and a 30-fold increase in depression later in life compared with girls who were not exposed. Boys had 60-fold and 20-fold increased risks of ADHD and conduct disorder, respectively. Children expressed symptoms of the conditions as early as preschool.

Flooding in Lindenhurst, New York, in October 2012, after Hurricane Sandy struck. Photograph: Bruce Bennett/Getty Images

“Our findings are extremely alarming,” the researchers wrote in a 2022 study summarising their initial results. It is not the type of sentence one usually finds in the otherwise measured discussion sections of academic papers.

Yet Nomura and her colleagues’ research also offers a representative page in a new story of the climate crisis: a story that says a changing climate doesn’t just shape the environment in which we live. Rather, the climate crisis spurs visceral and tangible transformations in our very brains. As the world undergoes dramatic environmental shifts, so too does our neurological landscape. Fossil-fuel-induced changes – from rising temperatures to extreme weather to heightened levels of atmospheric carbon dioxide – are altering our brain health, influencing everything from memory and executive function to language, the formation of identity, and even the structure of the brain. The weight of nature is heavy, and it presses inward.

Evidence comes from a variety of fields. Psychologists and behavioural economists have illustrated the ways in which temperature spikes drive surges in everything from domestic violence to online hate speech. Cognitive neuroscientists have charted the routes by which extreme heat and surging CO2 levels impair decision-making, diminish problem-solving abilities, and short-circuit our capacity to learn. Vectors of brain disease, such as ticks and mosquitoes, are seeing their habitable ranges expand as the world warms. And as researchers like Nomura have shown, you don’t need to go to war to suffer from post-traumatic stress disorder: the violence of a hurricane or wildfire is enough. It appears that, due to epigenetic inheritance, you don’t even need to have been born yet.

When it comes to the health effects of the climate crisis, says Burcin Ikiz, a neuroscientist at the mental-health philanthropy organisation the Baszucki Group, “we know what happens in the cardiovascular system; we know what happens in the respiratory system; we know what happens in the immune system. But there’s almost nothing on neurology and brain health.” Ikiz, like Nomura, is one of a growing cadre of neuroscientists seeking to connect the dots between environmental and neurological wellness.

As a cohesive effort, the field – which we might call climatological neuroepidemiology – is in its infancy. But many of the effects catalogued by such researchers feel intuitive.

Residents evacuate Evia, Greece, in 2021, after wildfires hit the island. Photograph: Bloomberg/Getty Images

Perhaps you’ve noticed that when the weather gets a bit muggier, your thinking does the same. That’s no coincidence; it’s a nearly universal phenomenon. During a summer 2016 heatwave in Boston, Harvard epidemiologists showed that college students living in dorms without air conditioning performed standard cognitive tests more slowly than those living with it. In January of this year, Chinese economists noted that students who took mathematics tests on days above 32C looked as if they had lost the equivalent of a quarter of a year of education, relative to test days in the range 22–24C. Researchers estimate that the disparate effects of hot school days – disproportionately felt in poorer school districts without access to air conditioning and home to higher concentrations of non-white students – account for something on the order of 5% of the racial achievement gap in the US.

Cognitive performance is the tip of the melting iceberg. You may have also noticed, for example, your own feelings of aggression on hotter days. You and everyone else – and animals, too. Black widow spiders tend more quickly toward sibling cannibalism in the heat. Rhesus monkeys start more fights with one another. Baseball pitchers are more likely to intentionally hit batters with their pitches as temperatures rise. US Postal Service workers experience roughly 5% more incidents of harassment and discrimination on days above 32C, relative to temperate days.

Neuroscientists point to a variety of routes through which extreme heat can act on behaviour. In 2015, for example, Korean researchers found that heat stress triggers inflammation in the hippocampus of mice, a brain region essential for memory storage. Extreme heat also diminishes neuronal communication in zebrafish, a model organism regularly studied by scientists interested in brain function. In human beings, functional connections between brain areas appear more randomised at higher temperatures. In other words, heat limits the degree to which brain activity appears coordinated. On the aggression front, Finnish researchers noted in 2017 that high temperatures appear to suppress serotonin function, more so among people who had committed violent crimes. For these people, blood levels of a serotonin transporter protein, highly correlated with outside temperatures, could account for nearly 40% of the fluctuations in the country’s rate of violent crime.

Prolonged exposure to heat can activate a multitude of biochemical pathways associated with Alzheimer’s and Parkinson’s. Illustration: Ngadi Smart/The Guardian

“We’re not thinking about any of this,” says Ikiz. “We’re not getting our healthcare systems ready. We’re not doing anything in terms of prevention or protections.”

Ikiz is particularly concerned with the neurodegenerative effects of the climate crisis. In part, that’s because prolonged exposure to heat in its own right – including an increase of a single degree centigrade – can activate a multitude of biochemical pathways associated with neurodegenerative diseases such as Alzheimer’s and Parkinson’s. Air pollution does the same thing. (In rats, such effects are seen after exposure to extreme heat for a mere 15 minutes a day for one week.) Thus, with continued burning of fossil fuels, whether through direct or indirect effects, comes more dementia. Researchers have already illustrated the manners in which dementia-related hospitalisations rise with temperature. Warmer weather worsens the symptoms of neurodegeneration as well.

Prior to her move to philanthropy, Ikiz’s neuroscience research largely focused on the mechanisms underlying the neurodegenerative disease amyotrophic lateral sclerosis (ALS, also known as Lou Gehrig’s disease or motor neurone disease). Today, she points to research suggesting that blue-green algae, blooming with ever-increasing frequency under a changing global climate, releases a potent neurotoxin that offers one of the most compelling causal explanations for the incidence of non-genetic ALS. Epidemiologists have, for example, identified clusters of ALS cases downwind of freshwater lakes prone to blue-green algae blooms.

A supermarket in Long Beach is stripped of water bottles in preparation for Hurricane Sandy. Photograph: Mike Stobe/Getty Images

It’s this flavour of research that worries her the most. Children constitute one of the populations most vulnerable to these risk factors, since such exposures appear to compound cumulatively over one’s life, and neurodegenerative diseases tend to manifest in the later years. “It doesn’t happen acutely,” says Ikiz. “Years pass, and then people get these diseases. That’s actually what really scares me about this whole thing. We are seeing air pollution exposure from wildfires. We’re seeing extreme heat. We’re seeing neurotoxin exposure. We’re in an experiment ourselves, with the brain chronically exposed to multiple toxins.”

Other scientists who have taken note of these chronic exposures resort to similarly dramatic language as that of Nomura and Ikiz. “Hallmarks of Alzheimer disease are evolving relentlessly in metropolitan Mexico City infants, children and young adults,” is part of the title of a recent paper spearheaded by Dr Lilian Calderón-Garcidueñas, a toxicologist who directs the University of Montana’s environmental neuroprevention laboratory. The researchers investigated the contributions of urban air pollution and ozone to biomarkers of neurodegeneration and found physical hallmarks of Alzheimer’s in 202 of the 203 brains they examined, from residents aged 11 months to 40 years old. “Alzheimer’s disease starting in the brainstem of young children and affecting 99.5% of young urbanites is a serious health crisis,” Calderón-Garcidueñas and her colleagues wrote. Indeed.

Flooding in Stonehaven, Aberdeenshire, in 2020. Photograph: Martin Anderson/PA

Such neurodevelopmental challenges – the effects of environmental degradation on the developing and infant brain – are particularly large, given the climate prognosis. Rat pups exposed in utero to 40C heat miss brain developmental milestones. Heat exposure during neurodevelopment in zebrafish magnifies the toxic effects of lead exposure. In people, early pregnancy exposure to extreme heat is associated with a higher risk of children developing neuropsychiatric conditions such as schizophrenia and anorexia. It is also probable that the ALS-causing neurotoxin can travel in the air.

Of course, these exposures only matter if you make it to an age in which neural rot has a chance to manifest. Neurodegenerative disease mostly makes itself known in middle-aged and elderly people. But, on the other hand, the brain-eating amoeba likely to spread as a result of the climate crisis – which is 97% fatal and will kill someone in a week – mostly infects children who swim in lakes. As children do.

A coordinated effort to fully understand and appreciate the neurological costs of the climate crisis does not yet exist. Ikiz is seeking to rectify this. In spring 2024, she will convene the first meeting of a team of neurologists, neuroscientists and planetary scientists, under the banner of the International Neuro Climate Working Group.

Smog hits Mexico City. Photograph: E_Rojas/Getty Images/iStockphoto

The goal of the working group (which, full disclosure, I have been invited to join) is to wrap a collective head around the problem and seek to recommend treatment practices and policy recommendations accordingly, before society finds itself in the midst of overlapping epidemics. The number of people living with Alzheimer’s is expected to triple by 2050, says Ikiz – and that’s without taking the climate crisis into account. “That scares me,” she says. “Because in 2050, we’ll be like: ‘Ah, this is awful. Let’s try to do something.’ But it will be too late for a lot of people.

“I think that’s why it’s really important right now, as evidence is building, as we’re understanding more, to be speaking and raising awareness on these issues,” she says. “Because we don’t want to come to that point of irreversible damage.”

For neuroscientists considering the climate problem, avoiding that point of no return implies investing in resilience research today. But this is not a story of climate anxiety and mental fortitude. “I’m not talking about psychological resilience,” says Nomura. “I’m talking about biological resilience.”

A research agenda for climatological neuroepidemiology would probably bridge multiple fields and scales of analysis. It would merge insights from neurology, neurochemistry, environmental science, cognitive neuroscience and behavioural economics – from molecular dynamics to the individual brain to whole ecosystems. Nomura, for example, wants to understand how external environmental pressures influence brain health and cognitive development; who is most vulnerable to these pressures and when; and which preventive strategies might bolster neurological resilience against climate-induced stressors. Others want to price these stressors, so policymakers can readily integrate them into climate-action cost-benefit analyses.

Storm devastation in Seaside Heights, New Jersey. Photograph: Mike Groll/AP

For Nomura, it all comes back to stress. Under the right conditions, prenatal exposure to stress can be protective, she says. “It’s like an inoculation, right? You’re artificially exposed to something in utero and you become better at handling it – as long as it is not overwhelmingly toxic.” Stress in pregnancy, in moderation, can perhaps help immunise the foetus against the most deleterious effects of stress later in life. “But everybody has a breaking point,” she says.

Identifying these breaking points is a core challenge of Nomura’s work. And it’s a particularly thorny challenge, in that as a matter of both research ethics and atmospheric physics, she and her colleagues can’t just gin up a hurricane and selectively expose expecting mothers to it. “Human research in this field is limited in a way. We cannot run the gold standard of randomised clinical trials,” she says. “We cannot do it. So we have to take advantage of this horrible natural disaster.”

Recently, Nomura and her colleagues have begun to turn their attention to the developmental effects of heat. They will apply similar methods to those they applied to understanding the effects of Hurricane Sandy – establishing natural cohorts and charting the developmental trajectories in which they’re interested.

The work necessarily proceeds slowly, in part because human research is further complicated by the fact that it takes people longer than animals to develop. Rats zoom through infancy and are sexually mature by about six weeks, whereas for humans it takes more than a decade. “That’s a reason this longitudinal study is really important – and a reason why we cannot just get started on the question right now,” says Nomura. “You cannot buy 10 years’ time. You cannot buy 12 years’ time.” You must wait. And so she waits, and she measures, as the waves continue to crash.

Clayton Page Aldern’s book The Weight of Nature, on the effects of climate change on brain health, is published by Allen Lane on 4 April.

Ditching ‘Anthropocene’: why ecologists say the term still matters (Nature)

Plastic waste clogging the Niger River in Bamako, Mali. Once it settles into sediment, the plastic will become part of the geological record of human impacts on the planet. Credit: Michele Cattani/AFP via Getty

Original article

Beyond stratigraphic definitions, the name has broader significance for understanding humans’ place on Earth.

David Adam

14 March 2024

After 15 years of discussion, geologists last week decided that the Anthropocene — generally understood to be the age of irreversible human impacts on the planet — will not become an official epoch in Earth’s geological timeline.

The rejected proposal would have codified the end of the current Holocene epoch, which has been in place since the end of the last ice age 11,700 years ago. It suggested that the Anthropocene started in 1952, when plutonium from hydrogen-bomb tests showed up in the sediment of Crawford Lake near Toronto, Canada.

The vote has drawn controversy over procedural details, and debate about its legitimacy continues. But whether or not it’s formally approved as a stratigraphic term, the idea of the Anthropocene is now firmly rooted in research. So, how are scientists using the term, and what does it mean to them and their fields?

‘It’s a term that belongs to everyone’

As head of the Leverhulme Centre for Anthropocene Biodiversity at the University of York, UK, Chris Thomas has perhaps more riding on the term than most. “When the news of this — what sounds like a slightly dodgy vote — happened, I sort of wondered, is it the end of us? But I think not,” he says.

For Thomas, the word Anthropocene neatly summarizes the sense that humans are part of Earth’s system and integral to its processes — what he calls indivisible connectedness. “That helps move us away from the notion that somehow humanity is apart from the rest of nature and natural systems,” he says. “It’s undoable — the change is everywhere.”

The concept of an era of human-driven change also provides convenient common ground for him to collaborate with researchers from other disciplines. “This is something that people in the arts and humanities and the social sciences have picked up as well,” he says. “It is a means of enabling communication about the extent to which we are living in a truly unprecedented and human-altered world.”

Seen through that lens, the fact that the Anthropocene has been formally rejected because scientists can’t agree on when it began seems immaterial. “Many people in the humanities who are using the phrase find the concept of the articulation of a particular year, based on a deposit in a particular lake, a ridiculous way of framing the concept of a human-altered planet.”

Jacquelyn Gill, a palaeoecologist at the University of Maine in Orono, agrees. “It’s a term that belongs to everyone. To people working in philosophy and literary criticism, in the arts, in the humanities, the sciences,” she says. “I think it’s far more meaningful in the way that it is currently being used, than in any attempts that stratigraphers could have made to restrict or define it in some narrow sense.”

She adds: “It serves humanity best as a loose concept that we can use to define something that we all widely understand, which is that we live in an era where humans are the dominant force on ecological and geological processes.”

Capturing human influences

The idea of the Anthropocene is especially helpful to make clear that humans have been shaping the planet for thousands of years, and that not all of those changes have been bad, Gill says. “We could do a better job of thinking about human–environment relationships in ways that are not inherently negative all the time,” she says. “People are not a monolith, and neither are our attitudes or relationships to nature.”

Some 80% of biodiversity is currently stewarded on Indigenous lands, Gill points out. “Which should tell you something, right? That it’s not the presence of people that’s the problem,” she says. “The solution to those problems is changing the way that many dominant cultures relate to the natural world.”

The concept of the Anthropocene is owned by many fields, Gill says. “This reiterates the importance of understanding that the role of people on our planet requires many different ways of knowing and many different disciplines.”

In a world in which the threat of climate change dominates environmental debates, the term Anthropocene can help to broaden the discussion, says Yadvinder Malhi, a biodiversity researcher at the University of Oxford, UK.

“I use it all the time. For me, it captures the time where human influence has a global planetary effect, and it’s multidimensional. It’s much more than just climate change,” he says. “It’s what we’re doing. The oceans, the resources we are extracting, habitats changing.”

He adds: “I need that term when I’m trying to capture this idea of humans affecting the planet in multiple ways because of the size of our activity.”

The looseness of the term is popular, but would a formal definition help in any way? Malhi thinks it would. “There’s no other term available that captures the global multidimensional impacts on the planet,” he says. “But there is a problem in not having a formal definition if people are using it in different terms, in different ways.”

Although the word ‘Anthropocene’ makes some researchers think of processes that began 10,000 years ago, others consider it to mean those of the past century. “I think a formal adoption, like a definition, would actually help to clarify that.”

doi: https://doi.org/10.1038/d41586-024-00786-2

The Anthropocene is dead. Long live the Anthropocene (Science)

Panel rejects a proposed geologic time division reflecting human influence, but the concept is here to stay

Original article

5 MAR 2024, 4:00 PM ET

BY PAUL VOOSEN

A 1953 nuclear weapons test in Nevada was among the human activities that could have marked the Anthropocene. NNSA/NEVADA FIELD OFFICE/SCIENCE SOURCE

For now, we’re still in the Holocene.

Science has confirmed that a panel of two dozen geologists has voted down a proposal to end the Holocene—our current span of geologic time, which began 11,700 years ago at the end of the last ice age—and inaugurate a new epoch, the Anthropocene. Starting in the 1950s, it would have marked a time when humanity’s influence on the planet became overwhelming. The vote, first reported by The New York Times, is a stunning—though not unexpected—rebuke for the proposal, which has been working its way through a formal approval process for more than a decade.

“The decision is definitive,” says Philip Gibbard, a geologist at the University of Cambridge who is on the panel and serves as secretary-general of the International Commission on Stratigraphy (ICS), the body that governs the geologic timescale. “There are no outstanding issues to be resolved. Case closed.”

The leaders of the Anthropocene Working Group (AWG), which developed the proposal for consideration by ICS’s Subcommission on Quaternary Stratigraphy, are not yet ready to admit defeat. They note that the online tally, in which 12 out of 18 subcommission members voted against the proposal, was leaked to the press without approval of the panel’s chair. “There remain several issues that need to be resolved about the validity of the vote and the circumstances surrounding it,” says Colin Waters, a geologist at the University of Leicester who chaired AWG.

Few opponents of the Anthropocene proposal doubted the enormous impact that human influence, including climate change, is having on the planet. But some felt the proposed marker of the epoch—some 10 centimeters of mud from Canada’s Crawford Lake that captures the global surge in fossil fuel burning, fertilizer use, and atomic bomb fallout that began in the 1950s—isn’t definitive enough.

Others questioned whether it’s even possible to affix one date to the start of humanity’s broad planetary influence: Why not the rise of agriculture? Why not the vast changes that followed European encroachment on the New World? “The Anthropocene epoch was never deep enough to understand human transformation of this planet,” says Erle Ellis, a geographer at the University of Maryland, Baltimore County who resigned last year in protest from AWG.

Opponents also felt AWG made too many announcements to the press over the years while being slow to submit a proposal to the subcommission. “The Anthropocene epoch was pushed through the media from the beginning—a publicity drive,” says Stanley Finney, a stratigrapher at California State University Long Beach and head of the International Union of Geological Sciences, which would have had final approval of the proposal.

Finney also complains that from the start, AWG was determined to secure an “epoch” categorization, and ignored or countered proposals for a less formal Anthropocene designation. If they had only made their formal proposal sooner, they could have avoided much lost time, Finney adds. “It would have been rejected 10 years earlier if they had not avoided presenting it to the stratigraphic community for careful consideration.”

The Anthropocene backers will now have to wait for a decade before their proposal can be considered again. ICS has long instituted this mandatory cooling-off period, given how furious debates can turn, for example, over the boundary between the Pliocene and Pleistocene, and whether the Quaternary—our current geologic period, a category above epochs—should exist at all.

Even if it is not formally recognized by geologists, the Anthropocene is here to stay. It is used in art exhibits, journal titles, and endless books. And Gibbard, Ellis, and others have advanced the view that it can remain an informal geologic term, calling it the “Anthropocene event.” Like the Great Oxygenation Event, in which cyanobacteria flushed the atmosphere with oxygen billions of years ago, the Anthropocene marks a huge transition, but one without an exact date. “Let us work together to ensure the creation of a far deeper and more inclusive Anthropocene event,” Ellis says.

Waters and his colleagues will continue to press that the Anthropocene is worthy of recognition in the geologic timescale, even if that advocacy has to continue in an informal capacity, he says. Although small in size, Anthropocene strata such as the 10 centimeters of lake mud are distinct and can be traced using more than 100 durable geochemical signals, he says. And there is no going back to where the planet was 100 years ago, he says. “The Earth system changes that mark the Anthropocene are collectively irreversible.”


doi: 10.1126/science.z3wcw7b

Are We in the ‘Anthropocene,’ the Human Age? Nope, Scientists Say. (New York Times)

A panel of experts voted down a proposal to officially declare the start of a new interval of geologic time, one defined by humanity’s changes to the planet.

In weighing their decision, scientists considered the effect on the world of nuclear activity. A 1946 test blast over Bikini Atoll. Credit: Jack Rice/Associated Press

Original article

By Raymond Zhong

March 5, 2024

The Triassic was the dawn of the dinosaurs. The Paleogene saw the rise of mammals. The Pleistocene included the last ice ages.

Is it time to mark humankind’s transformation of the planet with its own chapter in Earth history, the “Anthropocene,” or the human age?

Not yet, scientists have decided, after a debate that has spanned nearly 15 years. Or the blink of an eye, depending on how you look at it.

A committee of roughly two dozen scholars has, by a large majority, voted down a proposal to declare the start of the Anthropocene, a newly created epoch of geologic time, according to an internal announcement of the voting results seen by The New York Times.

By geologists’ current timeline of Earth’s 4.6-billion-year history, our world right now is in the Holocene, which began 11,700 years ago with the most recent retreat of the great glaciers. Amending the chronology to say we had moved on to the Anthropocene would represent an acknowledgment that recent, human-induced changes to geological conditions had been profound enough to bring the Holocene to a close.

The declaration would shape terminology in textbooks, research articles and museums worldwide. It would guide scientists in their understanding of our still-unfolding present for generations, perhaps even millenniums, to come.

In the end, though, the members of the committee that voted on the Anthropocene over the past month were not only weighing how consequential this period had been for the planet. They also had to consider when, precisely, it began.

By the definition that an earlier panel of experts spent nearly a decade and a half debating and crafting, the Anthropocene started in the mid-20th century, when nuclear bomb tests scattered radioactive fallout across our world. To several members of the scientific committee that considered the panel’s proposal in recent weeks, this definition was too limited, too awkwardly recent, to be a fitting signpost of Homo sapiens’s reshaping of planet Earth.

“It constrains, it confines, it narrows down the whole importance of the Anthropocene,” said Jan A. Piotrowski, a committee member and geologist at Aarhus University in Denmark. “What was going on during the onset of agriculture? How about the Industrial Revolution? How about the colonizing of the Americas, of Australia?”

“Human impact goes much deeper into geological time,” said another committee member, Mike Walker, an earth scientist and professor emeritus at the University of Wales Trinity Saint David. “If we ignore that, we are ignoring the true impact, the real impact, that humans have on our planet.”

Hours after the voting results were circulated within the committee early Tuesday, some members said they were surprised at the margin of votes against the Anthropocene proposal compared with those in favor: 12 to four, with two abstentions. (Another three committee members neither voted nor formally abstained.)

Even so, it was unclear on Tuesday whether the results stood as a conclusive rejection or whether they might still be challenged or appealed. In an email to The Times, the committee’s chair, Jan A. Zalasiewicz, said there were “some procedural issues to consider” but declined to discuss them further. Dr. Zalasiewicz, a geologist at the University of Leicester, has expressed support for canonizing the Anthropocene.

This question of how to situate our time in the narrative arc of Earth history has thrust the rarefied world of geological timekeepers into an unfamiliar limelight.

The grandly named chapters of our planet’s history are governed by a body of scientists, the International Union of Geological Sciences. The organization uses rigorous criteria to decide when each chapter started and which characteristics defined it. The aim is to uphold common global standards for expressing the planet’s history.

Polyethylene being extruded and fed into a cooling bath during plastics manufacture, circa 1950. Credit: Hulton Archive, via Getty Images

Geoscientists don’t deny our era stands out within that long history. Radionuclides from nuclear tests. Plastics and industrial ash. Concrete and metal pollutants. Rapid greenhouse warming. Sharply increased species extinctions. These and other products of modern civilization are leaving unmistakable remnants in the mineral record, particularly since the mid-20th century.

Still, to qualify for its own entry on the geologic time scale, the Anthropocene would have to be defined in a very particular way, one that would meet the needs of geologists and not necessarily those of the anthropologists, artists and others who are already using the term.

That’s why several experts who have voiced skepticism about enshrining the Anthropocene emphasized that the vote against it shouldn’t be read as a referendum among scientists on the broad state of the Earth. “This was a narrow, technical matter for geologists, for the most part,” said one of those skeptics, Erle C. Ellis, an environmental scientist at the University of Maryland, Baltimore County. “This has nothing to do with the evidence that people are changing the planet,” Dr. Ellis said. “The evidence just keeps growing.”

Francine M.G. McCarthy, a micropaleontologist at Brock University in St. Catharines, Ontario, is the opposite of a skeptic: She helped lead some of the research to support ratifying the new epoch.

“We are in the Anthropocene, irrespective of a line on the time scale,” Dr. McCarthy said. “And behaving accordingly is our only path forward.”

The Anthropocene proposal got its start in 2009, when a working group was convened to investigate whether recent planetary changes merited a place on the geologic timeline. After years of deliberation, the group, which came to include Dr. McCarthy, Dr. Ellis and some three dozen others, decided that they did. The group also decided that the best start date for the new period was around 1950.

The group then had to choose a physical site that would most clearly show a definitive break between the Holocene and the Anthropocene. They settled on Crawford Lake, in Ontario, where the deep waters have preserved detailed records of geochemical change within the sediments at the bottom.

Last fall, the working group submitted its Anthropocene proposal to the first of three governing committees under the International Union of Geological Sciences. Sixty percent of each committee has to approve the proposal for it to advance to the next.

The members of the first one, the Subcommission on Quaternary Stratigraphy, submitted their votes starting in early February. (Stratigraphy is the branch of geology concerned with rock layers and how they relate in time. The Quaternary is the ongoing geologic period that began 2.6 million years ago.)

Under the rules of stratigraphy, each interval of Earth time needs a clear, objective starting point, one that applies worldwide. The Anthropocene working group proposed the mid-20th century because it bracketed the postwar explosion of economic growth, globalization, urbanization and energy use. But several members of the subcommission said humankind’s upending of Earth was a far more sprawling story, one that might not even have a single start date across every part of the planet.

The world’s first full-scale atomic power station, in Britain in 1956. Credit: Hulton Archive, via Getty Images

This is why Dr. Walker, Dr. Piotrowski and others prefer to describe the Anthropocene as an “event,” not an “epoch.” In the language of geology, events are a looser term. They don’t appear on the official timeline, and no committees need to approve their start dates.

Yet many of the planet’s most significant happenings are called events, including mass extinctions, rapid expansions of biodiversity and the filling of Earth’s skies with oxygen 2.1 to 2.4 billion years ago.

Even if the subcommission’s vote is upheld and the Anthropocene proposal is rebuffed, the new epoch could still be added to the timeline at some later point. It would, however, have to go through the whole process of discussion and voting all over again.

Time will march on. Evidence of our civilization’s effects on Earth will continue accumulating in the rocks. The task of interpreting what it all means, and how it fits into the grand sweep of history, might fall to the future inheritors of our world.

“Our impact is here to stay and to be recognizable in the future in the geological record — there is absolutely no question about this,” Dr. Piotrowski said. “It will be up to the people that will be coming after us to decide how to rank it.”

Raymond Zhong reports on climate and environmental issues for The Times.


Controversy over the Anthropocene: humanity still does not know which geological epoch it is living in (El País)

elpais.com

A committee of experts has voted down the proposal to declare a new geological epoch, but the panel’s own chair alleges irregularities in the vote

Manuel Ansede

Madrid –

Extraction of a sediment core from the bottom of Crawford Lake, on the outskirts of Toronto, Canada. TIM PATTERSON / CARLETON UNIVERSITY

The idea of the Anthropocene — that humanity has been living since 1950 in a new geological epoch defined by human pollution — has become so popular in recent years that even the Real Academia Española added the term to its Diccionario de la Lengua in 2021. This time the academicians moved too fast. The concept remains up in the air, amid a heated dispute among specialists. Members of the expert committee that must take the decision at the International Union of Geological Sciences (IUGS) — the Subcommission on Quaternary Stratigraphy — leaked to The New York Times on Tuesday that a majority had voted against recognizing the existence of the Anthropocene. However, the subcommission’s chair, the geologist Jan Zalasiewicz, tells EL PAÍS that the preliminary result of the vote was announced without his authorization and that there are still “some outstanding issues with the votes that need to be resolved.” Humanity still does not know which geological epoch it is living in.

The Dutch chemist Paul Crutzen, who won the Nobel Prize in Chemistry for shedding light on the hole in the ozone layer, proposed in 2000 that the planet had entered a new epoch, brought about by the brutal impact of human beings. An international team of specialists, the Anthropocene Working Group, has been analyzing the scientific evidence since 2009 and last year presented a proposal to officially proclaim this new geological epoch, marked by the radioactivity of atomic bombs and by pollutants from the burning of coal and oil. Tiny Crawford Lake, on the outskirts of Toronto, Canada, was the site chosen to exemplify the start of the Anthropocene, thanks to the sediments on its bottom, undisturbed for centuries.

A majority of the members of the IUGS Subcommission on Quaternary Stratigraphy voted against the proposal, according to the US newspaper. The British geologist Colin Waters, leader of the Anthropocene Working Group, tells EL PAÍS that he learned of it through the press. “We still have not received official confirmation directly from the secretary of the Subcommission on Quaternary Stratigraphy. It seems that The New York Times receives the results before we do, which is very disappointing,” Waters laments.

The geologist acknowledges that the ruling, if confirmed, would mean the end of his current proposal, but he is not giving up. “We have many eminent researchers who wish to continue as a group, informally, defending the evidence that the Anthropocene should be formalized as an epoch,” he says. In his view, today’s geological strata — contaminated with radioactive isotopes, microplastics, ash and pesticides — have changed irreversibly with respect to those of the Holocene, the geological epoch that began more than 10,000 years ago, after the last glaciation. “Given the existing evidence, which keeps growing, I would not be surprised by a future call to reconsider our proposal,” says Waters, of the University of Leicester.

The head of the Anthropocene Working Group maintains that there are “some procedural questions” that cast doubt on the validity of the vote. The Italian geologist Silvia Peppoloni, head of the IUGS Commission on Geoethics, confirms that her team has produced a report on this fight between the Subcommission on Quaternary Stratigraphy and the Anthropocene Working Group. The document is on the desk of the IUGS president, the Briton John Ludden.

The Canadian geologist Francine McCarthy was convinced that Crawford Lake would win over the skeptics. From the outside it looks small, barely 250 meters long, but it is nearly 25 meters deep. Its surface waters do not mix with those at its bed, so the sediment on the bottom can be read like a lasagna, with each layer accumulating deposits that arrived from the atmosphere. That underwater calendar of Crawford Lake reveals the so-called Great Acceleration, the moment around 1950 when humanity began to leave an ever more obvious footprint, with the detonation of atomic bombs, the massive burning of oil and coal, and the extinction of species.

“Ignoring the enormous impact of humans on our planet since the mid-20th century has potentially harmful consequences, by minimizing the importance of scientific data in confronting the evident change in the Earth system, as Paul Crutzen pointed out almost 25 years ago,” McCarthy warns.

In a vote, scientists reject the idea that we are in the Anthropocene, the geological epoch of humans (Folha de S.Paulo)

www1.folha.uol.com.br

Group rejected the claim that the changes are profound enough to end the Holocene

Raymond Zhong

March 5, 2024


The Triassic was the dawn of the dinosaurs. The Paleogene saw the rise of mammals. The Pleistocene included the last ice ages.

Is it time to mark humanity’s transformation of the planet with its own chapter in Earth’s history, the “Anthropocene,” or human epoch?

Not yet, scientists have decided, after a debate that lasted nearly 15 years. Or the blink of an eye, depending on how you look at it.

A committee of roughly two dozen scholars voted, by a wide majority, against a proposal to declare the start of the Anthropocene, a newly created epoch of geological time, according to an internal announcement of the voting results seen by The New York Times.

Under geologists’ current timeline of Earth’s 4.6-billion-year history, our world is now in the Holocene, which began 11,700 years ago with the most recent retreat of the great glaciers.

Amending the chronology to say we have advanced into the Anthropocene would represent an acknowledgment that recent, human-induced changes to geological conditions have been profound enough to bring the Holocene to a close.

The declaration would shape terminology in textbooks, research articles and museums worldwide. It would guide scientists in their understanding of our still-unfolding present for generations, perhaps even for millennia.

In the end, though, the committee members who voted on the Anthropocene in recent weeks were not weighing only how defining this period has been for the planet. They also had to consider when, precisely, it began.

By the definition that an earlier panel of experts spent nearly a decade and a half debating and crafting, the Anthropocene began in the mid-20th century, when nuclear bomb tests scattered radioactive material across the world.

To several members of the scientific committee that evaluated the panel’s proposal in recent weeks, that definition was too narrow and too recent to serve as a fitting marker of Homo sapiens’ reshaping of planet Earth.

“It restricts, it confines, it narrows down the whole importance of the Anthropocene,” said Jan A. Piotrowski, a committee member and geologist at Aarhus University in Denmark. “What was happening during the dawn of agriculture? What about the Industrial Revolution? What about the colonization of the Americas, of Australia?”

“Human impact goes much deeper into geological time,” said another committee member, Mike Walker, an Earth scientist and professor emeritus at the University of Wales Trinity Saint David. “If we ignore that, we are ignoring the true impact that humans have on our planet.”

Within hours of the voting results circulating inside the committee on Tuesday morning (the 5th), some members said they were surprised by the margin of votes against the Anthropocene proposal compared with those in favor: 12 to 4, with 2 abstentions.

Even so, as of Tuesday morning it was unclear whether the results amounted to a conclusive rejection or whether they might still be contested or appealed. In an email to the Times, the committee chair, Jan A. Zalasiewicz, said there were “some procedural issues to consider” but declined to discuss them further.

Zalasiewicz, a geologist at the University of Leicester, has expressed support for canonizing the Anthropocene.

The question of how to situate our time within the narrative of Earth’s history has cast the world’s keepers of geological time in an unfamiliar light.

The grandly named chapters of our planet’s history are governed by a body of scientists, the International Union of Geological Sciences. The organization applies rigorous criteria to decide when each chapter began and which features defined it. The aim is to maintain common global standards for expressing the planet’s history.

Geoscientists do not deny that our era stands out within that long history. Radionuclides from nuclear tests. Plastics and industrial ash. Pollutants from concrete and metal. Rapid global warming. Sharply rising species extinctions. These and other products of modern civilization are leaving unmistakable traces in the mineral record, especially since the mid-20th century.

Even so, to qualify for entry into the geologic time scale, the Anthropocene would have to be defined in a very specific way, one that would meet the needs of geologists and not necessarily those of the anthropologists, artists and others already using the term.

That is why several experts who voiced skepticism about enshrining the Anthropocene stressed that the vote against it should not be read as a referendum among scientists on the broad state of the Earth.

“This is a specific, technical matter for geologists, for the most part,” said one of those skeptics, Erle C. Ellis, an environmental scientist at the University of Maryland. “This has nothing to do with the evidence that people are changing the planet,” Ellis said. “The evidence just keeps growing.”

Francine M.G. McCarthy, a micropaleontologist at Brock University in St. Catharines, Ontario (Canada), takes the opposite view: she helped lead some of the research supporting ratification of the new epoch.

“We are in the Anthropocene, regardless of a line on the time scale,” McCarthy said. “And acting accordingly is our only way forward.”

The Anthropocene proposal got under way in 2009, when a working group was convened to investigate whether recent planetary changes merited a place on the geologic timeline.

After years of deliberation, the group, which came to include McCarthy, Ellis and about three dozen others, decided that they did. The group also decided that the best start date for the new period was around 1950.

The group then had to choose a physical site that would most clearly show a definitive break between the Holocene and the Anthropocene. It chose Crawford Lake in Ontario, Canada, where the deep waters have preserved detailed records of geochemical change in the sediments at the bottom.

Last fall, the working group submitted its Anthropocene proposal to the first of three governing committees of the International Union of Geological Sciences; 60 percent of each committee must approve the proposal for it to advance to the next.

Members of the first committee, the Subcommission on Quaternary Stratigraphy, submitted their votes starting in early February. (Stratigraphy is the branch of geology devoted to the study of rock layers and how they relate to one another in time. The Quaternary is the ongoing geological period that began 2.6 million years ago.)

Under the rules of stratigraphy, each interval of Earth time needs a clear, objective starting point that applies worldwide. The Anthropocene working group proposed the mid-20th century because it encompassed the postwar explosion of economic growth, globalization, urbanization and energy use.

But several subcommission members said humanity’s transformation of the Earth was a far more sprawling story, one that may not even have a single start date everywhere on the planet.

That is why Walker, Piotrowski and others prefer to describe the Anthropocene as an “event,” not an “epoch.” In the language of geology, “event” is a looser term. Events do not appear on the official timeline, and no committee needs to approve their start dates.

Yet many of the planet’s most significant happenings are called events, including mass extinctions, rapid expansions of biodiversity and the filling of Earth’s skies with oxygen between 2.4 billion and 2.1 billion years ago.

Even if the subcommission’s vote stands and the Anthropocene proposal is rejected, the new epoch could still be added to the timeline at some later point. It would, however, have to go through the entire process of discussion and voting again.

The Quiet Threat To Science Posed By ‘Indigenous Knowledge’ (Forbes)

James Broughel

Feb 29, 2024, 07:06am EST

Portrait of a Quechua man in a traditional hat.
The White House is working on incorporating “indigenous knowledge” into federal regulatory policy. GETTY

“Indigenous knowledge” is in the spotlight thanks to President Biden, who issued an executive order within days of taking office, aimed at ushering in a new era of tribal self-determination. It was a preview of things to come. His administration went on to host an annual White House summit on tribal nations, and convened an interagency working group that spent a year developing government-wide guidance on indigenous knowledge.

Released in late 2022, the 46-page guidance document defines indigenous knowledge as “a body of observations, oral and written knowledge, innovations, practices, and beliefs developed by Tribes and Indigenous Peoples through experience with the environment.” According to the guidance, indigenous knowledge “is applied to phenomena across biological, physical, social, cultural, and spiritual systems.”

Now the Biden Administration wants federal agencies to include these sorts of beliefs in their decision making. As a result, agencies like the EPA, FDA, and CDC are incorporating indigenous knowledge into their scientific integrity practices.

In some cases, tribal knowledge can certainly provide empirical data to decisionmakers. For example, if an agency is concerned about pollution in a certain area, tribal leaders might be able to provide insights about abnormally high rates of illness experienced within their community. That said, categorizing knowledge that includes folklore and traditions under the banner of enhancing “scientific integrity” poses a number of serious problems, to put it mildly.

Very often, indigenous knowledge deals in subjective understandings related to culture, stories, and values—not facts or empirically-derived cause-and-effect relationships. In such cases, the knowledge can still be useful, but it is not “science” per se, which is usually thought of as the study of observable phenomena in the physical and natural world.

Treating science and indigenous knowledge as equivalent risks blending oral traditions and spirituality with verifiable data and evidence. Scientists are aware of the danger, which explains why the authors of a recent article in Science Magazine wisely noted “we do not argue that Indigenous Knowledge should usurp the role of, or be called, science.” Instead, they argue, indigenous knowledge can complement scientific information.

Indeed, this knowledge should be collected alongside other input from stakeholders with an interest in the outcomes of federal policy. It shouldn’t be confused with science itself, however. Yet by baking indigenous insights into scientific integrity policies without clearly explaining how the knowledge is to be collected, verified, and used, federal agencies will make it easier to smuggle untested claims into the evidentiary records for rulemakings.

Another issue is that indigenous knowledge varies dramatically across the more than 500 federally-recognized tribes. There are likely to be instances where one group’s teachings may offer time-tested wisdom, while another’s proves unreliable when held up against observable facts. Indigenous knowledge can also point in opposite directions. Last year, the Biden administration invoked indigenous knowledge when it canceled seven oil and gas leases in Alaska, but indigenous groups are known to often support energy development as well.

Even the Biden team admits indigenous knowledge is “often unique and specific” to a single tribe or people. But the Biden team doesn’t offer a way to distinguish between competing or contradictory accounts.

While no one disputes the historical mistreatment of Native Americans, this is unrelated to the question of whether knowledge is accurate. Moreover, other forms of localized knowledge also deserve attention. In rural towns and municipalities, for example, long-time residents often develop their own bodies of knowledge concerning everything from flood patterns to forest fire risks. To be clear, this local knowledge is also not “science” in most cases. But, like indigenous knowledge, it can be critically important.

That agency scientific integrity initiatives would single out knowledge based on social categories like race and ethnicity is unscientific. The danger is that indigenous knowledge policies will enable subjective understandings to become baked into rulemakings alongside the latest in peer-reviewed research.

If federal agencies aim to incorporate subjective belief systems into rulemaking, they should take care to do so responsibly without allowing unverified claims to be smuggled into purportedly impartial regulatory analyses. In most instances, indigenous knowledge will fall outside the scope of what can rightfully be considered part of ensuring scientific integrity.

The path forward lies in incorporating indigenous insights into policy decisions at the stage where they rightfully belong: as part of holding meetings and gathering feedback from stakeholders. Very likely, indigenous and other forms of local knowledge will often turn out to be more important than science. But confusing politics and science risks undermining both.

[Note from RT: there are many problems in the line of reasoning presented in this piece; perhaps the most important is that the author’s perception of “Indigenous knowledge” is based on the results of processes of decontextualization, fragmentation, and reconstruction of Indigenous ideas in instrumental ways, inside larger social and cultural frames that have no relation to the contexts in which these ideas originally circulate. Indigenous knowledge would not be so crucial today if it were compatible with non-Indigenous, modern/Western modes of thinking and social organization. In most cases, the complaint that Indigenous knowledge is difficult to accommodate comes from quarters with great confidence that business as usual will solve the current environmental situation.]

‘Ignorant’ and ‘inefficient’: Rio Grande do Sul fishing regulation was shaped by negative stereotypes about artisanal fishers (Bori)

Artisanal fishers are portrayed negatively in the documents consulted by the technical experts who drafted the 2004 Joint Normative Instruction on fishing in Rio Grande do Sul

February 23, 2024

Highlights
  • The study analyzed stereotypes about artisanal fishers in publications linked to the technical experts who took part in drafting a fishing regulation in Rio Grande do Sul
  • The scientific documents describe these workers as resistant to change and given to predatory fishing practices
  • The research underscores the importance of dismantling stereotypes in public policymaking in order to make policies fairer

Preconceived ideas about fishing and artisanal fishers influenced an administrative act of the Ministry of the Environment on artisanal fishing in the state of Rio Grande do Sul. As shown by a study from the Federal University of Pará (UFPA) and the University of São Paulo (USP), the use of stereotypes portraying fishers as ignorant and given to predatory practices harmful to the environment hinders the creation of effective and fair proposals for regulating the sector. The analysis is published as a scientific article in Friday’s (23rd) edition of the journal “Ambiente & Sociedade.”

The researchers investigated stereotypes about artisanal fishing found in scientific publications linked to the technical experts who took part in drafting the 2004 Joint Normative Instruction on fishing in the Patos Lagoon estuary, in Rio Grande do Sul, implemented while Marina Silva was environment minister under the Lula administration. Through a bibliographic survey and interviews with six professionals who participated in creating the policy, they identified 22 scientific documents linked to the participating experts, including journal articles, theses and books.

The stereotyped image these experts held of fishing communities shaped the drafting of the regulation. The research identified nine types of negative discourse about artisanal fishers in these documents, which describe them as white, wholly devoted to fishing, ignorant, averse to change, unruly, isolated, inefficient, competitive and given to predatory practices harmful to the environment.

For researcher Gustavo Goulart Moreira Moura of UFPA, an author of the study, the creation of this negative profile of fishers is neither an isolated coincidence nor a new practice. “The construction of a disparaging image of these social groups began in the 19th century, was consolidated over the 20th century and remains in force in the 21st, because it is part of a project of destroying traditional territories, of conquering the seas through the capitalist modernization of fishing.”

The study stresses that drafting regulations on the basis of negative stereotypes about artisanal fishing can lead to the reproduction of hate speech against fishing communities and can also generate measures that limit their access to territories. “This logic ends up underpinning public policies made in an authoritarian, violent and restrictive way, because traditional fishing peoples and communities are seen as ignorant, marginal and destroyers of the environment,” Moura says.

According to Moura, a reformulation of the 2004 regulation has been under discussion for some time but has not yet come about. One of the problems he points to is the regulation’s lack of flexibility, which fails to track the actual rhythm of artisanal fishing in the Patos Lagoon estuary. “Traditional communities have a fishing management system in which seasons open and close flexibly, based on environmental conditions and on each locality year by year, unlike what the regulation prescribes.”

The study highlights the importance of taking fishers’ traditional knowledge into account when drafting more equitable and efficient laws and regulations. That effort involves dismantling stereotypes about these communities, which, according to Moura, requires coordinated action across sectors of society. “For example, we need to sensitize the mainstream media not to spread prejudice against these social groups and to help press decision makers to draft laws within the framework of human rights. Decision makers should choose consultants who share values compatible with the democratic rule of law,” he concludes.

Q&A: To Save The Planet, Traditional Indigenous Knowledge Is Indispensable (Inside Climate News)


Indigenous peoples’ ecological expertise honed over centuries is increasingly being used by policymakers to complement mainstream science.

By Katie Surma

February 14, 2024

A member of the Indigenous Baduy tribe works at his field on Indonesia’s Java island. Anthropologist Gonzalo Oviedo says Indigenous communities in Southeast Asia “tend to recognize many more varieties of plant subspecies.” Credit: Bay Ismoyo/AFP via Getty Images

The past few years have been a triumph for traditional Indigenous knowledge, the body of observations, innovations and practices developed by Indigenous peoples throughout history with regard to their local environment. 

First, the world’s top scientific and environmental policymaking bodies embraced it. Then, in 2022, the Biden administration instructed U.S. federal agencies to include it in their decision making processes. And, last year, the National Science Foundation announced $30 million in grants to fund it.

Traditional Indigenous knowledge, also called traditional ecological knowledge or traditional knowledge, is compiled by tribes according to their distinct culture and generally is transmitted orally between generations. It has evolved since time immemorial, yet mainstream institutions have only begun to recognize its value for helping to address pressing global problems like climate change and biodiversity loss, to say nothing of its cultural importance.  

Traditional Indigenous knowledge has helped communities sustainably manage territories and natural resources—from predicting natural disasters to protecting biologically important areas and identifying medicinal plants. Today, more than a quarter of land globally is occupied, managed or owned by Indigenous peoples and local communities, with roughly 80 percent of Earth’s biodiversity located on Indigenous territories. Study after study has confirmed that those lands have better environmental outcomes than alternatives. 

But, just as the links between those outcomes and Indigenous expertise are becoming more widely acknowledged, the communities stewarding this knowledge are coming under increasing threat from land grabbing, rapid cultural changes and other factors.

Then there is the backlash from the right and the left. As traditional Indigenous knowledge has moved into the mainstream alongside science for a better understanding and management of the natural world, critics on all sides have emerged. Some have argued that just as Christian creationism is incompatible with science, so too is traditional knowledge—this argument is widely seen as premised on a misunderstanding about what traditional knowledge is. On the other end of the ideological spectrum, some progressives have balked at the notion that there are fundamental differences between the two systems. 

For a better understanding of what traditional knowledge is, Inside Climate News spoke with Gonzalo Oviedo, an anthropologist and environmental scientist who has worked on social aspects of conservation for more than three decades. This conversation has been lightly edited for clarity and length.

For people who may not know much about traditional knowledge, can you give some examples of what it is? 

One key element of traditional knowledge is the understanding of where key biodiversity areas are located in the landscape where communities have traditionally lived. 

This is exactly what conservation science does: identify areas that contain important genetic resources, or areas that contain important features that influence the rest of the ecosystem. 

Traditional cultures do exactly this with areas that are key for the reproduction of animal species, for conserving water sources or for harboring certain types of plants including medicinal plants. Often, those areas become sacred places that Indigenous communities protect very rigorously. Protecting those key biodiversity areas is one of the most important management practices and it’s based on an understanding of how an ecosystem works in a given area. 

Another element is closely related to the work of botanists, which is the creation of very sophisticated botanic taxonomy (the systematic classification of organisms). There are taxonomic systems generated by Indigenous peoples that are more sophisticated than mainstream taxonomy. In Southeast Asia, for example, Indigenous communities tend to recognize many more varieties of plant subspecies based on their practices and lifeways. They see the plants in a more detailed way and notice more differences. They also have more linguistic terms for diverse shades of green that represent different types of plants. 

A third element is the understanding of the biological succession of forests and other ecosystems. Communities have very detailed knowledge of how ecosystems have changed and evolved over long periods of time. People who live within ecosystems, and in a way where their livelihoods are connected to the ecosystem, are a fundamental source of direct knowledge of how ecosystems evolve. 

In places like the Arctic, where people are dependent on their ability to predict changes in the climate, there has been a lot of important research done with Indigenous communities to systematize their climate knowledge. In dry land climates, where traditional communities are very vulnerable to changes in precipitation, they’ve identified key biodiversity areas that serve as reservoirs for periods when droughts are prolonged and these communities strictly protect those reservoirs. Fishing communities in the Pacific are extremely knowledgeable about marine biodiversity and the management of those ecosystems.

What developments have contributed to more mainstream acceptance of traditional knowledge? It’s hard to imagine that Indigenous peoples’ advocacy for stronger protection of their rights hasn’t played a role. Have there been other developments contributing to the growing recognition of the value of this knowledge system to global conservation efforts? 

The process of integrating traditional knowledge into the mainstream is still relatively new. Only in the last 20 years or so has there been more significant progress on this. The Convention on Biological Diversity, the CBD, entered into force in 1993 and has a very important provision in Article 8(j) on the recognition of traditional knowledge and the need to “respect, preserve and maintain” it. As a result of that provision, there has been a lot of interest in how to integrate that into public policy, biodiversity management and related fields. 

The evolution of nature conservation paradigms in the last 20 to 30 years or so has also been an important driver. Three decades ago it was still very difficult to get conservation organizations to recognize that the traditional knowledge of Indigenous and local communities is a positive factor for conservation and that working together with those communities is fundamental. Today, the conservation movement universally agrees to this.

When you say “evolution of nature conservation paradigms,” are you referring to the shift away from “fortress conservation,” or the model where protected areas were fenced off and Indigenous and local communities removed from their traditional lands in the name of conservation? 

There have been several factors contributing to the change and moving away from the fortress conservation concept to inclusive conservation has been one of them. By inclusive I mean the understanding that Indigenous and community held lands are better protected through traditional management practices and the value of traditional knowledge associated with that. 

It is also better recognized today that working for sustainable livelihoods like subsistence farming and harvesting is good for conservation. In the past, livelihood activities were seen as a threat to conservation. Today, it is widely accepted that by supporting sustainable livelihoods, you’re supporting conservation as well. 

Also, today, it is recognized that humans have always managed ecosystems. The concept of “empty wilderness” is no longer viable for conservation and it’s not true in most parts of the world. These are several ways that the conservation paradigm is evolving. It’s safe to say that not everyone is on the same page. But things are evolving in the direction of inclusiveness. 

What are some of the biggest challenges to ensuring that traditional knowledge is protected and, if approved by communities, transmitted for use in mainstream conservation efforts?

There are two main challenges. One relates to how other knowledge systems see traditional knowledge. 

This is essentially the problem of getting people to understand what traditional knowledge is, and overcoming unhelpful and incorrect stereotypes about it. For example, some people say that, unlike science, traditional knowledge is not based on evidence or is not based on credible scientific processes that allow for verification. That is not necessarily true. 

There are, of course, differences between traditional knowledge and scientific knowledge. Traditional knowledge tends to use more qualitative methods and less quantitative approaches and methodologies compared to what science does today. 

But there are several aspects in which both are quite similar. To start, the key motivation in both systems is problem solving. The intellectual process of both sometimes works through comparisons and applies methods of trial and error. You also have in both the process of moving from practical knowledge to abstraction, and also feedback looping and adaptive learning.

Misunderstandings or stereotypes about what traditional knowledge is have led to unfriendly public policies in natural resource management and education systems. 

To address this, institutions like the Intergovernmental Panel on Climate Change (IPCC) and the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) need to continue collecting evidence and information about traditional knowledge and communicate its value to policymakers. That is still a fundamental need. 

A second major challenge relates to the erosion of the intergenerational transmission of traditional knowledge. That transmission mostly happens through oral systems that require direct physical contact between different generations. That is being lost because of demographic changes, migration and the use of formal education systems that take children into schools and separate them physically from transmitters of traditional knowledge. This is a serious problem but there are examples of helpful actions that have been implemented in places like Ecuador, where the formal education system works together with Indigenous communities under an inter-cultural model. 

Another aspect of this is the loss of knowledge. If there is lack of transmission or insufficient transmission between generations, when elders die a significant amount of knowledge dies with them.

Cultural change is also a factor. People are coming into contact with other forms of knowledge, some that are presented in a more dynamic way, like on television, and that tend to capture the attention of younger people. 

The pace of change is happening so fast. Traditional knowledge is transmitted slowly through in person contact and in the context of daily life. If the pace of cultural change isn’t managed, and communities aren’t supported in their maintenance of knowledge transmission, then that knowledge will be irreversibly lost. 

There has been some pushback to the incorporation of traditional knowledge alongside science in policy making and into education curriculums. Critics have analogized traditional knowledge with “creationism.”  What do you make of this? 

It’s important to understand precisely what traditional knowledge is and to differentiate it from spirituality. 

Communities often connect spirituality with traditional knowledge. Spirituality is part of the traditional life of the communities, but spirituality is not in itself traditional knowledge. For example, people in fishing communities in Laos that live around wetlands have a sophisticated knowledge of how wetlands function. They have fished and taken resources from the wetlands for generations.

Based on their traditional knowledge of the wetlands, they understand the need for rules to avoid depletion of fish populations by preserving key areas for reproduction and ecological processes. They have developed a set of norms so people understand they cannot fish in certain areas, and those norms take place through spirituality. They say, “You can’t fish in this area because this is where our spirits live and these spirits shouldn’t be interfered with.” This becomes a powerful norm because it connects with a deep spiritual value of the community. 

This doesn’t mean that when recognizing the traditional knowledge of the community, one has to take the topic of the spirits as knowledge that has to be validated. The spiritual aspect is the normative part, articulated around beliefs; it is not the knowledge itself. The same goes for practices protecting key biodiversity areas. Traditional cultures all over the world have sacred sites and waters, and these are based on some knowledge of how the ecosystem works and of the need to protect key and sensitive areas. Traditional knowledge is essentially practical problem solving, and it develops through empirical processes of observation and experience. You have to distinguish it from spirituality, which develops through stories, myths and visions from spiritual leaders.

The relationship between knowledge and spiritual beliefs happened in a similar way in the history of western science and with traditional Chinese medicine. Historically, you will find that Chinese medical science was intimately linked to Taoist religion and Confucianism. Yet the value of Chinese medicine doesn’t mean that you have to adopt Taoism or Confucianism. It takes a long time for societies to understand how to distinguish these things because their connections are very complex. 

What is at stake if traditional knowledge is lost? 

First, that would be a loss for all of humanity. There has been recent research showing that traditional knowledge can benefit the whole of society if understood and transmitted to other knowledge systems.

There are certain aspects of traditional knowledge that, if lost, will be difficult to recover, like elements of botanic taxonomy that are not recorded. Losing them means losing an important part of human knowledge.

Second, traditional knowledge is important for cultures that have generated and use that knowledge, especially for their adaptation to climatic and other changes. If properly recognized and supported, that knowledge can be a factor of positive development and evolution for those communities. Change is happening everywhere and will continue to happen in traditional societies. But there are different types of cultural change and some are destructive to traditional communities, like the absorption of invasive external values and mythologies that completely destroy young peoples’ cultural background and erode the fabrics of traditional societies. 

There is also cultural change that can be positive if it is well managed. Young peoples’ use of technology could be a good source of change if it is used to help maintain and transmit their traditional culture. That can prompt pride and value in communities, and promote intercultural understanding which is fundamental in a world where there is still so much cultural discrimination against Indigenous peoples and a lack of understanding of their cultures and value systems.

Traditional knowledge can play an important role in intercultural dialogues. We need healing processes within societies so that cultures can speak to each other on equal footing, which unfortunately isn’t the case in many places today. 

Katie Surma – Reporter, Pittsburgh

Katie Surma is a reporter at Inside Climate News focusing on international environmental law and justice. Before joining ICN, she practiced law, specializing in commercial litigation. She also wrote for a number of publications and her stories have appeared in the Washington Post, USA Today, Chicago Tribune, Seattle Times and The Associated Press, among others. Katie has a master’s degree in investigative journalism from Arizona State University’s Walter Cronkite School of Journalism, an LLM in international rule of law and security from ASU’s Sandra Day O’Connor College of Law, a J.D. from Duquesne University, and was a History of Art and Architecture major at the University of Pittsburgh. Katie lives in Pittsburgh, Pennsylvania, with her husband, Jim Crowell.

The Weather Man (Stanford Magazine)

Daniel Swain studies extreme floods. And droughts. And wildfires. Then he explains them to the rest of us.

February 6, 2024

An illustration of Daniel Swain walking through the mountains and clouds.

By Tracie White

Illustrations by Tim O’Brien

7:00 a.m., 45 degrees F

The moment Daniel Swain wakes up, he gets whipped about by hurricane-force winds. “A Category 5, literally overnight, hits Acapulco,” says the 34-year-old climate scientist and self-described weather geek, who gets battered daily by the onslaught of catastrophic weather headlines: wildfires, megafloods, haboobs (intense dust storms), atmospheric rivers, bomb cyclones. Everyone’s asking: Did climate change cause these disasters? And, more and more, they want Swain to answer.

Swain, PhD ’16, rolls over in bed in Boulder, Colo., and checks his cell phone for emails. Then, retainer still in his mouth, he calls back the first reporter of the day. It’s October 25, and Isabella Kwai at the New York Times wants to know whether climate change is responsible for the record-breaking speed and ferocity of Hurricane Otis, which rapidly intensified and made landfall in Acapulco as the eastern Pacific’s strongest hurricane on record. It caught everyone off guard. Swain posted on X (formerly known as Twitter) just hours before the storm hit: “A tropical cyclone undergoing explosive intensification unexpectedly on final approach to a major urban area . . . is up there on list of nightmare weather scenarios becoming more likely in a warming #climate.”

Swain is simultaneously 1,600 miles away from the tempest and at the eye of the storm. His ability to explain science to the masses—think the Carl Sagan of weather—has made him one of the media’s go-to climate experts. He’s a staff research scientist at UCLA’s Institute of the Environment and Sustainability who spends more than 1,100 hours each year on public-facing climate and weather communication, explaining whether (often, yes) and how climate change is raising the number and exacerbating the viciousness of weather disasters. “I’m a physical scientist, but I not only study how the physics and thermodynamics of weather evolve but how they affect people,” says Swain. “I lead investigations into how extreme events like floods and droughts and wildfires are changing in a warming climate, and what we might do about it.”

He translates that science to everyday people, even as the number of weather-disaster headlines grows each year. “To be quite honest, it’s nerve-racking,” says Swain. “There’s such a demand. But there’s a climate emergency, and we need climate scientists to talk to the world about it.”

No bells, no whistles. No fancy clothes, makeup, or vitriolic speech. Sometimes he doesn’t even shave for the camera. Just a calm, matter-of-fact voice talking about science on the radio, online, on TV. In 2023, he gave nearly 300 media interviews—sometimes at midnight or in his car. The New York Times, CNN, and BBC keep him on speed dial. Social media is Swain’s home base. His Weather West blog reaches millions. His weekly Weather West “office hours” on YouTube are public and interactive, doubling as de facto press conferences. His tweets reach 40 million people per year. “I don’t think that he appreciates fully how influential he is on the public understanding of weather events, certainly in California but increasingly around the world,” says Stanford professor of earth system science Noah Diffenbaugh, ’96, MS ’97, Swain’s doctoral adviser and mentor. “He’s such a recognizable presence in newspapers and radio and television. Daniel’s the only climate scientist I know who’s been able to do that.”

Illustration of Daniel Swain's reflection in a puddle.

There’s no established job description for climate communicator—what Swain calls himself—and no traditional source of funding. He’s not particularly a high-energy person, nor is he naturally gregarious; in fact, he has a chronic medical condition that often saps his energy. But his work is needed, he says. “Climate change is an increasingly big part of what’s driving weather extremes today,” Swain says. “I connect the dots between the two. There’s a lot of misunderstanding about how a warming climate affects day-to-day variations in weather, but my goal is to push public perception toward what the science actually says.” So when reporters call him, he does his best to call them back. 


7:30 a.m., winds at 5 mph from the east northeast

Swain finishes the phone call with the Times reporter and schedules a Zoom interview with Reuters for noon. Then he brushes his teeth. He’s used to a barrage of requests when there’s a catastrophic weather event. Take August 2020, when, over three days, California experienced 14,000 lightning strikes from “dry” thunderstorms. More than 650 reported wildfires followed, eventually turning the skies over San Francisco a dystopian orange. “In a matter of weeks, I did more than 100 interviews with television, radio, and newspaper outlets, and walked a social media audience of millions through the disaster unfolding in their own backyards,” he wrote in a recent essay for Nature.

Swain’s desire to understand the physics of weather stretches back to his preschool years. In 1993, his family moved from San Francisco across the Golden Gate Bridge to San Rafael, and the 4-year-old found himself wondering where all that Bay City fog had gone. Two years later, Swain spent the first big storm of his life under his parents’ bed. He lay listening to screeching 100 mile-per-hour winds around his family’s home, perched on a ridge east of Mount Tamalpais. But he was more excited than scared. The huge winter storm of 1995 that blew northward from San Francisco and destroyed the historic Conservatory of Flowers just got 6-year-old Swain wired.

‘Climate change is an increasingly big part of what’s driving weather extremes today. I connect the dots between the two.’

“To this day, it’s the strongest winds I’ve ever experienced,” he says. “It sent a wind tunnel through our house.” It broke windows. Shards of glass embedded in one of his little brother’s stuffies, which was sitting in an empty bedroom. “I remember being fascinated,” he says. So naturally, when he got a little older, he put a weather station on top of that house. And then, in high school, he launched his Weather West blog. “It was read by about 10 people,” Swain says, laughing. “I was a weather geek. It didn’t exactly make me popular.” Two decades, 550 posts, and 2 million readers later, well, who’s popular now?

Swain graduated from UC Davis with a bachelor’s degree in atmospheric science. He knew then that something big was happening on the weather front, and he wanted to understand how climate change was influencing the daily forecast. So at Stanford, he studied earth system science and set about using physics to understand the causes of changing North Pacific climate extremes. “From the beginning, Daniel had a clear sense of wanting to show how climate change was affecting the weather conditions that matter for people,” says Diffenbaugh. “A lot of that is extreme weather.” Swain focused on the causes of persistent patterns in the atmosphere—long periods of drought or exceptionally rainy winters—and how climate change might be exacerbating them.

The first extreme weather event he studied was the record-setting California drought that began in 2012. He caught the attention of both the media and the scientific community after he coined the term Ridiculously Resilient Ridge, referring to a persistent ridge of high pressure caused by unusual oceanic warmth in the western tropical Pacific Ocean. That ridge was blocking weather fronts from bringing rain into California. The term was initially tongue-in-cheek. Today the Ridiculously Resilient Ridge (aka RRR or Triple R) has a Wikipedia page.

“One day, I was sitting in my car, waiting to pick up one of my kids, reading the news on my phone,” says Diffenbaugh. “And I saw this article in the Economist about the drought. It mentioned this Ridiculously Resilient Ridge. I thought, ‘Oh, wow, that’s interesting. That’s quite a branding success.’ I click on the page and there’s a picture of Daniel Swain.”

Diffenbaugh recommended that Swain write a scientific paper about the Ridiculously Resilient Ridge, and Swain did, in 2014. By then, the phrase was all over the internet. “Journalists started calling while I was still at Stanford,” says Swain. “I gave into it initially, and the demand just kept growing from there.”


11:45 a.m., precipitation 0 inches

Swain’s long, lanky frame is seated ramrod straight in front of his computer screen, scrolling for the latest updates about Hurricane Otis. At noon, he signs in to Zoom and starts answering questions again.

Reuters: “Hurricane Otis wasn’t in the forecast until about six to 10 hours before it occurred. What would you say were the factors that played into its fierce intensification?”

Swain: “Tropical cyclones, or hurricanes, require a few different ingredients. I think the most unusual one was the warmth of water temperature in the Pacific Ocean off the west coast of Mexico. It’s much higher than usual. This provided a lot of extra potential intensity to this storm. We expect to see increases in intensification of storms like this in a warming climate.”

Swain’s dog, Luna, bored by the topic, snores softly. She’s asleep just behind him, next to a bookshelf filled with weather disaster titles: The Terror by Dan Simmons; The Water Will Come by Jeff Goodell; Fire Weather by John Vaillant. And the deceptively hopeful-sounding Paradise by Lizzie Johnson, which tells the story of the 2018 Camp Fire that burned the town of Paradise, Calif., to the ground. Swain was interviewed by Johnson for the book. The day of the fire, he found himself glued to the comment section of his blog, warning anyone who asked about evacuation to get out.

“During the Camp Fire, people were commenting, ‘I’m afraid. What should we do? Do we stay or do we go?’ Literally life or death,” he says. He wrote them back: “There is a huge fire coming toward you very fast. Leave now.” As they fled, they sent him progressively more horrifying images of burning homes and trees like huge, flaming matchsticks. “This makes me extremely uncomfortable—that I was their best bet for help,” says Swain.

Swain doesn’t socialize much. He doesn’t have time. His world revolves around his home life, his work, and taking care of his health. He has posted online about his chronic health condition, Ehlers-Danlos syndrome, a heritable connective tissue disease that, for him, results in fatigue, gastrointestinal problems, and injuries—he can partially dislocate a wrist mopping the kitchen floor. He works to keep his health condition under control when he has down time, traveling to specialists in Utah, taking medications and supplements, and being cautious about any physical activity. When he hikes in the Colorado Rocky Mountains, he’s careful and tries to keep his wobbly ankles from giving out. Doctors don’t have a full understanding of EDS. So, Swain researches his illness himself, much like he does climate science, constantly looking for and sifting through new data, analyzing it, and sometimes sharing what he discovers online with the public. “If it’s this difficult to parse even as a professional scientist and science communicator, I can only imagine how challenging this task is for most other folks struggling with complex/chronic illnesses,” he wrote on Twitter. 

‘“There is a huge fire coming toward you very fast. Leave now.” This makes me extremely uncomfortable—that I was their best bet for help. ’

It helps if he can exert some control over his own schedule to minimize fatigue. The virtual world has helped him do that. He mostly works from a small, extra bedroom in an aging rental home perched at an elevation of 5,400 feet in Boulder, where he lives with his partner, Jilmarie Stephens, a research scientist at the University of Colorado Boulder.

When Swain was hired at UCLA in 2018, Peter Kareiva, the then director of the Institute of the Environment and Sustainability, supported a nontraditional career path that would allow Swain to split his time between research and climate communication, with the proviso that he find grants to fund much of his work. That same year, Swain was invited to join a group at the National Center for Atmospheric Research (NCAR) located in Boulder, which has two labs located at the base of the Rocky Mountains. 

“Daniel had a very clear vision about how he wanted to contribute to science and the world, using social media and his website,” says Kareiva, a research professor at UCLA. “We will not solve climate change without a movement, and communication and social media are key to that. Most science papers are never even read. What we do as scientists only matters if it has an impact on the world. We need at least 100 more Daniels.”

And yet financial support for this type of work is never assured. In a recent essay in Nature, Swain writes about what he says is a desperate need for more institutions to fund climate communication by scientists. “Having a foot firmly planted in both research and public-engagement worlds has been crucial,” he writes. “Even as I write this, it’s unclear whether there will be funding to extend my present role beyond the next six months.”


4:00 p.m., 67 degrees F

“Ready?” says the NBC reporter on the computer screen. “Can we just have you count to 10, please?”

“Yep. One, two, three, four, five, six, seven, eight, nine, 10,” Swain says.

“Walk me through in a really concise way why we saw this tropical storm, literally overnight, turn into a Category 5 hurricane, when it comes to climate change,” the reporter says.

“So, as the Earth warms, not only does the atmosphere warm or air temperatures increase, but the oceans are warming as well. And because warm tropical oceans are hurricane fuel, the maximum potential intensity of hurricanes is set by how warm the oceans are,” Swain says.
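[Note: the “maximum potential intensity” Swain mentions is a quantity hurricane researchers actually compute, not just a figure of speech. One widely used formulation, Emanuel’s potential intensity (supplied here for context; the article itself does not spell out the formula), caps the square of the peak sustained wind speed as

\[
V_p^{2} \;=\; \frac{C_k}{C_D}\,\frac{T_s - T_o}{T_o}\,\bigl(k_s^{*} - k\bigr),
\]

where T_s is the sea surface temperature, T_o the outflow temperature high in the storm, C_k/C_D the ratio of the enthalpy and drag exchange coefficients, and k_s^* - k the enthalpy difference between the saturated sea surface and the overlying air. Warmer water raises both T_s and k_s^*, which is the sense in which warm oceans are “hurricane fuel.”]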

An hour later, Swain lets Luna out and prepares for the second half of his day: He’ll spend the next five hours on a paper for a science journal. It’s a review of research on weather whiplash in California—the phenomenon of rapid swings between extremes, such as the 2023 floods that came on the heels of a severe drought. Using atmospheric modeling, Swain predicted in a 2018 Nature Climate Change study that there would be a 25 percent to 100 percent increase in extreme dry-to-wet precipitation events in the years ahead. Recent weather events support that hypothesis, and Swain’s follow-up research analyzes the ways those events are seriously stressing California’s water storage and flood control infrastructure.

“What’s remarkable about this summer is that the record-shattering heat has occurred not only over land but also in the oceans,” Swain explained in an interview with Katie Couric on YouTube in August, “like the hot tub [temperature] water in certain parts of the shallow coastal regions off the Gulf of Mexico.” In a warming climate, the atmosphere acts as a kitchen sponge, he explains later. It soaks up water but also wrings it out. The more rapid the evaporation, the more intense the precipitation. When it rains, there are heavier downpours and more extreme flood events.

‘What we do as scientists only matters if it has an impact on the world. We need at least 100 more Daniels.’

“It really comes down to thermodynamics,” he says. The increasing temperatures caused by greenhouse gases lead to more droughts, but they also cause more intense precipitation. The atmosphere is thirstier, so it takes more water from the land and from plants. The sponge holds more water vapor. That’s why California is experiencing these wild alternations, he says, from extremely dry to extremely wet. “It explains the role climate change plays in turning a tropical storm overnight into hurricane forces,” he says.
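[Note: the sponge analogy has a standard quantitative core that the article leaves implicit: the Clausius-Clapeyron relation, which sets how much water vapor saturated air can hold as temperature rises. As a back-of-the-envelope sketch (the symbols and constants below are textbook values, not taken from the article):

\[
\frac{1}{e_s}\frac{de_s}{dT} \;=\; \frac{L_v}{R_v T^{2}} \;\approx\; \frac{2.5\times 10^{6}\ \mathrm{J\,kg^{-1}}}{\bigl(461.5\ \mathrm{J\,kg^{-1}\,K^{-1}}\bigr)\,(288\ \mathrm{K})^{2}} \;\approx\; 0.065\ \mathrm{K^{-1}},
\]

that is, the saturation vapor pressure e_s rises by roughly 6 to 7 percent for each degree Celsius of warming near typical surface temperatures. A warmer atmosphere can therefore both pull more water out of land and plants and drop more of it in a single downpour.]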


October 26, expected high of 45 degrees F

In 2023, things got “ludicrously crazy” for both Swain and the world. It was the hottest year in recorded history. Summer temperatures broke records worldwide. The National Oceanic and Atmospheric Administration reported 28 confirmed weather/climate disaster events with losses exceeding $1 billion—among them a drought, four flooding events, 19 severe storm events, two tropical cyclones, and a killer wildfire. Overall, catastrophic weather events resulted in the deaths of 492 people in the United States. “Next year may well be worse than that,” Swain says. “It’s mind-blowing when you think about that.” 

“There have always been floods and wildfires, hurricanes and storms,” Swain continues. “It’s just that now, climate change plays a role in most weather disasters”—pumped-up storms, more intense and longer droughts and wildfire seasons, and heavier rains and flooding. It also plays a role in our ability to manage those disasters, Swain says. In a 2023 paper he published in Communications Earth & Environment, for example, he provides evidence that climate change is shifting the ideal timing of prescribed burns (which help mitigate wildfire risk) from spring and autumn to winter.

The day after Hurricane Otis strikes, Swain’s schedule has calmed down, so he takes time to make the short drive from his home up to the NCAR Mesa Lab, situated in a majestic spot where the Rocky Mountains meet the plains. Sometimes he’ll sit in his Hyundai in the parking lot, looking out his windshield at the movements of the clouds while doing media interviews on his cell phone. Today he scrolls through weather news updates on the aftermath of Hurricane Otis, keeping informed for the next interview that pops up, or his next blog post. In total, 52 people will be reported dead due to the disaster. The hurricane destroyed homes and hotels, high-rises and hospitals. Swain’s name will appear in at least a dozen stories on Hurricane Otis, including one by David Wallace-Wells, an opinion writer for the New York Times, columnist for the New York Times Magazine, and bestselling author of The Uninhabitable Earth: Life After Warming. “It’s easy to get pulled into overly dramatic ways of looking at where the world is going,” says Wallace-Wells, who routinely listens to Swain’s office hours and considers him a key source when he needs information on weather events. “Daniel helps people know how we can better calibrate those fears with the use of scientific rigor. He’s incredibly valuable.”

From the parking lot in the mountains, Swain often watches the weather that blows across the wide-open plains that stretch for hundreds of miles, all the way to the Mississippi River. He never tires of examining weather in real time, learning from it. He studies the interplay between the weather and the clouds at this spot where storms continually roll in and roll out.

“After all these years,” he says, “I’m still a weather geek.” 


Tracie White is a senior writer at Stanford. Email her at traciew@stanford.edu.

The Causes of Climate Change (Psychology Today)

Human-caused climate change is not our main challenge: It is certain values.

Ilan Kelman Ph.D.

Posted February 21, 2021 

We are told that 2030 is a significant year for global sustainability targets. What could we really achieve comprehensively from now until then, especially with climate change dominating so many discussions and proposals?

More sustainable transport on water and land, with many advantages beyond tackling climate change (Leeuwarden, the Netherlands). Source: Photo Taken by Ilan Kelman

Several United Nations agreements use 2030 for their timeframe, including the Sustainable Development Goals, the Sendai Framework for Disaster Risk Reduction, the Paris Agreement for tackling human-caused climate change, and the Addis Ababa Action Agenda on Financing for Development. Aside from the oddity of having separate agreements with separate approaches from separate agencies to achieve similar goals, climate change is often explicitly separated as a topic. Yet it brings little that is new to the overall and fundamental challenges causing our sustainability troubles.

Consider what would happen if tomorrow we magically reached exactly zero greenhouse gas emissions. Overfishing would continue unabated through what is termed illegal, unreported, and unregulated fishing, often in protected areas such as Antarctic waters. Demands from faraway markets would still devastate nearshore marine habitats and undermine local practices serving local needs.

Deforestation would also continue. Examples include illegal logging in protected areas of Borneo and slash-and-burn clearing in the Amazon rainforest, often to grow products for supermarket shelves appealing to affluent populations. Environmental exploitation and ruination did not begin with, and are not confined to, climate change.

A similar ethos persists for human exploitation. No matter how awful the harm, human trafficking, organ harvesting, child marriage, child labour, female genital mutilation, and arms deals would not end just because greenhouse gas emissions did.

If we solved human-caused climate change, then humanity—or, more to the point, certain sectors of humanity—would nonetheless continue to wreck people and ecosystems. This destruction comes from a value favouring immediate exploitation of any resource without worrying about long-term costs. It sits alongside the value of choosing to live out of balance with the natural environment from local to global scales.

These are exactly the same values causing the climate to change quickly and substantially due to human activity. In effect, it is about using fossil fuels as a resource as rapidly as possible, irrespective of the negative social and environmental consequences.

Changing these values represents the fundamental challenge. Doing so ties together all the international efforts and agreements.

The natural environment, though, does not exist in isolation from us. Human beings have never been separate from nature, even when we try our best to divorce society from the natural environments around us. Our problematic values are epitomised by seeing nature as being at our service, different or apart from humanity.

Human-caused climate change is one symptom among many of such unsustainable and destructive values. Referring to the “climate crisis” or “climate emergency” is misguided since similar crises and emergencies manifest for similar reasons, including overfishing, deforestation, human exploitation, and an industry selling killing devices.

The real crisis and the real emergency are certain values. These values lead to behaviour and actions which are the antithesis of what the entire 2030 agenda aims to achieve. We do a disservice to ourselves and our place in the environment by focusing on a single symptom, such as human-caused climate change.

Revisiting our values entails seeking foundations for what we want by 2030—and, more importantly, beyond. One of our biggest losses is in caring: caring for ourselves and for people and environments. Dominant values promote inward-looking, short-term thinking for action yielding immediate, superficial, and short-lived gains.

We ought to pivot sectors with these values toward caring about the long-term future, caring for people, caring for nature, and especially caring for ourselves—all of us—within and connected to nature. A caring pathway to 2030 is helpful, although we also need an agenda mapping out a millennium (and more) beyond this arbitrary year. Rather than using “social capital” and “natural capital” to define people and the environment, and rather than treating our skills and efforts as commodities, our values must reflect humanity, caring, integration with nature, and many other underpinning aspects.

When we fail to do so, human-caused climate change shows what results, but it is only one example among many. Placing climate change on a pedestal as the dominant or most important topic distracts from the depth and breadth required to identify problematic values and then morph them into constructive ones.

Focusing on the values that cause climate change and all the other ills is a baseline for reaching and maintaining sustainability. Then, we would not only solve human-caused climate change and achieve the 2030 agenda, but we would also address so much more for so much longer.

With the World Stumbling Past 1.5 Degrees of Warming, Scientists Warn Climate Shocks Could Trigger Unrest and Authoritarian Backlash (Inside Climate News)

Most of the public seems unaware that global temperatures will soon push past the target to which the U.N. hoped to limit warming, but researchers see social and psychological crises brewing.

By Bob Berwyn

January 28, 2024

Activists march in protest on day nine of the COP28 Climate Conference on Dec. 9, 2023 in Dubai, United Arab Emirates. Credit: Sean Gallup/Getty Images

As Earth’s annual average temperature pushes against the 1.5 degree Celsius limit beyond which climatologists expect the impacts of global warming to intensify, social scientists warn that humanity may be about to sleepwalk into a dangerous new era in human history. Research shows that increasing climate shocks could trigger more social unrest and authoritarian, nationalist backlashes.

Established by the 2015 Paris Agreement and affirmed by a 2018 report from the Intergovernmental Panel on Climate Change, the 1.5 degree mark has been a cliff edge that climate action has endeavored to avoid, but the latest analyses of global temperature data showed 2023 teetering on that red line. 

One major dataset suggested that the threshold was already crossed in 2023, and most projections say 2024 will be even warmer. Current global climate policies have the world on a path to heat by about 2.7 degrees Celsius by 2100, which would threaten modern human civilization within the lifespan of children born today.

Paris negotiators were intentionally vague about the endeavor to limit warming to 1.5 degrees, and the Intergovernmental Panel on Climate Change put the goal in the context of 30-year global averages. Earlier this month, the Berkeley Earth annual climate report showed Earth’s average temperature in 2023 at 1.54 degrees Celsius above the 1850-1900 pre-industrial average, marking the first step past the target. 

But it’s barely registering with people who are being bombarded with inaccurate climate propaganda and distracted by the rising cost of living and regional wars, said Reinhard Steurer, a climate researcher at the University of Natural Resources and Life Sciences, Vienna.

“The real danger is that there are so many other crises around us that there is no effort left for the climate crisis,” he said. “We will find all kinds of reasons not to put more effort into climate protection, because we are overburdened with other things like inflation and wars all around us.”

Steurer said he doesn’t expect any official announcement from major climate institutions until long after the 1.5 degree threshold is actually crossed, when some years will probably already be edging toward 2 degrees Celsius. “I think most scientists recognize that 1.5 is gone,” he said.

“We’ll be doing this for a very long time,” he added, “not accepting facts, pretending that we are doing a good job, pretending that it’s not going to be that bad.” 

In retrospect, using the 1.5 degree temperature rise as the key metric of whether climate action was working may have been a bad idea, he said.

“It’s language nobody really understands, unfortunately, outside of science,” he said. “You always have to explain that 1.5 means a climate we can adapt to and manage the consequences, 2 degrees of heating is really dangerous, and 3 means collapse of civilization.”

Absent any formal notification of breaching the 1.5 goal, he hopes more scientists talk publicly about worst-case outcomes.

“It would really make a difference if scientists talked more about societal collapse and how to prepare for that because it would signal, now it’s getting real,” he said. “It’s much more tangible than 1.5 degrees.”

Instead, recent public climate discourse was dominated by feel-good announcements about how COP28 kept the 1.5 goal alive, he added.

“This is classic performative politics,” he said. “If the fossil fuel industry can celebrate the outcome of the COP, that’s not a good sign.”

Like many social scientists, Steurer is worried that the increasingly severe climate shocks that warming greater than 1.5 degrees brings will reverberate politically as people reach for easy answers.

“That is usually denial, in particular when it comes to right-wing parties,” he said. “That’s the easiest answer you can find.” 

“Global warming will be catastrophic sooner or later, but for now, denial works,” he said. “And that’s all that matters for the next election.”

‘Fear, Terror and Anxiety’

Social policy researcher Paul Hoggett, professor emeritus at the University of the West of England in Bristol, said the scientific roots of the 1.5-degree target date back to research in the early 2000s that culminated in a University of Exeter climate conference at which scientists first spelled out the risks of triggering irreversible climate tipping points above that level of warming.

“I think it’s still seen very much as that key marker of where we move from something which is incremental, perhaps to something which ceases to be incremental,” he said. “But there’s a second reality, which is the reality of politics and policymaking.” 

The first reality is “profoundly disturbing,” but in the political world, 1.5 is a symbolic marker, he said.

“It’s more rhetorical; it’s a narrative of 1.5,” he said, noting the disconnect of science and policy. “You almost just shrug your shoulders. As the first reality worsens, the political and cultural response becomes more perverse.” 

A major announcement about breaching the 1.5 mark could be met with extreme denial in today’s political and social climate, marked by “a remorseless rise of authoritarian forms of nationalism,” he said. “Even an announcement from the Pope himself would be taken as just another sign of a global elite trying to pull the wool over our eyes.”

An increasing number of right-wing narratives simply see this as a set of lies, he added.

“I think this is a huge issue that is going to become more and more important in the coming years,” he said. “We’re going backwards to where we were 20 years ago, when there was a real attempt to portray climate science as misinformation. More and more right-wing commentators will portray what comes out of the IPCC, for example, as just a pack of lies.”

The IPCC’s reports represent a basic tenet of modernity—the idea that there is no problem for which a solution cannot be found, he said.

“However, over the last 100 years, this assumption has periodically been put to the test and has been found wanting,” Hoggett wrote in a 2023 paper. The climate crisis is one of those situations with no obvious solution, he wrote. 

In a new book, Paradise Lost? The Climate Crisis and the Human Condition, Hoggett says the climate emergency is one of the big drivers of authoritarian nationalism, which plays on the terror and anxiety the crisis inspires.

“Those are crucial political and individual emotions,” he said. “And it’s those things that drive this non-rational refusal to see what’s in front of your eyes.”

“At times of such huge uncertainty, a veritable plague of toxic public feelings can be unleashed, which provide the affective underpinning for political movements such as populism, authoritarianism, and totalitarianism,” he said.

“When climate reality starts to get tough, you secure your borders, you secure your own sources of food and energy, and you keep out the rest of them. That’s the politics of the armed lifeboat.” 

The Emotional Climate

“I don’t think people like facing things they can’t affect,” said psychotherapist Rebecca Weston, co-president of the Climate Psychology Alliance of North America. “And in trauma, people do everything that they possibly can to stop feeling what is unbearable to feel.”

That may be one reason why the imminent breaching of the 1.5 degree limit may not stir the public, she said.

“We protect ourselves from fear, we protect ourselves from deep grief on behalf of future generations and we protect ourselves from guilt and shame. And I think that the fossil fuel industry knows that,” she said. “We can be told something over and over and over again, but if we have an identity and a sense of ourselves tied up in something else, we will almost always refer to that, even if it’s at the cost of pretending that something that is true is not true.”

Such deep disavowal is part of an elaborate psychological system for coping with the unbearable. “It’s not something we can just snap our fingers and get ourselves out of,” she said.

People who point out the importance of the 1.5-degree warming limit are resented because they are intruding on people’s psychological safety, she said, and they become pariahs. “The way societies enforce this emotionally is really very striking,” she added.

But how people will react to passing the 1.5 target is hard to predict, Weston said.

“I do think it revolves around the question of agency and the question of meaning in one’s life,” she said. “And I think that’s competing with so many other things that are going on in the world at the same time, not coincidentally, like the political crises that are happening globally, the shift to the far right in Europe, the shift to the far right in the U.S. and the shift in Argentina.”

Those are not unrelated, she said, because a lack of agency produces a yearning for false, exclusionary solutions and authoritarianism. 

“If there’s going to be something that keeps me up at night, it’s not the 1.5. It’s the political implications of that feeling of helplessness,” she said. “People will do an awful lot to avoid feeling helpless. That can mean they deny the problem in the first place. Or it could mean that they blame people who are easier targets, and there is plenty of that to witness happening in the world. Or it can be utter and total despair, and a turning inward and into a defeatist place.”

She said reaching the 1.5 limit will sharpen questions about addressing the problem politically and socially. 

“I don’t think most people who are really tracking climate change believe it’s a question of technology or science,” she said. “The people who are in the know, know deeply that these are political and social and emotional questions. And my sense is that it will deepen a sense of cynicism and rage, and intensify the polarization.”

Unimpressed by Science

Watching the global temperature surge past the 1.5 degree mark without much reaction from the public reinforces the idea that the focus on the physical science of climate change in recent decades came at the expense of studying how people and communities will be affected by and react to global warming, said sociologist and author Dana Fisher, a professor in the School of International Service at American University and director of its Center for Environment, Community, and Equity.

“It’s a fool’s errand to continue down that road right now,” she said. “It’s been an abysmal ratio of funds that are going to understand the social conflict that’s going to come from climate shocks, the climate migration and the ways that social processes will have to shift. None of that has been done.”

Passing the 1.5 degree threshold will “add fuel to the fire of the vanguard of the climate movement,” she said. “Groups that are calling for systemic change, that are railing against incremental policy making and against business as usual are going to be empowered by this information, and we’re going to see those people get more involved and be more confrontational.”

And based on the historical record, a rise in climate activism is likely to trigger a backlash, a dangerous chain reaction that she outlined in her new book, Saving Ourselves: From Climate Shocks to Climate Action.

“When you see a big cycle of activism growing, you get a rise in counter-movements, particularly as activism becomes more confrontational, even if it’s nonviolent, like we saw during the Civil Rights period,” she said. “And it will lead to clashes.”

Looking at the historical record, she said, shows that repressive crackdowns on civil disobedience are often where the violence starts. There are signs that pattern will repeat, with police raids and even pre-emptive arrests of climate activists in Germany, and similar repressive measures in the United Kingdom and other countries.

“I think that’s an important story to talk about, that people are going to push back against climate action just as much as they’re going to push for it,” she said. “There are those that are going to feel like they’re losing privileged access to resources and funding and subsidies.”

A government dealing effectively with climate change would try to deal with that by making sure there were no clear winners and losers, she said, but the climate shocks that come with passing the 1.5 degree mark will worsen and intensify social tensions.

“There will be more places where you can’t go outside during certain times of the year because of either smoke from fires, or extreme heat, or flooding, or all the other things that we know are coming,” she said. “That’s just going to empower more people to get off their couches and become activists.”

‘A Life or Death Task For Humanity’

Public ignorance of the planet’s passing the 1.5 degree mark depends on “how long the powers-that-be can get away with throwing up smokescreens and pretending that they are doing something significant,” said famed climate researcher James Hansen, who recently co-authored a paper showing that warming is accelerating at a pace that will result in 2 degrees of warming within a couple of decades.

“As long as they can maintain the 1.5C fiction, they can claim that they are doing their job,” he said. “They will keep faking it as long as the scientific community lets them get away with it.”

But even once the realization of passing 1.5 is widespread, it might not change the social and political responses much, said Peter Kalmus, a climate scientist and activist in California.

“Not enough people care,” he said. “I’ve been a climate activist since 2006. I’ve tried so many things, I’ve had so many conversations, and I still don’t know what it will take for people to care. Maybe they never will.”

Hovering on the brink of this important climate threshold has left Kalmus feeling “deep frustration, sadness, helplessness, and anger,” he said. “I’ve been feeling that for a long time. Now, though, things feel even more surreal, as we go even deeper into this irreversible place, seeming not to care.”

“No one really knows for sure, but it may still be just physically possible for Earth to stay under 1.5C,” he said, “if humanity magically stopped burning fossil fuels today. But we can’t stop fossil fuels that fast even if everyone wanted to. People would die. The transition takes preparation.”

And there are a lot of people who just don’t want to make that transition, he said.

“We have a few people with inordinate power who actively want to continue expanding fossil fuels,” he said. “They are the main beneficiaries of extractive capitalism; billionaires, politicians, CEOs, lobbyists and bankers. And the few people who want to stop those powerful people haven’t figured out how to get enough power to do so.”

Kalmus said he was not a big fan of setting a global temperature threshold to begin with. 

“For me it’s excruciatingly clear that every molecule of fossil fuel CO2 or methane humanity adds to the atmosphere makes irreversible global heating that much worse, like a planet-sized ratchet turning molecule by molecule,” he said. “I think the target framing lends itself to a cycle of procrastination and failure and target moving.”

Meanwhile, climate impacts will continue to worsen into the future, he said.

“There is no upper bound, until either we choose to end fossil fuels or until we simply aren’t organized enough anymore as a civilization to burn much fossil fuel,” he said. “I think it’s time for the movement to get even more radical. Stopping fossil-fueled global heating is a life-or-death task for humanity and the planet, just most people haven’t realized it yet.”

Bob Berwyn – Reporter, Austria

Bob Berwyn is an Austria-based reporter who has covered climate science and international climate policy for more than a decade. Previously, he reported on the environment, endangered species and public lands for several Colorado newspapers, and also worked as editor and assistant editor at community newspapers in the Colorado Rockies.

The Forest Eaters | Rachel Nolan (New York Review of Books)

In 2017, the Brazilian journalist Eliane Brum moved from São Paulo to a small city in the Amazon. Her new book vividly uncovers how the rainforest is illegally seized and destroyed.

February 22, 2024 issue

Rachel Nolan


In August 2017 Eliane Brum, one of Brazil’s best-known journalists, moved from the great metropolis of São Paulo to Altamira, a small, violence-plagued city along the Xingu River in the Amazon. Brum worked for the country’s most respected newspaper, Folha de São Paulo, as well as other smaller news outlets, where she was known for a column called The Life No One Sees, about lives that are usually “reduced to a footnote so tiny it almost slides off the page.” She regularly embedded for long periods of time with those who had no obvious reason to appear in a newspaper: a retired school lunch lady who is slowly dying of cancer, a baggage handler who dreams of taking a flight one day.

Born to Italian immigrants in Brazil, Brum was a single, teenage mother when she began working as a journalist in Florianópolis, a midsize beach city in the south. She wrote news coverage, several nonfiction books, and a novel, and codirected three documentaries. During her time in São Paulo, after covering urban Brazil for decades, she decided that the biggest story—not just in the country, but in the world—was in the rainforest. Her new book’s subtitle is “The Amazon as the Center of the World.” The book is about her move, what pulled her to Altamira, and what she found there—her attempt to radically remake her life, which she calls “reforesting” herself.

About three quarters of the Amazonian population live in towns and cities. Altamira—a city in the state of Pará, a state nearly twice as large as Texas—is not beautiful, it is not picturesque, it is not pleasant. Though the waters of the Xingu River used to run clear, the city is no longer anyone’s idea of an idyllic rainforest outpost. Once a Jesuit mission, it is now a 100,000-strong city of hulking Land Rovers with tinted windows threatening to mow down those poor or reckless enough to walk in the street. It has the dubious distinction of being among Brazil’s most violent cities, worse than Rio de Janeiro, with its famous street crime, where I was scolded within an inch of my life by an elderly stranger for leaving apartment keys and cash folded into a towel on the beach while I went for a solo swim.

Altamira is the territory of the grileiros—whom Brum’s translator, Diane Whitty, glosses as “land grabbers”—and their henchmen. Worth the price of admission is Brum’s detailed explanation of their particular technique of seizing and destroying the Amazon: the grileiros hire private militias to drive out Indigenous peoples, along with anyone else who lives on public preserves in the forest; chop down hardwood trees (illegally—but who is to tell in such a remote area?); and then set the rest on fire. Once that patch of the Amazon is burned, grileiros bring in cattle or plant soybeans to solidify their claim, as well as to turn a profit beyond the value of the stolen land. At the local level, corrupt officials bow to or directly work with the grileiros. The noncorrupt rightly fear them. At the national level, Brazilians have neither the resources nor the will to do much to stop them. Grileiros are, Brum writes with a flourish, “key to understanding the destruction of the rainforest, yesterday, today, always.”

The fires that spread in the Amazon in 2019 and so horrified those of us watching abroad on tiny screens were unusually large, but not unusual in any other way. The Amazon burns continuously in fires set by those working for grileiros, even now, after Jair Bolsonaro, who was elected president in 2018 on a platform of explicit support for the grileiros (his enthusiasm for murdering the rainforest earned him the nickname Captain Chainsaw), was voted out of office. The feverish pace of deforestation of the Bolsonaro years has slowed, dropping by 33.6 percent during the first six months after Luiz Inácio Lula da Silva—known to all as Lula—was inaugurated president for his third term in 2023. But less has changed than those of us rooting for the survival of planet Earth might like: the local dynamics, the destructive ways of making money from the rainforest, the permissiveness and lawlessness have remained the same.

Over the past fifty years, an estimated 17 percent of the Amazon has been turned into cropland or cattle pasture. Many scientists warn that, at around 20 or 25 percent deforestation, the Amazon could reach a tipping point, at which the poetically named “flying rivers” that recycle water vapor from the forest into rain in other areas of South America would cease to fly. Huge areas of the rainforest would turn to scrubby savanna, possibly over only a few decades, with potentially catastrophic effects, like severe droughts in places as far away as the western United States.

Heriberto Araújo, a Spanish journalist who has covered China and Latin America for Agence France-Presse and the Mexican news agency Notimex, among others, wrote in his recent book Masters of the Lost Land that when he traveled the Trans-Amazonian Highway past Altamira and deeper into the state of Pará, he saw not the thick vegetation of rainforest but rolling pastures and fields of soybeans:

While I had vaguely hoped to see a wild jaguar—a beast formerly so common in these forests that pioneers, unafraid, had even domesticated some specimens and treated them like pets—I was disappointed; the sole animal in sight was the humpbacked, floppy-eared, glossy white Nelore cow, the ultimate conqueror of the frontier.

Visitors in the nineteenth century described the Amazon as a wall of sound, loud with the bellows of red howler monkeys and the calls of birds and frogs. Now large areas are silent but for the rustling of cows’ tails as they slap flies—except where chainsaws grate against the remaining trees.

The subject of Brum’s book is not the rainforest itself but the human beings who live in it, logging, burning, farming, gathering, tending, replanting. An estimated 30 million people live in the Amazon. This sounds wrong to some outsiders: Apart from Indigenous groups, shouldn’t the Amazon be empty of humans, the better to leave the plants and animals in peace? (Some go so far as to argue that even the Indigenous should be displaced to cities, echoing anti-Native conservationist ideas throughout history and around the world, including in the US.)

But Brum distinguishes between the human residents of the Amazon who harm their environment, like the grileiros and big cattle, oil, and timber interests, and those who make a less damaging living from farming, gathering, or engaging in renewable or smaller-scale extraction. The latter group, many of whom were driven out by huge development projects like dams, mourn the trees and fish and fruit. Brum thinks that this group should have the right to stay. Her book is an attempt to be more like them, to get up close with those who have merged with the rainforest in a way that she seeks to emulate, and then to try to convey to outsiders what she has heard and felt and learned—with all its sweat and noise and discomfort. She confesses that the “book harbors the desire to make the Amazon a personal matter for those reading it.”

Brum is a useful guide to the people of the Amazon, from the Yanomami in and around Altamira and the “pioneers” who first brought in the cows to the hired guns and the workers who today clear the forest and tend cattle and soy for little or no pay. Some grileiros are small-time cattle rustlers or heads of neo-Pentecostal churches preaching the gospel of prosperity. The most powerful “don’t live in the Amazon or get their hands dirty” at all; they are members of the country’s one percent, from São Paulo, Minas Gerais, and Rio Grande do Sul. “Right now, while I’m writing and you’re reading,” she says, “they might be playing polo or listening to the São Paulo Symphony Orchestra.”

Most victims of the Amazon’s many murders are workers who demand back wages or other rights, activists who demand land for the landless, and foreign or local Yanomami environmental defenders. In 2005 an American-born nun named Dorothy Stang, who was supporting the poor in their efforts to defend land against ranchers so that they could earn a living extracting forest products without cutting down the trees, was killed on the orders of a local cattleman.

The term grileiro derives from grilo, the Portuguese word for “cricket,” because back in the 1970s, Brum writes,

the men used to consummate their fraud by placing new sheets of paper and live crickets in boxes where the insects…produced excrement that yellowed the documents and made them look more believably like old land titles.

Grileiros worked with lawyers and corrupt civil servants who helped authenticate the fake papers: a bribe to officials registering deeds made the title official. Unlike homesteading in the United States, which was also often made possible by fraudulent claims, land grabs in the Amazon are ongoing. In Brazil, scattered notaries public, rather than a centralized registry, oversee land titles, leaving the door wide open to fraud and corruption. The researcher and journalist Maurício Torres found in 2009 that the municipality of São Félix do Xingu, in Pará, would have to be three stories high to make space for all the titles registered at the land deed offices.

This whole set of flora and fauna—cows, soybeans, grileiros—is part of the long story of what in Brazil is called “colonization.” That word, as in other Latin American countries, refers not to overseas colonies but to projects that fill out the population in valuable hinterlands. Since Brazil gained its independence from Portugal in 1822, the country has been preoccupied with keeping control over the Amazon. Brazil claims the largest portion of the rainforest, but it spills over national borders into Peru and Colombia, with smaller portions held by five other nations, as well as 3,344 separate acknowledged Indigenous territories. Beyond symbolizing natural majesty, not to mention mystery, in the world’s imagination, the Amazon represents wealth. Ten percent of all species live there, and the Amazon River, with over a thousand tributaries, holds a fifth of the planet’s fresh water.

The word “colonization” in Brazil once had the sort of positive connotation that “exploration” and “westward expansion” did to North American ears. The violent process still occupies a place among the country’s founding myths: bandeirantes (literally “flag carriers”) are honored with statues all over Brazil. During the colonial period bandeirantes cleared and settled the areas around São Paulo, then explored the interior, pushing land claims well beyond what had been allotted to the Portuguese in their 1494 treaty with the Spanish. In the eighteenth century they set off a gold rush. To grab more land for Brazil, the bandeirantes organized sneak attacks on Indigenous villages and enslaved captives. Their actual and spiritual heirs went on to slaughter Indigenous people and clear lands around the country for centuries.

After independence, government officials promoted the settlement of more remote areas in the hope of encouraging smallholding farms, not unlike the Jeffersonian ideal for early North America. Who might those farmers be? Not Indigenous peoples. Certainly not Black Brazilians, since slavery lasted for six and a half decades after independence, later than in any other country in the Americas. (During the colonial period, Brazilians built an economy of sugar plantations worked by enslaved Africans—over 40 percent of all Africans forcibly brought to the New World disembarked in Brazil.) The land was for whiter Brazilians.

Europeans were shipped in, too, though mostly as workers. As in similar schemes to attract European migrants to Argentina, Venezuela, and elsewhere in Latin America, Brazilian officials in the state of São Paulo engaged in an explicit program of branqueamento, or “whitening,” just as Brazilian slaves became free. They offered free transatlantic boat passage to European immigrants, even sending agents over to impoverished northern Italian port cities to sign up the likes of Brum’s great-great-grandparents.

When Brum was still new to Altamira, she went shopping in a supermarket with an activist who worked on land conflicts, and ran into a tall white stranger. Exchanging pleasantries, she realized they were from the same part of southern Brazil, where people proudly refer to themselves as gaúchos, a kind of Brazilian cowboy. Brum had been proud of this heritage, too. After the man left, the activist told her, “He’s a grileiro.” “Still naive, I replied, ‘Gosh, a gaúcho, how disgraceful.’ Then he explained, ‘You have to understand that gaúchos are known as the Amazon’s locusts.’”

While colonization schemes “integrated” the Amazon into the rest of Brazil, the result was not sweet little farms but a thriving rubber economy. In the late nineteenth and early twentieth centuries, men from northeastern Brazil, including many recently manumitted slaves, worked throughout the Amazon as tappers on a freelance basis—affixing drains to trees to siphon off latex, the basis of wild rubber, which was at that time an important raw material for the global industrial revolution. (In 1928 Henry Ford, in an attempt to vertically integrate his car empire, briefly opened a rubber plantation and model city in the Amazon called Fordlândia.)

Escaping harsh work conditions and debts to predatory traders, many of these migrant workers vanished into the forest and settled, intermarrying with Indigenous people and quilombolas, the descendants of runaway slaves. Brum writes about the difficulty of characterizing this group, called the beiradeiros—literally, those who live on the edge of the river—to outsiders. She explains that they are the “third people” of the forest, neither quilombolas nor Indigenous. “The beiradeiros fish and hunt, crack Brazil nuts, pick açaí, plant fields, make flour, sometimes raise chickens,” she writes.

They might tap rubber if the price is good, prospect a little when there’s a new gold strike. They hunted a lot of jaguars and oncillas in the past because whites wanted the hides.

Brum opposes the conservationist groups who would oust the beiradeiros in the name of preserving the ecosystem: “Humans—this generic term invented to conceal asymmetries—are not a threat to the forest; rather, some humans are. Others interact with it, transform it, and even plant it.” Since before the “colonization” of the Amazon, even before the Portuguese disembarked in what is now Brazil, Indigenous peoples of the region have contributed to the richness of the soil and density of the forest cover by cultivating sweet potatoes, peanuts, cacao, manioc, and squash.

Brazil’s military dictatorship, in power from 1964 to 1985, oversaw a new colonization scheme in the Amazon that was much like the old one, but with more chainsaws, more funding, and more paranoia. Their colossal development plan involved displacing almost one million people—rubber tappers, farmers, Indigenous people—to exploit natural resources and build infrastructure like the Trans-Amazonian Highway, a 2,500-mile road connecting the whole basin from east to west. They also offered tax breaks, special lines of credit, and cheap land to those who would relocate to the Amazon from elsewhere in the country. On September 27, 1972, the dictator Emílio Garrastazu Médici traveled to Altamira to cut the ribbon on the project and claimed that it solved two problems: “men without land in the northeast and land without men in the Amazon.” The slogan of the project became “a land without men for men without land.” It is no accident that this sounds much like the Zionist phrase “a land without a people for a people without a land”—both draw on the concept of terra nullius (nobody’s land) that has given a legal veneer to the seizure of land around the world.

There was widespread fear in the government that foreign powers, particularly the US, had designs on the Amazon, as well as cold war concerns that guerrilla fighters might use the remote rainforest as a base. “Occupy so as not to surrender” was one not-so-subtle slogan. There were some guerrilla fighters active in Pará, but they were executed upon capture in the 1970s. After spending some time in Brazil, I was startled to learn that many people still believe that outsiders—now often the European Union and the United Nations—wish to invade, steal, or prohibit Brazilians from profiting from the Amazon, or even from entering it, by declaring it an international reserve.

As president, Bolsonaro floated the idea that international nonprofits had set the enormous 2019 blazes because they “lost money.” Later, when questioned by foreign reporters about his evidence-free assertions about international conspiracies to take over the Amazon, he said, with characteristic indelicacy, “Brazil is the virgin that every foreign pervert wants to get their hands on.” This may be a case of projection—the most successful national land grabs in the Amazon have been by Brazil, which took Acre from Bolivia and a piece of what is now Amapá from French Guiana. The historian Barbara Weinstein recalls that Itamar Franco, Brazil’s president after Fernando Collor de Mello was impeached for corruption in 1992, referred to US organizations that complained about destruction in the Amazon as “palefaces.” The implication was that North Americans slaughtered their Indigenous populations and stole and settled their land. Why shouldn’t Brazilians do the same? Bolsonaro’s views are crude but not new.

Colonization involved the massacre of whole Indigenous settlements: a truth commission report later found that over the course of the military dictatorship, government officials killed at least 8,350 Indigenous people. It also turned out to be an economic disaster for everyone other than cattle ranchers and grileiros, costing billions of dollars, and to this day the infrastructure is plagued by mudslides and flooding. Between 1978 and 1988 the Amazon was deforested by the equivalent of the whole state of Connecticut each year. Ideas of environmental protection have certainly evolved, but the destruction of the Amazon caused an outcry even at the time. The environmentalist Chico Mendes, head of the rubber tappers’ union, opposed the destruction of the Amazon, saying the government should demarcate “extractive reserves” for people to use the rainforest, but cautiously and in sustainable ways. (Dorothy Stang, the murdered nun, echoed this approach.) Mendes was assassinated by a rancher in 1988. In 1989 the prominent Kayapó leader Raoni Metuktire toured the world warning of climate collapse:

If you continue the burn-offs, the wind will increase, the Sun will grow very hot, the Earth too. All of us, not just the Indigenous, will be unable to breathe. If you destroy the forest, we will all be silenced.

In 1988, during the transition to democracy, the new Brazilian constitution granted Indigenous people “their original rights to the lands they traditionally occupy,” making it the state’s responsibility to demarcate these lands and ensure respect for property. Over the next several decades, 690 Indigenous preserves—13 percent of the national territory, much of it in the Amazon—were cordoned off. In addition to representing (insufficient) reparations for past harms, the preserves appear to be by far the best option to prevent deforestation: Indigenous peoples have proved themselves to be the world’s best protectors of the forest in study after scientific study.

Last September, Brazil’s Supreme Court blocked efforts by agribusiness-supported politicians to mandate that groups were only entitled to land they physically occupied when the 1988 constitution was signed, even though many communities had been expelled from their lands during the dictatorship. After nine of eleven judges sided with Indigenous peoples, a member of the Pitaguarí group told news outlets about the celebrations outside the courthouse:

We’re happy and we cry because we know that it’s only with demarcated territory, with protected Indigenous territory, that we’ll be able to stop climate change from happening and preserve our biome.

Then agribusiness struck back. Its allies in the National Congress quickly amended part of the legislation that the Supreme Court had found unconstitutional. Lula vetoed the new bill, but Congress overturned the veto, reinstating the absurd rule, at least until the question returns to the Supreme Court.

Though technically 13 percent of the country’s land is protected for Indigenous groups, in practice people living on these preserves—Indigenous, Black, and a combination of the two groups—are often forced out by violence or extreme poverty. The latest available numbers show 36 percent of Brazil’s Indigenous people living in cities. The Covid-19 pandemic fell hard on Indigenous Brazilians, killing many of the elders who led resistance movements or were among the last to speak their languages.

Before he was elected president, Bolsonaro’s anti-Indigenous views were already notorious. He lamented that Brazil had been less “efficient” than the North Americans, “who exterminated the Indians.” He called the demarcation of Yanomami territory “high treason” and said, “I’m not getting into this nonsense of defending land for Indians,” especially in mineral-rich areas. Brum writes that Bolsonaro “used the virus as an unexpected biological weapon in his plan to destroy original peoples” by refusing to make vaccines available or implement public health measures as it became clear that the virus’s victims were disproportionately Indigenous.

For many years Brum resisted writing directly about Indigenous groups, including the Yanomami who occupy the area nearest to Altamira. She felt she didn’t know enough, worried that she didn’t speak the language. After moving to Altamira, she got over her reticence. Some of the most intriguing quotations in her book are from the Yanomami shaman and diplomat Davi Kopenawa, who refers to outsiders to the forest as “commodities people” or “forest eaters.” He describes our books as “paper skin” where words are imprisoned, but nevertheless agreed to write one, as told in Yanomami to a French anthropologist named Bruce Albert. I followed Brum’s book into Kopenawa’s The Falling Sky (2013), thinking I would read just a few sections, and then tore through its six hundred pages. “I gave you my story so that you would answer those who ask themselves what the inhabitants of the forest think,” he tells Albert at the beginning of the book. Kopenawa hopes that outsiders can come to understand the following:

The Yanomami are other people than us, yet their words are right and clear…. Their forest is beautiful and silent. They were created there and have lived in it without worry since the beginning of time. Their thought follows other paths than that of merchandise.

In quoting Kopenawa extensively, Brum wants the reader to see that all of us outside the Amazon, not just gaúchos, are the locusts. Through our consumption patterns—the voracious global appetite for red meat, construction materials, new furniture, new paper created from pulped trees—most of us are preying on the Amazon and by extension on people like the Yanomami. In a place like Altamira, Brum writes, the “chain of relations is short or even nonexistent. Here it’s impossible to play innocent, or play innocent so well that we believe it ourselves, as you can do in cities like São Paulo or New York.” Brum could have included a bit more information from further up the supply chain—many of the “forest eaters” are not individual consumers but agribusiness firms unchained in Brazil, where regulations often go unenforced—but the point stands.

Brum finds plenty to criticize in Lula’s mixed record on environmental issues, and reserves her sharpest words for his support of the Belo Monte dam. The dam is a hydroelectric power plant built on the Xingu River, a project that she wrote about with rage and at length in a previous book, The Collector of Leftover Souls (2019). The fifth largest in the world, the plant was first dreamed up by the military dictatorship, but fiercely opposed by inhabitants of the Amazon because the plans required diverting rivers, destroying animal habitats, flooding huge sections of the rainforest, and displacing at least tens of thousands of people. Construction of a slightly modified plan went ahead anyway during Lula’s first term and was completed in 2019, with builders digging more earth than was moved to construct the Panama Canal. Critics say that even aside from large-scale environmental destruction, the engineering of the plant meant it would never produce the amount of energy originally promised.

Lula is of course better on Amazon policy than Bolsonaro. So is a potato, or a child. But like other Latin American leftists, he paid for extensive social spending, especially successful programs fighting malnutrition and hunger, with income from high-priced global commodities. Producing and exporting these commodities, like soybeans, takes a high environmental toll. Nonetheless, there is reason for modest optimism. His environment and climate change minister, Marina Silva, is an extraordinary woman who was born in a rubber-tapping region of the Amazon and became an environmental activist alongside Chico Mendes. But the National Congress is still dominated by agribusiness, and with many earlier land grabs already laundered into legality with false paperwork, one of the most effective strategies has been not taking back stolen land but slowing deforestation and ongoing land theft in less frequently claimed parts of the Amazon.

Though she lived a daring life even before her move to the Amazon, Brum has written a semi-memoir surprisingly low on memoir, heavy on close readings of other people, and appealingly self-deprecating. “Any journalist who makes themself out to be a great adventurer is simply foolish,” she writes.

Just live alongside the pilots and bowmen of Amazonian motor canoes and you’ll retreat into your inescapable insignificance. They can spot tracajá eggs where I see only sand, pointy rocks where I see only water, rain where I see only blue. I could barely manage to hang my hammock in a tree at bedtime.

She points out what should be obvious: that those best equipped to care for and report on the Amazon are those who are native to it and know it best.

Her projects in the Amazon now go well beyond journalism, extending into activism. She writes that her first marriage did not endure the move to Altamira, and she later married a British journalist named Jonathan Watts, who covers the environment for The Guardian. The couple, along with four other journalists, founded the Rainforest Journalism Fund in 2018 to promote reporting initially in the Amazon, and then in the Congo Basin and Southeast Asia as well. Brum and Watts have since set up an experimental 1.2-acre reforestation scheme in Altamira, on lands that had been devastated by burning for cattle grazing.

In El País in 2014, Brum interviewed the Brazilian anthropologist Eduardo Viveiros de Castro, who told her, “The Indigenous are experts in the end of the world.” Brum’s recommendation—really, her plea—is that as the planet warms and the Amazon turns to savanna, outsiders “listen to the people who have been called barbarians…. Listen [out of] an ultimate survival instinct.” She writes:

Perhaps, if we are fortunate, those whose lives have so often been destroyed by those who label themselves civilized will agree to teach us to live after the end of the world.

How This Climate Activist Justifies Political Violence (New York Times)

Original article

Jan. 14, 2024

By David Marchese. Photo illustration by Bráulio Amado

With the 2021 publication of his unsettling book, “How to Blow Up a Pipeline,” Andreas Malm established himself as a leading thinker of climate radicalism. The provocatively titled manifesto, which, to be clear, does not actually provide instructions for destroying anything, functioned both as a question — why has climate activism remained so steadfastly peaceful in the face of minimal results? — and as a call for the escalation of protest tactics like sabotage. The book found an audience far beyond that of texts typically published by relatively obscure Marxist-influenced Swedish academics, earning thoughtful coverage in The New Yorker, The Economist, The Nation, The New Republic and a host of other decidedly nonradical publications, including this one. (In another sign of the book’s presumed popular appeal, it was even adapted into a well-reviewed movie thriller.) Malm’s follow-up, “Overshoot: How the World Surrendered to Climate Breakdown,” written with Wim Carton and scheduled to be published this year, examines the all-consuming pursuit of fossil-fuel profits and what the authors identify as the highly dubious and hugely dangerous new justifications for that pursuit. But, says Malm, who is 46, “the hope is that humanity is not going to let everything go down the drain without putting up a fight.”

It’s hard for me to think of a realm outside of climate where mainstream publications would be engaging with someone, like you, who advocates political violence. Why are people open to this conversation? 

If you know something about the climate crisis, this means that you are aware of the desperation that people feel. It is quite likely that you feel it yourself. With this desperation comes an openness to the idea that what we’ve done so far isn’t enough. But the logic of the situation fundamentally drives this conversation: All attempts to rein in this problem have failed miserably. Which means that, virtually by definition, we have to try something more than we’ve tried.

How confident are you that when you open the door to political violence, it stays at the level of property and not people? You’ve written about the need to be careful, but the emotions that come with violence are not careful emotions. 

Political history is replete with movements that have conducted sabotage without taking the next step. But the risk is there. One driver of that risk is that the climate crisis itself is exacerbating all the time. It’s hard-wired to get worse. So people might well get more desperate. Now, in the current situation, in every instance that I know of, climate movements that experiment with sabotage steer clear of deliberately targeting people. We might smash things, which people are doing here and there, but no one is seriously considering that you should get a gun and shoot people. Everyone knows that would be completely disastrous. The point that’s important to make is that the reason that people contemplate escalation is that there are no risk-free options left.

I know you’re saying historically this is not the case, but it’s hard to think that deaths don’t become inevitable if there is more sabotage. 

Sure, if you have a thousand pipeline explosions per year, if it takes on that extreme scale. But we are some distance from that, unfortunately.

Don’t say “unfortunately.” 

Well, I want sabotage to happen on a much larger scale than it does now. I can’t guarantee that it won’t come with accidents. But what do I know? I haven’t personally blown up a pipeline, and I can’t foretell the future.

The prospect of even accidental violence against people — 

But the thing we need to keep in mind is that existing pipelines, new pipelines, new infrastructure for extracting fossil fuels are not potentially, possibly — they are killing people as we speak. The more saturated the atmosphere is with CO2, the more destruction and death putting more CO2 into the atmosphere causes. In Libya in September, in the city of Derna, you had thousands of people killed in floods in one night. Scientists could conclude that global warming made these floods 50 times as likely as if there hadn’t been such warming. We need to start seeing these people as victims of the violence of the climate crisis. In the light of this, the idea of attacking infrastructure and closing down new pipelines is a disarmament. It’s about taking down a machine that actually kills people.

I’m curious: How do you communicate with your kids about climate? 

I’m not sure that I’ve had any deliberate plan, but it has been inevitable, with my 9-year-old at least, that we’ve had conversations.

Do you anticipate having the conversation where you explain the radical nature of your ideas? 

Well, yeah. Both of them have watched the film, “How to Blow Up a Pipeline.”

Your 4-year-old? 

Yes. There were a couple of scenes that stayed with them, particularly when people were wounded. They found this fascinating. They know that their father is a little politically crazy, if I can put it that way.

A scene from the film “How to Blow Up a Pipeline.”

Generally we teach kids that violence or breaking people’s things is bad. Do you feel you can honestly give your kids the same message? 

I hope that I communicate through my parenting that generally you shouldn’t break things. But I hope that they get the impression that I consider there to be exceptions to this rule. My 4-year-old, for instance, when we were biking around Malmo, where we live, he would be on the lookout for S.U.V.s. He knows these are the bad cars. I think they have an awareness of the tactic of deflating S.U.V. tires.

Is there not a risk that smashing things would cause a backlash that would actually impede progress on climate? 

I fundamentally disagree with the idea that there is progress happening and that we might ruin it by escalating. In 2022, we had the largest windfall of profits in the fossil-fuel industry ever. These profits are reinvested into expanded production of fossil fuels. The progress that people talk about is often cast in terms of investment in renewables and expansion in the capacity of solar and wind power around the world. However, that is not a transition. That is an addition of one kind of energy on top of another. It doesn’t matter how many solar panels we build if we also keep building more coal power plants, more oil pipelines, and on that crucial metric there simply is no progress. I struggle to see how anyone could interpret the trends as pointing in the right direction. Now, on the question of what kind of reaction would we get from society if we as a climate movement radicalized: There might be more repression of the movement. There might be more aggressive defense of fossil-fuel interests. We also see signs that radical forms of climate protest alienate popular audiences. But the kind of tactic that mostly pisses people off, and I’m talking about the European context, is random targeting of commuters by means of road blockades. Sabotage of particular installations for fossil-fuel extraction can gain more support from people because these actions make sense. The target is obviously the source of the problem, and it doesn’t necessarily hurt ordinary people in their daily lives. We have to be careful about not doing things that alienate the target audience, which is ordinary working people.

Don’t you think, with companies as wealthy as the oil giants, if activists smash their stuff, they’ll just fix it and get back to business? 

Here’s a big problem that we deal with quite extensively in the “Overshoot” book: stranded assets. ExxonMobil and Aramco and these giants exude this worry that a transition would destroy their capital and that this shift could happen quickly. So in this context, the rationale of sabotage is to bring home the message to these companies: Yes, your assets are at risk of destruction. When something happens that makes the threat of stranded assets credible, investors will suddenly realize, there’s a real risk that if I invest a lot of money, I might lose everything.

Explain the term “overshoot.” 

The simplest definition of “overshoot” is that you shoot past the limits that you have set for global warming. So you go over 1.5 or 2 degrees. But the term has come to mean something more in climate science and policy discourse, which is that you can go over and then go back down. So you shoot past 1.5 or 2, but then you return to 1.5 or 2, primarily by means of carbon-dioxide removal. I think this is extremely implausible. But the idea is that you can exceed a temperature limit but respect it at a later point by rolling out technologies for taking it down.
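To make the arithmetic of overshoot concrete, here is a minimal numeric sketch in Python. Every figure in it (the warming-per-gigatonne coefficient, the emission and removal rates) is an invented round number for illustration; none of it comes from Malm's book or from any climate model.

# Toy overshoot trajectory: warming passes a 1.5 C limit, then is
# pulled back down by carbon-dioxide removal. Illustrative numbers only.

WARMING_PER_GTCO2 = 0.00045  # hypothetical degrees C per GtCO2

def trajectory(emissions, removals, start=1.2):
    """Cumulative warming given annual emissions and removals (GtCO2/yr)."""
    warming, path = start, []
    for e, r in zip(emissions, removals):
        warming += (e - r) * WARMING_PER_GTCO2
        path.append(round(warming, 3))
    return path

# Thirty years at ~40 GtCO2/yr, then thirty years of 20 GtCO2/yr net removal.
path = trajectory([40.0] * 30 + [0.0] * 30, [0.0] * 30 + [20.0] * 30)
print(max(path))   # ~1.74: the 1.5-degree limit is overshot
print(path[-1])    # ~1.47: back under the limit, if removal materializes

The sketch shows why the return leg is the contested part: it works only if carbon-dioxide removal runs for decades at a gigatonne scale that, as Malm argues, does not exist today.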

And your argument is that overshoot just provides a cover for business as usual? 

Yes. What’s happening now is that you see ExxonMobil or Occidental or ADNOC9 — these companies are at the forefront of expanding DAC10 capacity. What Al Jaber11 is talking about all the time is that the problem isn’t fossil fuels; the problem is emissions. So we can continue to have fossil fuels; we’re just going to take down the CO2 that we emit by DAC. It isn’t a reality. It’s like an ideological promise that we’re going to be able to clean up the mess while continuing to create the same mess.

A few minutes ago, you said you’ve never blown up a pipeline. If that’s what you think is necessary, why haven’t you? 

I have engaged in as much militant climate activism as I have had access to in my activist communities and contexts. I’ve done things that I can’t tell you or that I wouldn’t tell others publicly. I live my life in Malmo, pretty isolated from activist communities. Let’s put it this way: If I were part of a group where something like blowing up a pipeline was perceived as a tactic that could be useful for our struggle, then I would gladly participate. But this is not where I am in my life.

I don’t want to encourage you, but if people did only the activism that was congruent with where they were at in their life, hardly anybody who lives a comfortable life would do anything. 

Like I said, I’ve participated in things that I can’t tell you about because they’ve been illegal and they’ve been militant. I’ve done it recently. But I can do that only as part of a collective of people who do something that they have decided on together. We shouldn’t think of activism as something that is invented out of thin air, deduced from abstract principles, and then you just shoot off and do something crazy. I can’t tell you what things I have done, but the things that I do and that any other climate activist should be doing cannot be an individual project.

Greta Thunberg went by herself and sat in front of a building instead of going to school.12

Sure, sure, sure, and she became the person she became thanks to the millions who joined her. Maybe I should do something similar.

In “Overshoot,” you write this about the very wealthy: “There is no escaping the conclusion that the worst mass killers in this rapidly warming world are the billionaires, merely by dint of their lifestyles.” Doesn’t that feel like a bathetic overstatement when we live in a world of terrorist violence and Putin turning Ukraine into a charnel house? Why is that a useful way of framing the problem?

Precisely for the reason I tried to outline previously, which is that spewing CO2 into the atmosphere at an excessive scale — and when it comes to luxury emissions, it is completely excessive — is an act that leads to the death of people.

But by that logic, unless we live a carbon-neutral lifestyle, we should all be looking in the mirror and saying, I am a killer. 

I don’t live a zero-carbon lifestyle. No one who lives in a capitalist society can do so. But the people on top, they are the ones who have power when it comes to investment. Are they going to invest the money in fossil fuels or in renewables? The overwhelming decision they make is to invest it in fossil fuels. They belong to a class that shapes the structure, and in their own private consumption habits, they engage in completely extravagant acts of combustion of fossil fuels.13 On the level of private morals: Do I practice what I preach? I try to avoid flying. I don’t have a car. I should be vegan, but I’m just a vegetarian. I’m not claiming to be any climate angel in my private consumption, and that’s problematic. But I don’t think that is the issue — that each of us in the middle strata or working class in advanced capitalist countries, through our private consumption choices, decide what’s going to happen with this society. This is not how it works.


We live in representative democracies where certain liberties are respected. We vote for the policies and the people we want to represent us. And if we don’t get the things we want, it doesn’t give us license to then say, “We’re now engaging in destructive behavior.” Right? Either we’re against political violence or not. We can’t say we’re for it when it’s something we care about and against it when it’s something we think is wrong. 

Of course we can. Why not?

That is moral hypocrisy. 

I disagree.

Why? 

The idea that if you object to your enemy’s use of a method, you therefore also have to reject your own use of this method would lead to absurd conclusions. The far right is very good at running electoral campaigns. Should we thereby conclude that we shouldn’t run electoral campaigns? This goes for political violence too, unless you’re a pacifist and you reject every form of political violence — that’s a reasonably coherent philosophical position. Slavery was a system of violence. The Haitian revolution was the violent overthrow of that system. It is never the case that you defeat an enemy by renouncing every kind of method that enemy is using.

But I’m specifically thinking about our liberal democracy, however debased it may be. How do you rationalize advocacy for violence within what are supposed to be the ideals of our system? 

Imagine you have a Trump victory in the next election — doesn’t seem unimaginable — and you get a climate denialist back in charge of the White House and he rolls back whatever good things President Biden has done. What should the climate movement do then? Should it accept this as the outcome of a democratic election and protest in the mildest of forms? Or should it radicalize and consider something like property destruction? I admit that this is a difficult question, but I imagine that a measured response to it would need to take into account how democracy works in a country like the United States and whether allowing fossil-fuel companies to wreck the planet because they profit from it can count as a form of democracy and should therefore be respected.

Could you give me a reason to live?14

What do you mean?

Your work is crushing. But I have optimism about the human project. 

I’m not an optimist about the human project.

So give me a reason to live. 

Well, here’s where we enter the virgin territories of metaphysics.

Those are my favorite territories. 

Wonderful.

I’m not joking. 

Yeah, I’m not sure that I have the qualifications to give people advice about reasons to live. My daily affective state is one of great despair about the incredible destructive forces at work in this world — not only at the level of climate. What has been going on in the Middle East just adds to this feeling of destructive forces completely out of control. The situation in the world, as far as I can tell, is incredibly bleak. So how do we live with what we know about the climate crisis? Sometimes I think that the meaning of life is to not give up, to keep the resistance going even though the forces stacked against you are overwhelmingly strong. This often requires some kind of religious conviction, because sometimes it seems irrational.

I think all you need to do is look at your children. 

Yes, but I have to admit to some kind of cognitive dissonance, because, rationally, when you think about children and their future, you have to be dismal. Children are fundamentally a source of joy, and psychologically you want to keep them that way. I try to keep my children in the category of the nonapocalyptic. I’m quite happy to go and swim with my son and be in that moment and not think, Ah, 30 years from now he’s going to lie dead on some inundated beach. You know what I mean?

Which of your arguments are you most unsure of? 

I cannot claim to have a good explanation for what is essentially a mystery, namely that humanity is allowing the climate catastrophe to spiral on. One of my personal intellectual journeys in recent years has been psychoanalysis. Once you start looking into the psychic dimensions of a problem like the climate crisis, you have to open yourself to the fundamental difficulty in understanding what’s happening.

Is it possible for you to summarize your psychoanalytic understanding of the climate crisis? 

Not simply, because it’s so complex. On the far right, you see this aggressive defense of cars and fossil fuels that verges on a desire for destruction, which of course is part of Freud’s late theory of the two categories of drives: eros and thanatos.15 Another fundamental category in the psychic dimension of the climate crisis is denial. Denial is as central to the development of the climate crisis as the greenhouse effect.

What about you, psychoanalytically speaking? 

I have my weekly therapy on Thursday.

But what’s your deal? 

You mean in my private life?

Yeah. 

On a deeper level, the point of psychoanalysis is that you go back to your childhood and try to process your relation to your parents and how they have constituted you. Do you really want me to go there?

Yes. 

I have to try to figure out how this ties in with my climate activism. I guess this is some sort of a superego part of it: a strong sense of duty or obligation; that I have to try to do what I can to intervene in this situation. That’s a very strong affective mechanism. For instance, I constantly give up on an intellectual project that would be far more satisfying, a nerdy historical project,16 because I feel that I cannot with good conscience do this when the world is on fire.

But I’m asking what caused your impulses. 

Now we’re into the deep psychoanalytic stuff. I had a vicious Oedipal conflict with my father. One way that this came to express itself was that in the preteen years, I clashed with my father — even more violently during my teenage years. My way to defend myself against what I perceived as his tyranny was to become as proficient as he was in arguing and beat him at his own game by rhetorically defeating him. I think I did. I think he accepted that I’m his superior when it comes to writing and arguing. Psychoanalytically, of course, the things that I’ve continued to do can be understood as an extension of my formative rebellion against my authoritarian father in a classically Oedipal setting, if you see what I mean.17

I asked why you aren’t blowing up pipelines, and you gave this answer about how action has to happen in the context of a community and “Oh, but I have done very serious stuff” — there’s something fishy. You have actually engaged in property destruction? Or are you just scared of somebody calling you a hypocrite? 

There are things that I have done when it comes to militant activism recently that I, as a matter of principle and political expediency, do not reveal. Part of the whole point of it is to not reveal it. Sure, someone could accuse me of being a hypocrite because I don’t offer evidence that I have done anything militant. But those close to me know. That’s good enough for me.

I also said, “Give me a reason to live.” 

I will always remember this. No one ever asked me this before.

And I said that one of the reasons to keep going is kids. But you said their future is rationally going to be terrible. If you think your children’s future is going to be terrible, why keep going? 

One of the arguments in this “Overshoot” book is that the technical possibilities are all there. It’s a matter of the political trends. This feeling that my kids will face a terrible future isn’t based on the idea that it’s impossible to save us by technical means. It’s just, to quote Walter Benjamin, the enemy has never ceased to be victorious18 — and it’s more victorious than ever. That’s how it feels.

Opening illustration: Source photograph by Jeremy Chan/Getty Images

This interview has been edited and condensed for clarity from two conversations.

Notes

1. Just to be explicit about this: Malm does not endorse or advocate any political violence that targets people. His aim is violence against property.

2. To cite one example, last March in western France, thousands of people arrived at a site of a “megabasin” water reservoir for agricultural use and sabotaged a pump. The action was against what the protesters believe is water hoarding. Malm has been particularly influential in France, where the authorities have questioned arrested activists about their feelings on his work.

3. To reach this conclusion, scientists working with the World Weather Attribution research group employed computer simulations to compare weather events today, including the Syrian flooding, with the weather that was most likely to have occurred if the climate had not already warmed, as it has, by 1.2 degrees Celsius above the average preindustrial temperature.

4. I knew Malm had children because in setting up our discussions, he explained that we had to talk in the evening on Swedish time, after he had put his kids to bed.

5. The film, directed by Daniel Goldhaber, uses Malm’s book as a launching pad for a story about young radicals who plan to blow up a pipeline in Texas. From The Times’s review: “A truly radical film wouldn’t go out of its way to concoct sympathetic motives, or to keep its plotting so clean.”

6. Malm teaches at Lund University, near Malmo, where he’s an associate professor of human geography.

7. Malm was among a group of activists who used this protest tactic in Stockholm in 2007. Deflating S.U.V. tires in protest has not been uncommon in Europe. In 2022, the tires of roughly 900 S.U.V.s were deflated in a single night of coordinated protest, according to the protesters.

8. For 2022, the Saudi state-controlled Aramco reported a record profit of $161.1 billion; Exxon reported a record profit of $56 billion; BP reported a record profit of nearly $28 billion. (Full 2023 profits have not been reported yet.)

9. The Abu Dhabi National Oil Company.

10. Direct air capture, a technology to remove carbon dioxide from the air.

11. Sultan Ahmed Al Jaber, the chief executive of ADNOC, who somewhat counterintuitively was president of the recent COP28 climate conference. (Where, it must be said, more than 200 countries agreed to a pact that calls for “transitioning away from fossil fuels.”) Al Jaber was criticized for saying, shortly before COP28, that “there is no science out there, or no scenario out there, that says that the phaseout of fossil fuel is what’s going to achieve 1.5.”

12. In 2018, rather than go to school, Greta Thunberg, then 15, sat alone in front of the Swedish Parliament with a sign announcing that she was on a school strike for the climate. The act is widely credited for kicking off a global wave of peaceful climate activism.

13. According to a 2023 report by Oxfam, The Guardian and the Stockholm Environment Institute, the richest 1 percent of humanity is responsible for more carbon emissions than the poorest two-thirds. The report drew on data from 2019.

14. I just blurted this out. I don’t even think Malm’s pessimism is wrong, but I find it suffocating. People need hope.

15. In his writings, Freud argued that individuals wrestle with the desire to live, eros, and the desire to die, widely known as thanatos.

16. That project is about what Malm calls “people’s histories of wilderness,” with a focus on how some have withdrawn “into the wild to get away from oppression and potentially fight back.”

17. Malm also wanted to point out the following: “My father and I have generally been on good terms and have become quite close in our worldview — with remaining differences — over the past decade or two.”

18. This is a paraphrase of a line from the visionary German-Jewish cultural critic’s 1940 essay “On the Concept of History.” Benjamin died from suicide that same year.

Talk to Me (New Yorker)

Annals of Nature

Can artificial intelligence allow us to speak to another species?

By Elizabeth Kolbert

September 4, 2023


Sperm whales communicate via clicks, which they also use to locate prey in the dark. Illustration by Sophy Hollington


Ah, the world! Oh, the world!

—“Moby-Dick.”

David Gruber began his almost impossibly varied career studying bluestriped grunt fish off the coast of Belize. He was an undergraduate, and his job was to track the fish at night. He navigated by the stars and slept in a tent on the beach. “It was a dream,” he recalled recently. “I didn’t know what I was doing, but I was performing what I thought a marine biologist would do.”

Gruber went on to work in Guyana, mapping forest plots, and in Florida, calculating how much water it would take to restore the Everglades. He wrote a Ph.D. thesis on carbon cycling in the oceans and became a professor of biology at the City University of New York. Along the way, he got interested in green fluorescent proteins, which are naturally synthesized by jellyfish but, with a little gene editing, can be produced by almost any living thing, including humans.

While working in the Solomon Islands, northeast of Australia, Gruber discovered dozens of species of fluorescent fish, including a fluorescent shark, which opened up new questions. What would a fluorescent shark look like to another fluorescent shark? Gruber enlisted researchers in optics to help him construct a special “shark’s eye” camera. (Sharks see only in blue and green; fluorescence, it turns out, shows up to them as greater contrast.) Meanwhile, he was also studying creatures known as comb jellies at the Mystic Aquarium, in Connecticut, trying to determine how, exactly, they manufacture the molecules that make them glow. This led him to wonder about the way that jellyfish experience the world. Gruber enlisted another set of collaborators to develop robots that could handle jellyfish with jellyfish-like delicacy.

“I wanted to know: Is there a way where robots and people can be brought together that builds empathy?” he told me.

In 2017, Gruber received a fellowship to spend a year at the Radcliffe Institute for Advanced Study, in Cambridge, Massachusetts. While there, he came across a book by a free diver who had taken a plunge with some sperm whales. This piqued Gruber’s curiosity, so he started reading up on the animals.

The world’s largest predators, sperm whales spend most of their lives hunting. To find their prey—generally squid—in the darkness of the depths, they rely on echolocation. By means of a specialized organ in their heads, they generate streams of clicks that bounce off any solid (or semi-solid) object. Sperm whales also produce quick bursts of clicks, known as codas, which they exchange with one another. The exchanges seem to have the structure of conversation.

One day, Gruber was sitting in his office at the Radcliffe Institute, listening to a tape of sperm whales chatting, when another fellow at the institute, Shafi Goldwasser, happened by. Goldwasser, a Turing Award-winning computer scientist, was intrigued. At the time, she was organizing a seminar on machine learning, which was advancing in ways that would eventually lead to ChatGPT. Perhaps, Goldwasser mused, machine learning could be used to discover the meaning of the whales’ exchanges.

“It was not exactly a joke, but almost like a pipe dream,” Goldwasser recollected. “But David really got into it.”

Gruber and Goldwasser took the idea of decoding the codas to a third Radcliffe fellow, Michael Bronstein. Bronstein, also a computer scientist, is now the DeepMind Professor of A.I. at Oxford.

“This sounded like probably the most crazy project that I had ever heard about,” Bronstein told me. “But David has this kind of power, this ability to convince and drag people along. I thought that it would be nice to try.”

Gruber kept pushing the idea. Among the experts who found it loopy and, at the same time, irresistible were Robert Wood, a roboticist at Harvard, and Daniela Rus, who runs M.I.T.’s Computer Science and Artificial Intelligence Laboratory. Thus was born the Cetacean Translation Initiative—Project CETI for short. (The acronym is pronounced “setty,” and purposefully recalls SETI, the Search for Extraterrestrial Intelligence.) CETI represents the most ambitious, the most technologically sophisticated, and the most well-funded effort ever made to communicate with another species.

“I think it’s something that people get really excited about: Can we go from science fiction to science?” Rus told me. “I mean, can we talk to whales?”

Sperm whales are nomads. It is estimated that, in the course of a year, an individual whale swims at least twenty thousand miles. But scattered around the tropics, for reasons that are probably squid-related, there are a few places the whales tend to favor. One of these is a stretch of water off Dominica, a volcanic island in the Lesser Antilles.

CETI has its unofficial headquarters in a rental house above Roseau, the island’s capital. The group’s plan is to turn Dominica’s west coast into a giant whale-recording studio. This involves installing a network of underwater microphones to capture the codas of passing whales. It also involves planting recording devices on the whales themselves—cetacean bugs, as it were. The data thus collected can then be used to “train” machine-learning algorithms.

The scientist David Gruber explains the mission of Project CETI, and what his team has learned about how whales communicate.

In July, I went down to Dominica to watch the CETI team go sperm-whale bugging. My first morning on the island, I met up with Gruber just outside Roseau, on a dive-shop dock. Gruber, who is fifty, is a slight man with dark curly hair and a cheerfully anxious manner. He was carrying a waterproof case and wearing a CETI T-shirt. Soon, several more members of the team showed up, also carrying waterproof cases and wearing CETI T-shirts. We climbed aboard an oversized Zodiac called CETI 2 and set off.

The night before, a tropical storm had raked the region with gusty winds and heavy rain, and Dominica’s volcanic peaks were still wreathed in clouds. The sea was a series of white-fringed swells. CETI 2 sped along, thumping up and down, up and down. Occasionally, flying fish zipped by; these remained aloft for such a long time that I was convinced for a while they were birds.

About two miles offshore, the captain, Kevin George, killed the engines. A graduate student named Yaly Mevorach put on a set of headphones and lowered an underwater mike—a hydrophone—into the waves. She listened for a bit and then, smiling, handed the headphones to me.

The most famous whale calls are the long, melancholy “songs” issued by humpbacks. Sperm-whale codas are neither mournful nor musical. Some people compare them to the sound of bacon frying, others to popcorn popping. That morning, as I listened through the headphones, I thought of horses clomping over cobbled streets. Then I changed my mind. The clatter was more mechanical, as if somewhere deep beneath the waves someone was pecking out a memo on a manual typewriter.

Mevorach unplugged the headphones from the mike, then plugged them into a contraption that looked like a car speaker riding a broom handle. The contraption, which I later learned had been jury-rigged out of, among other elements, a metal salad bowl, was designed to locate clicking whales. After twisting it around in the water for a while, Mevorach decided that the clicks were coming from the southwest. We thumped in that direction, and soon George called out, “Blow!”

A few hundred yards in front of us was a gray ridge that looked like a misshapen log. (When whales are resting at the surface, only a fraction of their enormous bulk is visible.) The whale blew again, and a geyser-like spray erupted from the ridge’s left side.

As we were closing in, the whale blew yet again; then it raised its elegantly curved flukes into the air and dove. It was unlikely to resurface, I was told, for nearly an hour.

We thumped off in search of its kin. The farther south we travelled, the higher the swells. At one point, I felt my stomach lurch and went to the side of the boat to heave.

“I like to just throw up and get back to work,” Mevorach told me.

Trying to attach a recording device to a sperm whale is a bit like trying to joust while racing on a Jet Ski. The exercise entails using a thirty-foot pole to stick the device onto the animal’s back, which in turn entails getting within thirty feet of a creature the size of a school bus. That day, several more whales were spotted. But, for all of our thumping around, CETI 2 never got close enough to one to unhitch the tagging pole.

The next day, the sea was calmer. Once again, we spotted whales, and several times the boat’s designated pole-handler, Odel Harve, attempted to tag one. All his efforts went for naught. Either the whale dove at the last minute or the recording device slipped off the whale’s back and had to be fished out of the water. (The device, which was about a foot long and shaped like a surfboard, was supposed to adhere via suction cups.) With each new sighting, the mood on CETI 2 lifted; with each new failure, it sank.

On my third day in Dominica, I joined a slightly different subset of the team on a different boat to try out a new approach. Instead of a long pole, this boat—a forty-foot catamaran called CETI 1—was carrying an experimental drone. The drone had been specially designed at Harvard and was fitted out with a video camera and a plastic claw.

Because sperm whales are always on the move, there’s no guarantee of finding any; weeks can go by without a single sighting off Dominica. Once again, though, we got lucky, and a whale was soon spotted. Stefano Pagani, an undergraduate who had been brought along for his piloting skills, pulled on what looked like a V.R. headset, which was linked to the drone’s video camera. In this way, he could look down at the whale from the drone’s perspective and, it was hoped, plant a recording device, which had been loaded into the claw, on the whale’s back.

The drone took off and zipped toward the whale. It hovered for a few seconds, then dropped vertiginously. For the suction cups to adhere, the drone had to strike the whale at just the right angle, with just the right amount of force. Post impact, Pagani piloted the craft back to the boat with trembling hands. “The nerves get to you,” he said.

“No pressure,” Gruber joked. “It’s not like there’s a New Yorker reporter watching or anything.” Someone asked for a round of applause. A cheer went up from the boat. The whale, for its part, seemed oblivious. It lolled around with the recording device, which was painted bright orange, stuck to its dark-gray skin. Then it dove.

Sperm whales are among the world’s deepest divers. They routinely descend two thousand feet and sometimes more than a mile. (The deepest a human has ever gone with scuba gear is just shy of eleven hundred feet.) If the device stayed on, it would record any sounds the whale made on its travels. It would also log the whale’s route, its heartbeat, and its orientation in the water. The suction was supposed to last around eight hours; after that—assuming all went according to plan—the device would come loose, bob to the surface, and transmit a radio signal that would allow it to be retrieved.

I said it was too bad we couldn’t yet understand what the whales were saying, because perhaps this one, before she dove, had clicked out where she was headed.

“Come back in two years,” Gruber said.

Every sperm whale’s tail is unique. On some, the flukes are divided by a deep notch. On others, they meet almost in a straight line. Some flukes end in points; some are more rounded. Many are missing distinctive chunks, owing, presumably, to orca attacks. To I.D. a whale in the field, researchers usually rely on a photographic database called Flukebook. One of the very few scientists who can do it simply by sight is CETI’s lead field biologist, Shane Gero.

Gero, who is forty-three, is tall and broad, with an eager smile and a pronounced Canadian accent. A scientist-in-residence at Ottawa’s Carleton University, he has been studying the whales off Dominica since 2005. By now, he knows them so well that he can relate their triumphs and travails, as well as who gave birth to whom and when. A decade ago, as Gero started having children of his own, he began referring to his “human family” and his “whale family.” (His human family lives in Ontario.) Another marine biologist once described Gero as sounding “like Captain Ahab after twenty years of psychotherapy.”

When Gruber approached Gero about joining Project CETI, he was, initially, suspicious. “I get a lot of e-mails like ‘Hey, I think whales have crystals in their heads,’ and ‘Maybe we can use them to cure malaria,’ ” Gero told me. “The first e-mail David sent me was, like, ‘Hi, I think we could find some funding to translate whale.’ And I was, like, ‘Oh, boy.’ ”

A few months later, the two men met in person, in Washington, D.C., and hit it off. Two years after that, Gruber did find some funding. CETI received thirty-three million dollars from the Audacious Project, a philanthropic collaborative whose backers include Richard Branson and Ray Dalio. (The grant, which was divided into five annual payments, will run out in 2025.)

The whole time I was in Dominica, Gero was there as well, supervising graduate students and helping with the tagging effort. From him, I learned that the first whale I had seen was named Rita and that the whales that had subsequently been spotted included Raucous, Roger, and Rita’s daughter, Rema. All belonged to a group called Unit R, which Gero characterized as “tightly and actively social.” Apparently, Unit R is also warmhearted. Several years ago, when a group called Unit S got whittled down to just two members—Sally and TBB—the Rs adopted them.

Sperm whales have the biggest brains on the planet—six times the size of humans’. Their social lives are rich, complicated, and, some would say, ideal. The adult members of a unit, which may consist of anywhere from a few to a few dozen individuals, are all female. Male offspring are permitted to travel with the group until they’re around fifteen years old; then, as Gero put it, they are “socially ostracized.” Some continue to hang around their mothers and sisters, clicking away for months unanswered. Eventually, though, they get the message. Fully grown males are solitary creatures. They approach a band of females—presumably not their immediate relatives—only in order to mate. To signal their arrival, they issue deep, booming sounds known as clangs. No one knows exactly what makes a courting sperm whale attractive to a potential mate; Gero told me that he had seen some clanging males greeted with great commotion and others with the cetacean equivalent of a shrug.

Female sperm whales, meanwhile, are exceptionally close. The adults in a unit not only travel and hunt together; they also appear to confer on major decisions. If there’s a new mother in the group, the other members mind the calf while she dives for food. In some units, though not in Unit R, sperm whales even suckle one another’s young. When a family is threatened, the adults cluster together to protect their offspring, and when things are calm the calves fool around.

“It’s like my kids and their cousins,” Gero said.

The day after I watched the successful drone flight, I went out with Gero to try to recover the recording device. More than twenty-four hours had passed, and it still hadn’t been located. Gero decided to drive out along a peninsula called Scotts Head, at the southwestern tip of Dominica, where he thought he might be able to pick up the radio signal. As we wound around on the island’s treacherously narrow roads, he described to me an idea he had for a children’s book that, read in one direction, would recount a story about a human family that lives on a boat and looks down at the water and, read from the other direction, would be about a whale family that lives deep beneath the boat and looks up at the waves.

“For me, the most rewarding part about spending a lot of time in the culture of whales is finding these fundamental similarities, these fundamental patterns,” he said. “And, you know, sure, they won’t have a word for ‘tree.’ And there’s some part of the sperm-whale experience that our primate brain just won’t understand. But those things that we share must be fundamentally important to why we’re here.”

After a while, we reached, quite literally, the end of the road. Beyond that was a hill that had to be climbed on foot. Gero was carrying a portable antenna, which he unfolded when we got to the top. If the recording unit had surfaced anywhere within twenty miles, Gero calculated, we should be able to detect the signal. It occurred to me that we were now trying to listen for a listening device. Gero held the antenna aloft and put his ear to some kind of receiver. He didn’t hear anything, so, after admiring the view for a bit, we headed back down. Gero was hopeful that the device would eventually be recovered. But, as far as I know, it is still out there somewhere, adrift in the Caribbean.

The first scientific, or semi-scientific, study of sperm whales was a pamphlet published in 1835 by a Scottish ship doctor named Thomas Beale. Called “The Natural History of the Sperm Whale,” it proved so popular that Beale expanded the pamphlet into a book, which was issued under the same title four years later.

At the time, sperm-whale hunting was a major industry, both in Britain and in the United States. The animals were particularly prized for their spermaceti, the waxy oil that fills their gigantic heads. Spermaceti is an excellent lubricant, and, burned in a lamp, produces a clean, bright light; in Beale’s day, it could sell for five times as much as ordinary whale oil. (It is the resemblance between semen and spermaceti that accounts for the species’ embarrassing name.)

Beale believed sperm whales to be silent. “It is well known among the most experienced whalers that they never produce any nasal or vocal sounds whatever, except a trifling hissing at the time of the expiration of the spout,” he wrote. The whales, he said, were also gentle—“a most timid and inoffensive animal.” Melville relied heavily on Beale in composing “Moby-Dick.” (His personal copy of “The Natural History of the Sperm Whale” is now housed in Harvard’s Houghton Library.) He attributed to sperm whales a “pyramidical silence.”

“The whale has no voice,” Melville wrote. “But then again,” he went on, “what has the whale to say? Seldom have I known any profound being that had anything to say to this world, unless forced to stammer out something by way of getting a living.”

The silence of the sperm whales went unchallenged until 1957. That year, two researchers from the Woods Hole Oceanographic Institution picked up sounds from a group they’d encountered off the coast of North Carolina. They detected strings of “sharp clicks,” and speculated that these were made for the purpose of echolocation. Twenty years elapsed before one of the researchers, along with a different colleague from Woods Hole, determined that some sperm-whale clicks were issued in distinctive, often repeated patterns, which the pair dubbed “codas.” Codas seemed to be exchanged between whales and so, they reasoned, must serve some communicative function.

Since then, cetologists have spent thousands of hours listening to codas, trying to figure out what that function might be. Gero, who wrote his Ph.D. thesis on vocal communication between sperm whales, told me that one of the “universal truths” about codas is their timing. There are always four seconds between the start of one coda and the beginning of the next. Roughly two of those seconds are given over to clicks; the rest is silence. Only after the pause, which may or may not be analogous to the pause a human speaker would put between words, does the clicking resume.

Codas are clearly learned or, to use the term of art, socially transmitted. Whales in the eastern Pacific exchange one set of codas, those in the eastern Caribbean another, and those in the South Atlantic yet another. Baby sperm whales pick up the codas exchanged by their relatives, and before they can click them out proficiently they “babble.”

The whales around Dominica have a repertoire of around twenty-five codas. These codas differ from one another in the number of their clicks and also in their rhythms. The coda known as three regular, or 3R, for example, consists of three clicks issued at equal intervals. The coda 7R consists of seven evenly spaced clicks. In seven increasing, or 7I, by contrast, the interval between the clicks grows longer; it’s about five-hundredths of a second between the first two clicks, and between the last two it’s twice that long. In four decreasing, or 4D, there’s a fifth of a second between the first two clicks and only a tenth of a second between the last two. Then, there are syncopated codas. The coda most frequently issued by members of Unit R, which has been dubbed 1+1+3, has a cha-cha-esque rhythm and might be rendered in English as click . . . click . . . click-click-click.
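Those rhythms are concrete enough to write down. The Python sketch below encodes each coda named above as a list of inter-click intervals, in seconds; the values are rough figures inferred from the description in the preceding paragraph, not field measurements.

# Codas as inter-click intervals (seconds). A coda with k intervals
# has k + 1 clicks. Interval values are illustrative, not field data.

CODAS = {
    "3R":    [0.2, 0.2],                            # three evenly spaced clicks
    "7R":    [0.2] * 6,                             # seven evenly spaced clicks
    "7I":    [0.05, 0.06, 0.07, 0.08, 0.09, 0.10],  # intervals lengthen
    "4D":    [0.20, 0.15, 0.10],                    # intervals shrink
    "1+1+3": [0.5, 0.5, 0.1, 0.1],  # click . . . click . . . click-click-click
}

for name, intervals in CODAS.items():
    clicks = len(intervals) + 1
    print(name, clicks, "clicks over", round(sum(intervals), 2), "seconds")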

If codas are in any way comparable to words, a repertoire of twenty-five represents a pretty limited vocabulary. But, just as no one can yet say what, if anything, codas mean to sperm whales, no one can say exactly what features are significant to them. It may be that there are nuances in, say, pacing or pitch that have so far escaped human detection. Already, CETI team members have identified a new kind of signal—a single click—that may serve as some kind of punctuation mark.

When whales are resting near the surface, their exchanges can last an hour or more. Even by human standards, sperm-whale chatter is insistent and repetitive. “They’re talking on top of each other all the time,” Gero told me.

A snatch of dialogue recorded between two members of Unit R runs as follows. (Both Roger and Rita are adult females.)

Roger: 1+1+3
Rita: 1+1+3, 1+1+3
Roger: 9I
Rita: 1+1+3
Roger: 10I
Rita: 1+1+3, 1+1+3
Roger: 11I
Rita: 1+1+3
Roger: 10I, 11I, 1+1+3
Rita: 1+1+3

The “conversation” continues along much these same lines, until Rita finally changes her tune:

Rita: 1+1+3
Roger: 12R, 10I, 10I, 9I, 9I
Rita: 9I, 8I

Not long ago, suffering from writer’s block, I asked ChatGPT if it could rewrite “Moby-Dick” from the whale’s perspective. The chatbot began modestly. “As an A.I. language model, I can certainly attempt to rewrite a small excerpt,” it told me. Then it launched into what it titled “Moby-Dick—The White Leviathan’s Tale.”

In Moby-Dick’s “Moby-Dick,” the plot turned on an unfortunate case of mistaken identity. Some other whale had gnawed off Ahab’s leg; the white whale was as innocent as the driven snow.

“My heart was one of peace, my spirit one with the rhythm of the currents,” ChatGPT wrote:

I knew not of the vendetta forming against me, for I was but a creature of instinct, a guardian of the seas. But Captain Ahab’s thirst for vengeance loomed large, and I became the object of his undying rage. The scar he bore from a previous encounter with my kind had branded me as his ultimate adversary. In his eyes, I represented a force of nature that defied his understanding, a living enigma of the ocean’s might.

In paragraph seven, I was surprised—and excited—to see the whale/bot allude to the problem of interspecies communication:

Deep within the abyss of the sea, I pondered the futility of the captain’s quest. I longed to communicate with him, to show him that my kind, too, had emotions and families to protect. But the language barrier between our worlds remained an insurmountable chasm.

As anyone who has been conscious for the past ten months knows, ChatGPT is capable of amazing feats. It can write essays, compose sonnets, explain scientific concepts, and produce jokes (though these last are not necessarily funny). If you ask ChatGPT how it was created, it will tell you that first it was trained on a “massive corpus” of data from the Internet. This phase consisted of what’s called “unsupervised machine learning,” which was performed by an intricate array of processing nodes known as a neural network. Basically, the “learning” involved filling in the blanks; according to ChatGPT, the exercise entailed “predicting the next word in a sentence given the context of the previous words.” By digesting millions of Web pages—and calculating and recalculating the odds—ChatGPT got so good at this guessing game that, without ever understanding English, it mastered the language. (Other languages it is “fluent” in include Chinese, Spanish, and French.)

In theory at least, what goes for English (and Chinese and French) also goes for sperm whale. Provided that a computer model can be trained on enough data, it should be able to master coda prediction. It could then—once again in theory—generate sequences of codas that a sperm whale would find convincing. The model wouldn’t understand sperm whale-ese, but it could, in a manner of speaking, speak it. Call it ClickGPT.
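In that spirit, here is a toy version of the idea: a bigram model, in Python, “trained” on the Roger-and-Rita exchange quoted above, which predicts the coda most likely to come next. It is a sketch of the next-token principle only; CETI’s actual models are deep neural networks, and nothing below is their code.

from collections import Counter, defaultdict

# The Roger/Rita exchange from above, flattened into one coda sequence.
codas = ["1+1+3", "1+1+3", "1+1+3", "9I", "1+1+3", "10I", "1+1+3",
         "1+1+3", "11I", "1+1+3", "10I", "11I", "1+1+3", "1+1+3",
         "1+1+3", "12R", "10I", "10I", "9I", "9I", "9I", "8I"]

# Count how often each coda follows each other coda (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(codas, codas[1:]):
    follows[prev][nxt] += 1

def predict_next(coda):
    """Return the most frequent successor of `coda` in the training data."""
    return follows[coda].most_common(1)[0][0]

print(predict_next("1+1+3"))  # -> "1+1+3": the exchange is highly repetitive
print(predict_next("9I"))     # -> "9I": nine-click codas cluster at the end

Scale the same predict-the-next-token principle up from these twenty-two codas to the billions of clicks CETI hopes to collect, and you have the logic of ClickGPT: a model that could continue a coda sequence convincingly without understanding any of it.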

Currently, the largest collection of sperm-whale codas is an archive assembled by Gero in his years on and off Dominica. The codas contain roughly a hundred thousand clicks. In a paper published last year, members of the CETI team estimated that, to fulfill its goals, the project would need to assemble some four billion clicks, which is to say, a collection roughly forty thousand times larger than Gero’s.

“One of the key challenges toward the analysis of sperm whale (and more broadly, animal) communication using modern deep learning techniques is the need for sizable datasets,” the team wrote.

In addition to bugging individual whales, CETI is planning to tether a series of three “listening stations” to the floor of the Caribbean Sea. The stations should be able to capture the codas of whales chatting up to twelve miles from shore. (Though inaudible above the waves, sperm-whale clicks can register up to two hundred and thirty decibels, which is louder than a gunshot or a rock concert.) The information gathered by the stations will be less detailed than what the tags can provide, but it should be much more plentiful.

One afternoon, I drove with Gruber and CETI’s station manager, Yaniv Aluma, a former Israeli Navy seal, to the port in Roseau, where pieces of the listening stations were being stored. The pieces were shaped like giant sink plugs and painted bright yellow. Gruber explained that the yellow plugs were buoys, and that the listening equipment—essentially, large collections of hydrophones—would dangle from the bottom of the buoys, on cables. The cables would be weighed down with old train wheels, which would anchor them to the seabed. A stack of wheels, rusted orange, stood nearby. Gruber suddenly turned to Aluma and, pointing to the pile, said, “You know, we’re going to need more of these.” Aluma nodded glumly.

The listening stations have been the source of nearly a year’s worth of delays for CETI. The first was installed last summer, in water six thousand feet deep. Fish were attracted to the buoy, so the spot soon became popular among fishermen. After about a month, the fishermen noticed that the buoy was gone. Members of CETI’s Dominica-based staff set out in the middle of the night on CETI 1 to try to retrieve it. By the time they reached the buoy, it had drifted almost thirty miles offshore. Meanwhile, the hydrophone array, attached to the rusty train wheels, had dropped to the bottom of the sea.

The trouble was soon traced to the cable, which had been manufactured in Texas by a company that specializes in offshore oil-rig equipment. “They deal with infrastructure that’s very solid,” Aluma explained. “But a buoy has its own life. And they didn’t calculate so well the torque or load on different motions—twisting and moving sideways.” The company spent months figuring out why the cable had failed and finally thought it had solved the problem. In June, Aluma flew to Houston to watch a new cable go through stress tests. In the middle of the tests, the new design failed. To avoid further delays, the CETI team reconfigured the stations. One of the reconfigured units was installed late last month. If it doesn’t float off, or in some other way malfunction, the plan is to get the two others in the water sometime this fall.

A sperm whale’s head takes up nearly a third of its body; its narrow lower jaw seems borrowed from a different animal entirely; and its flippers are so small as to be almost dainty. (The formal name for the species is Physeter macrocephalus, which translates roughly as “big-headed blowhole.”) “From just about any angle,” Hal Whitehead, one of the world’s leading sperm-whale experts (and Gero’s thesis adviser), has written, sperm whales appear “very strange.” I wanted to see more of these strange-looking creatures than was visible from a catamaran, and so, on my last day in Dominica, I considered going on a commercial tour that offered customers a chance to swim with whales, assuming that any could be located. In the end—partly because I sensed that Gruber disapproved of the practice—I dropped the idea.

Instead, I joined the crew on CETI 1 for what was supposed to be another round of drone tagging. After we’d been under way for about two hours, codas were picked up, to the northeast. We headed in that direction and soon came upon an extraordinary sight. There were at least ten whales right off the boat’s starboard. They were all facing the same direction, and they were bunched tightly together, in rows. Gero identified them as members of Unit A. The members of Unit A were originally named for characters in Margaret Atwood novels, and they include Lady Oracle, Aurora, and Rounder, Lady Oracle’s daughter.

Earlier that day, the crew on CETI 2 had spotted pilot whales, or blackfish, which are known to harass sperm whales. “This looks very defensive,” Gero said, referring to the formation.

Suddenly, someone yelled out, “Red!” A burst of scarlet spread through the water, like a great banner unfurling. No one knew what was going on. Had the pilot whales stealthily attacked? Was one of the whales in the group injured? The crowding increased until the whales were practically on top of one another.

Then a new head appeared among them. “Holy fucking shit!” Gruber exclaimed.

“Oh, my God!” Gero cried. He ran to the front of the boat, clutching his hair in amazement. “Oh, my God! Oh, my God!” The head belonged to a newborn calf, which was about twelve feet long and weighed maybe a ton. In all his years of studying sperm whales, Gero had never watched one being born. He wasn’t sure anyone ever had.

As one, the whales made a turn toward the catamaran. They were so close I got a view of their huge, eerily faceless heads and pink lower jaws. They seemed oblivious of the boat, which was now in their way. One knocked into the hull, and the foredeck shuddered.

The adults kept pushing the calf around. Its mother and her relatives pressed in so close that the baby was almost lifted out of the water. Gero began to wonder whether something had gone wrong. By now, everyone, including the captain, had gathered on the bow. Pagani and another undergraduate, Aidan Kenny, had launched two drones and were filming the action from the air. Mevorach, meanwhile, was recording the whales through a hydrophone.

To everyone’s relief, the baby began to swim on its own. Then the pilot whales showed up—dozens of them.

“I don’t like the way they’re moving,” Gruber said.

“They’re going to attack for sure,” Gero said. The pilot whales’ distinctive, wave-shaped fins slipped in and out of the water.

What followed was something out of a marine-mammal “Lord of the Rings.” Several of the pilot whales stole in among the sperm whales. All that could be seen from the boat was a great deal of thrashing around. Out of nowhere, more than forty Fraser’s dolphins arrived on the scene. Had they come to participate in the melee or just to rubberneck? It was impossible to tell. They were smaller and thinner than the pilot whales (which, their name notwithstanding, are also technically dolphins).

“I have no prior knowledge upon which to predict what happens next,” Gero announced. After several minutes, the pilot whales retreated. The dolphins curled through the waves. The whales remained bunched together. Calm reigned. Then the pilot whales made another run at the sperm whales. The water bubbled and churned.

“The pilot whales are just being pilot whales,” Gero observed. Clearly, though, in the great “struggle for existence,” everyone on board CETI 1 was on the side of the baby.

The skirmishing continued. The pilot whales retreated, then closed in again. The drones began to run out of power. Pagani and Kenny piloted them back to the catamaran to exchange the batteries. These were so hot they had to be put in the boat’s refrigerator. At one point, Gero thought that he spied the new calf, still alive and well. (He would later, from the drone footage, identify the baby’s mother as Rounder.) “So that’s good news,” he called out.

The pilot whales hung around for more than two hours. Then, all at once, they were gone. The dolphins, too, swam off.

“There will never be a day like this again,” Gero said as CETI 1 headed back to shore.

That evening, everyone who’d been on board CETI 1 and CETI 2 gathered at a dockside restaurant for a dinner in honor of the new calf. Gruber made a toast. He thanked the team for all its hard work. “Let’s hope we can learn the language with that baby whale,” he said.

I was sitting with Gruber and Gero at the end of a long table. In between drinks, Gruber suggested that what we had witnessed might not have been an attack. The scene, he proposed, had been more like the last act of “The Lion King,” when the beasts of the jungle gather to welcome the new cub.

“Three different marine mammals came together to celebrate and protect the birth of an animal with a sixteen-month gestation period,” he said. Perhaps, he hypothesized, this was a survival tactic that had evolved to protect mammalian young against sharks, which would have been attracted by so much blood and which, he pointed out, would have been much more numerous before humans began killing them off.

“You mean the baby whale was being protected by the pilot whales from the sharks that aren’t here?” Gero asked. He said he didn’t even know what it would mean to test such a theory. Gruber said they could look at the drone footage and see if the sperm whales had ever let the pilot whales near the newborn and, if so, how the pilot whales had responded. I couldn’t tell whether he was kidding or not.

“That’s a nice story,” Mevorach interjected.

“I just like to throw ideas out there,” Gruber said.

“My! You don’t say so!” said the Doctor. “You never talked that way to me before.”

“What would have been the good?” said Polynesia, dusting some cracker crumbs off her left wing. “You wouldn’t have understood me if I had.”

—“The Story of Doctor Dolittle.”

The Computer Science and Artificial Intelligence Laboratory (CSAIL), at M.I.T., occupies a Frank Gehry-designed building that appears perpetually on the verge of collapse. Some wings tilt at odd angles; others seem about to split in two. In the lobby of the building, there’s a vending machine that sells electrical cords and another that dispenses caffeinated beverages from around the world. There’s also a yellow sign of the sort you might see in front of an elementary school. It shows a figure wearing a backpack and carrying a briefcase and says “NERD XING.”

Daniela Rus, who runs CSAIL (pronounced “see-sale”), is a roboticist. “There’s such a crazy conversation these days about machines,” she told me. We were sitting in her office, which is dominated by a robot, named Domo, who sits in a glass case. Domo has a metal torso and oversized, goggly eyes. “It’s either machines are going to take us down or machines are going to solve all of our problems. And neither is correct.”

Along with several other researchers at CSAIL, Rus has been thinking about how CETI might eventually push beyond coda prediction to something approaching coda comprehension. This is a formidable challenge. Whales in a unit often chatter before they dive. But what are they chattering about? How deep to go, or who should mind the calves, or something that has no analogue in human experience?

“We are trying to correlate behavior with vocalization,” Rus told me. “Then we can begin to get evidence for the meaning of some of the vocalizations they make.”

She took me down to her lab, where several graduate students were tinkering in a thicket of electronic equipment. In one corner was a transparent plastic tube loaded with circuitry, attached to two white plastic flippers. The setup, Rus explained, was the skeleton of a robotic turtle. Lying on the ground was the turtle’s plastic shell. One of the students hit a switch and the flippers made a paddling motion. Another student brought out a two-foot-long robotic fish. Both the fish and the turtle could be configured to carry all sorts of sensors, including underwater cameras.

“We need new methods for collecting data,” Rus said. “We need ways to get close to the whales, and so we’ve been talking a lot about putting the sea turtle or the fish in water next to the whales, so that we can image what we cannot see.”

CSAIL is an enormous operation, with more than fifteen hundred staff members and students. “People here are kind of audacious,” Rus said. “They really love the wild and crazy ideas that make a difference.” She told me about a diver she had met who had swum with the sperm whales off Dominica and, by his account at least, had befriended one. The whale seemed to like to imitate the diver; for example, when he hung in the water vertically, it did, too.

“The question I’ve been asking myself is: Suppose that we set up experiments where we engage the whales in physical mimicry,” Rus said. “Can we then get them to vocalize while doing a motion? So, can we get them to say, ‘I’m going up’? Or can we get them to say, ‘I’m hovering’? I think that, if we were to find a few snippets of vocalizations that we could associate with some meaning, that would help us get deeper into their conversational structure.”

While we were talking, another CSAIL professor and CETI collaborator, Jacob Andreas, showed up. Andreas, a computer scientist who works on language processing, said that he had been introduced to the whale project at a faculty retreat. “I gave a talk about understanding neural networks as a weird translation problem,” he recalled. “And Daniela came up to me afterwards and she said, ‘Oh, you like weird translation problems? Here’s a weird translation problem.’ ”

Andreas told me that CETI had already made significant strides, just by reanalyzing Gero’s archive. Not only had the team uncovered the new kind of signal but it had also found that codas have much more internal structure than had previously been recognized. “The amount of information that this system can carry is much bigger,” he said.

“The holy grail here—the thing that separates human language from all other animal communication systems—is what’s called ‘duality of patterning,’ ” Andreas went on. “Duality of patterning” refers to the way that meaningless units—in English, sounds like “sp” or “ot”—can be combined to form meaningful units, like “spot.” If, as is suspected, clicks are empty of significance but codas refer to something, then sperm whales, too, would have arrived at duality of patterning. “Based on what we know about how the coda inventory works, I’m optimistic—though still not sure—that this is going to be something that we find in sperm whales,” Andreas said.

The question of whether any species possesses a “communication system” comparable to that of humans is an open and much debated one. In the nineteen-fifties, the behaviorist B. F. Skinner argued that children learn language through positive reinforcement; therefore, other animals should be able to do the same. The linguist Noam Chomsky had a different view. He dismissed the notion that kids acquire language via conditioning, and also the possibility that language was available to other species.

In the early nineteen-seventies, a student of Skinner’s, Herbert Terrace, set out to confirm his mentor’s theory. Terrace, at that point a professor of psychology at Columbia, adopted a chimpanzee, whom he named, tauntingly, Nim Chimpsky. From the age of two weeks, Nim was raised by people and taught American Sign Language. Nim’s interactions with his caregivers were videotaped, so that Terrace would have an objective record of the chimp’s progress. By the time Nim was three years old, he had a repertoire of eighty signs and, significantly, often produced them in sequences, such as “banana me eat banana” or “tickle me Nim play.” Terrace set out to write a book about how Nim had crossed the language barrier and, in so doing, made a monkey of his namesake. But then Terrace double-checked some details of his account against the tapes. When he looked carefully at the videos, he was appalled. Nim hadn’t really learned A.S.L.; he had just learned to imitate the last signs his teachers had made to him.

“The very tapes I planned to use to document Nim’s ability to sign provided decisive evidence that I had vastly overestimated his linguistic competence,” Terrace wrote.

Since Nim, many further efforts have been made to prove that different species—orangutans, bonobos, parrots, dolphins—have a capacity for language. Several of the animals who were the focus of these efforts—Koko the gorilla, Alex the gray parrot—became international celebrities. But most linguists still believe that the only species that possesses language is our own.

Language is “a uniquely human faculty” that is “part of the biological nature of our species,” Stephen R. Anderson, a professor emeritus at Yale and a former president of the Linguistic Society of America, writes in his book “Doctor Dolittle’s Delusion.”

Whether sperm-whale codas could challenge this belief is an issue that just about everyone I talked to on the CETI team said they’d rather not talk about.

“Linguists like Chomsky are very opinionated,” Michael Bronstein, the Oxford professor, told me. “For a computer scientist, usually a language is some formal system, and often we talk about artificial languages.” Sperm-whale codas “might not be as expressive as human language,” he continued. “But I think whether to call it ‘language’ or not is more of a formal question.”

“Ironically, it’s a semantic debate about the meaning of language,” Gero observed.

Of course, the advent of ChatGPT further complicates the debate. Once a set of algorithms can rewrite a novel, what counts as “linguistic competence”? And who—or what—gets to decide?

“When we say that we’re going to succeed in translating whale communication, what do we mean?” Shafi Goldwasser, the Radcliffe Institute fellow who first proposed the idea that led to CETI, asked.

“Everybody’s talking these days about these generative A.I. models like ChatGPT,” Goldwasser, who now directs the Simons Institute for the Theory of Computing, at the University of California, Berkeley, went on. “What are they doing? You are giving them questions or prompts, and then they give you answers, and the way that they do that is by predicting how to complete sentences or what the next word would be. So you could say that’s a goal for CETI—that you don’t necessarily understand what the whales are saying, but that you could predict it with good success. And, therefore, you could maybe generate a conversation that would be understood by a whale, but maybe you don’t understand it. So that’s kind of a weird success.”

Prediction, Goldwasser said, would mean “we’ve realized what the pattern of their speech is. It’s not satisfactory, but it’s something.

“What about the goal of understanding?” she added. “Even on that, I am not a pessimist.”
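What prediction without understanding might look like in practice can be sketched in a few lines of Python, assuming codas have already been transcribed into symbolic labels; the labels, the sequences, and the bigram model below are invented placeholders for illustration, not CETI's data or method.

    # Toy next-coda predictor: count which coda type tends to follow which,
    # then predict the most frequent successor. Prediction, not understanding.
    from collections import Counter, defaultdict

    def train_bigrams(sequences):
        """Count how often each coda type follows each other coda type."""
        following = defaultdict(Counter)
        for seq in sequences:
            for prev, nxt in zip(seq, seq[1:]):
                following[prev][nxt] += 1
        return following

    def predict_next(following, prev):
        """Most likely next coda after `prev`, or None if never observed."""
        counts = following.get(prev)
        return counts.most_common(1)[0][0] if counts else None

    # Invented placeholder exchanges, written as coda-type labels:
    exchanges = [
        ["1+1+3", "1+1+3", "5R1"],
        ["5R1", "5R2", "1+1+3"],
        ["1+1+3", "1+1+3", "5R2"],
    ]
    model = train_bigrams(exchanges)
    print(predict_next(model, "1+1+3"))  # -> "1+1+3"

A model along these lines could, in Goldwasser’s terms, get good at completing whale “sentences” without anyone knowing what, if anything, they mean.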

There are now an estimated eight hundred and fifty thousand sperm whales diving the world’s oceans. This is down from an estimated two million in the days before the species was commercially hunted. It’s often suggested that the darkest period for P. macrocephalus was the middle of the nineteenth century, when Melville shipped out of New Bedford on the Acushnet. In fact, the bulk of the slaughter took place in the middle of the twentieth century, when sperm whales were pursued by diesel-powered ships the size of factories. In the eighteen-forties, at the height of open-boat whaling, some five thousand sperm whales were killed each year; in the nineteen-sixties, the number was six times as high. Sperm whales were boiled down to make margarine, cattle feed, and glue. As recently as the nineteen-seventies, General Motors used spermaceti in its transmission fluid.

Near the peak of industrial whaling, a biologist named Roger Payne heard a radio report that changed his life and, with it, the lives of the world’s remaining cetaceans. The report noted that a whale had washed up on a beach not far from where Payne was working, at Tufts University. Payne, who’d been researching moths, drove out to see it. He was so moved by the dead animal that he switched the focus of his research. His investigations led him to a naval engineer who, while listening for Soviet submarines, had recorded eerie underwater sounds that he attributed to humpback whales. Payne spent years studying the recordings; the sounds, he decided, were so beautiful and so intricately constructed that they deserved to be called “songs.” In 1970, he arranged to have “Songs of the Humpback Whale” released as an LP.

“I just thought: the world has to hear this,” he would later recall. The album sold briskly, was sampled by popular musicians like Judy Collins, and helped launch the “Save the Whales” movement. In 1979, National Geographic issued a “flexi disc” version of the songs, which it distributed as an insert in more than ten million copies of the magazine. Three years later, the International Whaling Commission declared a “moratorium” on commercial hunts, which remains in effect today. The move is credited with having rescued several species, including humpbacks and fin whales, from extinction.

Payne, who died in June at the age of eighty-eight, was an early and ardent member of the CETI team. (This was the case, Gruber told me, even though he was disappointed that the project was focussing on sperm whales, rather than on humpbacks, which, he maintained, were more intelligent.) Just a few days before his death, Payne published an op-ed piece explaining why he thought CETI was so important.

Whales, along with just about every other creature on Earth, are now facing grave new threats, he observed, among them climate change. How to motivate “ourselves and our fellow humans” to combat these threats?

“Inspiration is the key,” Payne wrote. “If we could communicate with animals, ask them questions and receive answers—no matter how simple those questions and answers might turn out to be—the world might soon be moved enough to at least start the process of halting our runaway destruction of life.”

Several other CETI team members made a similar point. “One important thing that I hope will be an outcome of this project has to do with how we see life on land and in the oceans,” Bronstein said. “If we understand—or we have evidence, and very clear evidence in the form of language-like communication—that intelligent creatures are living there and that we are destroying them, that could change the way that we approach our Earth.”

“I always look to Roger’s work as a guiding star,” Gruber told me. “The way that he promoted the songs and did the science led to an environmental movement that saved whale species from extinction. And he thought that CETI could be much more impactful. If we could understand what they’re saying, instead of ‘save the whales’ it will be ‘saved by the whales.’

“This project is kind of an offering,” he went on. “Can technology draw us closer to nature? Can we use all this amazing tech we’ve invented for positive purposes?”

ChatGPT shares this hope. Or at least the A.I.-powered language model is shrewd enough to articulate it. In the version of “Moby-Dick” written by algorithms in the voice of a whale, the story ends with a somewhat ponderous but not unaffecting plea for mutuality:

I, the White Leviathan, could only wonder if there would ever come a day when man and whale would understand each other, finding harmony in the vastness of the ocean’s embrace. ♦

Published in the print edition of the September 11, 2023, issue.

interview | Dreaming as a way of doing politics and as a state of creation (Cult)

Welington Andrade

Revista Cult, issue 292

Photo: Bob Sousa

Two weeks before turning 86 on March 30, the theater director José Celso Martinez Corrêa received Cult at his apartment in the Ibirapuera neighborhood of São Paulo to talk about his newest project: a stage adaptation of A queda do céu: palavras de um xamã yanomami (The Falling Sky: Words of a Yanomami Shaman), by Davi Kopenawa and Bruce Albert. Though without the physical agility of earlier years, Zé Celso remains unbeatable in the way he combines quickness of thought with verbal dexterity. After staging, in 2022, the last year of the Bolsonaro government, an adaptation of Christopher Marlowe's Faustus in which the tragic hero "churned at the crossroads" of that Brazil, glimpsing as a parodic way out a country that would belong to everyone, with Exu athwart it all, the director wants, in this first year of the Lula government, to speak of the Yanomami, not only to denounce the massacre they have been suffering but also to call attention to the way they do politics: through dreams, an activity essential to ancestral cultures.

You are adapting the book A queda do céu: palavras de um xamã yanomami, by Davi Kopenawa and Bruce Albert, for the theater. Tell us a little about how that process is unfolding, please.
There are five people involved in the project: the playwright Fernando de Carvalho; the architect and lighting designer Pedro Felizes [who holds a master's in Social Anthropology, with a dissertation on the Pirahã, from Roraima]; the actor Roderick Himeros; the conductor Felipe Botelho; and me. We have been working together daily since February 1. Other people started but gave up. It is extremely difficult, because the book is enormous. It has 729 pages and 24 chapters. We are halfway through, at chapter 12. Adapting each chapter takes practically two to three days, because they are very long, full of marvelous things. We do a kind of gold-panning... (Having just used such a common expression, Zé gives a start and quickly corrects himself.) One can't speak of gold-panning in connection with the Yanomami, can one? We do a kind of sifting and keep the strongest material, because it's impossible to include everything. In fact, the impression is that it will end up bigger than Os sertões. In any case, we don't want to divide the work into parts, as I did with Euclides's book. We want to make a single show. We will only be able to plan the production once the adaptation is ready, at least in a first version. The book is enthralling, and very well written. Davi Kopenawa doesn't use "paper skin," as he puts it, but he agreed to record countless conversations with Bruce Albert about this nation (I consider it a nation) of extremely rich culture, the Yanomami. In the postscript ["When I is another (and vice versa)"], the French anthropologist recounts how the book was made. At first I thought of adapting that part for the show as well, but I decided that material will go into the program.

What moved you in reading the book? Why did you decide to bring it into the world of theater?
I have wanted to adapt it for a long time. Ever since I first read it, I was deeply impressed, because it is a monumental work. It is a work on the level of Guimarães Rosa, of Euclides da Cunha. It is extraordinary, and universal. That is why, in fact, it is a success the world over. In November of last year, I took part in the opening panel of the Festa Literária da Morada do Sol [FliSol], in Araraquara, alongside Ignácio de Loyola Brandão (we are both from there). Then Eryk Rocha, Glauber's son, told me that the following day there would be a conversation with Kopenawa. I had already seen him speak several times. So, during the talk, I asked whether he would give me the rights to adapt the book for the stage. He said he would. And he did. So I set to work. The whole process should take another two months or so. The book is beautiful, but complex. And very varied, too, because there are many facets to it. It is a book very well put together by Bruce Albert.

Did you consider the risk of cultural appropriation in this work?
There will be no improper appropriation. I am going to work with the Yanomami. I am not going to work with actors playing the role of Indigenous people. In fact, I want to invite one of those young Yanomami men who went to the Oscar ceremony to hand the statuette of Omama, which is not made of gold, to the winning actresses and actors. He spoke in Yanomami. A pity they didn't film it. The cast will be Indigenous. Four Indigenous actors will play Davi at different stages of his life, including a child and an adolescent. I have never worked with Indigenous actors. It will be the first time. White actors will play the garimpeiros and the missionaries, the antagonists. Felipe Botelho, the conductor, is already studying Yanomami music, and the idea is to invite Yanomami musicians to take part in the show. A Yanomami band will be on stage. For the dramaturgy I will probably have the advice of someone specializing in Indigenous culture. On the morning of March 15, Unifesp awarded Kopenawa an honorary doctorate. That same day, in the afternoon, Sesc Vila Mariana paid him tribute, opening the event "Efeito Kopenawa," which included Ailton Krenak and Manuela Carneiro da Cunha, among many other important activists. The artistic direction of the ceremony was in the hands of Andreia Duarte, an actress and researcher of the relationship between theater and Indigenous peoples. I took part in the opening of the event and read the first chapter of the book. Kopenawa was very friendly and told me: "You are a little old man, and very intelligent." (Laughter.)

How does the book relate to the poetics of Oficina?
I don't think about that. The poetics of Oficina is in those of us who are handling the adaptation. But perhaps the cast of white actors won't necessarily come from Oficina, because Camila [Mota] and Marcelo [Drummond], for example, are involved in other projects.

Beyond the specific subjects your productions address, they also always speak to Brazil's urgent questions...
Exactly, and the Yanomami are not Brazilians. They live in Brazil, but they came long before the Portuguese.

And we are wiping them out...
But now, under the Lula government, things tend to improve. It is a very favorable government.

Are you optimistic about the Lula government?
Yes. His cabinet is lavish. We now have an Indigenous leader, Sônia Guajajara, at the Ministry of Indigenous Peoples, with Anielle Franco as minister of Racial Equality, Silvio Almeida as minister of Human Rights and Citizenship, and Marina Silva as minister of the Environment and Climate Change. In that area, the government is marvelous. The "people of merchandise" don't value this. They don't even notice, but it is proving extremely important. These people are working for another Brazil, a Brazil that answers neither to what the market wants nor to the big press. Look at the case of Folha de S.Paulo, which, after being a democratic newspaper for a long time, changed a great deal with the departure of Jânio de Freitas; it turned to the right. My project at Oficina was to stage Heliogabalo ou O anarquista coroado, by Antonin Artaud. Fernando de Carvalho and I made an adaptation of the play, published by the publisher n-1. But then I decided it doesn't fit this moment. This is the moment to work on the most urgent questions we are living through in Brazil today. And for me the great question is the Yanomami crisis. And the presence of the garimpeiros in the region. And of their financiers. The garimpeiros earn a pittance, but what they extract is bought by the high bourgeoisie. It is the logic of capitalism. There is a great deal of capital invested in Yanomami territory, from mining as well as from religious missions. It is hunger, destitution. It is Auschwitz. I want to make this show to mark the great transformations we are seeing emerge in Brazil in this first year of the Lula government.

The second part, "A fumaça do metal" (The smoke of the metal), is perhaps the most devastating in the book, because it denounces the most terrible things happening to them.
The chapter about the massacre is striking, because it has a very Brechtian treatment. First he shows that the Indigenous people were happy when the garimpeiros arrived, because they received gifts from them. Until the garimpeiros tire of the Yanomami and the aggressions begin, culminating in the massacre. Kopenawa lays out very well the course of the relationship between the two groups; it is very clear, very didactic. Very much in Brecht's style.

What can we learn from the Yanomami? And from Davi Kopenawa?
Everything. He lives in the forest and takes yãkoana, which is a hallucinogen. He travels with the xapiri, spirit beings he sees. Practically everything he dreams he takes as guidance for social life. He doesn't base himself on the economy, on planning. Yanomami politics is based on dreams, and that is very beautiful. It is something very different from us. I understand them, because for many years I took hallucinogenic substances (ayahuasca, mescaline, peyote) in order to create. I created a great deal. And I came to believe far more in the dreams that emerged from those journeys. They shaped my work. A production like As três irmãs, for example, was entirely shaped around hallucinogens. I remember that we went to Bangoracea beach, in Ubatuba, dressed in the costumes of the play, and took mescaline. Then we stripped and went into the sea and had a pointillist vision. All of us were stippled with dots. We joined hands, dug our hands into the sand and began to be massaged by the sand, enveloped in a profusion of colors. It was one of the most powerful experiences of my life. I understand yãkoana, because when I took all those substances I dreamed a great deal, but I dreamed of theater. The Yanomami take all their hallucinogenic dreams as if they were their constitution. They start from there and organize themselves through dreams. I know that state of creation.

Zé, you make a theater we might call sapiential, a theater that proceeds from a deep understanding of things and seeks to transmit ancestral knowledge to the audience. Would you be a kind of shaman of Brazilian theater?
I don't know... In Yanomami culture, the shaman turns what he dreams into political reality. I gradually developed a perception of the things I believe in. My plays are the materialization of dreams. If I don't dream, I can't make a play. I have Indigenous origins and I am also very attached to Black culture, to the batuque. I love it. From Mãe Stella de Oxóssi, of candomblé in Bahia, I received the honorific "Exu, lord of the performing arts." It is a title that fills me with pride.

Can getting to know shamanism lead us to experience other modes of subjectivation?
Yes. And it is beautiful how, in the book, Kopenawa himself passes through several processes of identity. First, as a child, he dreams a great deal and wakes up frightened by his dreams. His stepfather then sees in this a kind of predestination for shamanism. Not every Indigenous person becomes a shaman. Shamanism is a bodily and psychic process set off by ingesting yãkoana, the powder from the bark of a tree, which leads to the hallucinatory journey in which one sees the xapiri. In the show, in fact, we are going to make the xapiri appear. We are going to materialize many of his dreams. Then there is the incredible delicacy with which he recounts that he wanted to be white. He tells it with poetry, but later he leaves that behind. He passes through several processes: believing in the Christian god, for example; believing in Funai. He goes to Funai because he had the desire to be white. And he recounts none of this with rancor. It is always through subjectivity. Subjectivity is very important in that culture. And it results in a warrior alterity.

How will your adaptation deal with the apocalyptic image: will the sky really have to fall?
The Yanomami work to keep the sky from falling, because in their conception the sky has already fallen once. But where it fell, the forest was born. And in the forest they work to keep the sky from falling again. It is cyclical. They dream of pillars where the spirits live, firmly planted in the earth, holding up the sky. Creating that image at Oficina will be very interesting. Indigenous cosmology is closely bound to everyday life. We learn a great deal from them. And we recognize ourselves, too. My grandmother was an Indigenous woman captured by bandeirantes in Porto Ferreira. She later went to live in Araraquara (arará kûara, in old Tupi) and married my Portuguese grandfather. My great-grandmother, in turn, was a mad Indian woman who would lie rolling about in bed... I have carried these questions for a long time. When I read the book, I recognized myself. And I gained more experience. This book is written into my life. And into my body. But it is very difficult. It is the greatest challenge of my life.

Greater than O rei da vela?
There's no comparison! With O rei da vela, Renato Borghi read the text and said to me, "Let's go!" And I went with him. Os sertões was laborious, but it was about a Brazil my generation had studied. A queda do céu speaks of another Brazil, and of a non-Brazil. It is very different from a North American, French or Russian play. It is another subjectivity.

Are there Oswaldian echoes in the work?
Without a doubt. To begin with, the book's preface was written by Eduardo Viveiros de Castro, whose work is in close dialogue with Oswald's. Oswald gave fundamental importance to "tupy or not tupy, that's the question." He understood the Indigenous question. A queda do céu is an Oswaldian play. If Oswald were alive, he would be working with us.

Bob Sousa, a photographer with a master's in Arts from Unesp and a member of the APCA theater jury, is the author of the book Retratos do teatro (Editora Unesp).

Reinaldo José Lopes: Layers at the bottom of a lake record how human presence has radically transformed the Earth (Folha de S.Paulo)

www1.folha.uol.com.br

Opinion

Dec. 3, 2023, at 11:15 pm

“The world is changing: I feel it in the water, I feel it in the earth, and I smell it in the air.” Anyone who has only watched the films of the “Lord of the Rings” series grew used to hearing this line in the august voice of Cate Blanchett (the elf Galadriel); in the books, it is spoken by the Ent (tree-giant) Treebeard. It is, at bottom, a summary of the conclusion of J.R.R. Tolkien's fantasy novel: the end of one age and the beginning of another, characterized by the Dominion of Men. And what if it were possible to directly detect something very much like this in our 21st-century world? Something that would prove, beyond any doubt, that our species has come to shape the Earth irreversibly?

The answer to that question can be found in many places, but everything indicates that its most forceful and consolidated version, the one that will enter the geology and history books, comes from Crawford Lake, in Canada. The scientists charged with formally defining the beginning of the so-called Anthropocene (the geological epoch characterized by massive human intervention in many aspects of how the planet works) are using the lake as the example par excellence of this phenomenon.

That is why I invite the reader to take a dive into those alkaline waters. Understanding the details that make the place such a useful example for grasping the Anthropocene is, at one and the same time, a small lesson in scientific method and a portrait of the power, so often destructive, that we have developed as a species.

One of the most complete analyses of the Canadian lake was published in the scientific journal The Anthropocene Review by a team from Brock University, in Canada. The first thing to bear in mind is that Crawford Lake looks like a large funnel: relatively small (2.4 hectares in area) and deep (24 m between the surface and the bed). This means that its layers of water, although well oxygenated, mix very little. Because of the high salinity and alkalinity, there is little animal life at the bottom.

And here is the first masterstroke: these characteristics allow very stable layers of sediment to settle on the bed of Crawford Lake every year. Each year the same story repeats: in autumn, a darker film of organic matter sinks to the bottom (since this is Canada, many trees shed their leaves at that time of year); in summer, that layer is covered by another, lighter one, made of calcium-rich minerals. This regularity is never scrambled by so-called bioturbation (aquatic invertebrates burrowing into the bed, for example).

In other words, the bottom of the lake is a precision clock, or better, a calendar. Sediment cores extracted from it can be dated year by year with very little uncertainty.
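The dating logic is simple enough to sketch in a few lines of Python; the coring year of 2022 is a placeholder of ours, not a figure from the study.

    # Toy varve chronology: one dark/light sediment couplet = one calendar year,
    # so a couplet's depth rank maps directly to a date.
    CORING_YEAR = 2022  # assumed for illustration

    def varve_year(couplet_index, coring_year=CORING_YEAR):
        """Couplet 0 is the topmost (newest) layer; each deeper one is a year older."""
        return coring_year - couplet_index

    print(varve_year(55))  # -> 1967, the plutonium peak year cited below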

This means the appearance of the chemical element plutonium (a direct result of the use of nuclear weapons, mainly in military tests) can be pinpointed from 1948 onward, with a peak in 1967 and a decline in the 1980s. Given the nature of radioactive elements, that signature will be there rigorously "forever" (at least from a human point of view).
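To make that "forever" concrete, a side calculation of ours (the column names no isotope; assuming plutonium-239, whose half-life is roughly 24,100 years), exponential decay leaves the signal essentially untouched on human timescales:

\[ \frac{N(t)}{N_0} = \left(\tfrac{1}{2}\right)^{t/T_{1/2}}, \qquad \left(\tfrac{1}{2}\right)^{1000/24100} \approx 0.97, \]

meaning that a thousand years from now about 97% of the peak deposit would still be in place.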

Something very similar holds for so-called SCPs (spheroidal carbonaceous particles). They are produced by the industrial burning, at high temperatures, of coal and petroleum derivatives. They begin to appear in sediments from the second half of the 19th century, but their presence only takes off, once again, in the early 1950s. Nothing other than human action could produce this phenomenon.

That is why scientists are proposing the year 1950 as the start of the Anthropocene. Even if the proposal doesn't "stick" in that exact form, the weight of evidence such as the layers of Crawford Lake is exceedingly hard to contradict. It is in the water, in the earth and in the air. And, for better or worse, the responsibility is ours.

Mônica Bergamo: Ipec survey reveals that 7 in 10 Brazilians have experienced an extreme climate event (Folha de S.Paulo)

www1.folha.uol.com.br

Dec. 3, 2023, at 11:15 pm


A previously unpublished survey conducted by Ipec (Inteligência em Pesquisa e Consultoria Estratégica) at the request of the Instituto Pólis reveals that 7 in 10 Brazilians have experienced at least one extreme event linked to climate change.

Among the episodes most cited by respondents are very heavy rains (20%), drought and water scarcity (20%), flooding, inundations and river floods (18%), extreme temperatures (10%), blackouts (7%), cyclones and windstorms (6%), and burn-offs and fires (5%).

Ipec interviewed 2,000 people aged 16 and over between July 22 and 26 of this year. The survey, commissioned by the Instituto Pólis with support from the Instituto Clima e Sociedade, has a margin of error of two percentage points, plus or minus, and a confidence level of 95%.
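As a sanity check of ours (not the pollsters'), that margin matches the standard worst-case formula for a simple random sample of n = 2,000 at 95% confidence, taking p = 0.5:

\[ \mathrm{MOE} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{2000}} \approx 0.022, \]

or about two percentage points.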

The survey shows that extreme temperatures, whether intense cold or intense heat, are the occurrences the population most associates with the climate crisis (44%). In practical terms, however, water shortage and drought are the events that worry people most, cited by 34% of respondents.

Next come fears of flooding, inundations and river floods (23%), fires and burn-offs (18%) and heavy rain (17%). Worry about the advent of extreme heat or cold comes fifth, feared by 16% of respondents.

Also according to the survey, these apprehensions vary with respondents' class and color. Flooding, inundations and river floods worry classes D and E, where they are cited by 25% of respondents, more than classes A and B (19%). The national average is 23%.

The Black population, in turn, shows greater concern (25%) about these same occurrences than the white population (21%).

For researchers at Pólis, the answers also indicate that the Brazilian population supports investment in renewable energy sources to fight climate change.

Of all respondents, 84% say they worry about the future and support investment in these modalities. For 57%, solar energy should be prioritized in public investment. Hydropower (14%) and wind power (13%) are cited next.

On the other hand, respondents say that oil (73%), coal (72%) and fossil gas (67%) are the categories that contribute most to the worsening of climate change.

"The survey indicates, for the first time, that there is a trend toward an ever-higher political cost if government decisions keep heading toward investment in non-renewable sources," says the executive director of the Instituto Pólis, Henrique Frota.

"The numbers show that Brazilians want priority investment in renewable sources and see that decision as fundamental to fighting climate change," Frota adds.

The way out of burnout (The Economist)

economist.com

The Economist

A psychoanalyst explains why, for people feeling “burnt out”, simply trying to relax doesn’t always work

July 28, 2016


By Josh Cohen

A patient of mine named Elliot recently took a week off from his demanding job as a GP. He felt burnt out and badly needed to rest. The plan was to sleep late, read a novel, take the odd leisurely walk, maybe catch up on “Game of Thrones”. But somehow he found himself instead packing his schedule with art museums, concerts, theatre, meetings with friends in hot new bars and restaurants. Then there were the visits to the gym, Spanish lessons, some flat-pack furniture assemblage.

During the first of his twice-weekly evening sessions, he wondered if he shouldn’t slow down. He felt as exhausted as ever. Facebook and Twitter friends were joking about how it all sounded like harder work than work. “I’m trying to figure out how I’ve managed to be doing so much when I didn’t want to be doing anything. Somehow not doing anything seems impossible. I mean, how can you just…do nothing?!”

When Elliot protests that he can’t just do nothing, he is seeing and judging himself from the perspective of a culture that looks with disdain at anything that smacks of inactivity. Under constant self-scrutiny as to whether he is being sufficiently productive, he feels ashamed when he judges himself to have come up short in this regard. But this leaves him at once too drained to work and unable to rest.

As I describe in my feature for the August/September issue of “1843”, this is the basic predicament of the burnout sufferer: a feeling of exhaustion accompanied by a nervy compulsion to go on regardless is a double bind that makes it very difficult to know how to cope. Burnout involves the loss of the capacity to relax, to “just do nothing”. It prevents an individual from embracing the ordinary pleasures – sleep, long baths, strolling, long lunches, meandering conversation – that induce calm and contentment. It can be counterproductive to recommend relaxing activities to someone who complains that the one thing they cannot do is relax.

So what does it take to recover the capacity to do nothing, or very little? I might be expected at this point to leap to psychoanalysis as an instant remedy. But psychoanalysis is emotionally demanding, time-consuming and often expensive. Nor does it work for everyone (a basic truth of all therapies, physical or mental).

In less severe cases of burnout, the difficulties inducing nervous exhaustion are often more external than internal. Time and energy may be drained by life events (bereavement, divorce, changes in financial status and so on) as well as by the demands of work.

In such cases, it is worth turning in the first instance to more external solutions – cutting working hours as much as possible, carving out more time to relax or for contemplative practices such as yoga and meditation. Here the process of discovery matters as much as the remedy itself: merely listening and attending to the needs of the inner self, as opposed to the demands of the outside world, can have a transformative effect.

But such solutions will seem unrealistic to some sufferers both practically and psychologically. Practically in the sense that many of us are employed in sectors that demand punishing hours and unstinting commitment; psychologically in the sense that reducing working hours, and so taking oneself out of the highest levels of the game, is likely to induce more rather than less anxiety in someone driven relentlessly to achieve more.

So while there are many means by which we can be helped to relax, the predicament of severe burnout is precisely that you cannot be helped to relax. Where burnout has psychological roots, psychoanalysis may be able to help.

One way is its “form”. The nervous exhaustion of burnout results from the sufferer’s enslavement to an endless to-do list packed with short- and long-range tasks. In a psychotherapy session, you sit or lie down and begin to talk with no particular agenda, letting yourself go wherever your mind takes you. For portions of a session you might be silent, discovering the value of simply being with someone, without having to justify or account for yourself, instilling an appreciation for what the American psychoanalyst Jonathan Lear calls “mental activity without a purpose.”

Another way is the “content” of psychoanalysis. Talking to a therapist can help us discover those elements in our own history and character that make us particularly vulnerable to specific difficulties such as burnout. In my feature for “1843”, I discuss how two patients came from early childhood to associate their worth and value with their levels of achievement. Under constant pressure from within to “be their best”, they were liable to feel empty and exhausted when, inevitably, they felt they’d failed to live up to this ideal self-image.

This was very much the case for Elliot, and goes some way to explaining why the idea of “just doing nothing” so scandalised him. Even today, as his parents approach old age, Elliot can hardly imagine them putting their feet up to talk, read or watch TV. He remembers family meals taken quickly, with one or both parents in a hurry to rush off to one commitment or another. His own life was heavily scheduled with homework and extra-curricular lessons, and he was never more forcefully admonished by either parent than when he was being “lazy”. “They were kind of compulsively active”, he said, “and made me feel it was shameful to waste time. You could imagine the seats of their chairs were rigged to administer a jolt of current if they sat on them for more than ten minutes.” Only now is he beginning to ask why they, and he in turn, are like this, and why being at rest for any length of time is equivalent in their minds to “wasting” it.

Insight like this can be helpful to challenge our unthinkingly internalised habits of working and our dogmas as to what constitutes a “productive” use of our time. It encourages us to think about what kind of life would be worth living, rather than simply living the life we assume we’re stuck with.


Is there more to burnout than working too hard?

The Economist

Josh Cohen argues that the root of the problem lies deeper than that


June 29, 2016

By Josh Cohen

When Steve first came to my consulting room, it was hard to square the shambling figure slumped low in the chair opposite with the young dynamo who, so he told me, had only recently been putting in 90-hour weeks at an investment bank. Clad in baggy sportswear that had not graced the inside of a washing machine for a while, he listlessly tugged his matted hair, while I tried, without much success, to picture him gliding imperiously down the corridors of some glassy corporate palace.

Steve had grown up as an only child in an affluent suburb. He recalls his parents, now divorced, channelling the frustrations of their loveless, quarrelsome marriage into the ferocious cultivation of their son. The straight-A grades, baseball-team captaincy and Ivy League scholarship he eventually won had, he felt, been destined pretty much from the moment he was born. “It wasn’t so much like I was doing all this great stuff, more like I was slotting into the role they’d already scripted for me.” It seemed as though he’d lived the entirety of his childhood and adolescence on autopilot, so busy living out the life expected of him that he never questioned whether he actually wanted it.

Summoned by the bank from an elite graduate finance programme in Paris, he plunged straight into its turbocharged working culture. For the next two years, he worked on the acquisition of companies with the same breezy mastery he’d once brought to the acquisition of his academic and sporting achievements. Then he realised he was spending a lot of time sunk in strange reveries at his workstation, yearning to go home and sleep. When the phone or the call of his name woke him from his trance, he would be gripped by a terrible panic. “One time this guy asked me if I was OK, like he was really weirded out. So I looked down and my shirt was drenched in sweat.”

One day a few weeks later, when his 5.30am alarm went off, instead of leaping out of bed he switched it off and lay there, staring at the wall, certain only that he wouldn’t be going to work. After six hours of drifting between dreamless sleep and blank wakefulness, he pulled on a tracksuit and set off for the local Tesco Metro, piling his basket with ready meals and doughnuts, the diet that fuelled his box-set binges.

Three months later, he was transformed into the inertial heap now slouched before me. He did nothing; he saw no one. The concerned inquiries of colleagues quickly tailed off. He was intrigued to find the termination of his employment didn’t bother him. He spoke to his parents in Chicago only as often as was needed to throw them off the scent. They knew the hours he’d been working, so didn’t expect to hear from him all that much, and he never told them anything important anyway.

Can anyone say they’ve never felt some small intimation of Steve’s urge to shut down? I certainly have, sitting glassy-eyed on the sofa at the end of a long working day. My listlessness is tugged by the awareness, somewhere at the edge of my consciousness, of an expanding to-do list, and of unread messages and missed calls vibrating unforgivingly a few feet away. But my sullen inertia plateaus when I drop my eyes to the floor and see a glass or a newspaper that needs picking up. The object in question seems suddenly to radiate a repulsive force that prevents me from so much as extending my forearm. My mind and body scream in protest against its outrageous demand that I bend and retrieve it. Why, I plead silently, should I have to do this? Why should I have to do anything ever again?

We commonly use the term “burnout” to describe the state of exhaustion suffered by the likes of Steve. It occurs when we find ourselves taken over by this internal protest against all the demands assailing us from within and without, when the momentary resistance to picking up a glass becomes an ongoing state of mind.

Burnout didn’t become a recognised diagnosis until 1974, when the German-American psychologist Herbert Freudenberger applied the term to the increasing number of cases he encountered of “physical or mental collapse caused by overwork or stress”. The relationship to stress and anxiety is crucial, for it distinguishes burnout from simple exhaustion. Run a marathon, paint your living room, catalogue your collection of tea caddies, and the tiredness you experience will be infused with a deep satisfaction and faintly haloed in smugness – feelings that confirm you’ve discharged your duty to the world for at least the remainder of the day.

The exhaustion experienced in burnout combines an intense yearning for this state of completion with the tormenting sense that it cannot be attained, that there is always some demand or anxiety or distraction which can’t be silenced. In his 1960 novel “A Burnt-Out Case” (the title may have helped bring the term into general circulation), Graham Greene parallels the mental and spiritual burnout of Querry, the protagonist, with the “burnt-out cases” of leprosy he witnesses in the Congo. Querry believes he’s “come to the end of desire”, his emotions amputated like the limbs of the lepers he encounters, and the rest of his life will be endured in a state of weary indifference.

But Querry’s predicament is that, as long as he’s alive, he can’t reach a state of impassivity; there will always be something or someone to disturb him. I frequently hear the same yearning expressed in my consulting room – the wish for the world to disappear, for a cessation of any feelings, whether positive or negative, that intrude on the patient’s peace, alongside the painful awareness that the world’s demands are waiting on the way out.

You feel burnout when you’ve exhausted all your internal resources, yet cannot free yourself of the nervous compulsion to go on regardless. Life becomes something that won’t stop bothering you. Among its most frequent and oppressive symptoms is chronic indecision, as though all the possibilities and choices life confronts you with cancel each other out, leaving only an irritable stasis.

Anxieties about burnout seem to be everywhere these days. A quick glance through the papers yields stories of young children burnt out by exams, teenagers by the never-ending cacophony of social media, women by the competing demands of work and motherhood, couples by a lack of time for each other and their family life.

But while it may seem to be a problem rooted in our cultural circumstances, burnout has a history stretching back many centuries. The condition of melancholic world-weariness was recognised across the ancient world – it is the voice that speaks out in the biblical book of Ecclesiastes (“All is vanity! What does a man gain by all the toil at which he toils under the sun?”), and it was diagnosed by the earliest Western medical authorities, Hippocrates and Galen. It appears in medieval theology as acedia, a listless indifference to worldly life brought about by spiritual exhaustion. During the Renaissance, a period of relentless change, Albrecht Dürer’s 1514 engraving “Melencolia I” was the most celebrated of many images depicting man despondent at the transience of life.

But it was not until the second half of the 19th century that writers began to link this condition to the specific stresses of modern life. In 1879, the American neurologist George Beard published “Neurasthenia: (nervous exhaustion) with remarks on treatment”, identifying neurasthenia as an illness endemic to the pace and strain of modern industrial life. The fin-de-siècle neurasthenic, in whom exhaustion and innervation converge, uncannily anticipates the burnout of today. They have in common an overloaded and overstimulated nervous system.

A culture of chronic overwork is prevalent within many professions, from banking and law to media and advertising, health, education and other public services. A 2012 study by the University of Southern California found that every one of the 24 entry-level bankers it followed developed a stress-related illness (such as insomnia, alcoholism or an eating disorder) within a decade on the job. A much larger 2014 survey by eFinancialCareers of 9,000 financial workers in cities across the globe (including Hong Kong, London, New York and Frankfurt) showed bankers typically working between 80 and 120 hours a week, the majority feeling at least “partially” burnt out, with somewhere between 10% and 20% (depending on the country) describing themselves as “totally” burnt out.

A young banker who sees me in the early morning, the only available slot in her working day, often leaves a message at 3am to let me know she won’t make it as she’s only just leaving the office – a predicament especially bitter because her psychoanalytic session is the one hour in the day in which she can switch off her phone and find some respite from her job. Increasing numbers of my patients say they value a session simply because it provides a rare chance for a moment of stillness freed from the obligation to talk.

A walk in the country or a week on the beach should, theoretically, provide a similar sense of relief. But such attempts at recuperation are too often foiled by the nagging sense of being, as one patient put it, “stalked” by the job. A tormenting dilemma arises: keep your phone in your pocket and be flooded by work-related emails and texts; or switch it off and be beset by unshakeable anxiety over missing vital business. Even those who succeed in losing the albatross of work often quickly fall prey to the virus they’ve spent the previous weeks fending off.

Burnout increases as work insinuates itself more and more into every corner of life – if a spare hour can be snatched to read a novel, walk the dog or eat with one’s family, it quickly becomes contaminated by stray thoughts of looming deadlines. Even during sleep, flickering images of spreadsheets and snatches of management speak invade the mind, while slumbering fingers hover over the duvet, tapping away at a phantom keyboard.

Some companies have sought to alleviate the strain by offering sessions in mindfulness. But the problem with scheduling meditation as part of the working day is that it becomes yet another task at which you can succeed or fail. Those who can’t clear out their minds need to try harder – and the very exercises intended to ease anxiety can end up exacerbating it. Schemes cooked up by management theorists since the 1970s to alleviate the tedium and tension of the office through what might be called the David Brent effect – the chummy, backslapping banter, the paintballing away-days, the breakout rooms in bouncy castles – have simply blurred the lines between work and leisure, and so ended up screwing the physical and mental confines of the workplace even tighter.

But it is not just our jobs that overwork our minds. Electronic communication and social media have come to dominate our daily lives, in a transformation that is unprecedented and whose consequences we can therefore only guess at. My consulting room hums daily with the tense expectation induced by unanswered texts and ignored status updates. Our relationships seem to require a perpetual drip-feed of electronic reassurances, and our very sense of self is defined increasingly by an unending wait for the verdicts of an innumerable and invisible crowd of virtual judges.

And, while we wait for reactions to the messages we send out, we are bombarded by alerts on our phones and tablets, dogged by apps that measure and share our personal data, and subjected to an inundation of demands to like, retweet, upload, subscribe or buy. The burnt-out case of today belongs to a culture without an off switch.

In previous generations, depression was likely to result from internal conflicts between what we want to do and what authority figures – parents, teachers, institutions – wish to prevent us from doing. But in our high-performance society, it’s feelings of inadequacy, not conflict, that bring on depression. The pressure to be the best workers, lovers, parents and consumers possible leaves us vulnerable to feeling empty and exhausted when we fail to live up to these ideals. In “The Weariness of the Self” (1998), an influential study of modern depression, the French sociologist Alain Ehrenberg argues that in the liberated society which emerged during the 1960s, guilt and obedience play less of a role in the formation of the self than the drive to achieve. The slogan of the “attainment society” is “I can” rather than “I must”.

A more prohibitive society, which tells us we can’t have everything, sets limits on our sense of self. Choose to be a bus conductor and you can’t be a concert pianist; a full-time parent will never be chairman of the board. In our attainment society, we are constantly told that we can be, do and have anyone or anything we want. But, as anyone who’s tried to decide between 22 nearly identical brands of yoghurt in an American organic hypermarket can confirm, limitless choice debilitates far more than it liberates.

The depressive burnout, Ehrenberg suggests, feels incapable of making meaningful choices. This, as we discovered in the course of analysis, is Steve’s predicament. In his emotionally chilly childhood home, the only attention he received from his parents was their rigorous monitoring of his schoolwork and extra-curricular activities. In his own mind, he was worth caring about only because of his achievements. So while he accrued awards and knowledge and skills, he never learned to be curious about who he might be or what he might want in life. Having unthinkingly acquiesced in his parents’ prescription of what was best for him, he simply didn’t know how to deal with, or even make sense of, the sudden, unexpected feeling that the life he was living wasn’t the one for him.

Steve presents an intriguing paradox: what appears from the outside to have been a life driven by the active pursuit of goals feels to him oddly inert, a lifeless slotting-in, as he puts it, to a script he didn’t write. “Genuine force of habit”, suggested the great philosophical misanthrope Arthur Schopenhauer in 1851, might appear to be an expression of our innate character, but “really derives from the inertia which wants to spare the intellect and the will the labour, difficulty and sometimes the danger involved in making a fresh choice.” Schopenhauer has a point. Steve is coming to understand that his life followed the shape it did not from the blooming of his deepest desires but because he never bothered to question what he had been told.

“You know”, he said to me one day, “it’s not like I want to be this pathetic loser. I want to get up tomorrow, get back in the gym, find a new job, see people again. But it’s like even as I say I’m gonna do all this, some voice in me says, ‘no I’m not, no way am I doing that.’ And then I can’t work out if I feel depressed or relieved, and the confusion sends me crazy.”

I suggested to him that he was in this position because he had realised that he had almost no hand in choosing his life. His own desire was like a chronically neglected muscle; perhaps our job was to nurture it for the first time, to train it for the task of making basic life choices.

The same predicament arose in a different, perhaps subtler way in Susan, a successful music producer who first came to see me in the thick of an overwhelming depressive episode. She had come from Berlin to London six months previously to take up a new and prestigious job, the latest move in an impressive career that had seen her work in glamorous locations across the world.

She had grown up in a prosperous and loving family in a green English suburb. Unlike Steve, her parents had been – and continued to be – supportive of the unexpected professional and personal path their daughter had carved for herself. But they resembled Steve’s parents in one respect: the unvarying message, communicated through the course of her childhood, that she had the potential to be and do anything. The emotional and financial investment they made in her musical and academic activities showed their willingness to back up their enthusiasm with actions. While Susan appeared to follow her own chosen path, there came a point where her parents’ unstinting support and encouragement made it difficult to identify where their wishes stopped and hers began.

For all their differences, Steve’s and Susan’s parents were alike in protecting the child from awareness of the limits imposed by both themselves and the world. Susan would complain that the present, the life she was living moment to moment, felt unreal to her. Only the future really mattered, for that was where her ideal life resided. “If I just wait a little longer”, she would remark in a tone of wry despondency, “there’ll be this magically transformative event and everything will come right.”

This belief, she had come to realise, had taken a suffocating hold on her life: “the longer I live in wait for this magical event, the more I’m not living this life, which is sad, given it’s the only one I’ve got.” Forever anticipating the arrival of the day that would change her life for ever, Susan had come to view her current existence with a certain contempt, a travesty of the perfect one she might have. Her house, her job, the man she was seeing – all of these were thin shadows of the ideal she was pursuing. But the problem with an ideal is that nothing in reality can ever be remotely comparable to it; it tantalises with a future that can never be attained.

Feeling exhausted and emptied by this chase, she would retreat into two contradictory impulses: the first was a compulsion to work, asking the hydra-headed beast of the office to eat up all her time and mental energy. But alongside this, frequently accompanied by chronic insomnia, was a yearning for the opposite. She would fantasise in our sessions about going home and sleeping, waking only for stretches of blissfully catatonic inactivity over uninterrupted, featureless weeks. Occasionally she managed to steal the odd day to veg out, only for a rising panic to jolt her back into work. In frenzied activity and depressive inertia, she found a double strategy for escaping the inadequacies of the present.

Susan’s depressive exhaustion arose from the disparity between the enormous effort she dedicated to contemplating her future and the much smaller one she devoted to discovering and realising her desires in the present. In this regard, she is the uncanny mirror image of Steve: Susan was frozen by the suspicion there was always something else to choose; Steve was shackled by the incapacity to choose at all.

Psychoanalysis is often criticised for being expensive, demanding and overlong, so it might seem surprising that Susan and Steve chose it over more time-limited, evidence-based and results-oriented behavioural therapies. But results-oriented efficiency may have been precisely the malaise they were trying to escape. Burnout is not simply a symptom of working too hard. It is also the body and mind crying out for an essential human need: a space free from the incessant demands and expectations of the world. In the consulting room, there are no targets to be hit, no achievements to be crossed off. The amelioration of burnout begins in finding your own pool of tranquillity where you can cool off.■

In this article, the clinical cases have been disguised, and the names changed, to protect confidentiality.


ILLUSTRATIONS IZHAR COHEN

Dodô Azevedo: Guimarães Rosa explains the horrors of 2023 (Folha de S.Paulo)


Folha de S.Paulo

Nov. 12, 2023

“Is everyone crazy, in this world? Because a person's head is only one, and the things that exist and are yet to exist are far too many, much bigger, different, and we find ourselves needing to enlarge our head, to take in the whole. One can only live close to another, and come to know another person, without the danger of hatred, if one has love. Any love is already a little bit of health, a rest within the madness. Every human path is slippery. But falling doesn't do too much harm either: we get up, we climb, we come back!”

So reflects Riobaldo Tatarana, a mixture of jagunço, militiaman, soldier and terrorist, the protagonist of “Grande Sertão: Veredas,” by Guimarães Rosa. He is a man tormented by the horrible nature of the human being. The Brazilian writer's decolonial masterpiece is about the untamable, non-binary character of good and evil, and the surprise that virtue and the indefensible are not attributes of the divine. They are constituents and conditions of the human mind.

One more passage from the book: “The devil rules inside man, in the kinks of man: either it is man ruined, or man turned inside out. Loose, on his own, citizen, there is no devil at all.” There is an academic vice of domesticating Rosa's very text, tying what was written to its era, to the country, to the language.

No. Like “Ulysses,” by the Irishman Joyce, or “The Palm-Wine Drinkard,” by the Nigerian Amos Tutuola, “Grande Sertão: Veredas” does not belong to one time or place. Or rather, like the two novels just mentioned, it transforms everything into the time and space proposed by the work.

In Guimarães Rosa, everything is sertão. Above all inside us, and that is the book's great ontological and therapeutic contribution for anyone who, like so many people in 2023, finds themselves appalled, shocked and confused by the horrors we are capable of committing. The newly released film adaptation of “Grande Sertão: Veredas,” directed by Bia Lessa, is called “O Diabo na Rua, no Meio do Redemunho” (The Devil in the Street, in the Middle of the Whirlwind). That is what we are; that is where we are, trying to position ourselves in the face of facts and narratives, like the dilettante militiaman Riobaldo.

Another passage follows: “Me, who was I? Which side was I on? Zé Bebelo or Joca Ramiro? Hermógenes or Reinaldo… I belonged to no one. I belonged to myself. I, Riobaldo. Fear. Fear that hobbles.” We try to cast ourselves as victims, seeking to convince ourselves that we are consumed by fear. But the opposite happens: we consume fear, as the chemical dependents of it that we all are.

Fear is a commodity that drives everything: the media, the pharmaceutical industry, the arms industry, religions, ideologies. It is with hidden pleasure that we seek “the middle of the whirlwind.” Understanding this is what Guimarães Rosa meant by “enlarging the head to take in the whole.” We destroy, we terrify, we build, we enchant. It is us. There is no guilt or responsibility external to us. In beauty and in ugliness (as if that binary view of the world were possible). It is our shared journey. Non-binary. Decolonial.

As Guimarães ends his boundless novel: “There is no devil! That is what I say, if it be so… What exists is the human man. Crossing.”