Tag archive: neuroscience

Marcelo Leite: Psychedelic Turn – Article points to psychedelic injustice against indigenous knowledge (Folha de S.Paulo)

www1.folha.uol.com.br

Marcelo Leite

March 7, 2022


The scene has a surreal quality: a European researcher, his body covered in indigenous graphic designs, wears a cap fitted with dozens of electrodes for electroencephalography (EEG). A member of the Huni Kuin people blows rapé snuff into the white man's nostril, while the researcher carries on his back a backpack of portable devices recording his brain waves.

Expedition Neuron took place in April 2019 in Santa Rosa do Purus, in the Brazilian state of Acre. On the agenda: an attempt to narrow the gulf between traditional knowledge about the use of ayahuasca and the consecration of the brew by science's so-called psychedelic renaissance.

So far, the initiative's most tangible result has appeared in a controversial text about research ethics, not research data.

The title of the article in the journal Transcultural Psychiatry was promising: "Overcoming Epistemic Injustices in the Biomedical Study of Ayahuasca – Towards Ethical and Sustainable Regulation". Since its publication on January 6, the text has generated more heat than light – not least because it has been criticized out of public view rather than openly.

The authors, Eduardo Ekman Schenberg of the Phaneros Institute and Konstantin Gerber of PUC-SP, question the authority of science on three grounds: the difficulty of employing placebos in experiments with psychedelics, the emphasis given to molecular aspects, and the poorly assessed weight of context (setting) for safe use – an area in which scientists would have much to learn from indigenous peoples.

Among the targets of the criticism are studies carried out over the past decade by Jaime Hallak's group at USP in Ribeirão Preto and Dráulio de Araújo's group at the Brain Institute of UFRN, in particular on the effect of ayahuasca on depression. When contacted, scientists and collaborators from these groups either did not respond or declined to comment.

The antidepressant potential of dimethyltryptamine (DMT), the brew's main psychoactive compound, is also a focus for researchers in other countries. But other psychedelic substances, such as MDMA and psilocybin, are closer to being recognized as psychiatric medicines by regulators.

Given the obvious effect of substances like ayahuasca on a person's mind and behavior, Schenberg and Gerber argue, the double-blind system (the gold standard of biomedical trials) becomes unworkable: both the volunteer and the experimenter almost always know whether the former took an active compound or not. This would demolish the supreme value attributed to such studies in the psychedelic field and in biomedicine in general.

Another point they criticize is the decontextualization and reductionism of experiments conducted in hospitals or laboratories, with the patient surrounded by machines and given doses fixed in milligrams per kilogram of body weight. This precision is illusory, they claim, pointing to an error in a paper that cites a DMT concentration of 0.8 mg/ml and later speaks of 0.08 mg/ml.

The cultural sanitization of the setting, in turn, would disregard the contextual elements (forest, chants, cosmology, rapé, dances, shamans) that for peoples like the Huni Kuin are inseparable from what ayahuasca has to offer and teach. By ignoring them, scientists would be dismissing everything indigenous people know about the safe, collective use of the substance.

More than that, they would be simultaneously appropriating and disrespecting this traditional knowledge. A more ethical attitude on researchers' part would mean acknowledging this contribution, developing research protocols with indigenous participation, registering co-authorship in scientific publications, recognizing intellectual property, and sharing any profits from treatments and patents.

"The complementarity between anthropology, psychoanalysis, and psychiatry is one of the challenges of ethnopsychiatry," write Schenberg and Gerber. "The initiative of taking biomedical science into the forest can be criticized as an attempt to medicalize shamanism, but it can also constitute a possibility of intercultural dialogue centered on innovation and on solving 'networks of problems'."

"It is particularly notable that biomedicine now ventures into concepts such as 'connectedness' and 'nature-relatedness' as effects of psychedelics, thus once again approaching epistemic conclusions derived from shamanic practices. The final challenge would therefore be to understand the relationship between community well-being and ecology, and how this can be translated into a Western concept of integrated health."

The reactions of the few who have openly criticized the text and its grandiose ideas can be summed up in an old academic barb: there are good things and new things in the article, but the good things are not new and the new things are not good. Taking EEG into the forest of Acre, for example, would not solve all the problems.

Schenberg is the link between the Transcultural Psychiatry article and Expedition Neuron: he took part in the 2019 incursion into Acre and collaborates on the EEG study with researcher Tomas Palenicek of the National Institute of Mental Health of the Czech Republic. Here is a presentation video, in English:

"Konstantin and I have been engaged for more than three years in an innovative project with the Huni Kuin and European researchers, seeking to build an epistemically fair partnership," Schenberg replied when asked whether the EEG study meets the ethical requirements laid out in the article.

In the Expedition Neuron presentation, he states: "In this first short, exploratory expedition [in 2019], we confirmed that there is mutual interest from scientists and a traditional indigenous culture of the Amazon in jointly exploring the nature of consciousness and how their traditional healing works, including – for the first time – recordings of brain activity in a setting many would consider technically too challenging."

"We consider it of supreme value to jointly investigate how Huni Kuin rituals and medicines affect human cognition, emotions, and group bonds, and to analyze the neural basis of these altered states of consciousness, possibly including mystical experiences in the forest."

Schenberg and his collaborators are planning a new expedition to the Huni Kuin to carry out multiple, simultaneous EEG recordings with up to seven indigenous participants during ayahuasca ceremonies. The idea is to test the "very intriguing possibility" of synchrony between brains:

"Interpreted by the Huni Kuin and other Amerindian peoples as a kind of portal to the spirit world, ayahuasca is known to intensely and rapidly strengthen community bonds and feelings of empathy and closeness to others."

Schenberg and Gerber's stated purposes did not convince the Brazilian anthropologist Bia Labate, director of the Chacruna Institute in San Francisco. "Indigenous people do not appear to have been consulted for the production of the text, there are no native voices, they are not co-authors, and we are given no specific proposals for what truly interethnic and intercultural research would look like."

For the anthropologist, even though Expedition Neuron obtained authorization for the research – a positive step – it does not constitute an "epistemology alternative to the scientistic and ethnocentric approach". Interethnic research, in her view, would mean an ethnography that takes seriously the indigenous notion that plants are spirits with agency of their own, and that the natural world is also cultural, possessing subjectivity and intentionality.

"We all know that the ayahuasca brew is not the same thing as freeze-dried ayahuasca; that context matters; that the rituals and the collectives taking part make a difference. The same or analogous points had already been made in the anthropological literature, whose references the authors set aside."

Labate also disputes the claim that ayahuasca studies in Brazil neglect to acknowledge those who came to the brew first: "From a global standpoint, it is precisely a hallmark and a differential of Brazilian scientific research that there was, in fact, dialogue with members of the ayahuasca religions. They too are legitimate research subjects, not only the original peoples."

In 2020, Schenberg and Palenicek took part in a meeting with another anthropologist, the Franco-Colombian Emilia Sanabria, leader of the Healing Encounters project at France's National Centre for Scientific Research (CNRS). Alongside the indigenous leader Leopardo Yawa Bane, the trio debated the EEG study in the virtual panel "Taking the Lab to Ayahuasca" at the Interdisciplinary Conference on Psychedelic Research (ICPR). A video is available, in English:

Sanabria, who speaks Portuguese and knows the Huni Kuin, was even invited by Schenberg to join the expedition but declined, judging that the "epistemological incommensurability" between indigenous thought and what biomedicine wants to prove could not be resolved. She considers the discussion proposed in Transcultural Psychiatry important, though complex and not exactly new.

In an interview with this blog, she said the article seems to reinvent the wheel by disregarding a long-running debate on the assimilation of traditional plants and practices (such as Chinese medicine) by Western science: "They don't cite the earlier reflection. It's good that they put the discussion on the table, but there is more than a century of literature."

The anthropologist said she sees a problem in the article's posture of presenting itself as the natives' savior. "There is no indigenous interlocutor cited as an author," she points out, corroborating Labate's criticism – as if the original peoples needed to be represented by non-indigenous people. "We'll give you a little space here in our world."

For Sanabria, the central question of a respectful collaboration is that the study should have priority and utility for the Huni Kuin as well, not only for the scientists.

When she raised this question in the panel, she received only generic answers from Schenberg and Palenicek, none directly and concretely beneficial to the Huni Kuin – for example, that science can help in rejecting patents on ayahuasca.

In the anthropologist's view, "the idea of taking the laboratory into naturalistic conditions is beautiful", but it is not clear how all that machinery fits into indigenous logic. At bottom, this is an argument symmetrical to the one the article's authors brandish against psychedelic research in hospital settings: in one case, the total, socialized psychedelic experience is decontextualized; in the other, it is the technological decontextualization that travels and invades the village.

Sanabria sees an almost insoluble dilemma for indigenous peoples in negotiating research protocols with the reborn psychedelic science. What in 2014 seemed to many a new way of doing science, with other standards of evaluation and proof, underwent a "capitalist turn" from 2018 onward and ended up dominated by the logic of biochemistry and intellectual property.

"Indigenous peoples cannot opt out, because then they lose their rights. But they cannot opt in [to that logic] either, because then they lose their identity perspective."

"Molecularizing in the forest or in the laboratory amounts to the same thing," says Sanabria. "I don't see it as reparation of any epistemic injustice. I see no radical difference between this research and Fernanda's study [Fernanda Palhano-Fontes]," she adds, referring to Schenberg and Gerber's "aggressive" criticism of the clinical trial of ayahuasca for depression at UFRN's Brain Institute, a criticism that extends to the work at USP in Ribeirão Preto.

The pair highlighted, for example, the fact that the authors of the UFRN study indicated in their 2019 paper that 4 of the 29 volunteers in the experiment spent at least a week hospitalized at the Onofre Lopes University Hospital in Natal. With this, they raised the suspicion that safety in administering ayahuasca had been handled inadequately.

"None of these studies formally attempted to compare safety in the laboratory environment with any of the cultural contexts in which ayahuasca is commonly used," Schenberg and Gerber pronounced. "However, to the best of our knowledge, it has never been reported that 14% of the participants in an ayahuasca ritual required a week of hospitalization."

The reason for the hospitalization, however, was mundane: the volunteers were patients with treatment-resistant depression who were already hospitalized because of the severity of their mental disorder, and they remained in hospital after the intervention. In other words, the hospitalization had nothing to do with their having taken ayahuasca.

This blog also questioned Schenberg about the possible exaggeration of seizing on what may have been a typo (0.8 mg/ml versus 0.08 mg/ml) in the 2015 paper from USP in Ribeirão Preto as evidence of imprecision that would cast doubt on the epistemic superiority of psychedelic biomedicine.

"If they paid more attention to the reports of the volunteers/patients, perhaps they would have noticed the error," retorted the Phaneros Institute researcher. "Besides the epistemic injustice toward indigenous people, there is the epistemic injustice toward volunteers/patients, which we also discuss briefly in the article."

Schenberg has several published studies that would fit the biomedical paradigm now in his crosshairs. Is his article with Gerber a self-criticism of his previous work?

"I have always been critical of certain biomedical limitations, and it was only with great effort that I managed to do my postdoc without, for example, using a placebo group, even though most colleagues insisted I should, since otherwise it 'would not be scientific'…"

"At bottom, the argument is circular, using biomedicine as the ultimate criterion to answer criticism of biomedicine," counters Bia Labate. "The text does not solve what it sets out to solve; instead it deepens the gap between indigenous and biomedical epistemologies by advocating new ways of producing biomedicine based on validation criteria that are… biomedical."

Can you think yourself young? (The Guardian)

theguardian.com

David Robson, Sun 2 Jan 2022 12.00 GMT


Research shows that a positive attitude to ageing can lead to a longer, healthier life, while negative beliefs can have hugely detrimental effects

For more than a decade, Paddy Jones has been wowing audiences across the world with her salsa dancing. She came to fame on the Spanish talent show Tú Sí Que Vales (You’re Worth It) in 2009 and has since found success in the UK, through Britain’s Got Talent; in Germany, on Das Supertalent; in Argentina, on the dancing show Bailando; and in Italy, where she performed at the Sanremo music festival in 2018 alongside the band Lo Stato Sociale.

Jones also happens to be in her mid-80s, making her the world’s oldest acrobatic salsa dancer, according to Guinness World Records. Growing up in the UK, Jones had been a keen dancer and had performed professionally before she married her husband, David, at 22 and had four children. It was only in retirement that she began dancing again – to widespread acclaim. “I don’t plead my age because I don’t feel 80 or act it,” Jones told an interviewer in 2014.

According to a wealth of research that now spans five decades, we would all do well to embrace the same attitude – since it can act as a potent elixir of life. People who see the ageing process as a potential for personal growth tend to enjoy much better health into their 70s, 80s and 90s than people who associate ageing with helplessness and decline, differences that are reflected in their cells’ biological ageing and their overall life span.

Salsa dancer Paddy Jones, centre. Photograph: Alberto Teren

Of all the claims I have investigated for my new book on the mind-body connection, the idea that our thoughts could shape our ageing and longevity was by far the most surprising. The science, however, turns out to be incredibly robust. “There’s just such a solid base of literature now,” says Prof Allyson Brothers at Colorado State University. “There are different labs in different countries using different measurements and different statistical approaches and yet the answer is always the same.”

If I could turn back time

The first hints that our thoughts and expectations could either accelerate or decelerate the ageing process came from a remarkable experiment by the psychologist Ellen Langer at Harvard University.

In 1979, she asked a group of 70- and 80-year-olds to complete various cognitive and physical tests, before taking them to a week-long retreat at a nearby monastery that had been redecorated in the style of the late 1950s. Everything at the location, from the magazines in the living room to the music playing on the radio and the films available to watch, was carefully chosen for historical accuracy.

The researchers asked the participants to live as if it were 1959. They had to write a biography of themselves for that era in the present tense and they were told to act as independently as possible. (They were discouraged from asking for help to carry their belongings to their room, for example.) The researchers also organised twice-daily discussions in which the participants had to talk about the political and sporting events of 1959 as if they were currently in progress – without talking about events since that point. The aim was to evoke their younger selves through all these associations.

To create a comparison, the researchers ran a second retreat a week later with a new set of participants. While factors such as the decor, diet and social contact remained the same, these participants were asked to reminisce about the past, without overtly acting as if they were reliving that period.

Most of the participants showed some improvements from the baseline tests to the after-retreat ones, but it was those in the first group, who had more fully immersed themselves in the world of 1959, who saw the greatest benefits. Sixty-three per cent made a significant gain on the cognitive tests, for example, compared to just 44% in the control condition. Their vision became sharper, their joints more flexible and their hands more dextrous, as some of the inflammation from their arthritis receded.

As enticing as these findings might seem, Langer's study was based on a very small sample size. Extraordinary claims need extraordinary evidence and the idea that our mindset could somehow influence our physical ageing is about as extraordinary as scientific theories come.

Becca Levy, at the Yale School of Public Health, has been leading the way to provide that proof. In one of her earliest – and most eye-catching – papers, she examined data from the Ohio Longitudinal Study of Aging and Retirement that examined more than 1,000 participants since 1975.

The participants’ average age at the start of the survey was 63 years old and soon after joining they were asked to give their views on ageing. For example, they were asked to rate their agreement with the statement: “As you get older, you are less useful”. Quite astonishingly, Levy found the average person with a more positive attitude lived on for 22.6 years after the study commenced, while the average person with poorer interpretations of ageing survived for just 15 years. That link remained even after Levy had controlled for their actual health status at the start of the survey, as well as other known risk factors, such as socioeconomic status or feelings of loneliness, which could influence longevity.

The implications of the finding are as remarkable today as they were in 2002, when the study was first published. “If a previously unidentified virus was found to diminish life expectancy by over seven years, considerable effort would probably be devoted to identifying the cause and implementing a remedy,” Levy and her colleagues wrote. “In the present case, one of the likely causes is known: societally sanctioned denigration of the aged.”

Later studies have since reinforced the link between people’s expectations and their physical ageing, while dismissing some of the more obvious – and less interesting – explanations. You might expect that people’s attitudes would reflect their decline rather than contribute to the degeneration, for example. Yet many people will endorse certain ageist beliefs, such as the idea that “old people are helpless”, long before they should have started experiencing age-related disability themselves. And Levy has found that those kinds of views, expressed in people’s mid-30s, can predict their subsequent risk of cardiovascular disease up to 38 years later.

The most recent findings suggest that age beliefs may play a key role in the development of Alzheimer’s disease. Tracking 4,765 participants over four years, the researchers found that positive expectations of ageing halved the risk of developing the disease, compared to those who saw old age as an inevitable period of decline. Astonishingly, this was even true of people who carried a harmful variant of the APOE gene, which is known to render people more susceptible to the disease. The positive mindset can counteract an inherited misfortune, protecting against the build-up of the toxic plaques and neuronal loss that characterise the disease.

How could this be?

Behaviour is undoubtedly important. If you associate age with frailty and disability, you may be less likely to exercise as you get older and that lack of activity is certainly going to increase your predisposition to many illnesses, including heart disease and Alzheimer’s.

Importantly, however, our age beliefs can also have a direct effect on our physiology. Elderly people who have been primed with negative age stereotypes tend to have higher systolic blood pressure in response to challenges, while those who have seen positive stereotypes demonstrate a more muted reaction. This makes sense: if you believe that you are frail and helpless, small difficulties will start to feel more threatening. Over the long term, this heightened stress response increases levels of the hormone cortisol and bodily inflammation, which could both raise the risk of ill health.

The consequences can even be seen within the nuclei of the individual cells, where our genetic blueprint is stored. Our genes are wrapped tightly in each cell’s chromosomes, whose ends carry tiny protective caps, called telomeres, that keep the DNA stable and stop it from becoming frayed and damaged. Telomeres tend to shorten as we age and this reduces their protective abilities and can cause the cell to malfunction. In people with negative age beliefs, that process seems to be accelerated – their cells look biologically older. In those with the positive attitudes, it is much slower – their cells look younger.

For many scientists, the link between age beliefs and long-term health and longevity is practically beyond doubt. “It’s now very well established,” says Dr David Weiss, who studies the psychology of ageing at Martin-Luther University of Halle-Wittenberg in Germany. And it has critical implications for people of all generations.

Birthday cards sent to Captain Tom Moore for his 100th birthday – many cards for older people have a less respectful tone. Photograph: Shaun Botterill/Getty Images

Our culture is saturated with messages that reinforce these damaging age beliefs. Just consider greetings cards, which commonly play on images depicting confused and forgetful older people. “The other day, I went to buy a happy 70th birthday card for a friend and I couldn’t find a single one that wasn’t a joke,” says Martha Boudreau, the chief communications officer of AARP, a special interest group (formerly known as the American Association of Retired Persons) that focuses on the issues of over-50s.

She would like to see greater awareness – and intolerance – of age stereotypes, in much the same way that people now show greater sensitivity to sexism and racism. “Celebrities, thought leaders and influencers need to step forward,” says Boudreau.

In the meantime, we can try to rethink our perceptions of our own ageing. Various studies show that our mindsets are malleable. By learning to reject fatalistic beliefs and appreciate some of the positive changes that come with age, we may avoid the amplified stress responses that arise from exposure to negative stereotypes and we may be more motivated to exercise our bodies and minds and to embrace new challenges.

We could all, in other words, learn to live like Paddy Jones.

When I interviewed Jones, she was careful to emphasise the potential role of luck in her good health. But she agrees that many people have needlessly pessimistic views of their capabilities, over what could be their golden years, and encourages them to question the supposed limits. “If you feel there’s something you want to do, and it inspires you, try it!” she told me. “And if you find you can’t do it, then look for something else you can achieve.”

Whatever our current age, that’s surely a winning attitude that will set us up for greater health and happiness for decades to come.

This is an edited extract from The Expectation Effect: How Your Mindset Can Transform Your Life by David Robson, published by Canongate on 6 January (£18.99).

Our brains exist in a state of “controlled hallucination” (MIT Technology Review)

technologyreview.com

Matthew Hutson – August 25, 2021

Three new books lay bare the weirdness of how our brains process the world around us.

Eventually, vision scientists figured out what was happening with “the dress” – the photograph that went viral in 2015 because viewers could not agree on its colours. It wasn’t our computer screens or our eyes. It was the mental calculations that brains make when we see. Some people unconsciously inferred that the dress was in direct light and mentally subtracted yellow from the image, so they saw blue and black stripes. Others saw it as being in shadow, where bluish light dominates. Their brains mentally subtracted blue from the image, and came up with a white and gold dress.

Not only does thinking filter reality; it constructs it, inferring an outside world from ambiguous input. In Being You, Anil Seth, a neuroscientist at the University of Sussex, relates his explanation for how the “inner universe of subjective experience relates to, and can be explained in terms of, biological and physical processes unfolding in brains and bodies.” He contends that “experiences of being you, or of being me, emerge from the way the brain predicts and controls the internal state of the body.” 

Prediction has come into vogue in academic circles in recent years. Seth and the philosopher Andy Clark, a colleague at Sussex, refer to predictions made by the brain as “controlled hallucinations.” The idea is that the brain is always constructing models of the world to explain and predict incoming information; it updates these models when prediction and the experience we get from our sensory inputs diverge. 

“Chairs aren’t red,” Seth writes, “just as they aren’t ugly or old-fashioned or avant-garde … When I look at a red chair, the redness I experience depends both on properties of the chair and on properties of my brain. It corresponds to the content of a set of perceptual predictions about the ways in which a specific kind of surface reflects light.” 

Seth is not particularly interested in redness, or even in color more generally. Rather his larger claim is that this same process applies to all of perception: “The entirety of perceptual experience is a neuronal fantasy that remains yoked to the world through a continuous making and remaking of perceptual best guesses, of controlled hallucinations. You could even say that we’re all hallucinating all the time. It’s just that when we agree about our hallucinations, that’s what we call reality.”

Cognitive scientists often rely on atypical examples to gain understanding of what’s really happening. Seth takes the reader through a fun litany of optical illusions and demonstrations, some quite familiar and others less so. Squares that are in fact the same shade appear to be different; spirals printed on paper appear to spontaneously rotate; an obscure image turns out to be a woman kissing a horse; a face shows up in a bathroom sink. Re-creating the mind’s psychedelic powers in silicon, an artificial-intelligence-powered virtual-reality setup that he and his colleagues built produces a Hunter Thompson–esque menagerie of animal parts emerging piecemeal from other objects in a square on the Sussex University campus. This series of examples, in Seth’s telling, “chips away at the beguiling but unhelpful intuition that consciousness is one thing—one big scary mystery in search of one big scary solution.” Seth’s perspective might be unsettling to those who prefer to believe that things are as they seem to be: “Experiences of free will are perceptions. The flow of time is a perception.”

Seth is on comparatively solid ground when he describes how the brain shapes experience, what philosophers call the “easy” problems of consciousness. They’re easy only in comparison to the “hard” problem: why subjective experience exists at all as a feature of the universe. Here he treads awkwardly, introducing the “real” problem, which is to “explain, predict, and control the phenomenological properties of conscious experience.” It’s not clear how the real problem differs from the easy problems, but somehow, he says, tackling it will get us some way toward resolving the hard problem. Now that would be a neat trick.

Where Seth relates, for the most part, the experiences of people with typical brains wrestling with atypical stimuli, in Coming to Our Senses, Susan Barry, an emeritus professor of neurobiology at Mount Holyoke College, tells the stories of two people who acquired new senses later in life than is usual. Liam McCoy, who had been nearly blind since he was an infant, was able to see almost clearly after a series of operations when he was 15 years old. Zohra Damji was profoundly deaf until she was given a cochlear implant at the unusually late age of 12. As Barry explains, Damji’s surgeon “told her aunt that, had he known the length and degree of Zohra’s deafness, he would not have performed the operation.” Barry’s compassionate, nuanced, and observant exposition is informed by her own experience:

At age forty-eight, I experienced a dramatic improvement in my vision, a change that repeatedly brought me moments of childlike glee. Cross-eyed from early infancy, I had seen the world primarily through one eye. Then, in mid-life, I learned, through a program of vision therapy, to use my eyes together. With each glance, everything I saw took on a new look. I could see the volume and 3D shape of the empty space between things. Tree branches reached out toward me; light fixtures floated. A visit to the produce section of the supermarket, with all its colors and 3D shapes, could send me into a sort of ecstasy. 

Barry was overwhelmed with joy at her new capacities, which she describes as “seeing in a new way.” She takes pains to point out how different this is from “seeing for the first time.” A person who has grown up with eyesight can grasp a scene in a single glance. “But where we perceive a three-dimensional landscape full of objects and people, a newly sighted adult sees a hodgepodge of lines and patches of colors appearing on one flat plane.” As McCoy described his experience of walking up and down stairs to Barry: 

The upstairs are large alternating bars of light and dark and the downstairs are a series of small lines. My main focus is to balance and step IN BETWEEN lines, never on one … Of course going downstairs you step in between every line but upstairs you skip every other bar. All the while, when I move, the stairs are skewing and changing.

Even a sidewalk was tricky, at first, to navigate. He had to judge whether a line “indicated the junction between flat sidewalk blocks, a crack in the cement, the outline of a stick, a shadow cast by an upright pole, or the presence of a sidewalk step,” Barry explains. “Should he step up, down, or over the line, or should he ignore it entirely?” As McCoy says, the complexity of his perceptual confusion probably cannot be fully explained in terms that sighted people are used to.

The same, of course, is true of hearing. Raw audio can be hard to untangle. Barry describes her own ability to listen to the radio while working, effortlessly distinguishing the background sounds in the room from her own typing and from the flute and violin music coming over the radio. “Like object recognition, sound recognition depends upon communication between lower and higher sensory areas in the brain … This neural attention to frequency helps with sound source recognition. Drop a spoon on a tiled kitchen floor, and you know immediately whether the spoon is metal or wood by the high- or low-frequency sound waves it produces upon impact.” Most people acquire such capacities in infancy. Damji didn’t. She would often ask others what she was hearing, but had an easier time learning to distinguish sounds that she made herself. She was surprised by how noisy eating potato chips was, telling Barry: “To me, potato chips were always such a delicate thing, the way they were so lightweight, and so fragile that you could break them easily, and I expected them to be soft-sounding. But the amount of noise they make when you crunch them was something out of place. So loud.” 

As Barry recounts, at first Damji was frightened by all sounds, “because they were meaningless.” But as she grew accustomed to her new capabilities, Damji found that “a sound is not a noise anymore but more like a story or an event.” The sound of laughter came to her as a complete surprise, and she told Barry it was her favorite. As Barry writes, “Although we may be hardly conscious of background sounds, we are also dependent upon them for our emotional well-being.” One strength of the book is in the depth of her connection with both McCoy and Damji. She spent years speaking with them and corresponding as they progressed through their careers: McCoy is now an ophthalmology researcher at Washington University in St. Louis, while Damji is a doctor. From the details of how they learned to see and hear, Barry concludes, convincingly, that “since the world and everything in it is constantly changing, it’s surprising that we can recognize anything at all.”

In What Makes Us Smart, Samuel Gershman, a psychology professor at Harvard, says that there are “two fundamental principles governing the organization of human intelligence.” Gershman’s book is not particularly accessible; it lacks connective tissue and is peppered with equations that are incompletely explained. He writes that intelligence is governed by “inductive bias,” meaning we prefer certain hypotheses before making observations, and “approximation bias,” which means we take mental shortcuts when faced with limited resources. Gershman uses these ideas to explain everything from visual illusions to conspiracy theories to the development of language, asserting that what looks dumb is often “smart.”

“The brain is evolution’s solution to the twin problems of limited data and limited computation,” he writes. 

He portrays the mind as a raucous committee of modules that somehow helps us fumble our way through the day. “Our mind consists of multiple systems for learning and decision making that only exchange limited amounts of information with one another,” he writes. If he’s correct, it’s impossible for even the most introspective and insightful among us to fully grasp what’s going on inside our own head. As Damji wrote in a letter to Barry: 

When I had no choice but to learn Swahili in medical school in order to be able to talk to the patients—that is when I realized how much potential we have—especially when we are pushed out of our comfort zone. The brain learns it somehow.

Matthew Hutson is a contributing writer at The New Yorker and a freelance science and tech writer.

The Mind issue

This story was part of our September 2021 issue

Crows are self-aware just like us, says new study (Big Think)

Neuropsych — September 29, 2020

Crows have their own version of the human cerebral cortex.

Robby Berman

Crows and the rest of the corvid family keep turning out to be smarter and smarter. New research observes them thinking about what they’ve just seen and associating it with an appropriate response. A corvid’s pallium is packed with more neurons than a great ape’s.


It’s no surprise that corvids — the “crow family” of birds that also includes ravens, jays, magpies, and nutcrackers — are smart. They use tools, recognize faces, leave gifts for people they like, and there’s even a video on Facebook showing a crow nudging a stubborn little hedgehog out of traffic. Corvids will also drop rocks into water to push floating food their way.

What is perhaps surprising is what the authors of a new study published last week in the journal Science have found: Crows are capable of thinking about their own thoughts as they work out problems. This is a level of self-awareness previously believed to signify the kind of higher intelligence that only humans and possibly a few other mammals possess. A crow knows what a crow knows, and if this brings the word sentience to your mind, you may be right.


It’s long been assumed that higher intellectual functioning is strictly the product of a layered cerebral cortex. But bird brains are different. The authors of the study found crows’ unlayered but neuron-dense pallium may play a similar role for the avians. Supporting this possibility, another study published last week in Science finds that the neuroanatomy of pigeons and barn owls may also support higher intelligence.

“It has been a good week for bird brains!” crow expert John Marzluff of the University of Washington tells Stat. (He was not involved in either study.)

Corvids are known to be as mentally capable as monkeys and great apes. However, bird neurons are so much smaller that their palliums actually contain more of them than would be found in an equivalent-sized primate cortex. This may constitute a clue regarding their expansive mental capabilities.

In any event, there appears to be a general correspondence between the number of neurons an animal has in its pallium and its intelligence, says Suzana Herculano-Houzel in her commentary on both new studies for Science. Humans, she says, sit “satisfyingly” atop this comparative chart, having even more neurons there than elephants, despite our much smaller body size. It’s estimated that crow brains have about 1.5 billion neurons.


The kind of higher intelligence crows exhibited in the new research is similar to the way we solve problems. We catalog relevant knowledge and then explore different combinations of what we know to arrive at an action or solution.

The researchers, led by neurobiologist Andreas Nieder of the University of Tübingen in Germany, trained two carrion crows (Corvus corone), Ozzie and Glenn.

The crows were trained to watch for a flash — which didn’t always appear — and then peck at a red or blue target to register whether or not a flash of light was seen. Ozzie and Glenn were also taught to understand a changing “rule key” that specified whether red or blue signified the presence of a flash with the other color signifying that no flash occurred.

In each round of a test, after a flash did or didn’t appear, the crows were presented a rule key describing the current meaning of the red and blue targets, after which they pecked their response.

This sequence prevented the crows from simply rehearsing their response on auto-pilot, so to speak. In each test, they had to take the entire process from the top, seeing a flash or no flash, and then figuring out which target to peck.

As all this occurred, the researchers monitored their neuronal activity. When Ozzie or Glenn saw a flash, sensory neurons fired and then stopped as the bird worked out which target to peck. When there was no flash, no firing of the sensory neurons was observed before the crow paused to figure out the correct target.

Nieder’s interpretation of this sequence is that Ozzie or Glenn had to see or not see a flash, deliberately note that there had or hadn’t been a flash — exhibiting self-awareness of what had just been experienced — and then, in a few moments, connect that recollection to their knowledge of the current rule key before pecking the correct target.

During those few moments after the sensory neuron activity had died down, Nieder reported activity among a large population of neurons as the crows put the pieces together preparing to report what they’d seen. Among the busy areas in the crows’ brains during this phase of the sequence was, not surprisingly, the pallium.

Overall, the study may eliminate the layered cerebral cortex as a requirement for higher intelligence. As we learn more about the intelligence of crows, we can at least say with some certainty that it would be wise to avoid angering one.

A theory of my own mind (AEON)

Knowing the content of one’s own mind might seem straightforward but in fact it’s much more like mindreading other people

Tokyo, 1996. Photo by Harry Gruyaert/Magnum

Stephen M Fleming is professor of cognitive neuroscience at University College London, where he leads the Metacognition Group. He is author of Know Thyself: The Science of Self-awareness (2021). Edited by Pam Weintraub

23 September 2021

In 1978, David Premack and Guy Woodruff published a paper that would go on to become famous in the world of academic psychology. Its title posed a simple question: does the chimpanzee have a theory of mind?

In coining the term ‘theory of mind’, Premack and Woodruff were referring to the ability to keep track of what someone else thinks, feels or knows, even if this is not immediately obvious from their behaviour. We use theory of mind when checking whether our colleagues have noticed us zoning out on a Zoom call – did they just see that? A defining feature of theory of mind is that it entails second-order representations, which might or might not be true. I might think that someone else thinks that I was not paying attention but, actually, they might not be thinking that at all. And the success or failure of theory of mind often turns on an ability to appropriately represent another person’s outlook on a situation. For instance, I can text my wife and say: ‘I’m on my way,’ and she will know that by this I mean that I’m on my way to collect our son from nursery, not on my way home, to the zoo, or to Mars. Sometimes this can be difficult to do, as captured by a New Yorker cartoon caption of a couple at loggerheads: ‘Of course I care about how you imagined I thought you perceived I wanted you to feel.’

Premack and Woodruff’s article sparked a deluge of innovative research into the origins of theory of mind. We now know that a fluency in reading minds is not something humans are born with, nor is it something guaranteed to emerge in development. In one classic experiment, children were told stories such as the following:

Maxi has put his chocolate in the cupboard. While Maxi is away, his mother moves the chocolate from the cupboard to the drawer. When Maxi comes back, where will he look for the chocolate?

Until the age of four, children often fail this test, saying that Maxi will look for the chocolate where it actually is (the drawer), rather than where he thinks it is (in the cupboard). They are using their knowledge of reality to answer the question, rather than what they know about where Maxi had put the chocolate before he left. Autistic children also tend to give the wrong answer, suggesting problems with tracking the mental states of others. This test is known as a ‘false belief’ test – passing it requires one to realise that Maxi has a different (and false) belief about the world.

Many researchers now believe that the answer to Premack and Woodruff’s question is, in part, ‘no’ – suggesting that fully fledged theory of mind might be unique to humans. If chimpanzees are given an ape equivalent of the Maxi test, they don’t use the fact that another chimpanzee has a false belief about the location of the food to sneak in and grab it. Chimpanzees can track knowledge states – for instance, being aware of what others see or do not see, and knowing that, when someone is blindfolded, they won’t be able to catch them stealing food. There is also evidence that they track the difference between true and false beliefs in the pattern of their eye movements, similar to findings in human infants. Dogs also have similarly sophisticated perspective-taking abilities, preferring to choose toys that are in their owner’s line of sight when asked to fetch. But so far, at least, only adult humans have been found to act on an understanding that other minds can hold different beliefs about the world to their own.

Research on theory of mind has rapidly become a cornerstone of modern psychology. But there is an underappreciated aspect of Premack and Woodruff’s paper that is only now causing ripples in the pond of psychological science. Theory of mind as it was originally defined identified a capacity to impute mental states not only to others but also to ourselves. The implication is that thinking about others is just one manifestation of a rich – and perhaps much broader – capacity to build what philosophers call metarepresentations, or representations of representations. When I wonder whether you know that it’s raining, and that our plans need to change, I am metarepresenting the state of your knowledge about the weather.

Intriguingly, metarepresentations are – at least in theory – symmetric with respect to self and other: I can think about your mind, and I can think about my own mind too. The field of metacognition research, which is what my lab at University College London works on, is interested in the latter – people’s judgments about their own cognitive processes. The beguiling question, then – and one we don’t yet have an answer to – is whether these two types of ‘meta’ are related. A potential symmetry between self-knowledge and other-knowledge – and the idea that humans, in some sense, have learned to turn theory of mind on themselves – remains largely an elegant hypothesis. But an answer to this question has profound consequences. If self-awareness is ‘just’ theory of mind directed at ourselves, perhaps it is less special than we like to believe. And if we learn about ourselves in the same way as we learn about others, perhaps we can also learn to know ourselves better.

A common view is that self-knowledge is special, and immune to error, because it is gained through introspection – literally, ‘looking within’. While we might be mistaken about things we perceive in the outside world (such as thinking a bird is a plane), it seems odd to say that we are wrong about our own minds. If I think that I’m feeling sad or anxious, then there is a sense in which I am feeling sad or anxious. We have untrammelled access to our own minds, so the argument goes, and this immediacy of introspection means that we are rarely wrong about ourselves.

This is known as the ‘privileged access’ view of self-knowledge, and has been dominant in philosophy in various guises for much of the 20th century. René Descartes relied on self-reflection in this way to reach his conclusion ‘I think, therefore I am,’ noting along the way that: ‘I know clearly that there is nothing that can be perceived by me more easily or more clearly than my own mind.’

An alternative view suggests that we infer what we think or believe from a variety of cues – just as we infer what others think or feel from observing their behaviour. This suggests that self-knowledge is not as immediate as it seems. For instance, I might infer that I am anxious about an upcoming presentation because my heart is racing and my breathing is heavier. But I might be wrong about this – perhaps I am just feeling excited. This kind of psychological reframing is often used by sports coaches to help athletes maintain composure under pressure.

The philosopher most often associated with the inferential view is Gilbert Ryle, who proposed in The Concept of Mind (1949) that we gain self-knowledge by applying the tools we use to understand other minds to ourselves: ‘The sorts of things that I can find out about myself are the same as the sorts of things that I can find out about other people, and the methods of finding them out are much the same.’ Ryle’s idea is neatly summarised by another New Yorker cartoon in which a husband says to his wife: ‘How should I know what I’m thinking? I’m not a mind reader.’

Many philosophers since Ryle have considered the strong inferential view as somewhat crazy, and written it off before it could even get going. The philosopher Quassim Cassam, author of Self-knowledge for Humans (2014), describes the situation:

Philosophers who defend inferentialism – Ryle is usually mentioned in this context – are then berated for defending a patently absurd view. The assumption that intentional self-knowledge is normally immediate … is rarely defended; it’s just seen as obviously correct.

But if we take a longer view of history, the idea that we have some sort of special, direct access to our minds is the exception, rather than the rule. For the ancient Greeks, self-knowledge was not all-encompassing, but a work in progress, and something to be striven toward, as captured by the exhortation to ‘know thyself’ carved on the Temple of Apollo at Delphi. The implication is that most of us don’t know ourselves very well. This view persisted into medieval religious traditions: the Italian priest and philosopher Saint Thomas Aquinas suggested that, while God knows himself by default, we need to put in time and effort to know our own minds. And a similar notion of striving toward self-awareness is found in Eastern traditions, with the founder of Chinese Taoism, Lao Tzu, endorsing a similar goal: ‘To know that one does not know is best; not to know but to believe that one knows is a disease.’


Other aspects of the mind – most famously, perception – also appear to operate on the principles of an (often unconscious) inference. The idea is that the brain isn’t directly in touch with the outside world (it’s locked up in a dark skull, after all) – and instead has to ‘infer’ what is really out there by constructing and updating an internal model of the environment, based on noisy sensory data. For instance, you might know that your friend owns a Labrador, and so you expect to see a dog when you walk into her house, but don’t know exactly where in your visual field the dog will appear. This higher-level expectation – the spatially invariant concept of ‘dog’ – provides the relevant context for lower levels of the visual system to easily interpret dog-shaped blurs that rush toward you as you open the door.

Adelson’s checkerboard. Courtesy Wikipedia

Elegant evidence for this perception-as-inference view comes from a range of striking visual illusions. In one called Adelson’s checkerboard, two patches with the same objective luminance are perceived as lighter and darker because the brain assumes that, to reflect the same amount of light, the one in shadow must have started out brighter. Another powerful illusion is the ‘light from above’ effect – we have an automatic tendency to assume that natural light falls from above, whereas uplighting – such as when light from a fire illuminates the side of a cliff – is less common. This can lead the brain to interpret the same image as either bumps or dips in a surface, depending on whether the shadows are consistent with light falling from above. Other classic experiments show that information from one sensory modality, such as sight, can act as a constraint on how we perceive another, such as sound – an illusion used to great effect in ventriloquism. The real skill of ventriloquists is being able to talk without moving the mouth. Once this is achieved, the brains of the audience do the rest, pulling the sound to its next most likely source, the puppet.

These striking illusions are simply clever ways of exposing the workings of a system finely tuned for perceptual inference. And a powerful idea is that self-knowledge relies on similar principles – whereas perceiving the outside world relies on building a model of what is out there, we are also continuously building and updating a similar model of ourselves – our skills, abilities and characteristics. And just as we can sometimes be mistaken about what we perceive, sometimes the model of ourselves can also be wrong.

Let’s see how this might work in practice. If I need to remember something complicated, such as a shopping list, I might judge I will fail unless I write it down somewhere. This is a metacognitive judgment about how good my memory is. And this model can be updated – as I grow older, I might think to myself that my recall is not as good as it used to be (perhaps after experiencing myself forgetting things at the supermarket), and so I lean more heavily on list-writing. In extreme cases, this self-model can become completely decoupled from reality: in functional memory disorders, patients believe their memory is poor (and might worry they have dementia) when it is actually perfectly fine when assessed with objective tests.

We now know from laboratory research that metacognition, just like perception, is also subject to powerful illusions and distortions – lending credence to the inferential view. A standard measure here is whether people’s confidence tracks their performance on simple tests of perception, memory and decision-making. Even in otherwise healthy people, judgments of confidence are subject to systematic illusions – we might feel more confident about our decisions when we act more quickly, even if faster decisions are not associated with greater accuracy. In our research, we have also found surprisingly large and consistent differences between individuals on these measures – one person might have limited insight into how well they are doing from one moment to the next, while another might have good awareness of whether they are likely to be right or wrong.

This metacognitive prowess is independent of general cognitive ability, and correlated with differences in the structure and function of the prefrontal and parietal cortex. In turn, people with disease or damage to these brain regions can suffer from what neurologists refer to as anosognosia – literally, the absence of knowing. For instance, in Alzheimer’s disease, patients can suffer a cruel double hit – the disease attacks not only brain regions supporting memory, but also those involved in metacognition, leaving people unable to understand what they have lost.

This all suggests – more in line with Socrates than Descartes – that self-awareness is something that can be cultivated, that it is not a given, and that it can fail in myriad interesting ways. And it also provides newfound impetus to seek to understand the computations that might support self-awareness. This is where Premack and Woodruff’s more expansive notion of theory of mind might be long overdue another look.

Saying that self-awareness depends on similar machinery to theory of mind is all well and good, but it raises the question – what is this machinery? What do we mean by a ‘model’ of a mind, exactly?

Some intriguing insights come from an unlikely quarter – spatial navigation. In classic studies, the psychologist Edward Tolman realised that rats running in mazes were building a ‘map’ of the maze, rather than just learning which turns to make when. If the shortest route from a starting point towards the cheese is suddenly blocked, then rats readily take the next quickest route – without having to try all the remaining alternatives. This suggests that they have not just rote-learned the quickest path through the maze, but instead know something about its overall layout.

A few decades later, the neuroscientist John O’Keefe found that cells in the rodent hippocampus encoded this internal knowledge about physical space. Cells that fired in different locations became known as ‘place’ cells. Each place cell would have a preference for a specific position in the maze but, when combined together, could provide an internal ‘map’ or model of the maze as a whole. And then, in the early 2000s, the neuroscientists May-Britt Moser, Edvard Moser and their colleagues in Norway found an additional type of cell – ‘grid’ cells, which fire in multiple locations, in a way that tiles the environment with a hexagonal grid. The idea is that grid cells support a metric, or coordinate system, for space – their firing patterns tell the animal how far it has moved in different directions, a bit like an in-built GPS system.

There is now tantalising evidence that similar types of brain cell also encode abstract conceptual spaces. For instance, if I am thinking about buying a new car, then I might think about how environmentally friendly the car is, and how much it costs. These two properties map out a two-dimensional ‘space’ on which I can place different cars – for instance, a cheap diesel car will occupy one part of the space, and an expensive electric car another part of the space. The idea is that, when I am comparing these different options, my brain is relying on the same kind of systems that I use to navigate through physical space. In one experiment by Timothy Behrens and his team at the University of Oxford, people were asked to imagine morphing images of birds that could have different neck and leg lengths – forming a two-dimensional bird space. A grid-like signature was found in the fMRI data when people were thinking about the birds, even though they never saw them presented in 2D.


So far, these lines of work – on abstract conceptual models of the world, and on how we think about other minds – have remained relatively disconnected, but they are coming together in fascinating ways. For instance, grid-like codes are also found for conceptual maps of the social world – whether other individuals are more or less competent or popular – suggesting that our thoughts about others seem to be derived from an internal model similar to those used to navigate physical space. And one of the brain regions involved in maintaining these models of other minds – the medial prefrontal cortex (PFC) – is also implicated in metacognition about our own beliefs and decisions. For instance, research in my group has discovered that medial prefrontal regions not only track confidence in individual decisions, but also ‘global’ metacognitive estimates of our abilities over longer timescales – exactly the kind of self-estimates that were distorted in the patients with functional memory problems.

Recently, the psychologist Anthony G Vaccaro and I surveyed the accumulating literature on theory of mind and metacognition, and created a brain map that aggregated the patterns of activations reported across multiple papers. Clear overlap between brain activations involved in metacognition and mindreading was observed in the medial PFC. This is what we would expect if there was a common system building models not only about other people, but also of ourselves – and perhaps about ourselves in relation to other people. Tantalisingly, this very same region has been shown to carry grid-like signatures of abstract, conceptual spaces.

At the same time, computational models are being built that can mimic features of both theory of mind and metacognition. These models suggest that a key part of the solution is the learning of second-order parameters – those that encode information about how our minds are working, for instance whether our percepts or memories tend to be more or less accurate. Sometimes, this system can become confused. In work led by the neuroscientist Marco Wittmann at the University of Oxford, people were asked to play a game involving tracking the colour or duration of simple stimuli. They were then given feedback about both their own performance and that of other people. Strikingly, people tended to ‘merge’ their feedback with those of others – if others were performing better, they tended to think they themselves were performing a bit better too, and vice-versa. This intertwining of our models of self-performance and other-performance was associated with differences in activity in the dorsomedial PFC. Disrupting activity in this area using transcranial magnetic stimulation (TMS) led to more self-other mergence – suggesting that one function of this brain region is not only to create models of ourselves and others, but also to keep these models apart.

Another implication of a symmetry between metacognition and mindreading is that both abilities should emerge around the same time in childhood. By the time that children become adept at solving false-belief tasks – around the age of four – they are also more likely to engage in self-doubt, and recognise when they themselves were wrong about something. In one study, children were first presented with ‘trick’ objects: a rock that turned out to be a sponge, or a box of Smarties that actually contained not sweets but pencils. When asked what they first thought the object was, three-year-olds said that they knew all along that the rock was a sponge and that the Smarties box was full of pencils. But by the age of five, most children recognised that their first impression of the object was false – they could recognise they had been in error.

Indeed, when Simon Baron-Cohen, Alan Leslie and Uta Frith outlined their influential theory of autism in the 1980s, they proposed that theory of mind was only ‘one of the manifestations of a basic metarepresentational capacity’. The implication is that there should also be noticeable differences in metacognition that are linked to changes in theory of mind. In line with this idea, several recent studies have shown that autistic individuals also show differences in metacognition. And in a recent study of more than 450 people, Elisa van der Plas, a PhD student in my group, has shown that theory of mind ability (measured by people’s ability to track the feelings of characters in simple animations) and metacognition (measured by the degree to which their confidence tracks their task performance) are significantly correlated with each other. People who were better at theory of mind also formed their confidence differently – they were more sensitive to subtle cues, such as their response times, that indicated whether they had made a good or bad decision.

Recognising a symmetry between self-awareness and theory of mind might even help us understand why human self-awareness emerged in the first place. The need to coordinate and collaborate with others in large social groups is likely to have put a premium on the abilities for metacognition and mindreading. The neuroscientist Suzana Herculano-Houzel has proposed that primates have unusually efficient ways of cramming neurons into a given brain volume – meaning there is simply more processing power devoted to so-called higher-order functions – those that, like theory of mind, go above and beyond the maintenance of homeostasis, perception and action. This idea fits with what we know about the areas of the brain involved in theory of mind, which tend to be the most distant in terms of their connections to primary sensory and motor areas.

A symmetry between self-awareness and other-awareness also offers a subversive take on what it means for other agents such as animals and robots to be self-aware. In the film Her (2013), Joaquin Phoenix’s character Theodore falls in love with his virtual assistant, Samantha, who is so human-like that he is convinced she is conscious. If the inferential view of self-awareness is correct, there is a sense in which Theodore’s belief that Samantha is aware is sufficient to make her aware, in his eyes at least. This is not quite true, of course, because the ultimate test is if she is able to also recursively model Theodore’s mind, and create a similar model of herself. But being convincing enough to share an intimate connection with another conscious agent (as Theodore does with Samantha), replete with mindreading and reciprocal modelling, might be possible only if both agents have similar recursive capabilities firmly in place. In other words, attributing awareness to ourselves and to others might be what makes them, and us, conscious.

Finally, a symmetry between self-awareness and other-awareness also suggests novel routes towards boosting our own self-awareness. In a clever experiment conducted by the psychologists and metacognition experts Rakefet Ackerman and Asher Koriat in Israel, students were asked to judge both how well they had learned a topic, and how well other students had learned the same material, by watching a video of them studying. When judging themselves, they fell into a trap – they believed that spending less time studying was a signal of being confident in knowing the material. But when judging others, this relationship was reversed: they (correctly) judged that spending longer on a topic would lead to better learning. These results suggest that a simple route for improving self-awareness is to take a third-person perspective on ourselves. In a similar way, literary novels (and soap operas) encourage us to think about the minds of others, and in turn might shed light on our own lives.

There is still much to learn about the relationship between theory of mind and metacognition. Most current research on metacognition focuses on the ability to think about our experiences and mental states – such as being confident in what we see or hear. But this aspect of metacognition might be distinct from how we come to know our own, or others’, character and preferences – aspects that are often the focus of research on theory of mind. New and creative experiments will be needed to cross this divide. But it seems safe to say that Descartes’s classical notion of introspection is increasingly at odds with what we know of how the brain works. Instead, our knowledge of ourselves is (meta)knowledge like any other – hard-won, and always subject to revision. Realising this is perhaps particularly useful in an online world deluged with information and opinion, when it’s often hard to gain a check and balance on what we think and believe. In such situations, the benefits of accurate metacognition are myriad – helping us recognise our faults and collaborate effectively with others. As the poet Robert Burns tells us:

O wad some Power the giftie gie us
To see oursels as ithers see us!
It wad frae mony a blunder free us…

(Oh, would some Power give us the gift
To see ourselves as others see us!
It would from many a blunder free us…)

How big science failed to unlock the mysteries of the human brain (MIT Technology Review)

technologyreview.com

Large, expensive efforts to map the brain started a decade ago but have largely fallen short. It’s a good reminder of just how complex this organ is.

Emily Mullin

August 25, 2021


In September 2011, a group of neuroscientists and nanoscientists gathered at a picturesque estate in the English countryside for a symposium meant to bring their two fields together. 

At the meeting, Columbia University neurobiologist Rafael Yuste and Harvard geneticist George Church made a not-so-modest proposal: to map the activity of the entire human brain at the level of individual neurons and detail how those cells form circuits. That knowledge could be harnessed to treat brain disorders like Alzheimer’s, autism, schizophrenia, depression, and traumatic brain injury. And it would help answer one of the great questions of science: How does the brain bring about consciousness? 

Yuste, Church, and their colleagues drafted a proposal that would later be published in the journal Neuron. Their ambition was extreme: “a large-scale, international public effort, the Brain Activity Map Project, aimed at reconstructing the full record of neural activity across complete neural circuits.” Like the Human Genome Project a decade earlier, they wrote, the brain project would lead to “entirely new industries and commercial ventures.” 

New technologies would be needed to achieve that goal, and that’s where the nanoscientists came in. At the time, researchers could record activity from just a few hundred neurons at once—but with around 86 billion neurons in the human brain, it was akin to “watching a TV one pixel at a time,” Yuste recalled in 2017. The researchers proposed tools to measure “every spike from every neuron” in an attempt to understand how the firing of these neurons produced complex thoughts. 

The audacious proposal intrigued the Obama administration and laid the foundation for the multi-year Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, announced in April 2013. President Obama called it the “next great American project.” 

But it wasn’t the first audacious brain venture. In fact, a few years earlier, Henry Markram, a neuroscientist at the École Polytechnique Fédérale de Lausanne in Switzerland, had set an even loftier goal: to make a computer simulation of a living human brain. Markram wanted to build a fully digital, three-dimensional model at the resolution of the individual cell, tracing all of those cells’ many connections. “We can do it within 10 years,” he boasted during a 2009 TED talk.

In January 2013, a few months before the American project was announced, the EU awarded Markram $1.3 billion to build his brain model. The US and EU projects sparked similar large-scale research efforts in countries including Japan, Australia, Canada, China, South Korea, and Israel. A new era of neuroscience had begun. 

An impossible dream?

A decade later, the US project is winding down, and the EU project faces its deadline to build a digital brain. So how did it go? Have we begun to unwrap the secrets of the human brain? Or have we spent a decade and billions of dollars chasing a vision that remains as elusive as ever? 

From the beginning, both projects had critics.

EU scientists worried about the costs of the Markram scheme and thought it would squeeze out other neuroscience research. And even at the original 2011 meeting in which Yuste and Church presented their ambitious vision, many of their colleagues argued it simply wasn’t possible to map the complex firings of billions of human neurons. Others said it was feasible but would cost too much money and generate more data than researchers would know what to do with. 

In a blistering article appearing in Scientific American in 2013, Partha Mitra, a neuroscientist at the Cold Spring Harbor Laboratory, warned against the “irrational exuberance” behind the Brain Activity Map and questioned whether its overall goal was meaningful. 

Even if it were possible to record all spikes from all neurons at once, he argued, a brain doesn’t exist in isolation: in order to properly connect the dots, you’d need to simultaneously record external stimuli that the brain is exposed to, as well as the behavior of the organism. And he reasoned that we need to understand the brain at a macroscopic level before trying to decode what the firings of individual neurons mean.  

Others had concerns about the impact of centralizing control over these fields. Cornelia Bargmann, a neuroscientist at Rockefeller University, worried that it would crowd out research spearheaded by individual investigators. (Bargmann was soon tapped to co-lead the BRAIN Initiative’s working group.)

While the US initiative sought input from scientists to guide its direction, the EU project was decidedly more top-down, with Markram at the helm. But as Noah Hutton documents in his 2020 film In Silico, Markram’s grand plans soon unraveled. As an undergraduate studying neuroscience, Hutton had been assigned to read Markram’s papers and was impressed by his proposal to simulate the human brain; when he started making documentary films, he decided to chronicle the effort. He soon realized, however, that the billion-dollar enterprise was characterized more by infighting and shifting goals than by breakthrough science.

In Silico shows Markram as a charismatic leader who needed to make bold claims about the future of neuroscience to attract the funding to carry out his particular vision. But the project was troubled from the outset by a major issue: there isn’t a single, agreed-upon theory of how the brain works, and not everyone in the field agreed that building a simulated brain was the best way to study it. It didn’t take long for those differences to arise in the EU project. 

In 2014, hundreds of experts across Europe penned a letter citing concerns about oversight, funding mechanisms, and transparency in the Human Brain Project. The scientists felt Markram’s aim was premature and too narrow and would exclude funding for researchers who sought other ways to study the brain. 

“What struck me was, if he was successful and turned it on and the simulated brain worked, what have you learned?” Terry Sejnowski, a computational neuroscientist at the Salk Institute who served on the advisory committee for the BRAIN Initiative, told me. “The simulation is just as complicated as the brain.” 

The Human Brain Project’s board of directors voted to change its organization and leadership in early 2015, replacing a three-member executive committee led by Markram with a 22-member governing board. Christoph Ebell, a Swiss entrepreneur with a background in science diplomacy, was appointed executive director. “When I took over, the project was at a crisis point,” he says. “People were openly wondering if the project was going to go forward.”

But a few years later he was out too, after a “strategic disagreement” with the project’s host institution. The project is now focused on providing a new computational research infrastructure to help neuroscientists store, process, and analyze large amounts of data—unsystematic data collection has been an issue for the field—and develop 3D brain atlases and software for creating simulations.

The US BRAIN Initiative, meanwhile, underwent its own changes. Early on, in 2014, responding to the concerns of scientists and acknowledging the limits of what was possible, it evolved into something more pragmatic, focusing on developing technologies to probe the brain. 

New day

Those changes have finally started to produce results—even if they weren’t the ones that the founders of each of the large brain projects had originally envisaged. 

Last year, the Human Brain Project released a 3D digital map that integrates different aspects of human brain organization at the millimeter and micrometer level. It’s essentially a Google Earth for the brain. 

And earlier this year Alipasha Vaziri, a neuroscientist funded by the BRAIN Initiative, and his team at Rockefeller University reported in a preprint paper that they’d simultaneously recorded the activity of more than a million neurons across the mouse cortex. It’s the largest recording of animal cortical activity yet made, if far from listening to all 86 billion neurons in the human brain as the original Brain Activity Map hoped.

The US effort has also shown some progress in its attempt to build new tools to study the brain. It has speeded the development of optogenetics, an approach that uses light to control neurons, and its funding has led to new high-density silicon electrodes capable of recording from hundreds of neurons simultaneously. And it has arguably accelerated the development of single-cell sequencing. In September, researchers using these advances will publish a detailed classification of cell types in the mouse and human motor cortexes—the biggest single output from the BRAIN Initiative to date.

While these are all important steps forward, though, they’re far from the initial grand ambitions. 

Lasting legacy

We are now heading into the last phase of these projects—the EU effort will conclude in 2023, while the US initiative is expected to have funding through 2026. What happens in these next years will determine just how much impact they’ll have on the field of neuroscience.

When I asked Ebell what he sees as the biggest accomplishment of the Human Brain Project, he didn’t name any one scientific achievement. Instead, he pointed to EBRAINS, a platform launched in April of this year to help neuroscientists work with neurological data, perform modeling, and simulate brain function. It offers researchers a wide range of data and connects many of the most advanced European lab facilities, supercomputing centers, clinics, and technology hubs in one system. 

“If you ask me ‘Are you happy with how it turned out?’ I would say yes,” Ebell said. “Has it led to the breakthroughs that some have expected in terms of gaining a completely new understanding of the brain? Perhaps not.” 

Katrin Amunts, a neuroscientist at the University of Düsseldorf, who has been the Human Brain Project’s scientific research director since 2016, says that while Markram’s dream of simulating the human brain hasn’t been realized yet, it is getting closer. “We will use the last three years to make such simulations happen,” she says. But it won’t be a big, single model—instead, several simulation approaches will be needed to understand the brain in all its complexity. 

Meanwhile, the BRAIN Initiative has provided more than 900 grants to researchers so far, totaling around $2 billion. The National Institutes of Health is projected to spend nearly $6 billion on the project by the time it concludes. 

For the final phase of the BRAIN Initiative, scientists will attempt to understand how brain circuits work by diagramming connected neurons. But claims for what can be achieved are far more restrained than in the project’s early days. The researchers now realize that understanding the brain will be an ongoing task—it’s not something that can be finalized by a project’s deadline, even if that project meets its specific goals.

“With a brand-new tool or a fabulous new microscope, you know when you’ve got it. If you’re talking about understanding how a piece of the brain works or how the brain actually does a task, it’s much more difficult to know what success is,” says Eve Marder, a neuroscientist at Brandeis University. “And success for one person would be just the beginning of the story for another person.” 

Yuste and his colleagues were right that new tools and techniques would be needed to study the brain in a more meaningful way. Now, scientists will have to figure out how to use them. But instead of answering the question of consciousness, developing these methods has, if anything, only opened up more questions about the brain—and shown just how complex it is. 

“I have to be honest,” says Yuste. “We had higher hopes.”

Emily Mullin is a freelance journalist based in Pittsburgh who focuses on biotechnology.

Is everything in the world a little bit conscious? (MIT Technology Review)

technologyreview.com

Christof Koch – August 25, 2021

The idea that consciousness is widespread is attractive to many for intellectual and, perhaps, also emotional reasons. But can it be tested? Surprisingly, perhaps it can.

Panpsychism is the belief that consciousness is found throughout the universe—not only in people and animals, but also in trees, plants, and bacteria. Panpsychists hold that some aspect of mind is present even in elementary particles. The idea that consciousness is widespread is attractive to many for intellectual and, perhaps, also emotional reasons. But can it be empirically tested? Surprisingly, perhaps it can. That’s because one of the most popular scientific theories of consciousness, integrated information theory (IIT), shares many—though not all—features of panpsychism.

As the American philosopher Thomas Nagel has argued, something is conscious if there is “something that it is like to be” that thing in the state that it is in. A human brain in a state of wakefulness feels like something specific. 

IIT specifies a unique number, a system’s integrated information, labeled by the Greek letter φ (pronounced phi). If φ is zero, the system does not feel like anything; indeed, the system does not exist as a whole, as it is fully reducible to its constituent components. The larger φ, the more conscious a system is, and the more irreducible. Given an accurate and complete description of a system, IIT predicts both the quantity and the quality of its experience (if any). IIT predicts that because of the structure of the human brain, people have high values of φ, while animals have smaller (but positive) values and classical digital computers have almost none.

A person’s value of φ is not constant. It increases during early childhood with the development of the self and may decrease with onset of dementia and other cognitive impairments. φ will fluctuate during sleep, growing larger during dreams and smaller in deep, dreamless states. 

IIT starts by identifying five true and essential properties of any and every conceivable conscious experience. For example, experiences are definite (exclusion). This means that an experience is not less than it is (experiencing only the sensation of the color blue but not the moving ocean that brought the color to mind), nor is it more than it is (say, experiencing the ocean while also being aware of the canopy of trees behind one’s back). In a second step, IIT derives five associated physical properties that any system—brain, computer, pine tree, sand dune—has to exhibit in order to feel like something. A “mechanism” in IIT is anything that has a causal role in a system; this could be a logical gate in a computer or a neuron in the brain. IIT says that consciousness arises only in systems of mechanisms that have a particular structure. To simplify somewhat, that structure must be maximally integrated—not accurately describable by breaking it into its constituent parts. It must also have cause-and-effect power upon itself, which is to say the current state of a given mechanism must constrain the future states of not only that particular mechanism, but the system as a whole. 

Given a precise physical description of a system, the theory provides a way to calculate the φ of that system. The technical details of how this is done are complicated, but the upshot is that one can, in principle, objectively measure the φ of a system so long as one has such a precise description of it. (We can compute the φ of computers because, having built them, we understand them precisely. Computing the φ of a human brain is still an estimate.)
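
The full φ calculus is far more involved, but the core intuition – that an integrated system carries information its parts do not – can be sketched in a few lines. The toy system below is my own illustrative example, not an implementation of IIT: two binary nodes that each copy the other's previous state. Taken whole, its present state carries two bits of information about its past; each node taken alone carries none, so the system is irreducible to its parts.

```python
from itertools import product
from math import log2
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits from a list of equally likely (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

def step(state):
    """Toy 2-node system: each node copies the other's previous state."""
    a, b = state
    return (b, a)

# Whole system: past state vs. present state, over all 4 equally likely pasts
pasts = list(product([0, 1], repeat=2))
whole = mutual_information([(p, step(p)) for p in pasts])          # 2.0 bits

# Partition into single nodes: each node's own past vs. its own present
part_a = mutual_information([(p[0], step(p)[0]) for p in pasts])   # 0.0 bits
part_b = mutual_information([(p[1], step(p)[1]) for p in pasts])   # 0.0 bits

integration = whole - (part_a + part_b)  # 2.0 bits: fully irreducible
```

In IIT proper, integration is assessed over cause-effect repertoires and against the partition that loses the least information, not via simple past-present mutual information; this sketch only illustrates why a whole can carry information that none of its parts do.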

Systems can be evaluated at different levels—one could measure the φ of a sugar-cube-size piece of my brain, or of my brain as a whole, or of me and you together. Similarly, one could measure the φ of a silicon atom, of a particular circuit on a microchip, or of an assemblage of microchips that make up a supercomputer. Consciousness, according to the theory, exists for systems for which φ is at a maximum. It exists for all such systems, and only for such systems. 

The φ of my brain is bigger than the φ values of any of its parts, however one sets out to subdivide it. So I am conscious. But the φ of me and you together is less than my φ or your φ, so we are not “jointly” conscious. If, however, a future technology could create a dense communication hub between my brain and your brain, then such brain-bridging would create a single mind, distributed across four cortical hemispheres. 

Conversely, the φ of a supercomputer is less than the φs of any of the circuits composing it, so a supercomputer—however large and powerful—is not conscious. The theory predicts that even if some deep-learning system could pass the Turing test, it would be a so-called “zombie”—simulating consciousness, but not actually conscious. 

Like panpsychism, then, IIT considers consciousness an intrinsic, fundamental property of reality that is graded and most likely widespread in the tree of life, since any system with a non-zero amount of integrated information will feel like something. This does not imply that a bee feels obese or makes weekend plans. But a bee can feel a measure of happiness when returning pollen-laden in the sun to its hive. When a bee dies, it ceases to experience anything. Likewise, given the vast complexity of even a single cell, with millions of proteins interacting, it may feel a teeny-tiny bit like something. 

Debating the nature of consciousness might at first sound like an academic exercise, but it has real and important consequences. Most obviously, it matters to how we think about people in vegetative states. Such patients may groan or otherwise move unprovoked but fail to respond to commands to signal in a purposeful manner by moving their eyes or nodding. Are they conscious minds, trapped in their damaged body, able to perceive but unable to respond? Or are they without consciousness?

Evaluating such patients for the presence of consciousness is tricky. IIT proponents have developed a procedure that can test for consciousness in an unresponsive person. First they set up a network of EEG electrodes that can measure electrical activity in the brain. Then they stimulate the brain with a gentle magnetic pulse, and record the echoes of that pulse. They can then calculate a mathematical measure of the complexity of those echoes, called a perturbational complexity index (PCI).

In healthy, conscious individuals—or in people who have brain damage but are clearly conscious—the PCI is always above a particular threshold. Conversely, when healthy people are asleep, their PCI is always below that threshold (0.31). So it is reasonable to take PCI as a proxy for the presence of a conscious mind. If the PCI of someone in a persistent vegetative state is always measured to be below this threshold, we can with confidence say that this person is not covertly conscious.
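
The published PCI measure works by binarizing the spatiotemporal EEG response to the pulse and compressing it: the more diverse and less redundant the echo, the higher its complexity. A rough sketch of the compression step might look like the following. This is my own simplified LZ78-style phrase count over made-up signals, not the exact algorithm or normalization used clinically.

```python
import random

def lz_complexity(bits):
    """Count phrases in an LZ78-style parsing of a binary string:
    each phrase is the shortest prefix not yet in the dictionary."""
    phrases, phrase, count = set(), "", 0
    for ch in bits:
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

random.seed(0)
n = 1000
# A stereotyped, repetitive "echo" compresses well...
periodic = "01" * (n // 2)
# ...while a rich, unpredictable one does not
noisy = "".join(random.choice("01") for _ in range(n))

print(lz_complexity(periodic))  # low complexity
print(lz_complexity(noisy))     # much higher complexity
```

The clinical index normalizes such a count by signal length and source entropy so that values are comparable across recordings; the 0.31 threshold refers to that normalized index.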

This method is being investigated in a number of clinical centers across the US and Europe. Other tests seek to validate the predictions that IIT makes about the location and timing of the footprints of sensory consciousness in the brains of humans, nonhuman primates, and mice. 

Unlike panpsychism, the startling claims of IIT can be empirically tested. If they hold up, science may have found a way to cut through a knot that has puzzled philosophers for as long as philosophy has existed.

Christof Koch is the chief scientist of the MindScope program at the Allen Institute for Brain Science in Seattle.

The Mind issue

This story was part of our September 2021 issue

Why Our Brains Weren’t Made To Deal With Climate Change (NPR)

npr.org

April 19, 2016, 12:00 AM ET

SHANKAR VEDANTAM, HOST:

This is HIDDEN BRAIN. I’m Shankar Vedantam. Last year, my family and I took a vacation to Alaska. This was a much needed long-planned break. The best part, I got to walk on the top of a glacier.

(SOUNDBITE OF FOOTSTEPS)

VEDANTAM: The pale blue ice was translucent. Sharp ridges opened up into crevices dozens of feet deep. Every geological feature, every hill, every valley was sculpted in ice. It was a sunny day, and I spotted a small stream of melted water. I got on the ground and drank some. I wondered how long this water had remained frozen.

The little stream is not the only ice that’s melting in Alaska. The Mendenhall Glacier, one of the chief tourist attractions in Juneau, has retreated over one and a half miles in the last half-century. Today, you can only see a small sliver of the glacier’s tongue from a lookout. I caught up with John Neary, a forest service official, who tries to explain to visitors the scale of the changes that they’re witnessing.

JOHN NEARY: I would say that right now, we’re looking at a glacier that’s filling up. Out of our 180-degree view we have, we’re looking at maybe 10 or 15 degrees of it, whereas if we stood in this same place 100 years ago, it would have filled up about 160 degrees of our view.

VEDANTAM: You are kidding, 160 degrees of our view.

NEARY: Exactly. That’s the reality of how big this was, and it’s been retreating up this valley at about 40 or 50 feet a year, most recently 400 feet a year. And even more dramatically recently is the thinning and the narrowing as it’s just sort of collapsed in on itself in the bottom of this valley. Instead of dominating much of the valley and being able to see white as a large portion of the landscape, it’s now becoming this little ribbon that’s at the bottom.

VEDANTAM: John is a quiet, soft-spoken man. In recent years, as he’s watched the glacier literally recede before his eyes, he started to speak up, not just about what’s happening but what it means.

But as I was chatting with John, a visitor came up to talk to him. The man said he used to serve in the Air Force and had last seen the Mendenhall Glacier a quarter-century ago. There was a look in the man’s eyes. It was a combination of awe and horror. How could this have happened, the man asked John? Why is this happening?

NEARY: In many ways, people don’t want to grasp the reality. It’s a scary reality to try to grasp. And so what they naturally want to do is assume, well, this has always happened. It will happen in the future, and we’ll survive, won’t we? They want an assurance from me. But I don’t give it to them. I don’t think it’s my job to give them that assurance.

I think they need to grasp the reality of the fact that we are entering into a time when, yes, glacial advance and retreat has happened 25 different times to North America over its long life but never at the rate and the scale that we see now. And the very quick rapidity of it means that species probably won’t be able to adapt the way that they have in the past over a longer period of time.

VEDANTAM: To be clear, the Mendenhall Glacier’s retreat in and of itself is not proof of climate change. That evidence comes from a range of scientific measurements and calculations. But the glacier is a visible symbol of the changes that scientists are documenting.

It’s interesting I think when we – people think about climate change, it tends to be an abstract issue most of the time for most people, that you’re standing in front of this magnificent glacier right now and to actually see it receding makes it feel real and visceral in a way that it just isn’t when I’m living in Washington, D.C.

NEARY: No, I agree. I think that for too many people, the issue is some Micronesian island that’s having an extra inch of water this year on their shorelines or it’s some polar bears far up in the Arctic that they’re really not connected with.

But when they realize, they come here and they’re on this nice day like we’re experiencing right now with the warm sun and they start to think about this glacier melting and why it’s receding, why it’s disappearing, why it doesn’t look like that photo just 30 years ago up in the visitor’s center, it becomes real for them, and they have to start to grapple with the issues behind it.

(SOUNDBITE OF MUSIC)

VEDANTAM: I could see tourists turning these questions over in their minds as they watch the glacier. So even though I had not planned to do any reporting, I started interviewing people using the only device I had available, my phone.

DALE SINGER: I just think it’s a shame that we are losing something pretty precious and pretty different in the world.

VEDANTAM: This is Dale Singer (ph). She and her family came to Alaska on a cruise to celebrate a couple of family birthdays. This was her second trip to Mendenhall.

She came about nine years ago, but the weather was so foggy, she couldn’t get a good look. She felt compelled to come back. I asked Dale why she thought the glacier was retreating.

SINGER: Global warming, whether we like to admit it or not, it’s our fault. Or something we’re doing is affecting climate change.

VEDANTAM: Others are not so sure. For some of Dale’s fellow passengers on her cruise, this is a touchy topic.

SINGER: Somebody just said they went to a lecture and – on the ship, and the lecturer did not use the word global warming nor climate change because he didn’t want to offend passengers. So there are still people who refuse to admit it.

(SOUNDBITE OF MUSIC)

VEDANTAM: As I was standing next to John, one man carefully came up and listened to his account of the science of climate change. When John was done talking, the man told him that he wouldn’t trust scientists as far as he could throw them. Climate change was all about politics, he said.

I asked the man for an interview, but he declined. He said his company had contracts with the federal government. And if bureaucrats in the Obama administration heard his skeptical views on climate change, those contracts might mysteriously disappear. I caught up with another tourist. I asked Michael Bull (ph) if he believed climate change was real.

MICHAEL BULL: No, I think there’s global climate change, but I question whether it’s all due to human interaction with the Earth. Yes, you can’t deny that the climate is changing.

VEDANTAM: Yeah.

BULL: But the causation of that I’m not sold on as being our fault.

VEDANTAM: Michael was worried his tour bus might leave without him, so he answered my question about whether the glacier’s retreat was cause for alarm standing next to the idling bus.

BULL: So what’s the bad part of the glacier receding? And, you know, from what John said to me, if it’s the rate that which – and the Earth can’t adapt, that makes sense to me. But I think the final story is yet to be written.

VEDANTAM: Yeah.

BULL: I think Mother Earth pushes back. So I don’t think we’re going to destroy her because I think she’ll take care of us before we take care of her.

(SOUNDBITE OF MUSIC)

VEDANTAM: Nugget Falls is a beautiful waterfall that empties into Mendenhall Lake. When John first came to Alaska in 1982, the waterfall was adjacent to the glacier. Today, there’s a gap of three-quarters of a mile between the waterfall and the glacier.

SUE SCHULTZ: The glacier has receded unbelievably. It’s quite shocking.

VEDANTAM: This is Sue Schultz. She said she lived in Juneau back in the 1980s. This was her first time back in 28 years. What did it look like 28 years ago?

SCHULTZ: The bare rock that you see to the left as you face the glacier was glacier. And we used to hike on the other side of it. And you could take a trail right onto the glacier.

VEDANTAM: And what about this way? I understand the glacier actually came significantly over to this side…

SCHULTZ: Yes.

VEDANTAM: …Close to Nugget Falls.

SCHULTZ: Yes, it – that’s true. It was really close. In fact, the lake was a lot smaller, obviously (laughter). I mean, yeah, it’s quite incredible.

VEDANTAM: And so what’s your reaction when you see it?

SCHULTZ: Global warming, we need to pay attention.

(SOUNDBITE OF MUSIC)

TERRY LAMBERT: Even if it all melts, it’s not going to be the end of the world, so I’m not worried.

VEDANTAM: Terry Lambert is a tourist from Southern California. He’s never visited Mendenhall before. He thinks the melting glacier is just part of nature’s plan.

LAMBERT: Well, it’s just like earthquakes and floods and hurricanes. They’re all just all part of what’s going on. You can’t control it. You can’t change it. And I personally don’t think it’s something that man’s doing that’s making that melt.

VEDANTAM: I mentioned to Terry some of the possible consequences of climate change on various species. They could be changes. Species could – some species could be advantaged. Some species could be disadvantaged.

The ecosystem is changing. You’re going to have flooding. You’re going to have weather events, right? There could be consequences that affect you and I.

LAMBERT: Yes, but like I say, it’s so far in the future I’m not worried about it.

VEDANTAM: I realized at that moment that the debate over climate change is no longer really about science unless the science you’re talking about is the study of human behavior.

I asked John why he thought so many people were unwilling to accept the scientific consensus that climate change was having real consequences.

NEARY: The inability to do anything about it themselves – because it’s threatening to think about giving up your car, giving up your oil heater in your house or giving up, you know, many of the things that you’ve become accustomed to. They seem very threatening to them.

And, you know, really, I’ve looked at some of the brain science, actually, and talked to folks at NASA and Earth and Sky, and they’ve actually talked about how when that fear becomes overriding for people, they use a part of their brain that’s the very primitive part that has to react.

It has to instantly come to a conclusion so that it can lead to an action, whereas what we need to think about is get rid of that fear and start thinking logically. Start thinking creatively. Allow a different part of the brain to kick in and really think how we as humans can reverse this trend that we’ve caused.

VEDANTAM: Coming up, we explore why the human brain might not be well-designed to grapple with the threat of climate change and what we can do about it. Stay with us.

(SOUNDBITE OF MUSIC)

VEDANTAM: This is HIDDEN BRAIN. I’m Shankar Vedantam. While visiting the Mendenhall Glacier with my family last year, I started thinking more and more about the intersection between climate change and human behavior.

When I got back to Washington, D.C., I called George Marshall. He’s an environmentalist who, like John Neary, tries to educate people about global climate change.

GEORGE MARSHALL: I am the founder of Climate Outreach, and I’m the author of “Don’t Even Think About It: Why Our Brains Are Wired To Ignore Climate Change.”

VEDANTAM: As the book’s title suggests, George believes that the biggest roadblock in the battle against climate change may lie inside the human brain. I called George at his home in Wales.

(SOUNDBITE OF MUSIC)

VEDANTAM: You’ve spent some time talking with Daniel Kahneman, the famous psychologist who won the Nobel Prize in economics. And he actually presented a very pessimistic view of whether we would ever come to terms with the threat of climate change.

MARSHALL: He said to me that we as humans are very poor at dealing with issues further in the future. We tend to be very focused on the short term. We tend to discount – that would be the economic term – to reduce the value of things happening in the future, the further away they are.

He says we’re very cost averse. So that’s to say, when there is a reward, we respond strongly. But when there’s a cost, we prefer to push it away – just as, you know, I myself will try and leave filling in my tax return until the very last minute. I mean, I just don’t want to deal with these things. And he says, well, we’re reluctant to deal with uncertainty.

If things aren’t certain – or we perceive them not to be – we just say, well, come back and tell me when they’re certain. What he said to me was that, in his view, climate change is the worst possible combination, because it’s not only in the future, but it’s also uncertain, and it involves costs.

And his own experiments – and he’s done many, many of these over the years – show that in this combination, we have a very strong tendency just to push things on one side. And I think this in some ways explains how so many people if you ask them will say, yes, I regard climate change to be a threat.

But if you go and you ask them – and this happens every year in surveys – what are the most important issues, strangely, almost everybody seems to forget about climate change. So when we focus on it, we know it’s there, but we can somehow push it away.
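Kahneman’s point about discounting can be made concrete with a toy calculation. The figures below are purely illustrative (they do not come from the interview): a standard exponential discount function shows how a fixed future cost shrinks toward irrelevance the further away it is.

```python
def present_value(cost, years, annual_rate=0.05):
    """Exponential discounting: the perceived present-day weight of a future cost."""
    return cost / (1 + annual_rate) ** years

# How a $1,000 climate-related cost 'feels' today, depending on when it arrives
for years in (0, 10, 30, 50):
    print(f"in {years:2d} years: ${present_value(1000, years):,.2f}")
```

At a 5% discount rate, a cost 50 years away carries less than a tenth of its face value today – one mechanism behind “come back and tell me when it’s certain.”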

VEDANTAM: You tell an amusing story in your book about some colleagues who were worried about a cellphone tower being erected in their neighborhood…

MARSHALL: (Laughter).

VEDANTAM: …And the very, very different reaction of these colleagues to the cellphone tower than to the sort of amorphous threat of climate change.

MARSHALL: They were my neighbors, my entire community. I was living at that time in Oxford, which – as many of your listeners know – is a university town. So it would be like living in, you know, Harvard or Berkeley or somewhere where most of the people were in various ways involved in the university, highly educated. A mobile phone mast was being set up alongside, actually, a school playground – enormous outcry. Everybody mobilized.

Down to the local church hall, they were all going to stop it. People were even going to lay themselves down in front of the bulldozers to prevent it, because it was here. It was now. There was an enemy – this external mobile phone company that was going to come and put up this mast. And it brings in what psychologists would call a dread fear: the fear of radiation, and so on.

Now, the science, if we go back to the core science, says that this mobile phone mast is, as far as we can possibly say, harmless. You know, the amount of radiation of any kind you get off a single mobile phone mast has never been found to have the slightest impact on anyone. But they were very mobilized. At the same time, when I tried to get the same people interested in climate change, none of them would come. It simply didn’t have those qualities.

VEDANTAM: You have a very revealing anecdote in your book about the economist Thomas Schelling, who was once in a major traffic jam.

MARSHALL: So Schelling – again, a Nobel Prize-winning economist – is stuck in traffic and wondering what’s going on. The traffic is moving very, very slowly, they’re creeping along and creeping along, and half an hour along the road, they finally realize what had happened.

There’s a mattress lying right in the middle of the middle lane of the road. What he notices – and he does the same – is that when they reach the mattress, people simply drive past it and keep going. In other words, the thing that had caused them to be delayed was not something that anyone was prepared to stop and remove from the road.

They just leave the mattress there, and then they keep driving past. Because in a way, why would they remove that mattress from the road because they have already paid the price of getting there? They’ve already had the delay. It’s something where the benefit goes to other people. The argument being that, of course, it’s very hard, especially when people are motivated largely through personal rewards, to get them to do things.

VEDANTAM: It’s interesting that the same narrative affects the way we talk about climate change internationally. There are many countries who now say, look, you know, I’ve already paid the price. I’m paying the price right now for the actions of other people for the, you know, things that other people have or have not done.

I’m bearing that cost, and you’re asking me now to get out of my car, pull the mattress off the road to bear an additional cost. And the only people who will benefit from that are people who are not me. The collective problems in the end have personal consequences.

MARSHALL: I have to say that the way one talks about this also shows how interpretation is biased by your own politics or your own worldview. This has been labeled for a long time the tragedy of the commons – the idea being that people will, if it’s in their own self-interest, destroy the very thing that sustains them, because it’s not in their personal interest to act if they don’t see other people doing it. And in a way, it’s understandable.

But of course, that depends on a view of a world where you see people as being motivated entirely by their own personal rewards. We also know that people are motivated by their sense of identity and their sense of belonging. And we know very well not least of all in times of major conflict or war that people are prepared to make enormous personal sacrifices from which they personally derive nothing except loss, but they’re making that in the interests of the greater good.

For a long time with climate change, we’ve made the mistake of talking about this solely in economic terms. What are the economic costs, and what are the economic benefits? And we still do this. But of course, really, the motivation for why we want to act on this is that we want to defend a world we care about and a world we love, and we want to do so for ourselves and for the people who are to come.

VEDANTAM: So, George, there obviously is one domain in life where you can see people constantly placing these sacred values above their selfish self-interest. You know, I’m thinking here about the many, many religions we have in the world that get people to do all kinds of things that an economist would say is not in their rational self-interest.

People give up food. People give up water. People suffer, you know, enormous personal privations. People sometimes choose chastity for life – I mean, huge costs that people are willing to bear. And they’re not doing it because someone says, at the end of the year, I’m going to give you an extra 200 bucks in your paycheck or an extra $2,000 in your paycheck. They’re doing it because they believe these are sacred values that are not negotiable.

MARSHALL: Well, not just economists would find those behaviors strange – Professor Kahneman, or pure cognitive psychology, might as well, because these are people who struggle with, but also believe passionately in, things which are in the long term extremely uncertain and require personal cost. And yet people do so.

It’s very important to stress, you know, when we talk about climate change and religion, that there’s absolutely no sense at all that climate change is or can or should ever be like a religion. It’s not. It’s grounded in science. But we can also learn, I think, a great deal from religions about how to approach these uncertain issues and how to create a community of shared belief and shared conviction that something is important.

VEDANTAM: Right. I mean, if you look at sort of human history with sort of the broad view, you know, you don’t actually have to be a religious person to acknowledge that religion has played a very, very important role in the lives of millions of people over thousands of years.

And if it’s done so, then a scientific approach would say there is something about the nature of religious belief or the practice of religion that harnesses what our brains can accommodate: our yearning to be part of a tribe, our yearning to be connected to deeper and grander values than ourselves, our yearning in some ways to do things for our fellow person in a way that might not be tangible in the here and now but might actually pay off, as you say, not just for future generations but even in the hereafter.

MARSHALL: Well, the faiths that dominate – the half a dozen faiths which are the strongest in the world – are the ones that have been best at doing that. There’s a big mistake with climate change: because it comes from science, we assume it just somehow soaks into us.

It’s very clear that just hitting people over the head with more and more data and graphs isn’t working. On my Internet feed – I’m on all of the main scientific feeds – there is a new paper every day that says that not only is it bad, but it’s worse than we thought, and it’s extremely, extremely serious – so serious, actually, that we’re finding it very hard even to find the words to describe it. That doesn’t move people. In fact, it tends to push them away.

However, if we can understand that there are other things which bind us together, I think that we can find new language. I think it’s also very important to recognize that the divides on climate change are social, not scientific. They’re social and political – the single biggest determinant of whether you accept it or not is your political values.

And that suggests that the solutions to this are not scientific, and maybe not even psychological. They’re cultural. We have to find ways of saying: sure, you know, we are going to disagree on things politically, but we have things in common that we all care about, and those are going to have to bring us together.

VEDANTAM: George Marshall is the author of “Don’t Even Think About It: Why Our Brains Are Wired To Ignore Climate Change.” George, thank you for joining me today on HIDDEN BRAIN.

MARSHALL: You’re very welcome. I enjoyed it. Thank you.

VEDANTAM: The HIDDEN BRAIN podcast is produced by Kara McGuirk-Alison, Maggie Penman and Max Nesterak. Special thanks this week to Daniel Schuken (ph). To continue the conversation about human behavior and climate change, join us on Facebook and Twitter.

If you liked this episode, consider giving us a review on iTunes or wherever you listen to your podcasts so others can find us. I’m Shankar Vedantam, and this is NPR.

Copyright © 2016 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.

How to mend your broken pandemic brain (MIT Technology Review)

technologyreview.com

Life under covid has messed with our brains. Luckily, they were designed to bounce back.

Dana Smith – July 16, 2021


Orgies are back. Or at least that’s what advertisers want you to believe. One commercial for chewing gum—whose sales tanked during 2020 because who cares what your breath smells like when you’re wearing a mask—depicts the end of the pandemic as a raucous free-for-all with people embracing in the streets and making out in parks. 

The reality is a little different. Americans are slowly coming out of the pandemic, but as they reemerge, there’s still a lot of trauma to process. It’s not just our families, our communities, and our jobs that have changed; our brains have changed too. We’re not the same people we were 18 months ago. 

During the winter of 2020, more than 40% of Americans reported symptoms of anxiety or depression, double the rate of the previous year. That number dropped to 30% in June 2021 as vaccinations rose and covid-19 cases fell, but that still leaves nearly one in three Americans struggling with their mental health. In addition to diagnosable symptoms, plenty of people reported experiencing pandemic brain fog, including forgetfulness, difficulty concentrating, and general fuzziness. 

Now the question is, can our brains change back? And how can we help them do that?

How stress affects the brain

Every experience changes your brain, either helping you to gain new synapses—the connections between brain cells—or causing you to lose them. This is known as neuroplasticity, and it’s how our brains develop through childhood and adolescence. Neuroplasticity is how we continue to learn and create memories in adulthood, too, although our brains become less flexible as we get older. The process is vital for learning, memory, and general healthy brain function.

But many experiences also cause the brain to lose cells and connections that you wanted or needed to keep. For instance, stress—something almost everyone experienced during the pandemic—can not only destroy existing synapses but also inhibit the growth of new ones. 

One way stress does this is by triggering the release of hormones called glucocorticoids, most notably cortisol. In small doses, glucocorticoids help the brain and body respond to a stressor (think: fight or flight) by changing heart rate, respiration, inflammation, and more to increase one’s odds of survival. Once the stressor is gone, the hormone levels recede. With chronic stress, however, the stressor never goes away, and the brain remains flooded with the chemicals. In the long term, elevated levels of glucocorticoids can cause changes that may lead to depression, anxiety, forgetfulness, and inattention. 

Scientists haven’t been able to directly study these types of physical brain changes during the pandemic, but they can make inferences from the many mental health surveys conducted over the last 18 months and what they know about stress and the brain from years of previous research.

For example, one study showed that people who experienced financial stressors, like a job loss or economic insecurity, during the pandemic were more likely to develop depression. One of the brain areas hardest hit by chronic stress is the hippocampus, which is important for both memory and mood. These financial stressors would have flooded the hippocampus with glucocorticoids for months, damaging cells, destroying synapses, and ultimately shrinking the region. A smaller hippocampus is one of the hallmarks of depression. 

Chronic stress can also alter the prefrontal cortex, the brain’s executive control center, and the amygdala, the fear and anxiety hub. Too many glucocorticoids for too long can impair the connections both within the prefrontal cortex and between it and the amygdala. As a result, the prefrontal cortex loses its ability to control the amygdala, leaving the fear and anxiety center to run unchecked. This pattern of brain activity (too much action in the amygdala and not enough communication with the prefrontal cortex) is common in people who have post-traumatic stress disorder (PTSD), another condition that spiked during the pandemic, particularly among frontline health-care workers.

The social isolation brought on by the pandemic was also likely detrimental to the brain’s structure and function. Loneliness has been linked to reduced volume in the hippocampus and amygdala, as well as decreased connectivity in the prefrontal cortex. Perhaps unsurprisingly, people who lived alone during the pandemic experienced higher rates of depression and anxiety.

Finally, damage to these brain areas affects people not only emotionally but cognitively as well. Many psychologists have attributed pandemic brain fog to chronic stress’s impact on the prefrontal cortex, where it can impair concentration and working memory.

Reversal time

So that’s the bad news. The pandemic hit our brains hard. These negative changes ultimately come down to a stress-induced decrease in neuroplasticity—a loss of cells and synapses instead of the growth of new ones. But don’t despair; there’s some good news. For many people, the brain can spontaneously recover its plasticity once the stress goes away. If life begins to return to normal, so might our brains.

“In a lot of cases, the changes that occur with chronic stress actually abate over time,” says James Herman, a professor of psychiatry and behavioral neuroscience at the University of Cincinnati. “At the level of the brain, you can see a reversal of a lot of these negative effects.” 

“If you create for yourself a more enriched environment where you have more possible inputs and interactions and stimuli, then [your brain] will respond to that.”

Rebecca Price, associate professor of psychiatry and psychology at the University of Pittsburgh

In other words, as your routine returns to its pre-pandemic state, your brain should too. The stress hormones will recede as vaccinations continue and the anxiety about dying from a new virus (or killing someone else) subsides. And as you venture out into the world again, all the little things that used to make you happy or challenged you in a good way will do so again, helping your brain to repair the lost connections that those behaviors had once built. For example, just as social isolation is bad for the brain, social interaction is especially good for it. People with larger social networks have more volume and connections in the prefrontal cortex, amygdala, and other brain regions.

Even if you don’t feel like socializing again just yet, maybe push yourself a little anyway. Don’t do anything that feels unsafe, but there is an aspect of “fake it till you make it” in treating some mental illness. In clinical speak, it’s called behavioral activation, which emphasizes getting out and doing things even if you don’t want to. At first, you might not experience the same feelings of joy or fun you used to get from going to a bar or a backyard barbecue, but if you stick with it, these activities will often start to feel easier and can help lift feelings of depression.

Rebecca Price, an associate professor of psychiatry and psychology at the University of Pittsburgh, says behavioral activation might work by enriching your environment, which scientists know leads to the growth of new brain cells, at least in animal models. “Your brain is going to react to the environment that you present to it, so if you are in a deprived, not-enriched environment because you’ve been stuck at home alone, that will probably cause some decreases in the pathways that are available,” she says. “If you create for yourself a more enriched environment where you have more possible inputs and interactions and stimuli, then [your brain] will respond to that.” So get off your couch and go check out a museum, a botanical garden, or an outdoor concert. Your brain will thank you.

Exercise can help too. Chronic stress depletes levels of an important chemical called brain-derived neurotrophic factor (BDNF), which helps promote neuroplasticity. Without BDNF, the brain is less able to repair or replace the cells and connections that are lost to chronic stress. Exercise increases levels of BDNF, especially in the hippocampus and prefrontal cortex, which at least partially explains why exercise can boost both cognition and mood. 

Not only does BDNF help new synapses grow, but it may help produce new neurons in the hippocampus, too. For decades, scientists thought that neurogenesis in humans stopped after adolescence, but recent research has shown signs of neuron growth well into old age (though the issue is still hotly contested). Regardless of whether it works through neurogenesis or not, exercise has been shown time and again to improve people’s mood, attention, and cognition; some therapists even prescribe it to treat depression and anxiety. Time to get out there and start sweating.

Turn to treatment

There’s a lot of variation in how people’s brains recover from stress and trauma, and not everyone will bounce back from the pandemic so easily.

“Some people just seem to be more vulnerable to getting into a chronic state where they get stuck in something like depression or anxiety,” says Price. In these situations, therapy or medication might be required.

Some scientists now think that psychotherapy for depression and anxiety works at least in part by changing brain activity, and that getting the brain to fire in new patterns is a first step to getting it to wire in new patterns. A review paper that assessed psychotherapy for different anxiety disorders found that the treatment was most effective in people who displayed more activity in the prefrontal cortex after several weeks of therapy than they did beforehand—particularly when the area was exerting control over the brain’s fear center. 

Other researchers are trying to change people’s brain activity using video games. Adam Gazzaley, a professor of neurology at the University of California, San Francisco, developed the first brain-training game to receive FDA approval for its ability to treat ADHD in kids. The game has also been shown to improve attention span in adults. What’s more, EEG studies revealed greater functional connectivity involving the prefrontal cortex, suggesting a boost in neuroplasticity in the region.

Now Gazzaley wants to use the game to treat people with pandemic brain fog. “We think in terms of covid recovery there’s an incredible opportunity here,” he says. “I believe that attention as a system can help across the breadth of [mental health] conditions and symptoms that people are suffering, especially due to covid.”

While the effects of brain-training games on mental health and neuroplasticity are still up for debate, there’s abundant evidence for the benefits of psychoactive medications. In 1996, psychiatrist Yvette Sheline, now a professor at the University of Pennsylvania, was the first to show that people with depression had significantly smaller hippocampi than non-depressed people, and that the size of that brain region was related to how long and how severely they had been depressed. Seven years later, she found that if people with depression took antidepressants, they had less volume loss in the region.

That discovery shifted many researchers’ perspectives on how traditional antidepressants, particularly selective serotonin reuptake inhibitors (SSRIs), help people with depression and anxiety. As their name suggests, SSRIs target the neurochemical serotonin, increasing its levels in synapses. Serotonin is involved in several basic bodily functions, including digestion and sleep. It also helps to regulate mood, and scientists long assumed that was how the drugs worked as antidepressants. However, recent research suggests that SSRIs may also have a neuroplastic effect by boosting BDNF, especially in the hippocampus, which could help restore healthy brain function in the area. One of the newest antidepressants approved in the US, ketamine, also appears to increase BDNF levels and promote synapse growth in the brain, providing additional support for the neuroplasticity theory. 

The next frontier in pharmaceutical research for mental illness involves experimental psychedelics like MDMA and psilocybin, the active ingredient in hallucinogenic mushrooms. Some researchers think that these drugs also enhance plasticity in the brain and, when paired with psychotherapy, can be a powerful treatment.

Not all the changes to our brains from the past year are negative. Neuroscientist David Eagleman, author of the book Livewired: The Inside Story of the Ever-Changing Brain, says that some of those changes may actually have been beneficial. By forcing us out of our ruts and changing our routines, the pandemic may have caused our brains to stretch and grow in new ways.

“This past 14 months have been full of tons of stress, anxiety, depression—they’ve been really hard on everybody,” Eagleman says. “The tiny silver lining is from the point of view of brain plasticity, because we have challenged our brains to do new things and find new ways of doing things. If we hadn’t experienced 2020, we’d still have an old internal model of the world, and we wouldn’t have pushed our brains to make the changes they’ve already made. From a neuroscience point of view, this is the most important thing you can do—constantly challenge it, build new pathways, find new ways of seeing the world.”


How to help your brain help itself

While everyone’s brain is different, try these activities to give your brain the best chance of recovering from the pandemic.

  1. Get out and socialize. People with larger social networks have more volume and connectivity in the prefrontal cortex, amygdala, and other brain regions.
  2. Try working out. Exercise increases levels of a protein called BDNF that helps promote neuroplasticity and may even contribute to the growth of new neurons.
  3. Talk to a therapist. Therapy can help you view yourself from a different perspective, and changing your thought patterns can change your brain patterns.
  4. Enrich your environment. Get out of your pandemic rut and stimulate your brain with a trip to the museum, a botanical garden, or an outdoor concert.
  5. Take some drugs—but make sure they’re prescribed! Both classic antidepressant drugs, such as SSRIs, and more experimental ones like ketamine and psychedelics are thought to work in part by boosting neuroplasticity.
  6. Strengthen your prefrontal cortex by exercising your self-control. If you don’t have access to an (FDA-approved) attention-boosting video game, meditation can have a similar benefit. 

Human Brain Limit of ‘150 Friends’ Doesn’t Check Out, New Study Claims (Science Alert)

Peter Dockrill – 5 MAY 2021


It’s called Dunbar’s number: an influential and oft-repeated theory suggesting the average person can only maintain about 150 stable social relationships with other people.

Proposed by British anthropologist and evolutionary psychologist Robin Dunbar in the early 1990s, Dunbar’s number, extrapolated from research into primate brain sizes and their social groups, has since become a ubiquitous part of the discourse on human social networks.

But just how legitimate is the science behind Dunbar’s number anyway? According to a new analysis by researchers from Stockholm University in Sweden, Dunbar’s famous figure doesn’t add up.

“The theoretical foundation of Dunbar’s number is shaky,” says zoologist and cultural evolution researcher Patrik Lindenfors.

“Other primates’ brains do not handle information exactly as human brains do, and primate sociality is primarily explained by other factors than the brain, such as what they eat and who their predators are.”

Dunbar’s number was originally predicated on the idea that the volume of the neocortex in primate brains functions as a constraint on the size of the social groups they circulate amongst.

“It is suggested that the number of neocortical neurons limits the organism’s information-processing capacity and that this then limits the number of relationships that an individual can monitor simultaneously,” Dunbar explained in his foundational 1992 study.

“When a group’s size exceeds this limit, it becomes unstable and begins to fragment. This then places an upper limit on the size of groups which any given species can maintain as cohesive social units through time.”

Dunbar began extrapolating the theory to human networks in 1993, and in the decades since has authored and co-authored copious related research output examining the behavioral and cognitive mechanisms underpinning sociality in both humans and other primates.

But as to the original question of whether neocortex size serves as a valid constraint on group size beyond non-human primates, Lindenfors and his team aren’t so sure.

While a number of studies have offered support for Dunbar’s ideas, the new study debunks the claim that neocortex size in primates is equally pertinent to human socialization parameters.

“It is not possible to make an estimate for humans with any precision using available methods and data,” says evolutionary biologist Andreas Wartel.

In their study, the researchers used modern statistical methods including Bayesian and generalized least-squares (GLS) analyses to take another look at the relationship between group size and brain/neocortex sizes in primate brains, with the advantage of updated datasets on primate brains.

The results suggested that stable human group sizes might ultimately be much smaller than 150 individuals – with one analysis suggesting an average limit of up to 42 individuals, and another estimating a range of 70 to 107.

Ultimately, however, the enormous imprecision in the statistics suggests that any method like this – trying to compute an average number of stable relationships for any human individual from brain volume – is unreliable at best.

“Specifying any one number is futile,” the researchers write in their study. “A cognitive limit on human group size cannot be derived in this manner.”
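The kind of extrapolation at issue can be sketched in a few lines. The data points below are invented for illustration (the study used updated primate datasets); the human neocortex ratio of roughly 4.1 is the value Dunbar’s original estimate relied on. Fitting a regression in log-log space to primate points and then extrapolating far beyond them is exactly the step the Stockholm team argues cannot yield a precise human number.

```python
import math
from statistics import mean

# Hypothetical primate data (illustrative only, not the study's dataset):
# (neocortex ratio, mean observed group size)
primates = [(1.5, 8), (2.0, 15), (2.5, 25), (3.0, 45), (3.5, 65)]
xs = [math.log(r) for r, _ in primates]
ys = [math.log(g) for _, g in primates]

# Ordinary least-squares fit in log-log space
xbar, ybar = mean(xs), mean(ys)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

# Extrapolate to a human-like neocortex ratio (~4.1), well outside the data
human_x = math.log(4.1)
predicted = math.exp(intercept + slope * human_x)
print(f"extrapolated 'human group size': {predicted:.0f}")
```

Because the human point lies beyond the range of the fitted data, the prediction interval around such an estimate balloons – which is why the re-analysis found plausible answers anywhere from about 40 to over 100, and concluded that specifying any one number is futile.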

Despite the mainstream attention Dunbar’s number enjoys, the researchers say the majority of primate social evolution research focuses on socio-ecological factors, including foraging and predation, infanticide, and sexual selection – not so much calculations dependent on brain or neocortex volume.

Further, the researchers argue that Dunbar’s number ignores other significant differences in brain physiology between human and non-human primate brains – including that humans develop cultural mechanisms and social structures that can counter socially limiting cognitive factors that might otherwise apply to non-human primates.

“Ecological research on primate sociality, the uniqueness of human thinking, and empirical observations all indicate that there is no hard cognitive limit on human sociality,” the team explains.

“It is our hope, though perhaps futile, that this study will put an end to the use of ‘Dunbar’s number’ within science and in popular media.”

The findings are reported in Biology Letters.

A memory without a brain (ScienceDaily)

How a single cell slime mold makes smart decisions without a central nervous system

Date: February 23, 2021

Source: Technical University of Munich (TUM)

Summary: Researchers have identified how the slime mold Physarum polycephalum saves memories — although it has no nervous system.


Slime mold on dead leaves (stock image). Credit: © Iuliia / stock.adobe.com

Having a memory of past events enables us to take smarter decisions about the future. Researchers at the Max-Planck Institute for Dynamics and Self-Organization (MPI-DS) and the Technical University of Munich (TUM) have now identified how the slime mold Physarum polycephalum saves memories — although it has no nervous system.

The ability to store and recover information gives an organism a clear advantage when searching for food or avoiding harmful environments. Traditionally it has been attributed to organisms that have a nervous system.

A new study authored by Mirna Kramar (MPI-DS) and Prof. Karen Alim (TUM and MPI-DS) challenges this view by uncovering the surprising abilities of a highly dynamic, single-celled organism to store and retrieve information about its environment.

Window into the past

The slime mold Physarum polycephalum has been puzzling researchers for many decades. Existing at the crossroads between the kingdoms of animals, plants and fungi, this unique organism provides insight into the early evolutionary history of eukaryotes — to which also humans belong.

Its body is a giant single cell made up of interconnected tubes that form intricate networks. This single amoeba-like cell may stretch several centimeters or even meters, featuring as the largest cell on earth in the Guinness Book of World Records.

Decision making on the most basic levels of life

The striking abilities of the slime mold to solve complex problems, such as finding the shortest path through a maze, earned it the attribute “intelligent.” It intrigued the research community and kindled questions about decision making on the most basic levels of life.

The decision-making ability of Physarum is especially fascinating given that its tubular network constantly undergoes fast reorganization — growing and disintegrating its tubes — while completely lacking an organizing center.

The researchers discovered that the organism weaves memories of food encounters directly into the architecture of the network-like body and uses the stored information when making future decisions.

The network architecture as a memory of the past

“It is very exciting when a project develops from a simple experimental observation,” says Karen Alim, head of the Biological Physics and Morphogenesis group at the MPI-DS and professor on Theory of Biological Networks at the Technical University of Munich.

When the researchers followed the organism's migration and feeding process, they observed a distinct imprint of a food source on the pattern of thicker and thinner tubes of the network long after feeding.

“Given P. polycephalum’s highly dynamic network reorganization, the persistence of this imprint sparked the idea that the network architecture itself could serve as memory of the past,” says Karen Alim. However, they first needed to explain the mechanism behind the imprint formation.

Decisions are guided by memories

For this purpose the researchers combined microscopic observations of the adaption of the tubular network with theoretical modeling. An encounter with food triggers the release of a chemical that travels from the location where food was found throughout the organism and softens the tubes in the network, making the whole organism reorient its migration towards the food.

“The gradual softening is where the existing imprints of previous food sources come into play and where information is stored and retrieved,” says first author Mirna Kramar. “Past feeding events are embedded in the hierarchy of tube diameters, specifically in the arrangement of thick and thin tubes in the network.”

“For the softening chemical that is now transported, the thick tubes in the network act as highways in traffic networks, enabling quick transport across the whole organism,” adds Mirna Kramar. “Previous encounters imprinted in the network architecture thus weigh into the decision about the future direction of migration.”
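
The mechanism described above can be caricatured as a toy arrival-time model (this is an assumption-laden sketch, not the paper's actual simulation): if the softening chemical travels faster through thicker tubes, then routes reinforced by past feeding deliver the signal first and bias the next migration decision. The network, node names, and the diameter-proportional speed rule are all hypothetical.

```python
import heapq

# Hypothetical network: edges as (node_a, node_b, tube_diameter).
edges = [
    ("food", "hub", 3.0),   # thick tube left over from an earlier meal
    ("food", "side", 1.0),  # thin tube, never reinforced
    ("hub", "tip_a", 2.5),
    ("side", "tip_b", 1.0),
]

def travel_time(diameter, length=1.0):
    # Assume signal speed proportional to tube diameter (illustrative only).
    return length / diameter

def arrival_times(edges, source="food"):
    # Dijkstra shortest-arrival-time from the food source to every node.
    graph = {}
    for a, b, d in edges:
        graph.setdefault(a, []).append((b, travel_time(d)))
        graph.setdefault(b, []).append((a, travel_time(d)))
    times = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        t, node = heapq.heappop(queue)
        if t > times.get(node, float("inf")):
            continue
        for nxt, dt in graph.get(node, []):
            if t + dt < times.get(nxt, float("inf")):
                times[nxt] = t + dt
                heapq.heappush(queue, (t + dt, nxt))
    return times

times = arrival_times(edges)
# The tip behind the thick-tube "highway" hears about food first,
# so past feeding (encoded in diameters) weighs on the next move.
print(times["tip_a"] < times["tip_b"])
```

In this caricature the memory is nothing but the diameter hierarchy itself: no dedicated storage, just geometry shaping where signals arrive first.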

Design based on universal principles

“Given the simplicity of this living network, the ability of Physarum to form memories is intriguing. It is remarkable that the organism relies on such a simple mechanism and yet controls it in such a fine-tuned manner,” says Karen Alim.

“These results present an important piece of the puzzle in understanding the behavior of this ancient organism and at the same time point to universal principles underlying behavior. We envision potential applications of our findings in designing smart materials and building soft robots that navigate through complex environments,” concludes Karen Alim.


Story Source:

Materials provided by Technical University of Munich (TUM). Note: Content may be edited for style and length.


Journal Reference:

  1. Mirna Kramar, Karen Alim. Encoding memory in tube diameter hierarchy of living flow network. Proceedings of the National Academy of Sciences, 2021; 118 (10): e2007815118 DOI: 10.1073/pnas.2007815118

4 effects of racism on children’s brains and bodies, according to Harvard (BBC)

Paula Adamo Idoeta

BBC News Brasil, São Paulo

9 December 2020, 06:01 -03

Child with her mother
Experiencing racism, directly or indirectly, has long-term effects on development, behavior, and physical and mental health

Daily episodes of racism, from being the target of prejudice to watching violence inflicted on other people of the same race, have an effect on children’s health, bodies and brains that is sometimes “invisible” but lasting and cruel.

That is the conclusion of Harvard University’s Center on the Developing Child, which has compiled studies documenting how the everyday experience of structural racism, from its most blatant forms to the subtlest ones, including poorer access to public services, affects children’s “learning, behavior, and physical and mental health.”

In the long run, this results in billions in additional health costs, in the perpetuation of racial disparities, and in greater difficulty for a large share of the population to reach its full human potential and productive capacity.

Although the studies are from the US, statistical data, along with the fact that Brazil also has a history of slavery and inequality, make it possible to draw parallels between the two settings.

In Brazil, recent cases of violence against Black people include that of Beto Freitas, beaten to death inside a Carrefour supermarket in Porto Alegre on November 20, and that of the cousins Emilly, 4, and Rebeca, 7, killed by gunfire while playing in front of their home in Duque de Caxias on December 4.

In Brazil, 54% of the population is Black; in the US, the figure is 13%.

Below are four impacts of the vicious cycle of racism, according to the Harvard document. To discuss its particular features in Brazil, BBC News Brasil interviewed psychologist Cristiane Ribeiro, author of a recent study on how the Black population copes with physical and mental suffering, the subject of her master’s dissertation in the Graduate Program in Health Promotion and Violence Prevention at UFMG.

1. A body in a constant state of alert

Racism and violence within the community (and the lack of support to deal with them) are among what Harvard calls “adverse childhood experiences.” Going through such experiences constantly keeps the brain in a permanent state of alert, producing what is known as “toxic stress.”

“Years of scientific study show that when children’s stress systems stay activated at high levels for long periods, there is significant wear and tear on their developing brains and other biological systems,” says the university’s Center on the Developing Child.

In practice, brain areas dedicated to fear responses, anxiety and impulsive reactions may produce an excess of neural connections, while brain areas dedicated to reasoning, planning and behavioral control produce fewer.

Protest over the death of Beto Freitas, in Porto Alegre, November 20
Protest over the death of Beto Freitas, in Porto Alegre, November 20; watching scenes of violence against people of the same race has a traumatic effect, known as “indirect racism”

“This can have lifelong effects on learning, behavior, and physical and mental health,” the center continues. “A growing body of evidence from the biological and social sciences connects this concept of wear and tear (on the brain) to racism. This research suggests that constantly coping with systemic racism and everyday discrimination is a potent activator of the stress response.”

“Although they may be invisible to people who do not experience them, there is no doubt that systemic racism and interpersonal discrimination can lead to chronic stress activation, imposing significant adversity on families raising young children,” the Harvard document concludes.

2. Greater likelihood of chronic disease throughout life

This exposure to toxic stress is one of the factors that help explain racial differences in the incidence of chronic disease, the Harvard center continues:

“The evidence is overwhelming: Black, Indigenous, and other people of color in the US have, on average, more chronic health problems and shorter lives than white people, at all income levels.”

Some data point to a similar situation in Brazil. Black men and women historically have a higher incidence of diabetes (9% more prevalent in Black men than in white men, and 50% more prevalent in Black women than in white women, according to the Ministry of Health) and of high blood pressure, for example.

The starkest numbers, however, concern gun violence, like the kind that killed the girls Emilly and Rebeca. The Atlas of Violence shows that Black people accounted for 75.7% of homicide victims in Brazil in 2018.

The homicide rate among Black Brazilians is 37.8 per 100,000 inhabitants, compared with 13.9 among non-Black Brazilians.

There is also a possibly higher incidence of mental health problems: of every ten adolescent suicides in 2016, six were Black youths and four were white, according to Ministry of Health research published last year.

“The sickening (caused by experiencing racism) is constant, and we see it in the starkest data, such as violence, but also in depression, in psychological illness and in the high numbers of suicide,” says psychologist Cristiane Ribeiro.

Protest over the death of Beto Freitas
“Although they may be invisible to people who do not experience them, there is no doubt that systemic racism and interpersonal discrimination can lead to chronic stress activation, imposing significant adversity on families raising young children,” says the Harvard document

“And why is this violence so prevalent among Black people? Because we learn that our peers are the worst thing possible, and that the further away from them we are, the better. Children materialize this somehow. We have statistics showing that Black children are hugged less in early childhood education and receive less affection from teachers. (Some) hear from an early age ‘this boy just can’t learn, he’s dumb’ or ‘he was born to be a criminal,’” Ribeiro continues.

Although many manage to overcome this narrative, others have their lives marked by it, says Ribeiro. “I worked for a long time in the juvenile justice system (with young offenders), and these sentences come up again and again: the boy who hears from an early age that he ‘will never amount to anything.’ These are lives sentenced in advance.”

3. Disparities in healthcare and education

The problems described above are compounded by poorer access to public health services, Harvard notes.

“People of color receive unequal treatment when they interact with systems such as healthcare and education, in addition to having less access to high-quality education and health services, to economic opportunities, and to pathways to wealth accumulation,” says the Center on the Developing Child document.

“All of this reflects ways in which the legacy of structural racism in the US disproportionately undermines the health and development of children of color.”

Once again, Brazilian figures point to a similar picture. According to a Ministry of Health survey, 67% of the users of SUS (the Unified Health System) are Black. Yet the Black population proportionally has fewer medical appointments and less prenatal care.

And among the poorest 10% of people in Brazil, 75% are Black or mixed-race.

In education, the disparities persist. Black children aged 0 to 3 have a lower daycare enrollment rate. At the other end of schooling, 53.9% of young people who declare themselves Black finished high school by age 19, which is 20 percentage points below the rate for white youths, according to 2018 data from the Todos Pela Educação movement.

Relatives of the girls Emilly and Rebeca, shot dead, in a meeting with Rio’s acting governor, Claudio Castro
Relatives of the girls Emilly and Rebeca, shot dead, in a meeting with Rio’s acting governor, Claudio Castro; the Atlas of Violence shows that Black people accounted for 75.7% of homicide victims in Brazil in 2018

4. More fragile caregivers and “indirect racism”

The effects of stress are not limited to children: they also extend to parents and caregivers, and, in a boomerang effect, come back to affect children indirectly.

“Multiple studies have documented how the stresses of day-to-day discrimination experienced by parents and other caregivers, such as being associated with negative stereotypes, have harmful effects on those adults’ behavior and mental health,” the Center on the Developing Child continues.

One of the studies supporting this conclusion is a 2018 review of dozens of clinical studies addressing what the researchers call “indirect exposure to racism”: even when children are not the direct target of racist insults or violence, they can be traumatized by witnessing or hearing about events that affected people close to them.

“Especially for minority (racial) children, frequent exposure to indirect racism may force them to cognitively make sense of a world that systematically devalues and marginalizes them,” the researchers conclude.

As effects of this “indirect racism,” the study identified impacts both on caregivers (whose self-esteem was more fragile) and on children, who were born preterm more often, with lower birth weight and greater chances of falling ill over their lifetime or of developing depression.

Childhood, says psychologist Cristiane Ribeiro, is when we begin to build our capacity to believe in our own potential to live in the world. For the Black population, this construction is undermined by racist stereotypes about physical or social characteristics, such as “cabelo pixaim” (kinky hair) or “serviço de preto” (a racist expression for shoddy work).

Man combing a Black girl’s hair
Valuing children and giving them representation has a positive impact on them and, by extension, on their families

“We need more positive references to the Black population as one that is also responsible for Brazil’s social formation. The only representation we have in history textbooks is of an (enslaved) person in chains, in a situation of extreme vulnerability, who is there because they ‘didn’t try hard enough not to be,’” says the researcher.

Even “subtle” acts, such as Black people being followed by security guards in shopping malls or receiving worse service in an ordinary store, which often go unnoticed by white observers, can have devastating effects on self-esteem, Ribeiro continues.

“What we usually call the subtlety of racism is not subtle at all, in my view. When someone shouts ‘monkey’ in the middle of the street, people share the outrage. It is different from the (prejudiced) look that only the targeted person saw and only they noticed. Even for the most empowered activist, fully aware of her rights (because it is a fight without rest), there are days when there is no way around it: that look tears you apart. We talk a lot about the strength of Black women, but what about the right to fragility? Could it be that being fragile is also a privilege?”

How to break the cycle

“Advances in science present an increasingly clear picture of how significant adversity in the lives of young children can affect the development of the brain and other biological systems. These early disruptions can undermine children’s opportunities to achieve their full potential,” says the Harvard document.

But it is possible to break this cycle, while bearing in mind that the ways of fighting it are complex and manifold.

Cristiane Ribeiro
“We talk a lot about the strength of Black women, but what about the right to fragility? Could it be that being fragile is also a privilege?” says Cristiane Ribeiro

“We need to create new strategies to address these inequalities that systematically threaten the health and well-being of young children of color and the adults who care for them. This includes actively seeking out and reducing biases in ourselves and in socioeconomic policies, through initiatives such as fair hiring, credit provision, housing programs, anti-bias training and community policing initiatives,” says Harvard’s Center on the Developing Child.

For Cristiane Ribeiro, fundamental steps in that direction involve more Black representation and more discussion of the subject in schools.

“If I have a school full of Black people or people of different sexual orientations, but that is never said, never addressed, you have the same segregation as in other spaces,” she argues.

“We need to get rid of the idea of the ‘skin-colored pencil.’ There are so many skin colors, so why does a pink pencil represent them? There is also the child with curly hair in a school where only straight hair gets combed. If the teacher manages to treat that hair as affectionately as she treats straight hair, she will change that child’s world, including giving the child the means to respond when her hair is called coarse or ugly. Then she looks in the mirror and sees beauty, a right that is being won only very gradually. The chances are it will make a difference for the whole family. The Black child who says ‘no, mom, my hair is not ugly’ shifts that cycle in that family, of all the women straightening their hair. (…) An affectionate eye on this story breaks the cycle.”

Affection and the building of support networks are also cited by Harvard as ways of easing the burden of toxic stress and building resilience in children and families.

“Science alone clearly cannot meet these challenges, but science-informed thinking, combined with expertise in changing entrenched systems and the lived experience of families raising children under varied conditions, can be a powerful catalyst for effective strategies,” argues the Center on the Developing Child.

How Brazilian education deepens racial inequality and erases Black heroes from Brazil’s history

Do children reproduce racism? The debate that transformed a school in São Paulo

When you’re smiling, the whole world really does smile with you (Science Daily)

Date: August 13, 2020

Source: University of South Australia

Summary: From Sinatra to Katy Perry, celebrities have long sung about the power of a smile — how it picks you up, changes your outlook, and generally makes you feel better. But is it all smoke and mirrors, or is there a scientific backing to the claim? Groundbreaking research confirms that the act of smiling can trick your mind into being more positive, simply by moving your facial muscles.


Smiling friends (stock image). Credit: © fizkes / stock.adobe.com

From Sinatra to Katy Perry, celebrities have long sung about the power of a smile — how it picks you up, changes your outlook, and generally makes you feel better. But is it all smoke and mirrors, or is there a scientific backing to the claim?

Groundbreaking research from the University of South Australia confirms that the act of smiling can trick your mind into being more positive, simply by moving your facial muscles.

With the world in crisis amid COVID-19, and alarming rises of anxiety and depression in Australia and around the world, the findings could not be more timely.

The study, published in Experimental Psychology, evaluated the impact of a covert smile on perception of face and body expressions. In both scenarios, a smile was induced by participants holding a pen between their teeth, forcing their facial muscles to replicate the movement of a smile.

The research found that facial muscular activity not only alters the recognition of facial expressions but also body expressions, with both generating more positive emotions.

Lead researcher and human and artificial cognition expert, UniSA’s Dr Fernando Marmolejo-Ramos says the finding has important insights for mental health.

“When your muscles say you’re happy, you’re more likely to see the world around you in a positive way,” Dr Marmolejo-Ramos says.

“In our research we found that when you forcefully practise smiling, it stimulates the amygdala — the emotional centre of the brain — which releases neurotransmitters to encourage an emotionally positive state.

“For mental health, this has interesting implications. If we can trick the brain into perceiving stimuli as ‘happy’, then we can potentially use this mechanism to help boost mental health.”

The study replicated findings from the ‘covert’ smile experiment by evaluating how people interpret a range of facial expressions (spanning frowns to smiles) using the pen-in-teeth mechanism; it then extended this using point-light motion images (spanning sad walking videos to happy walking videos) as the visual stimuli.

Dr Marmolejo-Ramos says there is a strong link between action and perception.

“In a nutshell, perceptual and motor systems are intertwined when we emotionally process stimuli,” Dr Marmolejo-Ramos says.

“A ‘fake it ‘til you make it’ approach could have more credit than we expect.”


Story Source:

Materials provided by University of South Australia. Note: Content may be edited for style and length.


Journal Reference:

  1. Fernando Marmolejo-Ramos, Aiko Murata, Kyoshiro Sasaki, Yuki Yamada, Ayumi Ikeda, José A. Hinojosa, Katsumi Watanabe, Michal Parzuchowski, Carlos Tirado, Raydonal Ospina. Your Face and Moves Seem Happier When I Smile. Experimental Psychology, 2020; 67 (1): 14 DOI: 10.1027/1618-3169/a000470

Repetitive negative thinking linked to dementia risk (Science Daily)

Date: June 7, 2020

Source: University College London

Summary: Persistently engaging in negative thinking patterns may raise the risk of Alzheimer’s disease, finds a new UCL-led study published in Alzheimer’s & Dementia.

Persistently engaging in negative thinking patterns may raise the risk of Alzheimer’s disease, finds a new UCL-led study.

In the study of people aged over 55, published in Alzheimer’s & Dementia, researchers found ‘repetitive negative thinking’ (RNT) is linked to subsequent cognitive decline as well as the deposition of harmful brain proteins linked to Alzheimer’s.

The researchers say RNT should now be further investigated as a potential risk factor for dementia, and psychological tools, such as mindfulness or meditation, should be studied to see if these could reduce dementia risk.

Lead author Dr Natalie Marchant (UCL Psychiatry) said: “Depression and anxiety in mid-life and old age are already known to be risk factors for dementia. Here, we found that certain thinking patterns implicated in depression and anxiety could be an underlying reason why people with those disorders are more likely to develop dementia.

“Taken alongside other studies, which link depression and anxiety with dementia risk, we expect that chronic negative thinking patterns over a long period of time could increase the risk of dementia. We do not think the evidence suggests that short-term setbacks would increase one’s risk of dementia.

“We hope that our findings could be used to develop strategies to lower people’s risk of dementia by helping them to reduce their negative thinking patterns.”

For the Alzheimer’s Society-supported study, the research team from UCL, INSERM and McGill University studied 292 people over the age of 55 who were part of the PREVENT-AD cohort study, and a further 68 people from the IMAP+ cohort.

Over a period of two years, the study participants responded to questions about how they typically think about negative experiences, focusing on RNT patterns like rumination about the past and worry about the future. The participants also completed measures of depression and anxiety symptoms.

Their cognitive function was assessed, measuring memory, attention, spatial cognition, and language. Some (113) of the participants also underwent PET brain scans, measuring deposits of tau and amyloid, two proteins which cause the most common type of dementia, Alzheimer’s disease, when they build up in the brain.

The researchers found that people who exhibited higher RNT patterns experienced more cognitive decline over a four-year period, including declines in memory (among the earliest signs of Alzheimer’s disease), and were more likely to have amyloid and tau deposits in their brains.

Depression and anxiety were associated with subsequent cognitive decline but not with either amyloid or tau deposition, suggesting that RNT could be the main reason why depression and anxiety contribute to Alzheimer’s disease risk.

“We propose that repetitive negative thinking may be a new risk factor for dementia as it could contribute to dementia in a unique way,” said Dr Marchant.

The researchers suggest that RNT may contribute to Alzheimer’s risk via its impact on indicators of stress such as high blood pressure, as other studies have found that physiological stress can contribute to amyloid and tau deposition.

Co-author Dr Gael Chételat (INSERM and Université de Caen-Normandie) commented: “Our thoughts can have a biological impact on our physical health, which might be positive or negative. Mental training practices such as meditation might help promote positive mental schemes while down-regulating negative ones.

“Looking after your mental health is important, and it should be a major public health priority, as it’s not only important for people’s health and well-being in the short term, but it could also impact your eventual risk of dementia.”

The researchers hope to find out if reducing RNT, possibly through mindfulness training or targeted talk therapy, could in turn reduce the risk of dementia. Dr Marchant and Dr Chételat and other European researchers are currently working on a large project to see if interventions such as meditation may help reduce dementia risk by supporting mental health in old age.

Fiona Carragher, Director of Research and Influencing at Alzheimer’s Society, said: “Understanding the factors that can increase the risk of dementia is vital in helping us improve our knowledge of this devastating condition and, where possible, developing prevention strategies. The link shown between repeated negative thinking patterns and both cognitive decline and harmful deposits is interesting although we need further investigation to understand this better. Most of the people in the study were already identified as being at higher risk of Alzheimer’s disease, so we would need to see if these results are echoed within the general population and if repeated negative thinking increases the risk of Alzheimer’s disease itself.

“During these unstable times, we are hearing from people every day on our Alzheimer’s Society Dementia Connect line who are feeling scared, confused, or struggling with their mental health. So it’s important to point out that this isn’t saying a short-term period of negative thinking will cause Alzheimer’s disease. Mental health could be a vital cog in the prevention and treatment of dementia; more research will tell us to what extent.”


Story Source:

Materials provided by University College London. Note: Content may be edited for style and length.


Journal Reference:

  1. Natalie L. Marchant, Lise R. Lovland, Rebecca Jones, Alexa Pichet Binette, Julie Gonneaud, Eider M. Arenaza-Urquijo, Gael Chételat, Sylvia Villeneuve. Repetitive negative thinking is associated with amyloid, tau, and cognitive decline. Alzheimer’s & Dementia, 2020; DOI: 10.1002/alz.12116

Why did humans evolve such large brains? Because smarter people have more friends (The Conversation)

June 19, 2017 10.01am EDT

Humans are the only ultrasocial creature on the planet. We have outcompeted, interbred or even killed off all other hominin species. We cohabit in cities of tens of millions of people and, despite what the media tell us, violence between individuals is extremely rare. This is because we have an extremely large, flexible and complex “social brain”.

To truly understand how the brain maintains our human intellect, we would need to know about the state of all 86 billion neurons and their 100 trillion interconnections, as well as the varying strengths with which they are connected, and the state of more than 1,000 proteins that exist at each connection point. Neurobiologist Steven Rose suggests that even this is not enough – we would still need to know how these connections have evolved over a person’s lifetime and even the social context in which they had occurred. It may take centuries just to figure out basic neuronal connectivity.

Many people assume that our brain operates like a powerful computer. But Robert Epstein, a psychologist at the American Institute for Behavioural Research and Technology, says this is just shoddy thinking and is holding back our understanding of the human brain. Because, while humans start with senses, reflexes and learning mechanisms, we are not born with any of the information, rules, algorithms or other key design elements that allow computers to behave somewhat intelligently. For instance, computers store exact copies of data that persist for long periods of time, even when the power is switched off. Our brains, meanwhile, are capable of creating false data or false memories, and they only maintain our intellect as long as we remain alive.

We are organisms, not computers

Of course, we can see many advantages in having a large brain. In my recent book on human evolution I suggest it firstly allows humans to exist in a group size of about 150. This builds resilience to environmental changes by increasing and diversifying food production and sharing.

 

As our ancestors got smarter, they became capable of living in larger and larger groups. Mark Maslin, Author provided

A social brain also allows specialisation of skills so individuals can concentrate on supporting childbirth, tool-making, fire setting, hunting or resource allocation. Humans have no natural weapons, but working in large groups and having tools allowed us to become the apex predator, hunting animals as large as mammoths to extinction.

Our social groups are large and complex, but this creates high stress levels for individuals because the rewards in terms of food, safety and reproduction are so great. Hence, Oxford anthropologist Robin Dunbar argues our huge brain is primarily developed to keep track of rapidly changing relationships. It takes a huge amount of cognitive ability to exist in large social groups, and if you fall out of the group you lose access to food and mates and are unlikely to reproduce and pass on your genes.


Great. But what about your soap opera knowledge? ronstik / shutterstock

My undergraduates come to university thinking they are extremely smart as they can do differential equations and understand the use of split infinitives. But I point out to them that almost anyone walking down the street has the capacity to hold the moral and ethical dilemmas of at least five soap operas in their head at any one time. And that is what being smart really means. It is the detailed knowledge of society and the need to track and control the ever-changing relationships between the people around us that has created our huge complex brain.

It seems our brains could be even more flexible than we previously thought. Recent genetic evidence suggests the modern human brain is more malleable and is modelled more by the surrounding environment than that of chimpanzees. The anatomy of the chimpanzee brain is strongly controlled by their genes, whereas the modern human brain is extensively shaped by the environment, no matter what the genetics.

This means the human brain is pre-programmed to be extremely flexible; its cerebral organisation is adjusted by the environment and society in which it is raised. So each new generation’s brain structure can adapt to the new environmental and social challenges without the need to physically evolve.


Evolution at work. OtmarW / shutterstock

This may also explain why we all complain that we do not understand the next generation, as their brains are wired differently, having grown up in a different physical and social environment. An example of this is the ease with which the latest generation interacts with technology, almost as if they had co-evolved with it.

So next time you turn on a computer, just remember how big and complex your brain is, built to keep track of your friends and enemies.

Do you feel things in your gut? Your body has a second brain inside your belly (UOL Saúde)

30/05/2017, 04h00

There is a second brain inside your belly. Getty Images/iStockphoto

You know that brain of yours up in your head? It is not as singular as we imagine, and it gets a lot of help from a partner in controlling our emotions, our mood and our behaviour. That is because the human body has what many call a “second brain”, and in a very special place: our belly.

The “second brain”, as it is informally called, sits along the nine metres of your intestine and gathers millions of neurons. It is actually part of something with a slightly more complicated name: the enteric nervous system.

Inside our intestine there are between 200 and 600 million neurons. Getty Images

Functions even the brain would doubt

One of the main reasons it is considered a brain is the large and complex network of neurons in this system. To give you an idea, there are between 200 million and 600 million neurons there, according to researchers at the University of Melbourne, in Australia, and they work together with the main brain.

“It is as if we had a cat’s brain in our belly. It has 20 different types of neurons, the same diversity found in our big brain, which has 100 billion neurons”

Heribert Watzke, food scientist, during a TED talk

This brain has many functions, and they occur autonomously and in an integrated way with the big brain. It used to be thought that the bigger brain sent signals to command this other brain. In fact, it is the opposite: the brain in our gut sends signals through a great “highway” of neurons to the head, which may or may not accept the suggestions.

“The brain above can interfere with these signals, modifying or inhibiting them. There are hunger signals, which our empty stomach sends to the brain. There are signals that tell us to stop eating when we are full. If the hunger signal is ignored, it can lead to anorexia, for example. More common is to keep on eating even after the signals from our stomach say ‘OK, stop, we have transferred enough energy’,” Watzke adds.

The number of neurons is startling, but it makes sense if we consider the dangers of eating. Like the skin, the intestine has to immediately stop potentially dangerous invaders of our organism, such as bacteria and viruses.

This second brain can trigger diarrhoea or alert its “superior”, which may decide to set off vomiting. It is teamwork, and of vital importance.


Far beyond digestion

Of course, one of its main functions has to do with our digestion and excretion – as if the bigger brain did not want to “dirty its hands”, right? It even controls muscle contractions, the release of chemical substances and the like. The second brain is not used for functions such as thought, religion, philosophy or poetry, but it is linked to our mood.

The enteric nervous system helps us “feel” our inner world and its contents. According to Scientific American, it is likely that a good part of our emotions are influenced by the neurons in our gut.

Ever heard the expression “butterflies in the stomach”? That sensation is one example of this, as a response to psychological stress.

That is why some research is even attempting to treat depression by acting on the neurons of the gut. The enteric nervous system holds 95% of our serotonin (a substance known as one of those responsible for happiness). It may even play a role in autism.

There are also reports of other diseases that may be related to this second brain. A 2010 study in Nature indicated that changes in the functioning of the system might prevent osteoporosis.


Life in the gut

One of the “second brain’s” main functions is defending our body, since it is one of the main controllers of our antibodies. A 2016 study supported by Fapesp showed how neurons communicate with the gut’s immune cells. There is even a “conversation” with microbes, since the nervous system helps dictate which of them may inhabit the intestine.

Research indicates that the second brain’s importance really is enormous. In one study, newborn rats whose stomachs were exposed to an irritating chemical turned out more depressive and anxious than other rats, with the symptoms persisting long after the physical damage. The same did not happen with other kinds of damage, such as a skin irritation.

With all that in mind, I am sure you will look at your viscera differently now, right? Think about it: the next time you are stressed or sad and reach for that fatty comfort food, it may not be only your head’s fault.

Nobody understands what consciousness is or how it works. Nobody understands quantum mechanics either. Could that be more than coincidence? (BBC)

What is going on in our brains? (Credit: Mehau Kulyk/Science Photo Library)


Quantum mechanics is the best theory we have for describing the world at the nuts-and-bolts level of atoms and subatomic particles. Perhaps the most renowned of its mysteries is the fact that the outcome of a quantum experiment can change depending on whether or not we choose to measure some property of the particles involved.

When this “observer effect” was first noticed by the early pioneers of quantum theory, they were deeply troubled. It seemed to undermine the basic assumption behind all science: that there is an objective world out there, irrespective of us. If the way the world behaves depends on how – or if – we look at it, what can “reality” really mean?

The most famous intrusion of the mind into quantum mechanics comes in the “double-slit experiment”

Some of those researchers felt forced to conclude that objectivity was an illusion, and that consciousness has to be allowed an active role in quantum theory. To others, that did not make sense. Surely, Albert Einstein once complained, the Moon does not exist only when we look at it!

Today some physicists suspect that, whether or not consciousness influences quantum mechanics, it might in fact arise because of it. They think that quantum theory might be needed to fully understand how the brain works.

Might it be that, just as quantum objects can apparently be in two places at once, so a quantum brain can hold onto two mutually-exclusive ideas at the same time?

These ideas are speculative, and it may turn out that quantum physics has no fundamental role either for or in the workings of the mind. But if nothing else, these possibilities show just how strangely quantum theory forces us to think.

The famous double-slit experiment (Credit: Victor de Schwanberg/Science Photo Library)


The most famous intrusion of the mind into quantum mechanics comes in the “double-slit experiment”. Imagine shining a beam of light at a screen that contains two closely-spaced parallel slits. Some of the light passes through the slits, whereupon it strikes another screen.

Light can be thought of as a kind of wave, and when waves emerge from two slits like this they can interfere with each other. If their peaks coincide, they reinforce each other, whereas if a peak and a trough coincide, they cancel out. This wave interference produces a series of alternating bright and dark stripes on the back screen, where the light waves are either reinforced or cancelled out.
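The fringe pattern can be sketched numerically. This is a minimal illustration, with the standard two-slit intensity formula and invented wavelength and slit-separation values (not figures from the article):

```python
import numpy as np

# Relative intensity on the back screen of a two-slit experiment:
# I(theta) ∝ cos^2(pi * d * sin(theta) / wavelength), where d is the
# slit separation and theta the viewing angle. Values are illustrative.
wavelength = 500e-9   # metres (green light)
d = 10e-6             # metres (slit separation)

def intensity(theta):
    """Relative brightness at viewing angle theta (radians)."""
    return np.cos(np.pi * d * np.sin(theta) / wavelength) ** 2

# A bright fringe sits at the centre (equal path lengths); a dark fringe
# appears where the path difference d*sin(theta) is half a wavelength.
print(intensity(0.0))                              # bright: 1.0
print(intensity(np.arcsin(wavelength / (2 * d))))  # dark: ~0.0
```

Sweeping `theta` across a range of angles traces out the alternating bright and dark stripes described above.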

The implication seems to be that each particle passes simultaneously through both slits

This behaviour was understood to be characteristic of waves more than 200 years ago, well before quantum theory existed.

The double slit experiment can also be performed with quantum particles like electrons; tiny charged particles that are components of atoms. In a counter-intuitive twist, these particles can behave like waves. That means they can undergo diffraction when a stream of them passes through the two slits, producing an interference pattern.

Now suppose that the quantum particles are sent through the slits one by one, and their arrival at the screen is likewise seen one by one. Now there is apparently nothing for each particle to interfere with along its route – yet nevertheless the pattern of particle impacts that builds up over time reveals interference bands.

The implication seems to be that each particle passes simultaneously through both slits and interferes with itself. This combination of “both paths at once” is known as a superposition state.

But here is the really odd thing.

The double-slit experiment (Credit: GIPhotoStock/Science Photo Library)


If we place a detector inside or just behind one slit, we can find out whether any given particle goes through it or not. In that case, however, the interference vanishes. Simply by observing a particle’s path – even if that observation should not disturb the particle’s motion – we change the outcome.

The physicist Pascual Jordan, who worked with quantum guru Niels Bohr in Copenhagen in the 1920s, put it like this: “observations not only disturb what has to be measured, they produce it… We compel [a quantum particle] to assume a definite position.” In other words, Jordan said, “we ourselves produce the results of measurements.”

If that is so, objective reality seems to go out of the window.

And it gets even stranger.

Particles can be in two states (Credit: Victor de Schwanberg/Science Photo Library)


If nature seems to be changing its behaviour depending on whether we “look” or not, we could try to trick it into showing its hand. To do so, we could measure which path a particle took through the double slits, but only after it has passed through them. By then, it ought to have “decided” whether to take one path or both.

The sheer act of noticing, rather than any physical disturbance caused by measuring, can cause the collapse

An experiment for doing this was proposed in the 1970s by the American physicist John Wheeler, and this “delayed choice” experiment was performed in the following decade. It uses clever techniques to make measurements on the paths of quantum particles (generally, particles of light, called photons) after they should have chosen whether to take one path or a superposition of two.

It turns out that, just as Bohr confidently predicted, it makes no difference whether we delay the measurement or not. As long as we measure the photon’s path before its arrival at a detector is finally registered, we lose all interference.

It is as if nature “knows” not just if we are looking, but if we are planning to look.


Eugene Wigner (Credit: Emilio Segre Visual Archives/American Institute of Physics/Science Photo Library)

Whenever, in these experiments, we discover the path of a quantum particle, its cloud of possible routes “collapses” into a single well-defined state. What’s more, the delayed-choice experiment implies that the sheer act of noticing, rather than any physical disturbance caused by measuring, can cause the collapse. But does this mean that true collapse has only happened when the result of a measurement impinges on our consciousness?

It is hard to avoid the implication that consciousness and quantum mechanics are somehow linked

That possibility was admitted in the 1930s by the Hungarian physicist Eugene Wigner. “It follows that the quantum description of objects is influenced by impressions entering my consciousness,” he wrote. “Solipsism may be logically consistent with present quantum mechanics.”

Wheeler even entertained the thought that the presence of living beings, which are capable of “noticing”, has transformed what was previously a multitude of possible quantum pasts into one concrete history. In this sense, Wheeler said, we become participants in the evolution of the Universe since its very beginning. In his words, we live in a “participatory universe.”

To this day, physicists do not agree on the best way to interpret these quantum experiments, and to some extent what you make of them is (at the moment) up to you. But one way or another, it is hard to avoid the implication that consciousness and quantum mechanics are somehow linked.

Beginning in the 1980s, the British physicist Roger Penrose suggested that the link might work in the other direction. Whether or not consciousness can affect quantum mechanics, he said, perhaps quantum mechanics is involved in consciousness.

Physicist and mathematician Roger Penrose (Credit: Max Alexander/Science Photo Library)


What if, Penrose asked, there are molecular structures in our brains that are able to alter their state in response to a single quantum event? Could not these structures then adopt a superposition state, just like the particles in the double slit experiment? And might those quantum superpositions then show up in the ways neurons are triggered to communicate via electrical signals?

Maybe, says Penrose, our ability to sustain seemingly incompatible mental states is no quirk of perception, but a real quantum effect.

Perhaps quantum mechanics is involved in consciousness

After all, the human brain seems able to handle cognitive processes that still far exceed the capabilities of digital computers. Perhaps we can even carry out computational tasks that are impossible on ordinary computers, which use classical digital logic.

Penrose first proposed that quantum effects feature in human cognition in his 1989 book The Emperor’s New Mind. The idea is called Orch-OR, which is short for “orchestrated objective reduction”. The phrase “objective reduction” means that, as Penrose believes, the collapse of quantum interference and superposition is a real, physical process, like the bursting of a bubble.

Orch-OR draws on Penrose’s suggestion that gravity is responsible for the fact that everyday objects, such as chairs and planets, do not display quantum effects. Penrose believes that quantum superpositions become impossible for objects much larger than atoms, because their gravitational effects would then force two incompatible versions of space-time to coexist.

Penrose developed this idea further with the American anaesthesiologist Stuart Hameroff. In his 1994 book Shadows of the Mind, Penrose suggested that the structures involved in this quantum cognition might be protein strands called microtubules. These are found in most of our cells, including the neurons in our brains. Penrose and Hameroff argue that vibrations of microtubules can adopt a quantum superposition.

But there is no evidence that such a thing is remotely feasible.

Microtubules inside a cell (Credit: Dennis Kunkel Microscopy/Science Photo Library)


It has been suggested that the idea of quantum superpositions in microtubules is supported by experiments described in 2013, but in fact those studies made no mention of quantum effects.

Besides, most researchers think that the Orch-OR idea was ruled out by a study published in 2000. Physicist Max Tegmark calculated that quantum superpositions of the molecules involved in neural signaling could not survive for even a fraction of the time needed for such a signal to get anywhere.

Other researchers have found evidence for quantum effects in living beings

Quantum effects such as superposition are easily destroyed, because of a process called decoherence. This is caused by the interactions of a quantum object with its surrounding environment, through which the “quantumness” leaks away.

Decoherence is expected to be extremely rapid in warm and wet environments like living cells.

Nerve signals are electrical pulses, caused by the passage of electrically-charged atoms across the walls of nerve cells. If one of these atoms was in a superposition and then collided with a neuron, Tegmark showed that the superposition should decay in less than one billion billionth of a second. It takes at least ten thousand trillion times as long for a neuron to discharge a signal.
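Unpacking the figures in this comparison, taken as stated in the text (one billion billionth of a second is 1e-18 s; ten thousand trillion is 1e16):

```python
# Order-of-magnitude check of Tegmark's comparison, using the text's
# round figures as assumptions: the superposition decays in under
# one billion billionth of a second, and a neural signal needs at
# least ten thousand trillion times as long to get anywhere.
decoherence_time = 1e-18             # seconds ("one billion billionth")
factor = 10_000 * 1_000_000_000_000  # ten thousand trillion = 1e16

signal_time = decoherence_time * factor
print(signal_time)  # about 0.01 s, the timescale of a neural signal
```

On these numbers, the superposition is gone some sixteen orders of magnitude before the signal it might have influenced has even finished.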

As a result, ideas about quantum effects in the brain are viewed with great skepticism.

However, Penrose is unmoved by those arguments and stands by the Orch-OR hypothesis. And despite Tegmark’s prediction of ultra-fast decoherence in cells, other researchers have found evidence for quantum effects in living beings. Some argue that quantum mechanics is harnessed by migratory birds that use magnetic navigation, and by green plants when they use sunlight to make sugars in photosynthesis.

Besides, the idea that the brain might employ quantum tricks shows no sign of going away. For there is now another, quite different argument for it.

Could phosphorus sustain a quantum state? (Credit: Phil Degginger/Science Photo Library)


In a study published in 2015, physicist Matthew Fisher of the University of California at Santa Barbara argued that the brain might contain molecules capable of sustaining more robust quantum superpositions. Specifically, he thinks that the nuclei of phosphorus atoms may have this ability.

Phosphorus atoms are everywhere in living cells. They often take the form of phosphate ions, in which one phosphorus atom joins up with four oxygen atoms.

Such ions are the basic unit of energy within cells. Much of the cell’s energy is stored in molecules called ATP, which contain a string of three phosphate groups joined to an organic molecule. When one of the phosphates is cut free, energy is released for the cell to use.

Cells have molecular machinery for assembling phosphate ions into groups and cleaving them off again. Fisher suggested a scheme in which two phosphate ions might be placed in a special kind of superposition called an “entangled state”.

Phosphorus spins could resist decoherence for a day or so, even in living cells

The phosphorus nuclei have a quantum property called spin, which makes them rather like little magnets with poles pointing in particular directions. In an entangled state, the spin of one phosphorus nucleus depends on that of the other.

Put another way, entangled states are really superposition states involving more than one quantum particle.

Fisher says that the quantum-mechanical behaviour of these nuclear spins could plausibly resist decoherence on human timescales. He agrees with Tegmark that quantum vibrations, like those postulated by Penrose and Hameroff, will be strongly affected by their surroundings “and will decohere almost immediately”. But nuclear spins do not interact very strongly with their surroundings.

All the same, quantum behaviour in the phosphorus nuclear spins would have to be “protected” from decoherence.

Quantum particles can have different spins (Credit: Richard Kail/Science Photo Library)


This might happen, Fisher says, if the phosphorus atoms are incorporated into larger objects called “Posner molecules”. These are clusters of six phosphate ions, combined with nine calcium ions. There is some evidence that they can exist in living cells, though this is currently far from conclusive.

I decided… to explore how on earth the lithium ion could have such a dramatic effect in treating mental conditions

In Posner molecules, Fisher argues, phosphorus spins could resist decoherence for a day or so, even in living cells. That means they could influence how the brain works.

The idea is that Posner molecules can be swallowed up by neurons. Once inside, the Posner molecules could trigger the firing of a signal to another neuron, by falling apart and releasing their calcium ions.

Because of entanglement in Posner molecules, two such signals might thus in turn become entangled: a kind of quantum superposition of a “thought”, you might say. “If quantum processing with nuclear spins is in fact present in the brain, it would be an extremely common occurrence, happening pretty much all the time,” Fisher says.

He first got this idea when he started thinking about mental illness.

A capsule of lithium carbonate (Credit: Custom Medical Stock Photo/Science Photo Library)


“My entry into the biochemistry of the brain started when I decided three or four years ago to explore how on earth the lithium ion could have such a dramatic effect in treating mental conditions,” Fisher says.

At this point, Fisher’s proposal is no more than an intriguing idea

Lithium drugs are widely used for treating bipolar disorder. They work, but nobody really knows how.

“I wasn’t looking for a quantum explanation,” Fisher says. But then he came across a paper reporting that lithium drugs had different effects on the behaviour of rats, depending on what form – or “isotope” – of lithium was used.

On the face of it, that was extremely puzzling. In chemical terms, different isotopes behave almost identically, so if the lithium worked like a conventional drug the isotopes should all have had the same effect.

Nerve cells are linked at synapses (Credit: Sebastian Kaulitzki/Science Photo Library)


But Fisher realised that the nuclei of the atoms of different lithium isotopes can have different spins. This quantum property might affect the way lithium drugs act. For example, if lithium substitutes for calcium in Posner molecules, the lithium spins might “feel” and influence those of phosphorus atoms, and so interfere with their entanglement.

We do not even know what consciousness is

If this is true, it would help to explain why lithium can treat bipolar disorder.

At this point, Fisher’s proposal is no more than an intriguing idea. But there are several ways in which its plausibility can be tested, starting with the idea that phosphorus spins in Posner molecules can keep their quantum coherence for long periods. That is what Fisher aims to do next.

All the same, he is wary of being associated with the earlier ideas about “quantum consciousness”, which he sees as highly speculative at best.

Consciousness is a profound mystery (Credit: Sciepro/Science Photo Library)


Physicists are not terribly comfortable with finding themselves inside their theories. Most hope that consciousness and the brain can be kept out of quantum theory, and perhaps vice versa. After all, we do not even know what consciousness is, let alone have a theory to describe it.

We all know what red is like, but we have no way to communicate the sensation

It does not help that there is now a New Age cottage industry devoted to notions of “quantum consciousness”, claiming that quantum mechanics offers plausible rationales for such things as telepathy and telekinesis.

As a result, physicists are often embarrassed to even mention the words “quantum” and “consciousness” in the same sentence.

But setting that aside, the idea has a long history. Ever since the “observer effect” and the mind first insinuated themselves into quantum theory in the early days, it has been devilishly hard to kick them out. A few researchers think we might never manage to do so.

In 2016, Adrian Kent of the University of Cambridge in the UK, one of the most respected “quantum philosophers”, speculated that consciousness might alter the behaviour of quantum systems in subtle but detectable ways.

We do not understand how thoughts work (Credit: Andrzej Wojcicki/Science Photo Library)


Kent is very cautious about this idea. “There is no compelling reason of principle to believe that quantum theory is the right theory in which to try to formulate a theory of consciousness, or that the problems of quantum theory must have anything to do with the problem of consciousness,” he admits.

Every line of thought on the relationship of consciousness to physics runs into deep trouble

But he says that it is hard to see how a description of consciousness based purely on pre-quantum physics can account for all the features it seems to have.

One particularly puzzling question is how our conscious minds can experience unique sensations, such as the colour red or the smell of frying bacon. With the exception of people with visual impairments, we all know what red is like, but we have no way to communicate the sensation and there is nothing in physics that tells us what it should be like.

Sensations like this are called “qualia”. We perceive them as unified properties of the outside world, but in fact they are products of our consciousness – and that is hard to explain. Indeed, in 1995 philosopher David Chalmers dubbed it “the hard problem” of consciousness.

How does our consciousness work? (Credit: Victor Habbick Visions/Science Photo Library)


“Every line of thought on the relationship of consciousness to physics runs into deep trouble,” says Kent.

This has prompted him to suggest that “we could make some progress on understanding the problem of the evolution of consciousness if we supposed that consciousnesses alters (albeit perhaps very slightly and subtly) quantum probabilities.”

“Quantum consciousness” is widely derided as mystical woo, but it just will not go away

In other words, the mind could genuinely affect the outcomes of measurements.

It does not, in this view, exactly determine “what is real”. But it might affect the chance that each of the possible actualities permitted by quantum mechanics is the one we do in fact observe, in a way that quantum theory itself cannot predict. Kent says that we might look for such effects experimentally.

He even bravely estimates the chances of finding them. “I would give credence of perhaps 15% that something specifically to do with consciousness causes deviations from quantum theory, with perhaps 3% credence that this will be experimentally detectable within the next 50 years,” he says.

If that happens, it would transform our ideas about both physics and the mind. That seems a chance worth exploring.

Large human brain evolved as a result of ‘sizing each other up’ (Science Daily)

Date:
August 12, 2016
Source:
Cardiff University
Summary:
Humans have evolved a disproportionately large brain as a result of sizing each other up in large cooperative social groups, researchers have proposed.

The brains of humans enlarged over time thanks to our sizing up the competition, say scientists. Credit: © danheighton / Fotolia

Humans have evolved a disproportionately large brain as a result of sizing each other up in large cooperative social groups, researchers have proposed.

A team led by computer scientists at Cardiff University suggest that the challenge of judging a person’s relative standing and deciding whether or not to cooperate with them has promoted the rapid expansion of human brain size over the last 2 million years.

In a study published in Scientific Reports, the team, which also includes leading evolutionary psychologist Professor Robin Dunbar from the University of Oxford, specifically found that evolution favors those who prefer to help out others who are at least as successful as themselves.

Lead author of the study Professor Roger Whitaker, from Cardiff University’s School of Computer Science and Informatics, said: “Our results suggest that the evolution of cooperation, which is key to a prosperous society, is intrinsically linked to the idea of social comparison — constantly sizing each other up and making decisions as to whether we want to help them or not.

“We’ve shown that over time, evolution favors strategies to help those who are at least as successful as themselves.”

In their study, the team used computer modelling to run hundreds of thousands of simulations, or ‘donation games’, to unravel the complexities of decision-making strategies for simplified humans and to establish why certain types of behaviour among individuals begin to strengthen over time.

In each round of the donation game, two simulated players were randomly selected from the population. The first player then made a decision on whether or not they wanted to donate to the other player, based on how they judged their reputation. If the player chose to donate, they incurred a cost and the receiver was given a benefit. Each player’s reputation was then updated in light of their action, and another game was initiated.
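The round structure described above can be sketched in a few lines. This is a deliberately simplified, hypothetical version: the cost, benefit, reputation update and the “help those at least as successful as yourself” rule are illustrative stand-ins rather than the paper’s actual model, and `play_round`, `COST` and `BENEFIT` are invented names:

```python
import random

COST, BENEFIT = 1, 3  # illustrative values: donating costs 1, receiving gains 3

def play_round(players):
    # Two players are drawn at random; the first decides whether to donate,
    # here using a social-comparison rule: help only recipients whose
    # reputation is at least as high as your own.
    donor, recipient = random.sample(players, 2)
    if recipient["reputation"] >= donor["reputation"]:
        donor["payoff"] -= COST
        recipient["payoff"] += BENEFIT
        donor["reputation"] += 1   # donating improves standing (simplified update)
    else:
        donor["reputation"] -= 1   # refusing harms it

population = [{"reputation": 0, "payoff": 0} for _ in range(100)]
for _ in range(10_000):
    play_round(population)

# Each completed donation adds BENEFIT - COST = 2 to the group's total payoff,
# so the population as a whole gains whenever cooperation persists.
print(sum(p["payoff"] for p in population))
```

Running many such games and letting the most successful strategies reproduce is what lets the simulation show which donation rules strengthen over time.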

Compared to other species, including our closest relatives, chimpanzees, the human brain accounts for a far larger share of body weight. Humans also have the largest cerebral cortex of all mammals, relative to brain size. This area covers the cerebral hemispheres and is responsible for higher functions like memory, communication and thinking.

The research team propose that making relative judgements through helping others has been influential for human survival, and that the complexity of constantly assessing individuals has been a sufficiently difficult task to promote the expansion of the brain over many generations of human reproduction.

Professor Robin Dunbar, who previously proposed the social brain hypothesis, said: “According to the social brain hypothesis, the disproportionately large brain size in humans exists as a consequence of humans evolving in large and complex social groups.

“Our new research reinforces this hypothesis and offers an insight into the way cooperation and reward may have been instrumental in driving brain evolution, suggesting that the challenge of assessing others could have contributed to the large brain size in humans.”

According to the team, the research could also have future implications in engineering, specifically where intelligent and autonomous machines need to decide how generous they should be towards each other during one-off interactions.

“The models we use can be executed as short algorithms called heuristics, allowing devices to make quick decisions about their cooperative behaviour,” Professor Whitaker said.

“New autonomous technologies, such as distributed wireless networks or driverless cars, will need to self-manage their behaviour but at the same time cooperate with others in their environment.”


Journal Reference:

  1. Roger M. Whitaker, Gualtiero B. Colombo, Stuart M. Allen, Robin I. M. Dunbar. A Dominant Social Comparison Heuristic Unites Alternative Mechanisms for the Evolution of Indirect Reciprocity. Scientific Reports, 2016; 6: 31459. DOI: 10.1038/srep31459

‘Neuroscience studies have superseded psychoanalysis,’ says Brazilian researcher (Folha de S.Paulo)

Juliana Cunha, 18.06.2016

With a 60-year career, 22,794 citations in journals, 60 awards and 710 published articles, Ivan Izquierdo, 78, is the most cited neuroscientist, and one of the most respected, in Latin America. Born in Argentina, he has lived in Brazil for 40 years and became a naturalized Brazilian in 1981. He currently coordinates the Memory Center of the Brain Institute at PUC-RS.

His research has helped clarify the different types of memory and demystify the idea that specific areas of the brain are dedicated exclusively to one type of activity.

He spoke to Folha during the World Congress on Brain, Behavior and Emotions, held this week in Buenos Aires. Izquierdo was the honoree of this edition of the congress.

In the interview, the scientist discusses the usefulness of traumatic memories and his skepticism about methods that promise to erase recollections, and argues that psychoanalysis has been superseded by neuroscience studies and today works as a mere aesthetic exercise.

Bruno Todeschini
The neuroscientist Ivan Izquierdo during the congress in Buenos Aires

*

Folha – Is it possible to erase memories?
Ivan Izquierdo – It is possible to prevent a memory from being expressed, yes. It is normal, even human, to avoid the expression of certain memories. When a given memory goes unused, the synapse that holds it falls into disuse and gradually atrophies.

Beyond that, no. There is no technique for picking out memories and then erasing them, not least because the same information is stored several times in the brain, through a mechanism we call plasticity. Talk of erasing memories is pyrotechnics, the stuff of media and movies.

You work extensively with fear memories. Is being unable to erase them a misfortune or something to celebrate?
Fear memory is what keeps us alive. It is the memory that can be accessed most quickly, and it is the most useful. Every time you go through a threatening situation, the essential information the brain needs to store is that the thing is dangerous. People want to erase fear memories because they are often uncomfortable, but if those memories were not there, we would put ourselves in bad situations.

Of course this process causes enormous stress. Just to get around a city, my brain activates countless fear memories. Between having them and not having them, I prefer to have them; they are what got me this far. But if we can reduce our exposure to risk, so much the better. The problem is often the stimulus, not the fear response.

But some fear memories are paralyzing, and can be riskier than the situations they help avoid. How should we deal with them?
Better stopped than dead. The brain acts to preserve us; that is the priority. Of course this mechanism is prone to failure. If we judge that the response to a fear memory is exaggerated, we can try to get the brain to give the stimulus a new meaning. It is possible, for example, to expose the patient repeatedly to the stimuli that created the memory, but without the trauma. That dissociates the experience from the fear.

Isn’t that similar to what Freud tried to do with phobias?
Yes, Freud was one of the first to use extinction in the treatment of phobias, even though he did not exactly believe in extinction. With extinction the memory persists, it is not erased, but the trauma is no longer there.

But many neuroscientists consider Freud outdated.
Every theory ages. Freud is a great reference and made important contributions. But psychoanalysis has been superseded by neuroscience studies; it belongs to a time when we had no way to run tests and see what was happening in the brain. Today someone comes to me talking about the unconscious? Where is it located? I am a scientist; I cannot believe in something just because it is interesting.

To me, psychoanalysis today is an aesthetic exercise, not a health treatment. If people enjoy it, fine, it does no harm, but it is a shame when someone with a real, treatable problem stops seeking medical treatment in the belief that psychoanalysis is an alternative.

And what about schools of analysis other than the Freudian one?
Cognitive therapy, certainly. There are ways of getting a subject to change their response to a stimulus.

You came to Brazil because of the dictatorship in Argentina. Now Brazil is going through a process that some call a coup; it is a memory in dispute. What do you make of that, as a scientist?
I came because of a threat. I do not consider this a coup, but it is a very shrewd process. Changing a single word gives new meaning to an entire memory. There is indeed a dispute over how that collective memory will be constructed. The left uses the word coup to evoke fear memories in a country that has already lived through a coup. As the word is repeated, it builds a powerful effect. We still do not know how this memory will be consolidated, but the strategy is very shrewd.

The journalist JULIANA CUNHA traveled at the invitation of the World Congress on Brain, Behavior and Emotions

Reptiles show brain activity typical of human dreams, study reveals (Folha de S.Paulo)

Sleeping dragon (Pogona vitticeps). [Credit: Dr. Stephan Junek, Max Planck Institute for Brain Research]
The study shows that lizards reach a sleep stage that, in humans, allows dreams to emerge

REINALDO JOSÉ LOPES
CONTRIBUTING TO FOLHA

28/04/2016 14h56

Do lizards dream of scaly sheep? No one has yet managed to see in enough detail what goes on inside these animals’ brains to answer that question, but a new study reveals that the pattern of brain activity typical of human dreams also arises in these reptiles when they sleep.

It is the so-called REM sleep (the acronym for “rapid eye movement”), which until now seemed to be exclusive to mammals like us and to birds. However, analysis of the brain activity of an Australian lizard, the bearded dragon (Pogona vitticeps), indicates that over the course of the night the animal’s brain alternates between REM sleep and slow-wave sleep (roughly speaking, deep, dreamless sleep), in a pattern similar, though not identical, to the one observed in humans.

Led by Gilles Laurent of the Max Planck Institute for Brain Research, in Germany, the study is being published in the journal Science. “Laurent doesn’t fool around,” says Sidarta Ribeiro, a researcher at UFRN (Federal University of Rio Grande do Norte) and one of the world’s leading specialists in the neurobiology of sleep and dreams. “It was a very clear demonstration of the phenomenon.”

The methodology used to check what was happening in the reptilian brain was not exactly rocket science. Five specimens of the species received electrode implants in the brain, and at bedtime their behavior was monitored with infrared cameras, ideal for “seeing in the dark.” The animals usually slept between six and ten hours a night, in a cycle the Max Planck scientists could more or less control, since they were the ones who switched the lights on and off and regulated the temperature of the enclosure.

What the researchers were measuring was the variation in the electrical activity of the bearded dragons’ brains during the night. These oscillations produce the wave patterns already familiar from sleep in humans and other mammals, for example.

The findings reported in the new study were only possible because of its level of detail, says Suzana Herculano-Houzel, a neuroscientist at UFRJ (Federal University of Rio de Janeiro) and a Folha columnist. “Earlier, less fine-grained studies had no way to detect REM sleep because in these animals the alternation between the two types of sleep is extremely fast, every 80 seconds,” explains the researcher, who had already seen Laurent present the data at a scientific conference. In humans the cycles are much slower, lasting 90 minutes on average.

Besides the similarity in the pattern of brain activity, reptile REM sleep also correlates clearly with the eye movements that give it its name (which vaguely resemble the way an awake person moves their eyes), as the infrared footage showed.

TO SLEEP, PERCHANCE TO DREAM

The first implication of the findings is evolutionary. Although sleeping appears to be a universal behavior in the animal kingdom, REM sleep (and perhaps dreaming) seemed to be the preserve of species with supposedly more complex brains. “For anyone studying the mechanisms of sleep, it is a fundamental study,” says Suzana.

As it happens, both mammals and birds descend from primitive groups associated with reptiles, but at very different moments in the planet’s history: mammals had already been walking the Earth for tens of millions of years when a group of small carnivorous dinosaurs gave rise to birds. In other words, in theory, mammals and birds would have had to “learn to dream” entirely independently. The finding “resolves this paradox,” says Ribeiro: REM sleep would already have been present in the common ancestor of all these vertebrates.

The work of the Brazilian researcher and of other specialists around the world has shown that both types of sleep are fundamental for “sculpting” memories in the brain, at once strengthening what is relevant and discarding what is not important. Without the alternating cycles of brain activity, the learning capacity of animals and humans would be seriously impaired.

Both Ribeiro and Suzana, however, say it is still too early to state that lizards or other animals dream the way we do. “Maybe one day someone will run magnetic resonance imaging on sleeping lizards and see whether they show the same reactivation of sensory areas seen in humans during REM sleep,” she says. “Of course dog owners are certain their pets dream, but the ideal would be to decode the neural signal,” a technique that makes it possible to know what a person imagines seeing while dreaming and that has already been applied successfully by Japanese scientists.

A single-celled organism capable of learning (Science Daily)

Date:
April 27, 2016
Source:
CNRS
Summary:
For the first time, scientists have demonstrated that an organism devoid of a nervous system is capable of learning. Biologists have succeeded in showing that a single-celled organism, the protist, is capable of a type of learning called habituation. This discovery throws light on the origins of learning ability during evolution, even before the appearance of a nervous system and brain. It may also raise questions as to the learning capacities of other extremely simple organisms such as viruses and bacteria.

The slime mold Physarum polycephalum (diameter: around 10 centimeters), made up of a single cell, was here cultivated in the laboratory on agar gel. Credit: Audrey Dussutour (CNRS)

For the first time, scientists have demonstrated that an organism devoid of a nervous system is capable of learning. A team from the Centre de Recherches sur la Cognition Animale (CNRS/Université Toulouse III — Paul Sabatier) has succeeded in showing that a single-celled organism, the protist Physarum polycephalum, is capable of a type of learning called habituation. This discovery throws light on the origins of learning ability during evolution, even before the appearance of a nervous system and brain. It may also raise questions as to the learning capacities of other extremely simple organisms such as viruses and bacteria. These findings are published in the Proceedings of the Royal Society B on 27 April 2016.

The ability to learn and to remember are key elements in the animal world. Learning from experiences and adapting behavior accordingly are vital for an animal living in a fluctuating and potentially dangerous environment. This faculty is generally considered to be the prerogative of organisms endowed with a brain and nervous system. However, single-celled organisms also need to adapt to change. Do they display an ability to learn? Bacteria certainly show adaptability, but it takes several generations to develop and is more a result of evolution. A team of biologists thus sought to find proof that a single-celled organism could learn. They chose to study the protist, or slime mold, Physarum polycephalum, a giant cell that inhabits shady, cool areas[1] and has proved to be endowed with some astonishing abilities, such as solving a maze, avoiding traps or optimizing its nutrition[2]. But until now very little was known about its ability to learn.

During a nine-day experiment, the scientists thus challenged different groups of this mold with bitter but harmless substances that they needed to pass through in order to reach a food source. Two groups were confronted with a “bridge” impregnated either with quinine or with caffeine, while the control group only needed to cross a non-impregnated bridge. Initially reluctant to travel through the bitter substances, the molds gradually realized that they were harmless, and crossed them increasingly rapidly — behaving after six days in the same way as the control group. The cell thus learned not to fear a harmless substance after being confronted with it on several occasions, a phenomenon that the scientists refer to as habituation. After two days without contact with the bitter substance, the mold returned to its initial behavior of distrust. Furthermore, a protist habituated to caffeine displayed distrustful behavior towards quinine, and vice versa. Habituation was therefore clearly specific to a given substance.

Habituation is a form of rudimentary learning, which has been characterized in Aplysia (an invertebrate also called sea hare)[3]. This form of learning exists in all animals, but had never previously been observed in a non-neural organism. This discovery in a slime mold, a distant cousin of plants, fungi and animals that appeared on Earth some 500 million years before humans, improves existing understanding of the origins of learning, which markedly preceded those of nervous systems. It also offers an opportunity to study learning types in other very simple organisms, such as viruses or bacteria.

[1] This single cell, which contains thousands of nuclei, can cover an area of around a square meter and moves within its environment at speeds that can reach 5 cm per hour.

[2] See “Even single-celled organisms feed themselves in a ‘smart’ manner.” https://www.sciencedaily.com/releases/2010/02/100210164712.htm

[3] Mild tactile stimulation of the animal’s siphon normally causes the defensive reflex of withdrawing the branchiae. If the harmless tactile stimulation is repeated, this reflex diminishes and finally disappears, thus indicating habituation.


Journal Reference:

  1. Romain P. Boisseau, David Vogel, Audrey Dussutour. Habituation in non-neural organisms: evidence from slime moulds. Proceedings of the Royal Society B: Biological Sciences, 2016; 283 (1829): 20160446. DOI: 10.1098/rspb.2016.0446

The Boy Whose Brain Could Unlock Autism (Matter)

 

Autism changed Henry Markram’s family. Now his Intense World theory could transform our understanding of the condition.


SOMETHING WAS WRONG with Kai Markram. At five days old, he seemed like an unusually alert baby, picking his head up and looking around long before his sisters had done. By the time he could walk, he was always in motion and required constant attention just to ensure his safety.

“He was super active, batteries running nonstop,” says his sister, Kali. And it wasn’t just boyish energy: When his parents tried to set limits, there were tantrums—not just the usual kicking and screaming, but biting and spitting, with a disproportionate and uncontrollable ferocity; and not just at age two, but at three, four, five and beyond. Kai was also socially odd: Sometimes he was withdrawn, but at other times he would dash up to strangers and hug them.

Things only got more bizarre over time. No one in the Markram family can forget the 1999 trip to India, when they joined a crowd gathered around a snake charmer. Without warning, Kai, who was five at the time, darted out and tapped the deadly cobra on its head.

Coping with such a child would be difficult for any parent, but it was especially frustrating for his father, one of the world’s leading neuroscientists. Henry Markram is the man behind Europe’s $1.3 billion Human Brain Project, a gargantuan research endeavor to build a supercomputer model of the brain. Markram knows as much about the inner workings of our brains as anyone on the planet, yet he felt powerless to tackle Kai’s problems.

“As a father and a neuroscientist, you realize that you just don’t know what to do,” he says. In fact, Kai’s behavior—which was eventually diagnosed as autism—has transformed his father’s career, and helped him build a radical new theory of autism: one that upends the conventional wisdom. And, ironically, his sideline may pay off long before his brain model is even completed.

IMAGINE BEING BORN into a world of bewildering, inescapable sensory overload, like a visitor from a much darker, calmer, quieter planet. Your mother’s eyes: a strobe light. Your father’s voice: a growling jackhammer. That cute little onesie everyone thinks is so soft? Sandpaper with diamond grit. And what about all that cooing and affection? A barrage of chaotic, indecipherable input, a cacophony of raw, unfilterable data.

Just to survive, you’d need to be excellent at detecting any pattern you could find in the frightful and oppressive noise. To stay sane, you’d have to control as much as possible, developing a rigid focus on detail, routine and repetition. Systems in which specific inputs produce predictable outputs would be far more attractive than human beings, with their mystifying and inconsistent demands and their haphazard behavior.

This, Markram and his wife, Kamila, argue, is what it’s like to be autistic.

They call it the “intense world” syndrome.

The behavior that results is not due to cognitive deficits—the prevailing view in autism research circles today—but the opposite, they say. Rather than being oblivious, autistic people take in too much and learn too fast. While they may appear bereft of emotion, the Markrams insist they are actually overwhelmed not only by their own emotions, but by the emotions of others.

Consequently, the brain architecture of autism is not just defined by its weaknesses, but also by its inherent strengths. The developmental disorder now believed to affect around 1 percent of the population is not characterized by lack of empathy, the Markrams claim. Social difficulties and odd behavior result from trying to cope with a world that’s just too much.

After years of research, the couple came up with their label for the theory during a visit to the remote area where Henry Markram was born, in the South African part of the Kalahari desert. He says “intense world” was Kamila’s phrase; she says she can’t recall who hit upon it. But he remembers sitting in the rust-colored dunes, watching the unusual swaying yellow grasses while contemplating what it must be like to be inescapably flooded by sensation and emotion.

That, he thought, is what Kai experiences. The more he investigated the idea of autism not as a deficit of memory, emotion and sensation, but an excess, the more he realized how much he himself had in common with his seemingly alien son.


HENRY MARKRAM IS TALL, with intense blue eyes, sandy hair and the air of unmistakable authority that goes with the job of running a large, ambitious, well-funded research project. It’s hard to see what he might have in common with a troubled, autistic child. He rises most days at 4 a.m. and works for a few hours in his family’s spacious apartment in Lausanne before heading to the institute, where the Human Brain Project is based. “He sleeps about four or five hours,” says Kamila. “That’s perfect for him.”

As a small child, Markram says, he “wanted to know everything.” But his first few years of high school were mostly spent “at the bottom of the F class.” A Latin teacher inspired him to pay more attention to his studies, and when a beloved uncle became profoundly depressed and died young—he was only in his 30s, but “just went downhill and gave up”—Markram turned a corner. He’d recently been given an assignment about brain chemistry, which got him thinking. “If chemicals and the structure of the brain can change and then I change, who am I? It’s a profound question. So I went to medical school and wanted to become a psychiatrist.”

Markram attended the University of Cape Town, but in his fourth year of medical school, he took a fellowship in Israel. “It was like heaven,” he says, “It was all the toys that I ever could dream of to investigate the brain.” He never returned to med school, and married his first wife, Anat, an Israeli, when he was 26. Soon, they had their first daughter, Linoy, now 24, then a second, Kali, now 23. Kai came four years afterwards.

During graduate research at the Weizmann Institute in Israel, Markram made his first important discovery, elucidating a key relationship between two neurotransmitters involved in learning, acetylcholine and glutamate. The work was important and impressive—especially so early in a scientist’s career—but it was what he did next that really made his name.

During a postdoc with Nobel laureate Bert Sakmann at Germany’s Max Planck Institute, Markram showed how brain cells that “fire together, wire together.” That had been a basic tenet of neuroscience since the 1940s—but no one had been able to figure out how the process actually worked.

By studying the precise timing of electrical signaling between neurons, Markram demonstrated that firing in specific patterns increases the strength of the synapses linking cells, while missing the beat weakens them. This simple mechanism allows the brain to learn, forging connections both literally and figuratively between various experiences and sensations—and between cause and effect.
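The timing-dependent rule described above is known in the field as spike-timing-dependent plasticity (STDP). A minimal sketch, assuming the standard pair-based exponential form with illustrative parameter values (not Markram's actual experimental protocol or numbers):

```python
import math

def stdp_weight_change(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP rule (illustrative parameters).

    dt_ms is the postsynaptic spike time minus the presynaptic spike time.
    If the presynaptic cell fires just before the postsynaptic one (dt > 0),
    the synapse strengthens; if it fires just after (dt < 0), it weakens.
    The effect decays exponentially as the two spikes move apart in time.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)    # potentiation
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_ms)   # depression
    return 0.0

# Pre fires 5 ms before post: the connection strengthens.
# Pre fires 5 ms after post ("missing the beat"): it weakens.
print(stdp_weight_change(5.0), stdp_weight_change(-5.0))
```

The sign flip around zero captures the causal logic in the text: spike pairs that could reflect cause and effect get wired more strongly, while acausal pairs are pruned.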

Measuring these fine temporal distinctions was also a technical triumph. Sakmann won his 1991 Nobel for developing the required “patch clamp” technique, which measures the tiny changes in electrical activity inside nerve cells. To patch just one neuron, you first harvest a sliver of brain, about 1/3 of a millimeter thick and containing around 6 million neurons, typically from a freshly guillotined rat.

To keep the tissue alive, you bubble it in oxygen, and bathe the slice of brain in a laboratory substitute for cerebrospinal fluid. Under a microscope, using a minuscule glass pipette, you carefully pierce a single cell. The technique is similar to injecting a sperm into an egg for in vitro fertilization—except that neurons are hundreds of times smaller than eggs.

It requires steady hands and exquisite attention to detail. Markram’s ultimate innovation was to build a machine that could study 12 such carefully prepared cells simultaneously, measuring their electrical and chemical interactions. Researchers who have done it say you can sometimes go a whole day without getting one right—but Markram became a master.

Still, there was a problem. He seemed to go from one career peak to another—a Fulbright at the National Institutes of Health, tenure at Weizmann, publication in the most prestigious journals—but at the same time it was becoming clear that something was not right in his youngest child’s head. He studied the brain all day, but couldn’t figure out how to help Kai learn and cope. As he told a New York Times reporter earlier this year, “You know how powerless you feel. You have this child with autism and you, even as a neuroscientist, really don’t know what to do.”


AT FIRST, MARKRAM THOUGHT Kai had attention deficit/ hyperactivity disorder (ADHD): Once Kai could move, he never wanted to be still. “He was running around, very difficult to control,” Markram says. As Kai grew, however, he began melting down frequently, often for no apparent reason. “He became more particular, and he started to become less hyperactive but more behaviorally difficult,” Markram says. “Situations were very unpredictable. He would have tantrums. He would be very resistant to learning and to any kind of instruction.”

Preventing Kai from harming himself by running into the street or following other capricious impulses was a constant challenge. Even just trying to go to the movies became an ordeal: Kai would refuse to enter the cinema or hold his hands tightly over his ears.

However, Kai also loved to hug people, even strangers, which is one reason it took years to get a diagnosis. That warmth made many experts rule out autism. Only after multiple evaluations was Kai finally diagnosed with Asperger syndrome, a type of autism that includes social difficulties and repetitive behaviors, but not lack of speech or profound intellectual disability.

“We went all over the world and had him tested, and everybody had a different interpretation,” Markram says. As a scientist who prizes rigor, this infuriated him. He’d left medical school to pursue neuroscience because he disliked psychiatry’s vagueness. “I was very disappointed in how psychiatry operates,” he says.

Over time, trying to understand Kai became Markram’s obsession.

It drove what he calls his “impatience” to model the brain: He felt neuroscience was too piecemeal and could not progress without bringing more data together. “I wasn’t satisfied with understanding fragments of things in the brain; we have to understand everything,” he says. “Every molecule, every gene, every cell. You can’t leave anything out.”

This impatience also made him decide to study autism, beginning by reading every study and book he could get his hands on. At the time, in the 1990s, the condition was getting increased attention. The diagnosis had only been introduced into the psychiatric bible, then the DSM III, in 1980. The 1988 Dustin Hoffman film Rain Man, about an autistic savant, brought the idea that autism was both a disability and a source of quirky intelligence into the popular imagination.

The dark days of the mid–20th century, when autism was thought to be caused by unloving “refrigerator mothers” who icily rejected their infants, were long past. However, while experts now agree that the condition is neurological, its causes remain unknown.

The most prominent theory suggests that autism results from problems with the brain’s social regions, which results in a deficit of empathy. This “theory of mind” concept was developed by Uta Frith, Alan Leslie, and Simon Baron-Cohen in the 1980s. They found that autistic children are late to develop the ability to distinguish between what they know themselves and what others know—something that other children learn early on.

In a now famous experiment, children watched two puppets, “Sally” and “Anne.” Sally has a marble, which she places in a basket and then leaves. While she’s gone, Anne moves Sally’s marble into a box. By age four or five, normal children can predict that Sally will look for the marble in the basket first because she doesn’t know that Anne moved it. But until they are much older, most autistic children say that Sally will look in the box because they know it’s there. While typical children automatically adopt Sally’s point of view and know she was out of the room when Anne hid the marble, autistic children have much more difficulty thinking this way.

The researchers linked this “mind blindness”—a failure of perspective-taking—to their observation that autistic children don’t engage in make-believe. Instead of pretending together, autistic children focus on objects or systems—spinning tops, arranging blocks, memorizing symbols, or becoming obsessively involved with mechanical items like trains and computers.

This apparent social indifference was viewed as central to the condition. Unfortunately, the theory also seemed to imply that autistic people are uncaring because they don’t easily recognize that other people exist as intentional agents who can be loved, thwarted or hurt. But while the Sally-Anne experiment shows that autistic people have difficulty knowing that other people have different perspectives—what researchers call cognitive empathy or “theory of mind”—it doesn’t show that they don’t care when someone is hurt or feeling pain, whether emotional or physical. In terms of caring—technically called affective empathy—autistic people aren’t necessarily impaired.

Sadly, however, the two different kinds of empathy are combined in one English word. And so, since the 1980s, this idea that autistic people “lack empathy” has taken hold.

“When we looked at the autism field we couldn’t believe it,” Markram says. “Everybody was looking at it as if they have no empathy, no theory of mind. And actually Kai, as awkward as he was, saw through you. He had a much deeper understanding of what really was your intention.” And he wanted social contact.

 The obvious thought was: Maybe Kai’s not really autistic? But by the time Markram was fully up to speed in the literature, he was convinced that Kai had been correctly diagnosed. He’d learned enough to know that the rest of his son’s behavior was too classically autistic to be dismissed as a misdiagnosis, and there was no alternative condition that explained as much of his behavior and tendencies. And accounts by unquestionably autistic people, like bestselling memoirist and animal scientist Temple Grandin, raised similar challenges to the notion that autistic people could never really see beyond themselves.

Markram began to do autism work himself as visiting professor at the University of California, San Francisco in 1999. Colleague Michael Merzenich, a neuroscientist, proposed that autism is caused by an imbalance between inhibitory and excitatory neurons. A failure of inhibitions that tamp down impulsive actions might explain behavior like Kai’s sudden move to pat the cobra. Markram started his research there.


MARKRAM MET HIS second wife, Kamila Senderek, at a neuroscience conference in Austria in 2000. He was already separated from Anat. “It was love at first sight,” Kamila says.

Her parents left communist Poland for West Germany when she was five. When she met Markram, she was pursuing a master’s in neuroscience at the Max Planck Institute. When Markram moved to Lausanne to start the Human Brain Project, she began studying there as well.

Tall like her husband, with straight blonde hair and green eyes, Kamila wears a navy twinset and jeans when we meet in her open-plan office overlooking Lake Geneva. There, in addition to autism research, she runs the world’s fourth largest open-access scientific publishing firm, Frontiers, with a network of over 35,000 scientists serving as editors and reviewers. She laughs when I observe a lizard tattoo on her ankle, a remnant of an adolescent infatuation with The Doors.

When asked whether she had ever worried about marrying a man whose child had severe behavioral problems, she responds as though the question never occurred to her. “I knew about the challenges with Kai,” she says, “Back then, he was quite impulsive and very difficult to steer.”

The first time they spent a day together, Kai was seven or eight. “I probably had some blue marks and bites on my arms because he was really quite something. He would just go off and do something dangerous, so obviously you would have to get in rescue mode,” she says, noting that he’d sometimes walk directly into traffic. “It was difficult to manage the behavior,” she shrugs, “But if you were nice with him then he was usually nice with you as well.”

“Kamila was amazing with Kai,” says Markram, “She was much more systematic and could lay out clear rules. She helped him a lot. We never had that thing that you see in the movies where they don’t like their stepmom.”

At the Swiss Federal Institute of Technology in Lausanne (EPFL), the couple soon began collaborating on autism research. “Kamila and I spoke about it a lot,” Markram says, adding that they were both “frustrated” by the state of the science and at not being able to help more. Their now-shared parental interest fused with their scientific drives.

They started by studying the brain at the circuitry level. Markram assigned a graduate student, Tania Rinaldi Barkat, to look for the best animal model, since such research cannot be done on humans.

Barkat happened to drop by Kamila’s office while I was there, a decade after she had moved on to other research. She greeted her former colleagues enthusiastically.

She started her graduate work with the Markrams by searching the literature for prospective animal models. They agreed that the one most like human autism involved rats prenatally exposed to an epilepsy drug called valproic acid (VPA; brand name, Depakote). Like other “autistic” rats, VPA rats show aberrant social behavior and increased repetitive behaviors like excessive self-grooming.

But more significant is that when pregnant women take high doses of VPA, which is sometimes necessary for seizure control, studies have found that the risk of autism in their children increases sevenfold. One 2005 study found that close to 9 percent of these children have autism.

Because VPA has a link to human autism, it seemed plausible that its cellular effects in animals would be similar. A neuroscientist who has studied VPA rats once told me, “I see it not as a model, but as a recapitulation of the disease in other species.”

Barkat got to work. Earlier research showed that the timing and dose of exposure was critical: Different timing could produce opposite symptoms, and large doses sometimes caused physical deformities. The “best” time to cause autistic symptoms in rats is embryonic day 12, so that’s when Barkat dosed them.

At first, the work was exasperating. For two years, Barkat studied inhibitory neurons from the VPA rat cortex, using the same laborious patch-clamping technique perfected by Markram years earlier. If these cells were less active, that would confirm the imbalance that Merzenich had theorized.

She went through the repetitious preparation, making delicate patches to study inhibitory networks. But after two years of this technically demanding, sometimes tedious, and time-consuming work, Barkat had nothing to show for it.

“I just found no difference at all,” she told me, “It looked completely normal.” She continued to patch cell after cell, going through the exacting procedure endlessly—but still saw no abnormalities. At least she was becoming proficient at the technique, she told herself.

Markram was ready to give up, but Barkat demurred, saying she would like to shift her focus from inhibitory to excitatory VPA cell networks. It was there that she struck gold.

 “There was a difference in the excitability of the whole network,” she says, reliving her enthusiasm. The networked VPA cells responded nearly twice as strongly as normal—and they were hyper-connected. If a normal cell had connections to ten other cells, a VPA cell connected with twenty. Nor were they under-responsive. Instead, they were hyperactive, which isn’t necessarily a defect: A more responsive, better-connected network learns faster.

But what did this mean for autistic people? While Barkat was investigating the cortex, Kamila Markram had been observing the rats’ behavior, noting high levels of anxiety as compared to normal rats. “It was pretty much a gold mine then,” Markram says. The difference was striking. “You could basically see it with the eye. The VPAs were different and they behaved differently,” Markram says. They were quicker to get frightened, and faster at learning what to fear, but slower to discover that a once-threatening situation was now safe.

While ordinary rats get scared of an electrified grid where they are shocked when a particular tone sounds, VPA rats come to fear not just that tone, but the whole grid and everything connected with it—like colors, smells, and other clearly distinguishable beeps.

“The fear conditioning was really hugely amplified,” Markram says. “We then looked at the cell response in the amygdala and again they were hyper-reactive, so it made a beautiful story.”


THE MARKRAMS RECOGNIZED the significance of their results. Hyper-responsive sensory, memory and emotional systems might explain both autistic talents and autistic handicaps, they realized. After all, the problem with VPA rats isn’t that they can’t learn—it’s that they learn too quickly, with too much fear, and irreversibly.

They thought back to Kai’s experiences: how he used to cover his ears and resist going to the movies, hating the loud sounds; his limited diet and apparent terror of trying new foods.

“He remembers exactly where he sat at exactly what restaurant one time when he tried for hours to get himself to eat a salad,” Kamila says, recalling that she’d promised him something he’d really wanted if he did so. Still, he couldn’t make himself try even the smallest piece of lettuce. That was clearly overgeneralization of fear.

The Markrams reconsidered Kai’s meltdowns, too, wondering if they’d been prompted by overwhelming experiences. They saw that identifying Kai’s specific sensitivities preemptively might prevent tantrums by allowing him to leave upsetting situations or by mitigating his distress before it became intolerable. The idea of an intense world had immediate practical implications.

 The amygdala.

The VPA data also suggested that autism isn’t limited to a single brain network. In VPA rat brains, both the amygdala and the cortex had proved hyper-responsive to external stimuli. So maybe, the Markrams decided, autistic social difficulties aren’t caused by social-processing defects; perhaps they are the result of total information overload.


CONSIDER WHAT IT MIGHT FEEL like to be a baby in a world of relentless and unpredictable sensation. An overwhelmed infant might, not surprisingly, attempt to escape. Kamila compares it to being sleepless, jetlagged, and hung over, all at once. “If you don’t sleep for a night or two, everything hurts. The lights hurt. The noises hurt. You withdraw,” she says.

Unlike adults, however, babies can’t flee. All they can do is cry and rock, and, later, try to avoid touch, eye contact, and other powerful experiences. Autistic children might revel in patterns and predictability just to make sense of the chaos.

At the same time, if infants withdraw to try to cope, they will miss what’s known as a “sensitive period”—a developmental phase when the brain is particularly responsive to, and rapidly assimilates, certain kinds of external stimulation. That can cause lifelong problems.

Language learning is a classic example: If babies aren’t exposed to speech during their first three years, their verbal abilities can be permanently stunted. Historically, this created a spurious link between deafness and intellectual disability: Before deaf babies were taught sign language at a young age, they would often have lasting language deficits. Their problem wasn’t defective “language areas,” though—it was that they had been denied linguistic stimuli at a critical time. (Incidentally, the same phenomenon accounts for why learning a second language is easy for small children and hard for virtually everyone else.)

This has profound implications for autism. If autistic babies tune out when overwhelmed, their social and language difficulties may arise not from damaged brain regions, but because critical data is drowned out by noise or missed due to attempts to escape at a time when the brain actually needs this input.

The intense world could also account for the tragic similarities between autistic children and abused and neglected infants. Severely maltreated children often rock, avoid eye contact, and have social problems—just like autistic children. These parallels led to decades of blaming the parents of autistic children, including the infamous “refrigerator mother.” But if those behaviors are coping mechanisms, autistic people might engage in them not because of maltreatment, but because ordinary experience is overwhelming or even traumatic.

The Markrams teased out further implications: Social problems may not be a defining or even fixed feature of autism. Early intervention to reduce or moderate the intensity of an autistic child’s environment might allow their talents to be protected while their autism-related disabilities are mitigated or, possibly, avoided.

The VPA model also captures other paradoxical autistic traits. For example, while oversensitivities are most common, autistic people are also frequently under-reactive to pain. The same is true of VPA rats. In addition, one of the most consistent findings in autism is abnormal brain growth, particularly in the cortex. There, studies find an excess of circuits called mini-columns, which can be seen as the brain’s microprocessors. VPA rats also exhibit this excess.

Moreover, extra minicolumns have been found in autopsies of scientists who were not known to be autistic, suggesting that this brain organization can appear without social problems and alongside exceptional intelligence.

Like a high-performance engine, the autistic brain may only work properly under specific conditions. But under those conditions, such machines can vastly outperform others—like a Ferrari compared to a Ford.


THE MARKRAMS’ FIRST PUBLICATION of their intense world research appeared in 2007: a paper on the VPA rat in the Proceedings of the National Academy of Sciences. This was followed by an overview in Frontiers in Neuroscience. The next year, at the Society for Neuroscience (SFN), the field’s biggest meeting, a symposium was held on the topic. In 2010, they updated and expanded their ideas in a second Frontiers paper.

Since then, more than three dozen papers have been published by other groups on VPA rodents, replicating and extending the Markrams’ findings. At this year’s SFN, at least five new studies were presented on VPA autism models. The sensory aspects of autism have long been neglected, but the intense world and VPA rats are bringing it to the fore.

Nevertheless, reaction from colleagues in the field has been cautious. One exception is Laurent Mottron, professor of psychiatry and head of autism research at the University of Montreal. He was the first to highlight perceptual differences as critical in autism—even before the Markrams. Only a minority of researchers even studied sensory issues before him. Almost everyone else focused on social problems.

But when Mottron first proposed that autism is linked with what he calls “enhanced perceptual functioning,” he, like most experts, viewed this as the consequence of a deficit. The idea was that the apparently superior perception exhibited by some autistic people is caused by problems with higher level brain functioning—and it had historically been dismissed as mere “splinter skills,” not a sign of genuine intelligence. Autistic savants had earlier been known as “idiot savants,” the implication being that, unlike “real” geniuses, they didn’t have any creative control of their exceptional minds. Mottron described it this way in a review paper: “[A]utistics were not displaying atypical perceptual strengths but a failure to form global or high level representations.”

 However, Mottron’s research led him to see this view as incorrect. His own and other studies showed superior performance by autistic people not only in “low level” sensory tasks, like better detection of musical pitch and greater ability to perceive certain visual information, but also in cognitive tasks like pattern finding in visual IQ tests.

In fact, it has long been clear that detecting and manipulating complex systems is an autistic strength—so much so that the autistic genius has become a Silicon Valley stereotype. In May, for example, the German software firm SAP announced plans to hire 650 autistic people because of their exceptional abilities. Mathematics, musical virtuosity, and scientific achievement all require understanding and playing with systems, patterns, and structure. Both autistic people and their family members are over-represented in these fields, which suggests genetic influences.

“Our points of view are in different areas [of research,] but we arrive at ideas that are really consistent,” says Mottron of the Markrams and their intense world theory. (He also notes that while they study cell physiology, he images actual human brains.)

Because Henry Markram came from outside the field and has an autistic son, Mottron adds, “He could have an original point of view and not be influenced by all the clichés,” particularly those that saw talents as defects. “I’m very much in sympathy with what they do,” he says, although he is not convinced that they have proven all the details.

Mottron’s support is unsurprising, of course, because the intense world dovetails with his own findings. But even one of the creators of the “theory of mind” concept finds much of it plausible.

Simon Baron-Cohen, who directs the Autism Research Centre at Cambridge University, told me, “I am open to the idea that the social deficits in autism—like problems with the cognitive aspects of empathy, which is also known as ‘theory of mind’—may be upstream from a more basic sensory abnormality.” In other words, the Markrams’ physiological model could be the cause, and the social deficits he studies, the effect. He adds that the VPA rat is an “interesting” model. However, he also notes that most autism is not caused by VPA and that it’s possible that sensory and social defects co-occur, rather than one causing the other.

His collaborator, Uta Frith, professor of cognitive development at University College London, is not convinced. “It just doesn’t do it for me,” she says of the intense world theory. “I don’t want to say it’s rubbish,” she says, “but I think they try to explain too much.”


AMONG AFFECTED FAMILIES, by contrast, the response has often been rapturous. “There are elements of the intense world theory that better match up with autistic experience than most of the previously discussed theories,” says Ari Ne’eman, president of the Autistic Self Advocacy Network. “The fact that there’s more emphasis on sensory issues is very true to life.” Ne’eman and other autistic people fought to get sensory problems added to the diagnosis in DSM-5 — the first time the symptoms have been so recognized, and another sign of the growing receptiveness to theories like intense world.

Steve Silberman, who is writing a history of autism titled NeuroTribes: Thinking Smarter About People Who Think Differently, says, “We had 70 years of autism research [based] on the notion that autistic people have brain deficits. Instead, the intense world postulates that autistic people feel too much and sense too much. That’s valuable, because I think the deficit model did tremendous injury to autistic people and their families, and also misled science.”

Priscilla Gilman, the mother of an autistic child, is also enthusiastic. Her memoir, The Anti-Romantic Child, describes her son’s diagnostic odyssey. Before Benjamin was in preschool, Gilman took him to the Yale Child Study Center for a full evaluation. At the time, he did not display any classic signs of autism, but he did seem to be a candidate for hyperlexia—at age two-and-a-half, he could read aloud from his mother’s doctoral dissertation with perfect intonation and fluency. Like other autistic talents, hyperlexia is often dismissed as a “splinter” strength.

At that time, Yale experts ruled autism out, telling Gilman that Benjamin “is not a candidate because he is too ‘warm’ and too ‘related,’” she recalls. Kai Markram’s hugs had similarly been seen as disqualifying. At twelve years of age, however, Benjamin was officially diagnosed with Autism Spectrum Disorder.

According to the intense world perspective, however, warmth isn’t incompatible with autism. What looks like antisocial behavior results from being too affected by others’ emotions—the opposite of indifference.

Indeed, research on typical children and adults finds that too much distress can dampen ordinary empathy as well. When someone else’s pain becomes too unbearable to witness, even typical people withdraw and try to soothe themselves first rather than helping—exactly like autistic people. It’s just that autistic people become distressed more easily, and so their reactions appear atypical.

“The overwhelmingness of understanding how people feel can lead to either what is perceived as inappropriate emotional response, or to what is perceived as shutting down, which people see as lack of empathy,” says Emily Willingham. Willingham is a biologist and the mother of an autistic child; she also suspects that she herself has Asperger syndrome. But rather than being unemotional, she says, autistic people are “taking it all in like a tsunami of emotion that they feel on behalf of others. Going internal is protective.”

At least one study supports this idea, showing that while autistic people score lower on cognitive tests of perspective-taking—recall Anne, Sally, and the missing marble—they are more affected than typical folks by other people’s feelings. “I have three children, and my autistic child is my most empathetic,” Priscilla Gilman says, adding that when her mother first read about the intense world, she said, “This explains Benjamin.”

Benjamin’s hypersensitivities are also clearly linked to his superior perception. “He’ll sometimes say, ‘Mommy, you’re speaking in the key of D, could you please speak in the key of C? It’s easier for me to understand you and pay attention.”

Because he has musical training and a high IQ, Benjamin can use his own sense of “absolute pitch”—the ability to name a note without hearing another for comparison—to define the problem he’s having. But many autistic people can’t verbalize their needs like this. Kai, too, is highly sensitive to vocal intonation, preferring his favorite teacher because, he explains, she “speaks soft,” even when she’s displeased. But even at 19, he isn’t able to articulate the specifics any better than that.


ON A RECENT VISIT to Lausanne, Kai wears a sky blue hoodie, his gray Chuck Taylor–style sneakers carefully unlaced at the top. “My rapper sneakers,” he says, smiling. He speaks Hebrew and English and lives with his mother in Israel, attending a school for people with learning disabilities near Rehovot. His manner is unselfconscious, though sometimes he scowls abruptly without explanation. But when he speaks, it is obvious that he wants to connect, even when he can’t answer a question. Asked if he thinks he sees things differently than others do, he says, “I feel them different.”

He waits in the Markrams’ living room as they prepare to take him out for dinner. Henry’s aunt and uncle are here, too. They’ve been living with the family to help care for its newest additions: nine-month-old Charlotte and Olivia, who is one-and-a-half years old.

“It’s our big patchwork family,” says Kamila, noting that when they visit Israel, they typically stay with Henry’s ex-wife’s family, and that she stays with them in Lausanne. They all travel constantly, which has created a few problems now and then. None of them will ever forget a tantrum Kai had when he was younger, which got him barred from a KLM flight. A delay upset him so much that he kicked, screamed, and spat.

Now, however, he rarely melts down. A combination of family and school support, an antipsychotic medication that he’s been taking recently, and increased understanding of his sensitivities has mitigated the disabilities Kai associated with his autism.

 “I was a bad boy. I always was hitting and doing a lot of trouble,” Kai says of his past. “I was really bad because I didn’t know what to do. But I grew up.” His relatives nod in agreement. Kai has made tremendous strides, though his parents still think that his brain has far greater capacity than is evident in his speech and schoolwork.

As the Markrams see it, if autism results from a hyper-responsive brain, the most sensitive brains are actually the most likely to be disabled by our intense world. But if autistic people can learn to filter the blizzard of data, especially early in life, then those most vulnerable to the most severe autism might prove to be the most gifted of all.

Markram sees this in Kai. “It’s not a mental retardation,” he says, “He’s handicapped, absolutely, but something is going crazy in his brain. It’s a hyper disorder. It’s like he’s got an amplification of many of my quirks.”

One of these involves an insistence on timeliness. “If I say that something has to happen,” he says, “I can become quite difficult. It has to happen at that time.”

He adds, “For me it’s an asset, because it means that I deliver. If I say I’ll do something, I do it.” For Kai, however, anticipation and planning run wild. When he travels, he obsesses about every move, over and over, long in advance. “He will sit there and plan, okay, when he’s going to get up. He will execute. You know he will get on that plane come hell or high water,” Markram says. “But he actually loses the entire day. It’s like an extreme version of my quirks, where for me they are an asset and for him they become a handicap.”

If this is true, autistic people have incredible unrealized potential. Say Kai’s brain was even more finely tuned than his father’s, then it might give him the capacity to be even more brilliant. Consider Markram’s visual skills. Like Temple Grandin, whose first autism memoir was titled Thinking In Pictures, he has stunning visual abilities. “I see what I think,” he says, adding that when he considers a scientific or mathematical problem, “I can see how things are supposed to look. If it’s not there, I can actually simulate it forward in time.”

At the offices of Markram’s Human Brain Project, visitors are given a taste of what it might feel like to inhabit such a mind. In a small screening room furnished with sapphire-colored, tulip-shaped chairs, I’m handed 3-D glasses. The instant the lights dim, I’m zooming through a brightly colored forest of neurons so detailed and thick that they appear to be velvety, inviting to the touch.

The simulation feels so real and enveloping that it is hard to pay attention to the narration, which includes mind-blowing facts about the project. But it is also dizzying, overwhelming. If this is just a smidgen of what ordinary life is like for Kai, it’s easier to see how hard his early life must have been. That’s the paradox about autism and empathy. The problem may not be that autistic people can’t understand typical people’s points of view—but that typical people can’t imagine autism.

Critics of the intense world theory are dismayed and put off by this idea of hidden talent in the most severely disabled. They see it as wishful thinking, offering false hope to parents who want to see their children in the best light and to autistic people who want to fight the stigma of autism. In some types of autism, they say, intellectual disability is just that.

“The maxim is, ‘If you’ve seen one person with autism, you’ve seen one person with autism,’” says Matthew Belmonte, an autism researcher affiliated with the Groden Center in Rhode Island. The assumption should be that autistic people have intelligence that may not be easily testable, he says, but it can still be highly variable.

He adds, “Biologically, autism is not a unitary condition. Asking at the biological level ‘What causes autism?’ makes about as much sense as asking a mechanic ‘Why does my car not start?’ There are many possible reasons.” Belmonte believes that the intense world may account for some forms of autism, but not others.

Kamila, however, insists that the data suggests that the most disabled are also the most gifted. “If you look from the physiological or connectivity point of view, those brains are the most amplified.”

The question, then, is how to unleash that potential.

“I hope we give hope to others,” she says, while acknowledging that intense-world adherents don’t yet know how or even if the right early intervention can reduce disability.

The secret-ability idea also worries autistic leaders like Ne’eman, who fear that it contains the seeds of a different stigma. “We agree that autistic people do have a number of cognitive advantages and it’s valuable to do research on that,” he says. But, he stresses, “People have worth regardless of whether they have special abilities. If society accepts us only because we can do cool things every so often, we’re not exactly accepted.”


The MARKRAMS ARE NOW EXPLORING whether providing a calm, predictable early environment—one aimed at reducing overload and surprise—can help VPA rats, soothing social difficulties while nurturing enhanced learning. New research suggests that autism can be detected in two-month-old babies, so the treatment implications are tantalizing.

So far, Kamila says, the data looks promising. Unexpected novelty seems to make the rats worse—while the patterned, repetitive, and safe introduction of new material seems to cause improvement.

In humans, the idea would be to keep the brain’s circuitry calm when it is most vulnerable, during those critical periods in infancy and toddlerhood. “With this intensity, the circuits are going to lock down and become rigid,” says Markram. “You want to avoid that, because to undo it is very difficult.”

For autistic children, intervening early might mean improvements in learning language and socializing. While it’s already clear that early interventions can reduce autistic disability, they typically don’t integrate intense-world insights. The behavioral approach that is most popular—Applied Behavior Analysis—rewards compliance with “normal” behavior, rather than seeking to understand what drives autistic actions and attacking the disabilities at their inception.

Research shows, in fact, that everyone learns best when receiving just the right dose of challenge—not so little that they’re bored, not so much that they’re overwhelmed; not in the comfort zone, and not in the panic zone, either. That sweet spot may be different in autism. But according to the Markrams, it is different in degree, not kind.

Markram suggests providing a gentle, predictable environment. “It’s almost like the fourth trimester,” he says.

“To prevent the circuits from becoming locked into fearful states or behavioral patterns you need a filtered environment from as early as possible,” Markram explains. “I think that if you can avoid that, then those circuits would get locked into having the flexibility that comes with security.”

Creating this special cocoon could involve using things like headphones to block excess noise, gradually increasing exposure and, as much as possible, sticking with routines and avoiding surprise. If parents and educators get it right, he concludes, “I think they’ll be geniuses.”

IN SCIENCE, CONFIRMATION BIAS is always the unseen enemy. Having a dog in the fight means you may bend the rules to favor it, whether deliberately or simply because we’re wired to ignore inconvenient truths. In fact, the entire scientific method can be seen as a series of attempts to drive out bias: The double-blind controlled trial exists because both patients and doctors tend to see what they want to see—improvement.

At the same time, the best scientists are driven by passions that cannot be anything but deeply personal. The Markrams are open about the fact that their subjective experience with Kai influences their work.

But that doesn’t mean that they disregard the scientific process. The couple could easily deal with many of the intense world critiques by simply arguing that their theory only applies to some cases of autism. That would make it much more difficult to disprove. But that’s not the route they’ve chosen to take. In their 2010 paper, they list a series of possible findings that would invalidate the intense world, including discovering human cases where the relevant brain circuits are not hyper-reactive, or discovering that such excessive responsiveness doesn’t lead to deficiencies in memory, perception, or emotion. So far, however, the known data has been supportive.

But whether or not the intense world accounts for all or even most cases of autism, the theory already presents a major challenge to the idea that the condition is primarily a lack of empathy, or a social disorder. Intense world theory confronts the stigmatizing stereotypes that have framed autistic strengths as defects, or at least as less significant because of associated weaknesses.

And Henry Markram, by trying to take his son Kai’s perspective—and even by identifying so closely with it—has already done autistic people a great service, demonstrating the kind of compassion that people on the spectrum are supposed to lack. If the intense world does prove correct, we’ll all have to think about autism, and even about typical people’s reactions to the data overload endemic in modern life, very differently.

From left: Kamila, Henry, Kai, and Anat


This story was written by Maia Szalavitz, edited by Mark Horowitz, fact-checked by Kyla Jones, and copy-edited by Tim Heffernan, with photography by Darrin Vanselow and an audiobook narrated by Jack Stewart.



Biphasic Effects of Ayahuasca (Plantando Consciência)

September 30, 2015

Biphasic Effects of Ayahuasca

Foi publicado hoje na revista científica PLOS ONE artigo com os resultados de nosso estudo neurocientífico sobre a ayahuasca. Fruto de pouco mais de quatro anos de intenso e dedicado trabalho, a pesquisa foi conduzida na UNIFESP com financiamento da FAPESP, com cooperações na USP, UFABC, Louisiana State University (EUA) e da University of Auckland (Nova Zelândia). Além da colaboração da União do Vegetal que nos forneceu Hoasca para fins de pesquisa, e de 20 bravos(as) psiconautas experientes no uso da bebida amazônica. Nossos(as) voluntários(as) se disponibilizaram a participar de um processo em um ambiente e com uma proposta que difere em muito dos usos tradicionais, e era bastante desafiadora. Beberam ayahuasca num laboratório universitário, sem canto nem palo santo, sem reza, dança ou fogueira, no meio da conturbada metrópole paulista. E tiveram que usar uma touca que gravava a atividade elétrica de seus cérebros continuamente num notebook próximo a elas. Sentadas em uma poltrona confortável, doaram pequenas quantidades de sangue a cada 25 minutos. Apesar de não ter a fundamental condução dos guias, curandeiros, mestres ou maestros, que fazem trabalhos tão importantes quanto a bebida em si, e de tomarem ayahuasca uma pessoa por vez, foram acompanhados com carinho e cuidado pela equipe científica, nunca sendo deixados sozinhos ou desamparados, e sempre com os baldinhos à disposição… Tudo isso em prol da colaboração dos saberes tradicionais com os saberes científicos e tecnológicos.Uma pesquisa desse tipo se justifica por várias razões, desde um entendimento mais profundo sobre nossa resposta fisiológica aos compostos químicos presentes na ayahuasca, que nos fornece dados cruciais sobre potenciais terapêuticos e segurança de uso; até informações mais sofisticadas sobre as relações entre cérebro e consciência, o chamado “hard-problem”. 
With the results of this journey we deepened and expanded knowledge about the effects of the molecular components of the sacred brew: how our bodies receive these molecules and what effects they help trigger, especially in the brain. By keeping biomedical interventions to the strictly necessary minimum and adopting an observational stance, letting and encouraging the volunteers to spend most of the time with eyes closed in an introspective state, we were able to reveal a fascinating picture of ayahuasca's effects on the brain. The effect unfolds in two qualitatively distinct phases, and this biphasic profile helps explain contradictions among similar studies previously carried out by other teams. With this we open further doors to fascinating future investigations of the various states of consciousness that can be reached with the Amazonian brew.

About one hour after ayahuasca ingestion, alpha waves (8 to 12 cycles per second) decreased, especially in the temporo-parietal cortex, with some tendency to lateralize to the left hemisphere. The second phase begins about an hour later (that is, roughly two hours after ingestion): while the alpha waves returned to a pattern similar to the pre-ingestion baseline, the very high-frequency gamma rhythms (30 to 100 cycles per second) intensified across nearly the entire cerebral cortex, including the frontal cortex. These electrical oscillations at distinct frequencies, which occur perpetually and simultaneously throughout the brain, result from the complex interplay of the activity of billions of brain cells, and they are related to all brain functions, including psychological aspects and states of consciousness. For example, during deep sleep a slow frequency of 1 to 4 cycles per second, called delta, predominates in the cerebral cortex, whereas during most dreams the theta frequency (4 to 8 cycles per second) predominates. By characterizing the main changes in these frequencies of neural oscillation, we advance toward a neuroscientific map of the state of consciousness triggered by ayahuasca ingestion.
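The frequency bands above can be summarized in a small helper. This is a minimal illustrative sketch, not the study's analysis code; the function name and the handling of boundary values are my own assumptions, while the band edges are the ones quoted in the text (delta 1-4 Hz, theta 4-8, alpha 8-12, gamma 30-100).

```python
# EEG bands as described in the text; boundary handling (lower edge
# inclusive, upper edge exclusive) is an arbitrary illustrative choice.
EEG_BANDS = [
    ("delta", 1.0, 4.0),    # predominates in deep sleep
    ("theta", 4.0, 8.0),    # predominates during most dreaming
    ("alpha", 8.0, 12.0),   # weakened ~1 h after ayahuasca ingestion
    ("gamma", 30.0, 100.0), # intensified ~2 h after ingestion
]

def classify_band(freq_hz: float) -> str:
    """Return the EEG band name for an oscillation frequency, or 'other'."""
    for name, lo, hi in EEG_BANDS:
        if lo <= freq_hz < hi:
            return name
    return "other"
```

A real pipeline would estimate power spectra from the recorded signal and aggregate power inside each of these ranges; the helper only names the ranges themselves.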

There are various nuances of interpretation for these data (and many follow-up studies that could be done under each interpretation, to test specific hypotheses). But my favorite, and the one we discuss in the article, is that the alpha rhythm results from inhibitory activity in the brain, while the gamma rhythm represents neural activity crucial for consciousness. When we close our eyes and have the sensation of a dark visual field, without images, the alpha rhythm strengthens in the brain regions that receive input from the eyes. In other words, with eyes closed it is not only that information from the eyes is absent: the visual areas are inhibited by "higher centers" of the cortex, capable of modulating the activity of sensory areas, and we have the subjective experience of a dark world and an absence of vision. In the case of ayahuasca, we found a weakening of this inhibition in multisensory areas, that is, regions involved not only in vision but in hearing, touch, taste, smell, and the most varied bodily sensations. It therefore makes sense that this decrease in alpha is related to the very common experience of more sensations and more stimuli under the effect of ayahuasca than in the ordinary state of consciousness, including the famous closed-eye visions. The accelerated gamma, in turn, is related to what neuroscience calls integration. While different brain areas relate to distinct subjective perceptions, such as the five senses mentioned above, our conscious experience is unified. This unification of neural activity in anatomically distinct areas happens through fast oscillations in the gamma frequency, which allow the brain to temporarily assemble the pieces of a complex jigsaw puzzle of neural activity.
This increase in gamma may help explain why, under ayahuasca, the perception of sounds and images, for example, seems to fuse and form peculiar relations not perceptible during ordinary consciousness, when the brain tends to organize the neural activity related to the five senses in a partially independent way. This role of gamma in unifying or integrating information in the brain has been known for a long time, at least since the pioneering work of the Chilean scientist Francisco Varela, and it was observed in two individuals after taking ayahuasca in a study by the anthropologist Luis Eduardo Luna and collaborators a decade ago. By confirming Luna and collaborators' data with a new and more rigorous methodology and more participants, and by detecting the combination of these effects with the reductions in alpha, we open very important doors for understanding not only non-ordinary states of consciousness but neuroscientific theories of consciousness as a whole. One example is a recently proposed theory of psychedelic action suggesting that one of the brain's main characteristics under psychedelics is an intensification of gamma. For Andrew Gallimore, based in Japan, who draws on the influential integrated information theory (IIT), the most promising neuroscientific theory of consciousness, the expansion of consciousness with psychedelics is indeed possible within a neuroscientific perspective, and probably depends on the gamma rhythm. This expansion of consciousness includes the subjective perception of more content and greater intensity, including fusions between the senses and possibly the subjective experience of intensities and qualities not perceptible during ordinary consciousness, such as more vivid and brilliant colors and emotional states more intense than anything ever experienced outside the psychedelic state.
Gamma also plays a fundamental role in the theory of consciousness proposed by the mathematician Sir Roger Penrose and the anesthesiologist Stuart Hameroff. According to their theory, oscillations in the 40-cycles-per-second range would matter by permitting smaller and much faster reverberations in the microtubules, a network of fibers and filaments that runs through every cell of our bodies, including the brain.

Besides characterizing the oscillations and cortical regions most important in the neural process underlying the modification of consciousness during ayahuasca, we drew blood periodically to quantify ayahuasca's active principles and their metabolites. We found that during the first phase the concentrations of DMT and harmine were near their maximum, while the peaks of harmaline and tetrahydroharmine occur in the second phase. With a sophisticated and novel statistical analysis, developed specifically for this study, we showed that this biphasic effect in the brain is related to the blood concentration of several components of the brew. This expands the predominant scientific view, which focuses only on the famous DMT. According to that model, the vine's role is merely to inhibit the digestion of DMT. But "ayahuasca" is one of the many names not only of the brew but of the jagube or mariri vine, cataloged in the scientific record as Banisteriopsis caapi. This reveals that, for traditional peoples, the vine is the most important plant, and indeed there are ayahuasca preparations made only from the vine, without any other plant. Pharmacology inverted this picture, emphasizing only the psychoactivity of DMT, which comes not from the vine but from other plants frequently added in preparing the brew, such as rainha in Brazil and Peru (Psychotria viridis) or chagropanga in Colombia (Diplopterys cabrerana). But our analysis of 10 molecules (DMT, NMT and DMT-NO; harmine and harmol; harmaline and harmalol; THH and THH-OH; plus the serotonergic metabolite IAA) revealed important associations between plasma levels of DMT, harmine, harmaline and tetrahydroharmine, as well as some metabolites such as DMT-NO, and the brain effects in alpha and gamma at distinct moments of the experience.
We thus revealed that ayahuasca's psychoactivity cannot be fully explained by DMT concentrations alone, taking an important step toward reconnecting scientific knowledge with traditional knowledge.


We also discovered that the concentration of harmaline (and only harmaline) correlates with the moment when the volunteers vomited. That is, harmaline plays a fundamental role both in the brain, being related to the intensification of gamma waves, and in ayahuasca's peripheral effects, such as vomiting. This reinforces the idea that vomiting bears important relations to the psychological experience; it may be more appropriate to call it a purge, a term that underscores the association between the physical and the psychological at that moment of the experience. These harmaline results also lend new importance to the pioneering research of Claudio Naranjo, the Chilean therapist who was among the first to study ayahuasca from a medical-scientific standpoint, in the 1960s. Naranjo's proposal that harmaline was ayahuasca's main psychoactive component was, however, almost entirely forgotten in favor of the focus on DMT from the 1980s onward. Another important point against Naranjo's proposal is that harmaline concentrations in ayahuasca are generally below the doses of harmaline that, taken alone, trigger clear psychoactive effects, according to the subjective reports of the people who ingested harmaline in Naranjo's studies. But the effect of harmaline combined with harmine and tetrahydroharmine, as occurs in ayahuasca, was never tested. Our results therefore reinforce the idea that harmaline may also contribute importantly to ayahuasca's psychoactive effect when combined with the other beta-carbolines from the vine. Interestingly, in nearly all cases the purge occurred after the first phase, when DMT levels are close to their blood maximum.
Since harmaline's blood concentration rises more slowly than DMT's and harmine's, vomiting interferes little with the first-phase effects and with the concentrations of those two molecules, which helps explain why even people who vomit early can have strong, profound experiences. But vomiting potentially does interfere with the concentrations of tetrahydroharmine, the molecule whose levels rise most slowly and which can remain in circulation for a few days, depending on each individual's metabolic capacity.
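The timing argument above (slower absorption implies a later peak) can be illustrated with a standard one-compartment pharmacokinetic model. This is a generic textbook sketch, not the study's statistical analysis, and the rate constants below are hypothetical, chosen only to contrast a fast-absorbed compound with a slow-absorbed one.

```python
import math

def bateman(t, ka, ke, dose=1.0):
    """Blood concentration at time t for first-order absorption (ka)
    and elimination (ke), with dose/volume normalized to 1.
    Classic Bateman function; purely illustrative here."""
    if ka == ke:
        return dose * ka * t * math.exp(-ka * t)
    return dose * ka / (ka - ke) * (math.exp(-ke * t) - math.exp(-ka * t))

def t_max(ka, ke):
    """Time of peak concentration for the Bateman curve."""
    return math.log(ka / ke) / (ka - ke)

# Hypothetical rate constants (per hour): a quickly absorbed compound
# (DMT-like profile) versus a slowly absorbed one (THH-like profile).
fast_peak = t_max(ka=3.0, ke=0.8)  # peaks earlier
slow_peak = t_max(ka=0.6, ke=0.3)  # peaks later
```

Under these assumptions the slow compound peaks hours after the fast one, which mirrors why purging after the first phase barely touches DMT and harmine levels but can cut short the slower rise of tetrahydroharmine.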

It is also important to note that the biphasic profile was observed with the ingestion of a single cup (though a large dose). In ritual use, however, participants very often take more than one dose, at intervals of an hour or more. In such cases varied combinations of effects may occur, for example the second phase of a first dose (gamma increase) coinciding with the first phase of a second dose (alpha decrease). This could potentially generate brain states (and, by correlation, states of consciousness) not observed in research with a single dose. It helps explain why many people report that the second dose is always a "box of surprises" rather than a mere intensification or prolongation of the first. Depending on each person's metabolic profile, the size of each dose, the proportions of these molecules in the brew, and the interval between doses, other states blending the two phases observed in the research may be reached. Add to this the environmental, psychological, motivational and spiritual influences, and we have a practice of consciousness exploration that does not fit a simple, singular answer about what "the effect" of ayahuasca is.

From a neuroscientific standpoint these possible combinations are very intriguing, because relations between the alpha and gamma frequencies in the parietal and frontal cortex are involved in processes of psychological and emotional reappraisal. That is, when we engage in certain forms of introspection that result in the resignification of emotional events in our lives, these brain areas communicate through electrical oscillations in these two frequency bands. The same frequencies and brain areas are also involved in creative problem-solving. Through our research, then, neuroscience begins to converge with ancestral knowledge, reaffirming ayahuasca's potential to nourish creativity and self-knowledge and facilitating forms of therapy focused on each individual's potential to grow and develop consciously.

To learn more, watch my talk below at the World Ayahuasca Conference, held in Ibiza last year (available with subtitles in Portuguese and English), or the earlier one, "Ayahuasca and brain waves", given in Brazil at the start of this project. Or, if you really want to dive deep, access the full scientific article free of charge.

Reference: Schenberg EE, Alexandre JFM, Filev R, Cravo AM, Sato JR, Muthukumaraswamy SD, et al. (2015) Acute Biphasic Effects of Ayahuasca. PLoS ONE 10(9): e0137202. doi:10.1371/journal.pone.0137202


Plants have memory, feel pain and are intelligent (Portugal Mundial)

March 2015


Can a plant be intelligent? Some scientists insist they can, since plants can sense, learn, remember and even react in ways that would be familiar to human beings. The new research belongs to a field called plant neurobiology, which is something of a misnomer, because even scientists in this field do not argue that plants have neurons or brains.

"They have analogous structures," explains Michael Pollan, author of books such as The Omnivore's Dilemma and The Botany of Desire. "They have ways of taking all the sensory data they gather in their everyday lives … integrating it and then behaving in an appropriate way in response. And they do this without brains, which, in a way, is what's incredible about it, because we automatically assume you need a brain to process information."

And we assume we need ears to hear. But researchers, Pollan says, played plants a recording of a caterpillar eating a leaf, and the plants reacted: they began to secrete defensive chemicals, even though no plant was actually under threat. "It is somehow hearing what is, to it, the terrifying sound of a caterpillar eating its leaves," says Pollan.

Plants can feel

Pollan says plants have all the same senses as humans, and then some. In addition to hearing and taste, for example, they can detect gravity and the presence of water, or even sense that an obstacle is blocking their roots before coming into contact with it: plant roots change direction, he says, to avoid obstacles.

And pain? Do plants feel it? Pollan says they respond to anesthetics. "You can put a plant out with a human anesthetic… And not only that, plants produce their own compounds that are anesthetic to us."

According to researchers at the Institute for Applied Physics of the University of Bonn, in Germany, plants release gases that are the equivalent of cries of pain. Using a laser-powered microphone, the researchers picked up sound waves produced by plants releasing gases when cut or injured. Although inaudible to the human ear, the secret voices of plants have revealed that cucumbers scream when they are sick and flowers whine when their leaves are cut [source: Deutsche Welle].


The nervous system of plants

How plants sense and react is still somewhat unknown. They do not have nerve cells like humans, but they do have a system for sending electrical signals, and they even produce neurotransmitters such as dopamine, serotonin and other chemicals the human brain uses to send signals.

Plants really do feel pain

The evidence for these complex communication systems is a sign that plants feel pain. Going further, scientists suppose that plants can exhibit intelligent behavior without possessing a brain or consciousness.

They can remember


Pollan describes an experiment by animal biologist Monica Gagliano. She presented research suggesting that the plant Mimosa pudica can learn from experience. And, Pollan says, merely suggesting that a plant could learn was so controversial that her paper was rejected by 10 scientific journals before finally being published.

Mimosa is a plant that looks something like a fern and temporarily folds up its leaves when disturbed. So Gagliano set up a contraption that would drop the mimosa plants without harming them. When a plant was dropped, as expected, its leaves collapsed. She kept dropping the plants every five to six seconds.

"After five or six drops, the plants stopped responding, as if they had learned to tune out the stimulus as irrelevant," says Pollan. "This is a very important part of learning: knowing what you can safely ignore in your environment."

Maybe the plants were simply tired out by so much dropping? To test this, Gagliano took the plants that had stopped responding to the drops and shook them.

"They continued to collapse," says Pollan. "They had made the distinction that the dropping was a signal they could ignore. And what was most amazing is that Gagliano tested them again every week for four weeks, and for a whole month they kept remembering the lesson."

That is as far as Gagliano tested. It is possible they remember even longer. By contrast, Pollan points out, bees tested in a similar way forget what they learned in less than 48 hours.
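The behavior described in the experiment, responses fading to a repeated harmless stimulus while a novel stimulus still triggers a full response, is the classic signature of stimulus-specific habituation. A toy sketch of that logic, with every name, threshold and decay rate invented purely for illustration:

```python
class HabituatingPlant:
    """Toy model of stimulus-specific habituation, loosely inspired by
    the Mimosa pudica experiment described above. Thresholds and decay
    rates are arbitrary illustrative values, not measured quantities."""

    def __init__(self, threshold=0.3, decay=0.7):
        self.sensitivity = {}        # per-stimulus sensitivity, starts at 1.0
        self.threshold = threshold   # below this, no leaf-folding response
        self.decay = decay           # each exposure reduces sensitivity

    def stimulate(self, stimulus: str) -> bool:
        """Apply a stimulus; return True if the leaves fold."""
        s = self.sensitivity.get(stimulus, 1.0)
        # Habituation is specific: only this stimulus loses potency.
        self.sensitivity[stimulus] = s * self.decay
        return s > self.threshold

plant = HabituatingPlant()
drops = [plant.stimulate("drop") for _ in range(6)]  # responses fade
shake = plant.stimulate("shake")                     # novel stimulus still works
```

With these parameters the plant responds to the first few drops and then stops, yet still folds when shaken, matching the distinction Gagliano's plants drew between an ignorable signal and a new one.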

Plants: sentient beings?

"Plants can do incredible things. They seem to remember stresses and events, like that experiment. They have the ability to respond to 15 to 20 environmental variables," Pollan says. "The question is, is it right to call that learning? Is that the right word? Is it right to call it intelligence? Is it right, even, to say they are conscious? Some of these plant neurobiologists believe plants are conscious; not self-conscious, but conscious in the sense that they know where they are in space … and react appropriately to their position in space."

Pollan says there is no consensus definition of intelligence. "Go to Wikipedia and look up intelligence. They despair of giving you an answer. They basically have a chart where they give you nine different definitions. And about half of them depend on a brain … they refer to abstract reasoning or judgment."

"The other half refer only to an ability to solve problems. And that's the kind of intelligence we're talking about here. So intelligence may well be a property of life. And our difference from these other creatures may be a matter of degree rather than kind. We may just have more of this problem-solving ability, and we may do it in different ways."

Pollan says what really frightens people is "that the line between plants and animals might be a little softer than we have traditionally believed."

And he suggests plants may be able to teach humans a thing or two, such as how to process information without a central command post like a brain.

Check out this video by Michael Pollan.

VIDEO