- August 12, 2016
- Cardiff University
- Humans have evolved a disproportionately large brain as a result of sizing each other up in large cooperative social groups, researchers have proposed.
A team led by computer scientists at Cardiff University suggests that the challenge of judging a person’s relative standing and deciding whether or not to cooperate with them has promoted the rapid expansion of human brain size over the last 2 million years.
In a study published in Scientific Reports, the team, which also includes leading evolutionary psychologist Professor Robin Dunbar from the University of Oxford, specifically found that evolution favors those who prefer to help out others who are at least as successful as themselves.
Lead author of the study Professor Roger Whitaker, from Cardiff University’s School of Computer Science and Informatics, said: “Our results suggest that the evolution of cooperation, which is key to a prosperous society, is intrinsically linked to the idea of social comparison — constantly sizing each other up and making decisions as to whether we want to help them or not.
“We’ve shown that over time, evolution favors strategies to help those who are at least as successful as themselves.”
In their study, the team used computer modelling to run hundreds of thousands of simulations, or ‘donation games’, to unravel the complexities of decision-making strategies for simplified humans and to establish why certain types of behaviour among individuals begin to strengthen over time.
In each round of the donation game, two simulated players were randomly selected from the population. The first player then made a decision on whether or not they wanted to donate to the other player, based on how they judged their reputation. If the player chose to donate, they incurred a cost and the receiver was given a benefit. Each player’s reputation was then updated in light of their action, and another game was initiated.
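The round structure described above can be sketched as a toy simulation. Everything specific in the code below (the population size, the cost and benefit values, and the exact reputation-update rule) is an illustrative assumption, not the study's actual model; only the overall loop, the random pairing, and the "donate to those at least as successful" comparison come from the description in the text.

```python
import random

def run_donation_games(n_players=50, n_rounds=10_000, cost=1, benefit=2, seed=0):
    """Toy version of the donation game: each round, two random players meet;
    the donor gives only if the recipient's reputation is at least their own
    (the social-comparison heuristic), paying a cost so the recipient gains
    a benefit. Reputations move up for donating and down for refusing."""
    rng = random.Random(seed)
    reputation = [0] * n_players
    payoff = [0.0] * n_players
    for _ in range(n_rounds):
        donor, recipient = rng.sample(range(n_players), 2)  # two distinct players
        if reputation[recipient] >= reputation[donor]:
            payoff[donor] -= cost
            payoff[recipient] += benefit
            reputation[donor] += 1   # donating raises the donor's standing
        else:
            reputation[donor] -= 1   # refusing lowers it
    return reputation, payoff
```

Because each completed donation adds `benefit - cost` to the group's total payoff, a run of this sketch shows in miniature why cooperation can pay off at the population level even though every individual act of giving is costly.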
Compared to other species, including our closest relatives, chimpanzees, the brain accounts for a much greater proportion of body weight in human beings. Humans also have the largest cerebral cortex of all mammals, relative to the size of their brains. This layer covers the cerebral hemispheres and is responsible for higher functions like memory, communication and thinking.
The research team proposes that making relative judgements of others when deciding whether to help them has been influential for human survival, and that the complexity of constantly assessing individuals has been a sufficiently difficult task to promote the expansion of the brain over many generations of human reproduction.
Professor Robin Dunbar, who previously proposed the social brain hypothesis, said: “According to the social brain hypothesis, the disproportionately large brain size in humans exists as a consequence of humans evolving in large and complex social groups.
“Our new research reinforces this hypothesis and offers an insight into the way cooperation and reward may have been instrumental in driving brain evolution, suggesting that the challenge of assessing others could have contributed to the large brain size in humans.”
According to the team, the research could also have future implications in engineering, specifically where intelligent and autonomous machines need to decide how generous they should be towards each other during one-off interactions.
“The models we use can be executed as short algorithms called heuristics, allowing devices to make quick decisions about their cooperative behaviour,” Professor Whitaker said.
“New autonomous technologies, such as distributed wireless networks or driverless cars, will need to self-manage their behaviour but at the same time cooperate with others in their environment.”
- Roger M. Whitaker, Gualtiero B. Colombo, Stuart M. Allen, Robin I. M. Dunbar. A Dominant Social Comparison Heuristic Unites Alternative Mechanisms for the Evolution of Indirect Reciprocity. Scientific Reports, 2016; 6: 31459 DOI: 10.1038/srep31459
An article reporting the results of our neuroscientific study of ayahuasca was published today in the scientific journal PLOS ONE. The fruit of just over four years of intense and dedicated work, the research was conducted at UNIFESP with funding from FAPESP, in cooperation with USP, UFABC, Louisiana State University (USA) and the University of Auckland (New Zealand). We also counted on the collaboration of the União do Vegetal, which supplied Hoasca for research purposes, and of 20 brave psychonauts experienced in the use of the Amazonian brew. Our volunteers agreed to take part in a procedure, in a setting and under a protocol, that differs greatly from traditional uses and was quite challenging. They drank ayahuasca in a university laboratory, without chanting or palo santo, without prayer, dance or bonfire, in the middle of the bustling metropolis of São Paulo. They also had to wear a cap that continuously recorded the electrical activity of their brains on a nearby notebook. Seated in a comfortable armchair, they gave small amounts of blood every 25 minutes. Although they lacked the essential guidance of the guias, curandeiros, mestres or maestros, whose work is as important as the brew itself, and although they took ayahuasca one person at a time, they were accompanied with care and attention by the scientific team, never left alone or unsupported, and always with buckets at hand… All of this in the service of bringing traditional knowledge into collaboration with scientific and technological knowledge.
Research of this kind is justified for many reasons, ranging from a deeper understanding of our physiological response to the chemical compounds present in ayahuasca, which provides crucial data on therapeutic potential and safety of use, to more sophisticated information about the relationship between brain and consciousness, the so-called “hard problem.”
With the results of this journey we have deepened and expanded our knowledge of the effects of the molecular components of the sacred brew, of how our bodies receive these molecules, and of the effects they help to trigger, especially in the brain. By limiting biomedical interventions to the strictly necessary and adopting an observational stance, allowing and encouraging the volunteers to spend most of the time with their eyes closed in an introspective state, we were able to reveal a fascinating picture of the effects of ayahuasca on the brain. The effect unfolds in two qualitatively distinct phases, and this biphasic profile helps to explain contradictions among similar studies previously conducted by other teams. With this, we have opened more doors for fascinating future investigations into the various states of consciousness that can be reached with the Amazonian brew.
About one hour after the ingestion of ayahuasca, alpha waves (8 to 12 cycles per second) decreased, especially in the temporo-parietal cortex, with a certain tendency to lateralize toward the left hemisphere. The second phase occurs roughly an hour later (that is, about two hours after ingestion): while the alpha waves were returning to a pattern similar to the one seen before ingestion, the very high-frequency gamma rhythms (30 to 100 cycles per second) intensified across nearly the entire cerebral cortex, including the frontal cortex. These electrical oscillations at distinct frequencies, which occur perpetually and simultaneously throughout the brain, result from the complex interplay of the activity of billions of brain cells. They are related to all of the brain’s functions, including psychological processes and states of consciousness. For example, during deep sleep a slow frequency of 1 to 4 cycles per second, called delta, predominates in the cerebral cortex, whereas during most dreams the theta frequency (4 to 8 cycles per second) predominates. By characterizing the main changes in these frequencies of neural oscillation, we have advanced the construction of a neuroscientific map of the state of consciousness triggered by the ingestion of ayahuasca.
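The frequency bands named above (alpha 8 to 12 Hz, gamma 30 to 100 Hz, delta 1 to 4 Hz, theta 4 to 8 Hz) lend themselves to a simple illustration of how band power is estimated from an EEG trace. The sketch below is not the study's analysis pipeline; it uses a plain FFT periodogram on a synthetic signal, and the sampling rate and signal composition are invented for the example.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Average spectral power of `signal` (1-D array sampled at `fs` Hz)
    in the band [f_lo, f_hi) Hz, from a plain FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].mean()

# Synthetic "EEG": a strong 10 Hz (alpha) plus a weak 40 Hz (gamma) component.
fs = 250                       # Hz; an assumed, typical EEG sampling rate
t = np.arange(0, 10, 1.0 / fs)
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

alpha = band_power(eeg, fs, 8, 12)    # alpha band
gamma = band_power(eeg, fs, 30, 100)  # gamma band
```

Tracking `alpha` and `gamma` over successive time windows is, in caricature, how a biphasic profile like the one described (alpha down first, gamma up later) would show itself in the data.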
There are various nuances of interpretation for these data (and many follow-up studies that could be done under each interpretation, to test specific hypotheses). My favorite, and the one we discuss in the article, is that the alpha rhythm results from inhibitory activity in the brain, while the gamma rhythm represents neural activity crucial for consciousness. When we close our eyes and experience a dark visual field, free of images, the alpha rhythm strengthens in the brain regions that receive input from the eyes. In other words, with the eyes closed, not only is the information arriving from the eyes absent, but the visual areas are actively inhibited by “higher centers” of the cortex, which are capable of modulating the activity of sensory areas. And we have the subjective experience of a dark world and an absence of vision. In the case of ayahuasca, we found a weakening of this inhibition in multisensory areas, that is, regions involved not only in vision but also in hearing, touch, taste, smell, and a wide range of bodily sensations. It therefore makes sense that this decrease in alpha is related to the very common experience of more sensations and more stimuli under the effect of ayahuasca compared with the ordinary state of consciousness, including the famous closed-eye visions. The accelerated gamma, in turn, is related to what neuroscience calls integration. While different areas of the brain are related to distinct subjective perceptions, such as the five senses mentioned above, our conscious experience is unified. This unification of neural activity across anatomically distinct areas takes place through fast oscillations in the gamma frequency, which allow the brain to temporarily assemble the pieces of a complex jigsaw puzzle of neural activity.
This increase in gamma may help to explain why, under ayahuasca, the perception of sounds and images, for example, seems to merge and form peculiar relationships not perceptible during ordinary consciousness, when the brain tends to organize the neural activity related to the five senses in a partially independent manner. This role of gamma in unifying or integrating information in the brain has long been known, at least since the pioneering work of the Chilean scientist Francisco Varela. It was also observed in two individuals after taking ayahuasca, in work by the anthropologist Luis Eduardo Luna and collaborators a decade ago. By confirming the data of Luna and collaborators with a new and more rigorous methodology and with more participants, and by detecting the combination of these effects with the reductions in alpha, we have opened important doors in the understanding not only of non-ordinary states of consciousness but of the neuroscientific theory of consciousness as a whole. One example is a recently proposed theory of psychedelic action which suggests that one of the main characteristics of the brain under psychedelics is an intensification of gamma. For Andrew Gallimore, based in Japan, who draws on the influential integrated information theory, or IIT, the most promising neuroscientific theory of consciousness, the expansion of consciousness with psychedelics is indeed possible within a neuroscientific perspective, and probably depends on the gamma rhythm. This expansion of consciousness includes the subjective perception of more content and greater intensity, including fusions between the senses and possibly the subjective experience of intensities and qualities not perceptible during ordinary consciousness, such as more vivid and brilliant colors and emotional states more intense than anything ever experienced outside the psychedelic state.
Gamma also plays a fundamental role in the theory of consciousness proposed by the mathematician Sir Roger Penrose and the anesthesiologist Stuart Hameroff. According to their theory, oscillations in the 40-cycles-per-second range would be important in enabling smaller and much faster reverberations in the microtubules, a network of fibers and filaments that runs through every cell of our body, including the brain.
In addition to characterizing the oscillations and cortical regions most important in the neural process underlying the modification of consciousness during ayahuasca, we collected blood periodically to quantify the active principles of ayahuasca and their metabolites. We found that during the first phase the concentrations of DMT and harmine were near their maximum, while the peaks of harmaline and tetrahydroharmine occur in the second phase. With a sophisticated and novel statistical analysis, developed especially for this study, we demonstrated that this biphasic effect in the brain is related to the blood concentration of several components of the tea. This expands the predominant scientific view, which focuses only on the famous DMT. According to that model, the role of the vine is merely to inhibit the digestion of DMT. But “ayahuasca” is one of the many names not only of the brew but of the jagube or mariri vine, catalogued in the scientific annals as Banisteriopsis caapi. This reveals that, for traditional peoples, the vine is the most important plant. Indeed, there are preparations of ayahuasca made only with the vine, without any other plant. In pharmacology, however, this picture was inverted, with emphasis placed on the psychoactivity of DMT alone, which comes not from the vine but from other plants frequently added during the preparation of the brew, such as rainha in Brazil and Peru (Psychotria viridis) or chagropanga in Colombia (Diplopterys cabrerana). But our analysis of 10 molecules (DMT, NMT and DMT-NO; harmine and harmol; harmaline and harmalol; THH and THH-OH; and the serotonergic metabolite IAA) revealed important associations between plasma levels of DMT, harmine, harmaline and tetrahydroharmine, as well as some metabolites such as DMT-NO, and the brain effects on alpha and gamma at distinct moments of the experience.
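The study's statistical analysis was bespoke and considerably more sophisticated, but the underlying idea of relating plasma levels to EEG effects can be illustrated with a plain Pearson correlation. The numbers below are invented for the example, not data from the paper.

```python
import numpy as np

# Hypothetical per-timepoint values for one volunteer: plasma DMT (ng/mL)
# and alpha-band power at the same six timepoints. Illustrative only.
dmt = np.array([0.0, 5.0, 12.0, 15.0, 9.0, 4.0])
alpha_power = np.array([1.0, 0.8, 0.5, 0.4, 0.7, 0.9])

# Pearson correlation: a strongly negative r would mean alpha power falls
# as DMT concentration rises, matching the first-phase alpha decrease.
r = np.corrcoef(dmt, alpha_power)[0, 1]
```

With ten molecules, two EEG bands, and many timepoints and volunteers, the real analysis must also handle multiple comparisons and within-subject structure, which is what makes a purpose-built method necessary.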
We have therefore revealed that the psychoactivity of ayahuasca cannot be fully explained by DMT concentrations alone, taking an important step toward bringing scientific knowledge back into contact with traditional knowledge.
We also discovered that the concentration of harmaline (and of harmaline alone) is correlated with the moment at which the volunteers vomited. In other words, harmaline plays a fundamental role not only in the brain, where it is related to the intensification of gamma waves, but also in the peripheral effects of ayahuasca, such as vomiting. This reinforces the idea that vomiting has important connections with the psychological experience, and that it may be more appropriate to call it the purge, a term that underscores the association between the physical and the psychological at this moment of the experience. These results on harmaline also lend new importance to the pioneering research of Claudio Naranjo, the Chilean therapist who was among the first to study ayahuasca from a medical-scientific point of view, in the 1960s. Naranjo’s proposal that harmaline was the main psychoactive component of ayahuasca was, however, almost entirely forgotten in favor of the focus on DMT from the 1980s onward. Another important factor against Naranjo’s proposal is that the concentrations of harmaline in ayahuasca are generally below the doses of harmaline that, on their own, trigger clear psychoactive effects, according to the subjective reports of the people who ingested harmaline in Naranjo’s studies. But the effect of harmaline combined with harmine and tetrahydroharmine, as occurs in ayahuasca, has never been tested. Our results therefore reinforce the idea that harmaline may also make important contributions to the psychoactive effect of ayahuasca when combined with the other beta-carbolines from the vine. Interestingly, in almost every case the purge occurred after the first phase, when DMT levels are close to the maximum they reach in the blood.
Because the rise of harmaline concentration in the blood is slower than that of DMT and harmine, vomiting interferes little with the effects of the first phase and with the concentrations of these two molecules, which helps to explain why even those who vomit early can have strong and profound experiences. Vomiting does, however, potentially interfere with the concentrations of tetrahydroharmine, the molecule whose levels rise most slowly and which can remain in circulation for a few days, depending on each individual’s metabolic capacity.
It is also important to note that the biphasic profile was observed with the ingestion of a single cup (albeit a large dose). We know, however, that in ritual use it is very common for participants to take more than one dose, at intervals of an hour or more. In these cases various combinations of effects may occur; for example, the second phase of a first dose (increased gamma) may coincide with the first phase of a second dose (decreased alpha). This could potentially generate brain states (and, by correlation, states of consciousness) not observed in the single-dose study. It helps to explain why many people report that the second dose is always a “box of surprises,” not merely an intensification or prolongation of the effects of the first. Depending on each person’s metabolic profile, the size of each dose, the proportion of these molecules in the brew, and the interval between doses, other states blending the two phases observed in the research may be reached. Add to this the environmental, psychological, motivational and spiritual influences, and we have a practice of consciousness exploration that does not fit into a simple, singular answer about what “the effect” of ayahuasca is.
From a neuroscientific point of view, these possible combinations are very intriguing, because relations between the alpha and gamma frequencies in the parietal and frontal cortex are involved in processes of psychological and emotional reappraisal. That is, when we engage in certain forms of introspection that result in the resignification of emotional events in our lives, these brain areas communicate through electrical oscillations in these two frequency bands. These same frequencies and brain areas are also involved in creative problem-solving. Through our research, then, neuroscience begins to converge with ancestral knowledge in reaffirming the potential of ayahuasca to nurture creativity and self-knowledge, facilitating forms of therapy focused on each individual’s potential to grow and develop consciously.
To learn more, watch my talk at the World Ayahuasca Conference in Ibiza last year (available with Portuguese and English subtitles), or the earlier one, “Ayahuasca and brain waves,” given in Brazil at the start of this project. Or, if you really want to dive deep, access the full scientific article free of charge.
Referência: Schenberg EE, Alexandre JFM, Filev R, Cravo AM, Sato JR, Muthukumaraswamy SD, et al. (2015) Acute Biphasic Effects of Ayahuasca. PLoS ONE 10(9): e0137202. doi:10.1371/journal.pone.0137202
photo credit: Courtesy of MIT Researchers
Given the fundamental importance of our DNA, it is logical to assume that damage to it is undesirable and spells bad news; after all, we know that cancer can be caused by mutations that arise from such injury. But a surprising new study is turning that idea on its head, with the discovery that brain cells actually break their own DNA to enable us to learn and form memories.
While that may sound counterintuitive, it turns out that the damage is necessary to allow the expression of a set of genes, called early-response genes, which regulate various processes that are critical in the creation of long-lasting memories. These lesions are rectified pronto by repair systems, but interestingly, it seems that this ability deteriorates during aging, leading to a buildup of damage that could ultimately result in the degeneration of our brain cells.
This idea is supported by earlier work conducted by the same group, headed by Li-Huei Tsai, at the Massachusetts Institute of Technology (MIT), which discovered that the brains of mice engineered to develop a model of Alzheimer’s disease possessed a significant number of DNA breaks, even before symptoms appeared. These lesions, which affected both strands of DNA, were observed in a region critical to learning and memory: the hippocampus.
To find out more about the possible consequences of such damage, the team grew neurons in a dish and exposed them to an agent that causes these so-called double strand breaks (DSBs), and then they monitored the gene expression levels. As described in Cell, they found that while the vast majority of genes that were affected by these breaks showed decreased expression, a small subset actually displayed increased expression levels. Importantly, these genes were involved in the regulation of neuronal activity, and included the early-response genes.
Since the early-response genes are known to be rapidly expressed following neuronal activity, the team was keen to find out whether normal neuronal stimulation could also be inducing DNA breaks. The scientists therefore applied a substance to the cells that is known to strengthen the tiny gap between neurons across which information flows – the synapse – mimicking what happens when an organism is exposed to a new experience.
“Sure enough, we found that the treatment very rapidly increased the expression of those early response genes, but it also caused DNA double strand breaks,” Tsai said in a statement.
So what is the connection between these breaks and the apparent boost in early-response gene expression? After using computers to scrutinize the DNA sequences neighboring these genes, the researchers found that they were enriched with a pattern targeted by an architectural protein that, upon binding, distorts the DNA strands by introducing kinks. By preventing crucial interactions between distant DNA regions, these bends therefore act as a barrier to gene expression. The breaks, however, resolve these constraints, allowing expression to ensue.
These findings could have important implications because earlier work has demonstrated that aging is associated with a decline in the expression of genes involved in the processes of learning and memory formation. It therefore seems likely that the DNA repair system deteriorates with age, but at this stage it is unclear how these changes occur, so the researchers plan to design further studies to find out more.
November 26, 2014
DZNE – German Center for Neurodegenerative Diseases
An international team of researchers has successfully determined the location where memories are generated, with a level of precision never achieved before. To this end, the scientists used a particularly accurate type of magnetic resonance imaging technology.
Magnetic resonance imaging provides insights into the brain. Credit: DZNE/Guido Hennes
The human brain continuously collects information. However, we have only basic knowledge of how new experiences are converted into lasting memories. Now, an international team led by researchers of the University of Magdeburg and the German Center for Neurodegenerative Diseases (DZNE) has successfully determined the location where memories are generated, with a level of precision never achieved before. The team was able to pinpoint this location down to specific circuits of the human brain. To this end, the scientists used a particularly accurate type of magnetic resonance imaging (MRI) technology. The researchers hope that the results and method of their study might assist in acquiring a better understanding of the effects Alzheimer’s disease has on the brain.
The findings are reported in Nature Communications.
For the recall of experiences and facts, various parts of the brain have to work together. Much of this interdependence is still undetermined; however, it is known that memories are stored primarily in the cerebral cortex and that the control center that generates memory content, and also retrieves it, is located in the brain’s interior: in the hippocampus and in the adjacent entorhinal cortex.
“It has been known for quite some time that these areas of the brain participate in the generation of memories. This is where information is collected and processed. Our study has refined our view of this situation,” explains Professor Emrah Düzel, site speaker of the DZNE in Magdeburg and director of the Institute of Cognitive Neurology and Dementia Research at the University of Magdeburg. “We have been able to localize the generation of human memories to certain neuronal layers within the hippocampus and the entorhinal cortex. We were able to determine which neuronal layer was active. This revealed whether information was directed into the hippocampus or whether it traveled from the hippocampus into the cerebral cortex. Previously used MRI techniques were not precise enough to capture this directional information. Hence, this is the first time we have been able to show where in the brain the doorway to memory is located.”
For this study, the scientists examined the brains of persons who had volunteered to participate in a memory test. The researchers used a special type of magnetic resonance imaging technology called “7 Tesla ultra-high field MRI.” This enabled them to determine the activity of individual brain regions with unprecedented accuracy.
A Precision Method for Research on Alzheimer’s
“This measuring technique allows us to track the flow of information inside the brain and examine the areas that are involved in the processing of memories in great detail,” comments Düzel. “As a result, we hope to gain new insights into how memory impairments arise that are typical for Alzheimer’s. Concerning dementia, is the information still intact at the gateway to memory? Do troubles arise later on, when memories are processed? We hope to answer such questions.”
The above story is based on materials provided by DZNE – German Center for Neurodegenerative Diseases. Note: Materials may be edited for content and length.
- Anne Maass, Hartmut Schütze, Oliver Speck, Andrew Yonelinas, Claus Tempelmann, Hans-Jochen Heinze, David Berron, Arturo Cardenas-Blanco, Kay H. Brodersen, Klaas Enno Stephan, Emrah Düzel. Laminar activity in the hippocampus and entorhinal cortex related to novelty and episodic encoding. Nature Communications, 2014; 5: 5547 DOI: 10.1038/ncomms6547
Date: November 6, 2014
Source: Ecole Polytechnique Fédérale de Lausanne
Ghosts exist only in the mind, and scientists know just where to find them, an EPFL study suggests. Patients suffering from neurological or psychiatric conditions have often reported feeling a strange “presence.” Now, EPFL researchers in Switzerland have succeeded in recreating this so-called ghost illusion in the laboratory.
On June 29, 1970, mountaineer Reinhold Messner had an unusual experience. Recounting his descent from the summit of Nanga Parbat with his brother, freezing, exhausted, and oxygen-starved in the vast barren landscape, he recalls, “Suddenly there was a third climber with us… a little to my right, a few steps behind me, just outside my field of vision.”
It was invisible, but there. Stories like this have been reported countless times by mountaineers, explorers, and survivors, as well as by people who have been widowed, but also by patients suffering from neurological or psychiatric disorders. They commonly describe a presence that is felt but unseen, akin to a guardian angel or a demon. Inexplicable, illusory, and persistent.
Olaf Blanke’s research team at EPFL has now unveiled this ghost. The team was able to recreate the illusion of a similar presence in the laboratory and provide a simple explanation. They showed that the “feeling of a presence” actually results from an alteration of sensorimotor brain signals, which are involved in generating self-awareness by integrating information from our movements and our body’s position in space.
In their experiment, Blanke’s team interfered with the sensorimotor input of participants in such a way that their brains no longer identified such signals as belonging to their own body, but instead interpreted them as those of someone else. The work is published in Current Biology.
Generating a “Ghost”
The researchers first analyzed the brains of 12 patients with neurological disorders — mostly epilepsy — who have experienced this kind of “apparition.” MRI analysis of the patients’ brains revealed interference with three cortical regions: the insular cortex, parietal-frontal cortex, and the temporo-parietal cortex. These three areas are involved in self-awareness, movement, and the sense of position in space (proprioception). Together, they contribute to multisensory signal processing, which is important for the perception of one’s own body.
The scientists then carried out a “dissonance” experiment in which blindfolded participants performed movements with their hand in front of their body. Behind them, a robotic device reproduced their movements, touching them on the back in real time. The result was a kind of spatial discrepancy, but because of the synchronized movement of the robot, the participant’s brain was able to adapt and correct for it.
Next, the neuroscientists introduced a temporal delay between the participant’s movement and the robot’s touch. Under these asynchronous conditions, distorting temporal and spatial perception, the researchers were able to recreate the ghost illusion.
An “Unbearable” Experience
The participants were unaware of the experiment’s purpose. After about three minutes of the delayed touching, the researchers asked them what they felt. Instinctively, several subjects reported a strong “feeling of a presence,” even counting up to four “ghosts” where none existed. “For some, the feeling was even so strong that they asked to stop the experiment,” said Giulio Rognini, who led the study.
“Our experiment induced the sensation of a foreign presence in the laboratory for the first time. It shows that this feeling can arise under normal conditions, simply through conflicting sensorimotor signals,” explained Blanke. “The robotic system mimics the sensations of some patients with mental disorders or of healthy individuals under extreme circumstances. This confirms that it is caused by an altered perception of their own bodies in the brain.”
A Deeper Understanding of Schizophrenia
In addition to explaining a phenomenon that is common to many cultures, the aim of this research is to better understand some of the symptoms of patients suffering from schizophrenia. Such patients often suffer from hallucinations or delusions associated with the presence of an alien entity whose voice they may hear or whose actions they may feel. Many scientists attribute these perceptions to a malfunction of brain circuits that integrate sensory information in relation to our body’s movements.
“Our brain possesses several representations of our body in space,” added Giulio Rognini. “Under normal conditions, it is able to assemble a unified perception of the self from these representations. But when the system malfunctions because of disease — or, in this case, a robot — this can sometimes create a second representation of one’s own body, which is no longer perceived as ‘me’ but as someone else, a ‘presence’.”
It is unlikely that these findings will stop anyone from believing in ghosts. However, for scientists, it’s still more evidence that they only exist in our minds.
Watch the video: http://youtu.be/GnusbO8QjbE
- Olaf Blanke, Polona Pozeg, Masayuki Hara, Lukas Heydrich, Andrea Serino, Akio Yamamoto, Toshiro Higuchi, Roy Salomon, Margitta Seeck, Theodor Landis, Shahar Arzy, Bruno Herbelin, Hannes Bleuler, Giulio Rognini. Neurological and Robot-Controlled Induction of an Apparition. Current Biology, 2014; DOI:10.1016/j.cub.2014.09.049
Date: November 5, 2014
Source: University of Washington
Sometimes, words just complicate things. What if our brains could communicate directly with each other, bypassing the need for language?
University of Washington researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.
At the time of the first experiment in August 2013, the UW team was the first to demonstrate two human brains communicating in this way. The researchers then tested their brain-to-brain interface in a more comprehensive study, published Nov. 5 in the journal PLOS ONE.
“The new study brings our brain-to-brain interfacing paradigm from an initial demonstration to something that is closer to a deliverable technology,” said co-author Andrea Stocco, a research assistant professor of psychology and a researcher at UW’s Institute for Learning & Brain Sciences. “Now we have replicated our methods and know that they can work reliably with walk-in participants.”
Collaborator Rajesh Rao, a UW associate professor of computer science and engineering, is the lead author on this work.
The research team combined two kinds of noninvasive instruments and fine-tuned software to connect two human brains in real time. The process is fairly straightforward. One participant is hooked to an electroencephalography machine that reads brain activity and sends electrical pulses via the Web to the second participant, who is wearing a swim cap with a transcranial magnetic stimulation coil placed near the part of the brain that controls hand movements.
Using this setup, one person can send a command to move the hand of the other by simply thinking about that hand movement.
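The setup described above amounts to a simple detect-and-trigger loop: read the sender's EEG, decide whether motor imagery is present, and fire the receiver's TMS coil if so. Everything in this sketch (function names, the band-power threshold, the simulated readings) is hypothetical; the article does not detail the UW team's actual software.

```python
# Illustrative sketch of the brain-to-brain relay described above.
# All names, thresholds, and signal values are hypothetical.

def detect_motor_imagery(eeg_power, threshold=0.7):
    """Return True when band power over the motor cortex exceeds a threshold,
    a simplified stand-in for detecting imagined hand movement."""
    return eeg_power > threshold

def trigger_tms(send):
    """Stand-in for delivering a TMS pulse to the receiver's motor cortex."""
    return "pulse sent" if send else "no pulse"

def relay(eeg_samples):
    """Sender side: scan EEG readings; fire the receiver's coil on imagery."""
    return [trigger_tms(detect_motor_imagery(p)) for p in eeg_samples]

# Simulated band-power readings: rest, rest, imagined hand movement, rest
print(relay([0.2, 0.4, 0.9, 0.3]))
# → ['no pulse', 'no pulse', 'pulse sent', 'no pulse']
```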
The UW study involved three pairs of participants. Each pair included a sender and a receiver with different roles and constraints. They sat in separate buildings on campus about a half mile apart and were unable to interact with each other in any way — except for the link between their brains.
Each sender was in front of a computer game in which he or she had to defend a city by firing a cannon and intercepting rockets launched by a pirate ship. But because the senders could not physically interact with the game, the only way they could defend the city was by thinking about moving their hand to fire the cannon.
Across campus, each receiver sat wearing headphones in a dark room — with no ability to see the computer game — with the right hand positioned over the only touchpad that could actually fire the cannon. If the brain-to-brain interface was successful, the receiver’s hand would twitch, pressing the touchpad and firing the cannon that was displayed on the sender’s computer screen across campus.
Researchers found that accuracy varied among the pairs, ranging from 25 to 83 percent. Misses were mostly due to a sender failing to accurately execute the thought to send the “fire” command. The researchers were also able to quantify the exact amount of information that was transferred between the two brains.
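The article does not say how the team quantified the information transferred, but a common back-of-the-envelope estimate (used here purely for illustration, not taken from the paper) treats each trial as a binary symmetric channel and derives bits per trial from the hit rate:

```python
import math

def binary_entropy(p):
    """Entropy of a biased coin with heads-probability p, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bits_per_trial(accuracy):
    """Information per trial if each 'fire' decision is a binary symmetric
    channel whose crossover probability is 1 - accuracy."""
    return 1.0 - binary_entropy(accuracy)

for acc in (0.25, 0.50, 0.83):
    print(f"accuracy {acc:.0%}: {bits_per_trial(acc):.2f} bits/trial")
# 50% accuracy carries 0 bits; 83% accuracy carries about 0.34 bits per trial.
```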
Another research team from the company Starlab in Barcelona, Spain, recently published results in the same journal showing direct communication between two human brains, but that study only tested one sender brain instead of different pairs of study participants and was conducted offline instead of in real time over the Web.
Now, with a new $1 million grant from the W.M. Keck Foundation, the UW research team is taking the work a step further in an attempt to decode and transmit more complex brain processes.
With the new funding, the research team will expand the types of information that can be transferred from brain to brain, including more complex visual and psychological phenomena such as concepts, thoughts and rules.
They’re also exploring how to influence brain waves that correspond with alertness or sleepiness. Eventually, for example, the brain of a sleepy airplane pilot dozing off at the controls could stimulate the copilot’s brain to become more alert.
The project could also eventually lead to “brain tutoring,” in which knowledge is transferred directly from the brain of a teacher to a student.
“Imagine someone who’s a brilliant scientist but not a brilliant teacher. Complex knowledge is hard to explain — we’re limited by language,” said co-author Chantel Prat, a faculty member at the Institute for Learning & Brain Sciences and a UW assistant professor of psychology.
Other UW co-authors are Joseph Wu of computer science and engineering; Devapratim Sarma and Tiffany Youngquist of bioengineering; and Matthew Bryan, formerly of the UW.
The research published in PLOS ONE was initially funded by the U.S. Army Research Office and the UW, with additional support from the Keck Foundation.
- Rajesh P. N. Rao, Andrea Stocco, Matthew Bryan, Devapratim Sarma, Tiffany M. Youngquist, Joseph Wu, Chantel S. Prat. A Direct Brain-to-Brain Interface in Humans. PLoS ONE, 2014; 9 (11): e111332 DOI: 10.1371/journal.pone.0111332
Date: October 17, 2014
Source: Bielefeld University
We assume that we see the world around us in sharp detail. In fact, our eyes can process only a fraction of our surroundings precisely. In a series of experiments, psychologists at Bielefeld University have been investigating how the brain fools us into believing that we see in sharp detail. The results have been published in the Journal of Experimental Psychology: General. The central finding is that our nervous system uses past visual experiences to predict how blurred objects would look in sharp detail.
“In our study we are dealing with the question of why we believe that we see the world uniformly detailed,” says Dr. Arvid Herwig from the Neuro-Cognitive Psychology research group of the Faculty of Psychology and Sports Science. The group is also affiliated to the Cluster of Excellence Cognitive Interaction Technology (CITEC) of Bielefeld University and is led by Professor Dr. Werner X. Schneider.
Only the fovea, the central area of the retina, can process objects precisely. We should therefore only be able to see a small area of our environment in sharp detail. This area is about the size of a thumbnail at the end of an outstretched arm. In contrast, all visual impressions that fall outside the fovea on the retina become progressively coarser. Nevertheless, we commonly have the impression that we see large parts of our environment in sharp detail.
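The thumbnail comparison can be checked with simple trigonometry. The sizes below (a roughly 2 cm thumbnail held at about 57 cm) are assumptions for illustration, not figures from the study:

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle subtended by an object of the given size at the given distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# Assumed dimensions: ~2 cm thumbnail at ~57 cm (a typical arm's length).
angle = visual_angle_deg(2.0, 57.0)
print(f"{angle:.1f} degrees")  # ≈ 2.0°, matching the fovea's roughly 2° field
```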
Herwig and Schneider have been getting to the bottom of this phenomenon with a series of experiments. Their approach presumes that people learn through countless eye movements over a lifetime to connect the coarse impressions of objects outside the fovea to the detailed visual impressions after the eye has moved to the object of interest. For example, the coarse visual impression of a football (blurred image of a football) is connected to the detailed visual impression after the eye has moved. If a person sees a football out of the corner of her eye, her brain will compare this current blurred picture with memorised images of blurred objects. If the brain finds an image that fits, it will replace the coarse image with a precise image from memory. This blurred visual impression is replaced before the eye moves. The person thus thinks that she already sees the ball clearly, although this is not the case.
The psychologists have been using eye-tracking experiments to test their approach. In this technique, eye movements are measured precisely with a special camera that records 1,000 images per second. In their experiments, the scientists recorded the fast, ballistic eye movements (saccades) of participants. Though most of the participants did not realise it, certain objects were changed during these eye movements. The aim was for the participants to learn new connections between visual stimuli from inside and outside the fovea, in other words between detailed and coarse impressions. Afterwards, the participants were asked to judge visual characteristics of objects outside the foveal area. The results showed that the connection between a coarse and a detailed visual impression was learned after just a few minutes: the coarse visual impressions came to resemble the newly learnt detailed impressions.
“The experiments show that our perception depends in large measure on stored visual experiences in our memory,” says Arvid Herwig. According to Herwig and Schneider, these experiences serve to predict the effect of future actions (“What would the world look like after a further eye movement?”). In other words: “We do not see the actual world, but our predictions.”
- Arvid Herwig, Werner X. Schneider. Predicting object features across saccades: Evidence from object recognition and visual search. Journal of Experimental Psychology: General, 2014; 143 (5): 1903 DOI: 10.1037/a0036781
September 1, 2014
Neuroscientists test the theory that your body shapes your ideas
Chronicle Review illustration by Scott Seymour
The player kicked the ball.
The patient kicked the habit.
The villain kicked the bucket.
The verbs are the same.
The syntax is identical.
Does the brain notice, or care,
that the first is literal, the second
metaphorical, the third idiomatic?
It sounds like a question that only a linguist could love. But neuroscientists have been trying to answer it using exotic brain-scanning technologies. Their findings have varied wildly, in some cases contradicting one another. If they make progress, the payoff will be big. Their findings will enrich a theory that aims to explain how wet masses of neurons can understand anything at all. And they may drive a stake into the widespread assumption that computers will inevitably become conscious in a humanlike way.
The hypothesis driving their work is that metaphor is central to language. Metaphor used to be thought of as merely poetic ornamentation, aesthetically pretty but otherwise irrelevant. “Love is a rose, but you better not pick it,” sang Neil Young in 1977, riffing on the timeworn comparison between a sexual partner and a pollinating perennial. For centuries, metaphor was just the place where poets went to show off.
But in their 1980 book, Metaphors We Live By, the linguist George Lakoff (at the University of California at Berkeley) and the philosopher Mark Johnson (now at the University of Oregon) revolutionized linguistics by showing that metaphor is actually a fundamental constituent of language. For example, they showed that in the seemingly literal statement “He’s out of sight,” the visual field is metaphorized as a container that holds things. The visual field isn’t really a container, of course; one simply sees objects or not. But the container metaphor is so ubiquitous that it wasn’t even recognized as a metaphor until Lakoff and Johnson pointed it out.
From such examples they argued that ordinary language is saturated with metaphors. Our eyes point to where we’re going, so we tend to speak of future time as being “ahead” of us. When things increase, they tend to go up relative to us, so we tend to speak of stocks “rising” instead of getting more expensive. “Our ordinary conceptual system is fundamentally metaphorical in nature,” they wrote.
What’s emerging from these studies isn’t just a theory of language or of metaphor. It’s a nascent theory of consciousness.
Metaphors do differ across languages, but that doesn’t affect the theory. For example, in Aymara, spoken in Bolivia and Chile, speakers refer to past experiences as being in front of them, on the theory that past events are “visible” and future ones are not. However, the difference between behind and ahead is relatively unimportant compared with the central fact that space is being used as a metaphor for time. Lakoff argues that it is impossible—not just difficult, but impossible—for humans to talk about time and many other fundamental aspects of life without using metaphors to do it.
Lakoff and Johnson’s program is as anti-Platonic as it’s possible to get. It undermines the argument that human minds can reveal transcendent truths about reality in transparent language. They argue instead that human cognition is embodied—that human concepts are shaped by the physical features of human brains and bodies. “Our physiology provides the concepts for our philosophy,” Lakoff wrote in his introduction to Benjamin Bergen’s 2012 book, Louder Than Words: The New Science of How the Mind Makes Meaning. Marianna Bolognesi, a linguist at the International Center for Intercultural Exchange, in Siena, Italy, puts it this way: “The classical view of cognition is that language is an independent system made with abstract symbols that work independently from our bodies. This view has been challenged by the embodied account of cognition which states that language is tightly connected to our experience. Our bodily experience.”
Modern brain-scanning technologies make it possible to test such claims empirically. “That would make a connection between the biology of our bodies on the one hand, and thinking and meaning on the other hand,” says Gerard Steen, a professor of linguistics at VU University Amsterdam. Neuroscientists have been stuffing volunteers into fMRI scanners and having them read sentences that are literal, metaphorical, and idiomatic.
Neuroscientists agree on what happens with literal sentences like “The player kicked the ball.” The brain reacts as if it were carrying out the described actions. This is called “simulation.” Take the sentence “Harry picked up the glass.” “If you can’t imagine picking up a glass or seeing someone picking up a glass,” Lakoff wrote in a paper with Vittorio Gallese, a professor of human physiology at the University of Parma, in Italy, “then you can’t understand that sentence.” Lakoff argues that the brain understands sentences not just by analyzing syntax and looking up neural dictionaries, but also by igniting its memories of kicking and picking up.
But what about metaphorical sentences like “The patient kicked the habit”? An addiction can’t literally be struck with a foot. Does the brain simulate the action of kicking anyway? Or does it somehow automatically substitute a more literal verb, such as “stopped”? This is where functional MRI can help, because it can watch to see if the brain’s motor cortex lights up in areas related to the leg and foot.
The evidence says it does. “When you read action-related metaphors,” says Valentina Cuccio, a philosophy postdoc at the University of Palermo, in Italy, “you have activation of the motor area of the brain.” In a 2011 paper in the Journal of Cognitive Neuroscience, Rutvik Desai, an associate professor of psychology at the University of South Carolina, and his colleagues presented fMRI evidence that brains do in fact simulate metaphorical sentences that use action verbs. When reading both literal and metaphorical sentences, their subjects’ brains activated areas associated with control of action. “The understanding of sensory-motor metaphors is not abstracted away from their sensory-motor origins,” the researchers concluded.
Textural metaphors, too, appear to be simulated. That is, the brain processes “She’s had a rough time” by simulating the sensation of touching something rough. Krish Sathian, a professor of neurology, rehabilitation medicine, and psychology at Emory University, says, “For textural metaphor, you would predict on the Lakoff and Johnson account that it would recruit activity- and texture-selective somatosensory cortex, and that indeed is exactly what we found.”
But idioms are a major sticking point. Idioms are usually thought of as dead metaphors, that is, as metaphors that are so familiar that they have become clichés. What does the brain do with “The villain kicked the bucket” (“The villain died”)? What about “The students toed the line” (“The students conformed to the rules”)? Does the brain simulate the verb phrases, or does it treat them as frozen blocks of abstract language? And if it simulates them, what actions does it imagine? If the brain understands language by simulating it, then it should do so even when sentences are not literal.
The findings so far have been contradictory. Lisa Aziz-Zadeh, of the University of Southern California, and her colleagues reported in 2006 that idioms such as “biting off more than you can chew” did not activate the motor cortex. Ana Raposo, then at the University of Cambridge, and her colleagues reported the same in 2009. On the other hand, Véronique Boulenger, of the Laboratoire Dynamique du Langage, in Lyon, France, reported in the same year that they did, at least for leg and arm verbs.
In 2013, Desai and his colleagues tried to settle the problem of idioms. They first hypothesized that the inconsistent results come from differences of methodology. “Imaging studies of embodiment in figurative language have not compared idioms and metaphors,” they wrote in a report. “Some have mixed idioms and metaphors together, and in some cases, ‘idiom’ is used to refer to familiar metaphors.” Lera Boroditsky, an associate professor of psychology at the University of California at San Diego, agrees. “The field is new. The methods need to stabilize,” she says. “There are many different kinds of figurative language, and they may be importantly different from one another.”
Not only that, the nitty-gritty differences of procedure may be important. “All of these studies are carried out with different kinds of linguistic stimuli with different procedures,” Cuccio says. “So, for example, sometimes you have an experiment in which the person can read the full sentence on the screen. There are other experiments in which participants read the sentence just word by word, and this makes a difference.”
To try to clear things up, Desai and his colleagues presented subjects inside fMRI machines with an assorted set of metaphors and idioms. They concluded that in a sense, everyone was right. The more idiomatic the metaphor was, the less the motor system got involved: “When metaphors are very highly conventionalized, as is the case for idioms, engagement of sensory-motor systems is minimized or very brief.”
But George Lakoff thinks the problem of idioms can’t be settled so easily. The people who do fMRI studies are fine neuroscientists but not linguists, he says. “They don’t even know what the problem is most of the time. The people doing the experiments don’t know the linguistics.”
That is to say, Lakoff explains, their papers assume that every brain processes a given idiom the same way. Not true. Take “kick the bucket.” Lakoff offers a theory of what it means using a scene from Young Frankenstein. “Mel Brooks is there and they’ve got the patient dying,” he says. “The bucket is a slop bucket at the edge of the bed, and as he dies, his foot goes out in rigor mortis and the slop bucket goes over and they all hold their nose. OK. But what’s interesting about this is that the bucket starts upright and it goes down. It winds up empty. This is a metaphor—that you’re full of life, and life is a fluid. You kick the bucket, and it goes over.”
That’s a useful explanation of a rather obscure idiom. But it turns out that when linguists ask people what they think the metaphor means, they get different answers. “You say, ‘Do you have a mental image? Where is the bucket before it’s kicked?’ ” Lakoff says. “Some people say it’s upright. Some people say upside down. Some people say you’re standing on it. Some people have nothing. You know! There isn’t a systematic connection across people for this. And if you’re averaging across subjects, you’re probably not going to get anything.”
Similarly, Lakoff says, when linguists ask people to write down the idiom “toe the line,” half of them write “tow the line.” That yields a different mental simulation. And different mental simulations will activate different areas of the motor cortex—in this case, scrunching feet up to a line versus using arms to tow something heavy. Therefore, fMRI results could show different parts of different subjects’ motor cortexes lighting up to process “toe the line.” In that case, averaging subjects together would be misleading.
Furthermore, Lakoff questions whether functional MRI can really see what’s going on with language at the neural level. “How many neurons are there in one pixel or one voxel?” he says. “About 125,000. They’re one point in the picture.” MRI lacks the necessary temporal resolution, too. “What is the time course of that fMRI? It could be between one and five seconds. What is the time course of the firing of the neurons? A thousand times faster. So basically, you don’t know what’s going on inside of that voxel.” What it comes down to is that language is a wretchedly complex thing and our tools aren’t yet up to the job.
Nonetheless, the work supports a radically new conception of how a bunch of pulsing cells can understand anything at all. In a 2012 paper, Lakoff offered an account of how metaphors arise out of the physiology of neural firing, based on the work of a student of his, Srini Narayanan, who is now a faculty member at Berkeley. As children grow up, they are repeatedly exposed to basic experiences such as temperature and affection simultaneously when, for example, they are cuddled. The neural structures that record temperature and affection are repeatedly co-activated, leading to an increasingly strong neural linkage between them.
However, since the brain is always computing temperature but not always computing affection, the relationship between those neural structures is asymmetric. When they form a linkage, Lakoff says, “the one that spikes first and most regularly is going to get strengthened in its direction, and the other one is going to get weakened.” Lakoff thinks the asymmetry gives rise to a metaphor: Affection is Warmth. Because of the neural asymmetry, it doesn’t go the other way around: Warmth is not Affection. Feeling warm during a 100-degree day, for example, does not make one feel loved. The metaphor originates from the asymmetry of the neural firing. Lakoff is now working on a book on the neural theory of metaphor.
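Lakoff’s firing-asymmetry account can be illustrated with a toy Hebbian update. This is purely a sketch of the idea, not Narayanan’s actual model: the “temperature” unit fires on every event, the “affection” unit only during cuddling, and lone firings weaken the reverse link.

```python
# Toy illustration (not Narayanan's actual model) of the asymmetry Lakoff
# describes: both links strengthen during co-activation, but because
# temperature is computed constantly while affection is not, the
# temperature→affection link outgrows the reverse one.

def train(events, rate=0.1):
    w_temp_to_aff = 0.0  # link from the frequently firing unit
    w_aff_to_temp = 0.0  # link from the rarely firing unit
    for temp, aff in events:
        if temp and aff:   # co-activation strengthens both links
            w_temp_to_aff += rate
            w_aff_to_temp += rate
        elif temp:         # lone temperature firing weakens the reverse link
            w_aff_to_temp = max(0.0, w_aff_to_temp - rate / 2)
    return w_temp_to_aff, w_aff_to_temp

# Temperature fires on every event; affection only during cuddling.
events = [(1, 1), (1, 0), (1, 0), (1, 1), (1, 0), (1, 0), (1, 1), (1, 0)]
forward, backward = train(events)
print(forward > backward)  # the metaphor runs one way: Affection is Warmth
```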
If cognition is embodied, that raises problems for artificial intelligence. Since computers don’t have bodies, let alone sensations, what are the implications of these findings for their becoming conscious—that is, achieving strong AI? Lakoff is uncompromising: “It kills it.” Of Ray Kurzweil’s singularity thesis, he says, “I don’t believe it for a second.” Computers can run models of neural processes, he says, but absent bodily experience, those models will never actually be conscious.
On the other hand, roboticists such as Rodney Brooks, an emeritus professor at the Massachusetts Institute of Technology, have suggested that computers could be provided with bodies. For example, they could be given control of robots stuffed with sensors and actuators. Brooks pondered Lakoff’s ideas in his 2002 book, Flesh and Machines, and supposed, “For anything to develop the same sorts of conceptual understanding of the world as we do, it will have to develop the same sorts of metaphors, rooted in a body, that we humans do.”
But Lera Boroditsky wonders if giving computers humanlike bodies would only reproduce human limitations. “If you’re not bound by limitations of memory, if you’re not bound by limitations of physical presence, I think you could build a very different kind of intelligence system,” she says. “I don’t know why we have to replicate our physical limitations in other systems.”
What’s emerging from these studies isn’t just a theory of language or of metaphor. It’s a nascent theory of consciousness. Any algorithmic system faces the problem of bootstrapping itself from computing to knowing, from bit-shuffling to caring. Igniting previously stored memories of bodily experiences seems to be one way of getting there. And so may be the ability to create asymmetric neural linkages that say this is like (but not identical to) that. In an age of brain scanning as well as poetry, that’s where metaphor gets you.
Michael Chorost is the author of Rebuilt: How Becoming Part Computer Made Me More Human (Houghton Mifflin, 2005) and World Wide Mind: The Coming Integration of Humanity, Machines, and the Internet (Free Press, 2011).
Date: August 27, 2014
Summary: It’s common knowledge that teenage boys seem predisposed to risky behaviors. Now, a series of new studies is shedding light on specific brain mechanisms that help to explain what might be going on inside juvenile male brains.
It’s common knowledge that teenage boys seem predisposed to risky behaviors. Now, a series of new studies is shedding light on specific brain mechanisms that help to explain what might be going on inside juvenile male brains.
Florida State University College of Medicine neuroscientist Pradeep Bhide brought together some of the world’s foremost researchers in a quest to explain why teenagers — boys, in particular — often behave erratically.
The result is a series of 19 studies that approached the question from multiple scientific domains, including psychology, neurochemistry, brain imaging, clinical neuroscience and neurobiology. The studies are published in a special volume of Developmental Neuroscience, “Teenage Brains: Think Different?”
“Psychologists, psychiatrists, educators, neuroscientists, criminal justice professionals and parents are engaged in a daily struggle to understand and solve the enigma of teenage risky behaviors,” Bhide said. “Such behaviors impact not only the teenagers who obviously put themselves at serious and lasting risk but also families and societies in general.
“The emotional and economic burdens of such behaviors are quite huge. The research described in this book offers clues to what may cause such maladaptive behaviors and how one may be able to devise methods of countering, avoiding or modifying these behaviors.”
Among the findings published in the book that provide new insights into the inner workings of a teenage boy’s brain:
• Unlike children or adults, teenage boys show enhanced activity in the part of the brain that controls emotions when confronted with a threat. Magnetic resonance scanner readings in one study revealed that the level of activity in the limbic brain of adolescent males reacting to threat, even when they’ve been told not to respond to it, was strikingly different from that in adult men.
• Using brain activity measurements, another team of researchers found that teenage boys were mostly immune to the threat of punishment but hypersensitive to the possibility of large gains from gambling. The results question the effectiveness of punishment as a deterrent for risky or deviant behavior in adolescent boys.
• Another study demonstrated that a molecule known to be vital in developing fear of dangerous situations is less active in adolescent male brains. These findings point towards neurochemical differences between teenage and adult brains, which may underlie the complex behaviors exhibited by teenagers.
“The new studies illustrate the neurobiological basis of some of the more unusual but well-known behaviors exhibited by our teenagers,” Bhide said. “Stress, hormonal changes, complexities of psycho-social environment and peer-pressure all contribute to the challenges of assimilation faced by teenagers.
“These studies attempt to isolate, examine and understand some of these potential causes of a teenager’s complex conundrum. The research sheds light on how we may be able to better interact with teenagers at home or outside the home, how to design educational strategies and how best to treat or modify a teenager’s maladaptive behavior.”
Bhide conceived and edited “Teenage Brains: Think Different?” His co-editors were Barry Kasofsky and B.J. Casey, both of Weill Medical College at Cornell University. The book was published by Karger Medical and Scientific Publisher of Basel, Switzerland. More information on the book can be found at: http://www.karger.com/Book/Home/261996
The table of contents to the special journal volume can be found at: http://www.karger.com/Journal/Issue/261977
Date: August 12, 2014
Summary: Author Marcel Proust makes a compelling case that our identities and decisions are shaped in profound and ongoing ways by our memories. This truth is powerfully reflected in mental illnesses, like posttraumatic stress disorder (PTSD) and addictions. In PTSD, memories of traumas intrude vividly upon consciousness, causing distress, driving people to avoid reminders of their traumas, and increasing risk for addiction and suicide. In addiction, memories of drug use influence reactions to drug-related cues and motivate compulsive drug use.
In the novel À la recherche du temps perdu (translated into English as Remembrance of Things Past), Marcel Proust makes a compelling case that our identities and decisions are shaped in profound and ongoing ways by our memories.
This truth is powerfully reflected in mental illnesses, like posttraumatic stress disorder (PTSD) and addictions. In PTSD, memories of traumas intrude vividly upon consciousness, causing distress, driving people to avoid reminders of their traumas, and increasing risk for addiction and suicide. In addiction, memories of drug use influence reactions to drug-related cues and motivate compulsive drug use.
What if one could change these dysfunctional memories? Although we all like to believe that our memories are reliable and permanent, it turns out that memories may indeed be plastic.
The process for modifying memories, depicted in the graphic, is called memory reconsolidation. After memories are formed and stored, subsequent retrieval may make them unstable. In other words, when a memory is activated, it also becomes open to revision and reconsolidation in a new form.
“Memory reconsolidation is probably among the most exciting phenomena in cognitive neuroscience today. It assumes that memories may be modified once they are retrieved, which may give us the great opportunity to change seemingly robust, unwanted memories,” explains Dr. Lars Schwabe of Ruhr-University Bochum in Germany. He and his colleagues have authored a review paper on the topic, published in the current issue of Biological Psychiatry.
The idea of memory reconsolidation was initially discovered and demonstrated in rodents.
The first evidence of reconsolidation in humans was reported in a study in 2003, and the findings have since continued to accumulate. The current report summarizes the most recent findings on memory reconsolidation in humans and poses additional questions that must be answered by future studies.
“Reconsolidation appears to be a fundamental process underlying cognitive and behavioral therapies. Identifying its roles and mechanisms is an important step forward to fully harnessing the reconsolidation process in psychotherapy,” said Dr. John Krystal, Editor of Biological Psychiatry.
The translation of the animal data to humans is a vital step for the potential application of memory reconsolidation in the context of mental disorders. Memory reconsolidation could open the door to novel treatment approaches for disorders such as PTSD or drug addiction.
- Lars Schwabe, Karim Nader, Jens C. Pruessner. Reconsolidation of Human Memory: Brain Mechanisms and Clinical Relevance. Biological Psychiatry, 2014; 76 (4): 274 DOI: 10.1016/j.biopsych.2014.03.008
Victor-M Amela, Ima Sanchís, Lluís Amiguet
“Plants have neurons; they are intelligent beings”
29/12/2010 – 02:03
Photo: KIM MANRESA
Thanks to our friends at Redes, Eduard Punset’s programme, tireless seekers of any scientific knowledge that widens the limits of what we know about who we are and what role we play in this soup of universes, we discovered Mancuso, who explains to us that plants, viewed in time-lapse, behave as if they had a brain: they have neurons, they communicate through chemical signals, they make decisions, they are altruistic and manipulative. Five years ago it was impossible to speak of plant behaviour; today we can begin to speak of their intelligence. We may soon begin to speak of their feelings. Mancuso will be on Redes on the 2nd. Don’t miss it.
Plants are intelligent organisms, but they move and make decisions over a longer timescale than ours.
Today we know that they have families and relatives and that they recognize their kin. They behave completely differently depending on whether their neighbors are relatives or strangers. Relatives do not compete: through their roots, they divide the territory equitably.
Can a tree deliberately send sap to a small plant?
Yes. Plants need light to live, and many years may pass before a seedling reaches the light; in the meantime, it is nourished by trees of its own species.
Parental care of this kind is otherwise seen only in highly evolved animals, and it is remarkable to find it in plants.
So they communicate.
Yes. In a forest, all the plants are in underground communication through their roots. They also produce volatile molecules that warn distant plants about what is happening.
When a plant is attacked by a pathogen, it immediately produces volatile molecules that can travel for kilometers, warning all the others to prepare their defenses.
They produce chemical compounds that make them indigestible, and they can be very aggressive. Ten years ago in Botswana, 200,000 antelope were introduced into a large park and began feeding heavily on the acacias. Within a few weeks many had died, and after six months more than 10,000 were dead, and nobody could work out why. Today we know it was the plants.
Yes: the acacias raised the concentration of tannins in their leaves to the point that they became a poison.
Are plants also empathetic toward other beings?
That is hard to say, but one thing is certain: plants can manipulate animals. During pollination they produce nectar and other substances to attract insects. Orchids produce flowers that closely resemble the females of certain insects, which are deceived into visiting them. Some even argue that human beings are manipulated by plants.
All the drugs humans use (coffee, tobacco, opium, marijuana…) come from plants. But why would a plant produce a substance that makes humans dependent? Because that way we propagate it. Plants use humans as transport. There is research on this.
If plants disappeared from the planet tomorrow, all life would be extinct within a month, because there would be no food and no oxygen. All the oxygen we breathe comes from them. But if we disappeared, nothing would happen. We depend on plants, but plants do not depend on us. And whoever is dependent is in the inferior position, no?
Plants are far more sensitive. When something changes in the environment, they cannot run away, so they must be able to sense the slightest change well in advance in order to adapt.
And how do they perceive?
Each root tip can continuously and simultaneously sense at least fifteen different physical and chemical parameters (temperature, light, gravity, the presence of nutrients, oxygen).
That is the great discovery, and it is yours.
At each root tip there are cells similar to our neurons, and their function is the same: to transmit signals by electrical impulses, just like our brain. A single plant may have millions of root tips, each with its own small community of cells, and they work together as a network, like the internet.
You have found the plant’s brain.
Yes, its computing zone. The question is how to measure its intelligence. But of one thing we are sure: plants are highly intelligent, with a great capacity for solving problems and adapting. Today, 99.6% of everything alive on the planet is plant matter.
… And we know only 10% of it.
And in that fraction we have all our food and medicine. What might there be in the remaining 90%? Every day, hundreds of unknown plant species go extinct. Perhaps some held the key to an important cure; we will never know. We must protect plants for our own survival.
What moves you about plants?
Some behaviors are very moving. All plants sleep, wake and seek the light with their leaves; their activity resembles that of animals. I filmed sunflowers as they grew, and you can see very clearly how they play with one another.
Yes, they show the typical play behavior seen in so many animals. We took one of those small plants and raised it alone. As an adult it had behavioral problems: it struggled to turn in search of the sun; it had missed the learning that comes through play. Seeing these things is moving.
Date: July 29, 2014
Source: University of Illinois at Urbana-Champaign
Summary: By studying the injuries and aptitudes of Vietnam War veterans who suffered penetrating head wounds during the war, scientists are tackling — and beginning to answer — longstanding questions about how the brain works. The researchers found that brain regions that contribute to optimal social functioning also are vital to general intelligence and to emotional intelligence. This finding bolsters the view that general intelligence emerges from the emotional and social context of one’s life.
The findings are reported in the journal Brain.
“We are trying to understand the nature of general intelligence and to what extent our intellectual abilities are grounded in social cognitive abilities,” said Aron Barbey, a University of Illinois professor of neuroscience, of psychology, and of speech and hearing science. Barbey (bar-BAY), an affiliate of the Beckman Institute and of the Institute for Genomic Biology at the U. of I., led the new study with an international team of collaborators.
Studies in social psychology indicate that human intellectual functions originate from the social context of everyday life, Barbey said.
“We depend at an early stage of our development on social relationships — those who love us care for us when we would otherwise be helpless,” he said.
Social interdependence continues into adulthood and remains important throughout the lifespan, Barbey said.
“Our friends and family tell us when we could make bad mistakes and sometimes rescue us when we do,” he said. “And so the idea is that the ability to establish social relationships and to navigate the social world is not secondary to a more general cognitive capacity for intellectual function, but that it may be the other way around. Intelligence may originate from the central role of relationships in human life and therefore may be tied to social and emotional capacities.”
The study involved 144 Vietnam veterans injured by shrapnel or bullets that penetrated the skull, damaging distinct brain tissues while leaving neighboring tissues intact. Using CT scans, the scientists painstakingly mapped the affected brain regions of each participant, then pooled the data to build a collective map of the brain.
The researchers used a battery of carefully designed tests to assess participants’ intellectual, emotional and social capabilities. They then looked for patterns that tied damage to specific brain regions to deficits in the participants’ ability to navigate the intellectual, emotional or social realms. Social problem solving in this analysis primarily involved conflict resolution with friends, family and peers at work.
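The core of this pooled analysis (relating the presence of damage in a region to scores on a test) can be sketched in a few lines. Everything below is invented for illustration; the actual study used CT-derived lesion maps, a full test battery, and proper statistics rather than this toy group comparison:

```python
# Toy illustration of pooled lesion mapping: compare test scores of
# participants with vs. without damage to a given region.
# Region names and scores are invented, not from the study.
from statistics import mean

# Each participant: set of damaged region labels + a social-problem-solving score.
participants = [
    ({"L_parietal"}, 62), ({"L_temporal"}, 58), ({"R_frontal"}, 71),
    ({"L_frontal", "L_parietal"}, 55), ({"R_frontal", "L_temporal"}, 60),
    (set(), 83), (set(), 79), ({"R_parietal"}, 80),
]

def lesion_effect(region):
    """Mean score drop for participants with damage to `region` vs. without."""
    damaged = [score for regions, score in participants if region in regions]
    intact = [score for regions, score in participants if region not in regions]
    return mean(intact) - mean(damaged)  # positive -> damage predicts a deficit

for region in ("L_parietal", "L_temporal", "R_parietal"):
    print(region, round(lesion_effect(region), 1))
```

In this made-up sample, left parietal and left temporal damage show a deficit while right parietal damage does not, mirroring the left-lateralized pattern the study reports.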
As in their earlier studies of general intelligence and emotional intelligence, the researchers found that regions of the frontal cortex (at the front of the brain), the parietal cortex (further back near the top of the head) and the temporal lobes (on the sides of the head behind the ears) are all implicated in social problem solving. The regions that contributed to social functioning in the parietal and temporal lobes were located only in the brain’s left hemisphere, while both left and right frontal lobes were involved.
The brain networks found to be important to social adeptness were not identical to those that contribute to general intelligence or emotional intelligence, but there was significant overlap, Barbey said.
“The evidence suggests that there’s an integrated information-processing architecture in the brain, that social problem solving depends upon mechanisms that are engaged for general intelligence and emotional intelligence,” he said. “This is consistent with the idea that intelligence depends to a large extent on social and emotional abilities, and we should think about intelligence in an integrated fashion rather than making a clear distinction between cognition and emotion and social processing. This makes sense because our lives are fundamentally social — we direct most of our efforts to understanding others and resolving social conflict. And our study suggests that the architecture of intelligence in the brain may be fundamentally social, too.”
- A. K. Barbey, R. Colom, E. J. Paul, A. Chau, J. Solomon, J. H. Grafman. Lesion mapping of social problem solving. Brain, 2014; DOI: 10.1093/brain/awu207
Date: May 29, 2014
Source: KU Leuven
Summary: When electrical pulses are applied to the ventral tegmental area of their brain, macaques presented with two images change their preference from one image to the other. The study is the first to confirm a causal link between activity in the ventral tegmental area and choice behavior in primates.
When electrical pulses are applied to the ventral tegmental area of their brain, macaques presented with two images change their preference from one image to the other. The study by researchers Wim Vanduffel and John Arsenault (KU Leuven and Massachusetts General Hospital) is the first to confirm a causal link between activity in the ventral tegmental area and choice behaviour in primates.
The ventral tegmental area is located in the midbrain and helps regulate learning and reinforcement in the brain’s reward system. It produces dopamine, a neurotransmitter that plays an important role in positive feelings, such as receiving a reward. “In this way, this small area of the brain provides learning signals,” explains Professor Vanduffel. “If a reward is larger or smaller than expected, behavior is reinforced or discouraged accordingly.”
This effect can be artificially induced: “In one experiment, we allowed macaques to choose multiple times between two images — a star or a ball, for example. This told us which of the two visual stimuli they tended to naturally prefer. In a second experiment, we stimulated the ventral tegmental area with mild electrical currents whenever they chose the initially nonpreferred image. This quickly changed their preference. We were also able to manipulate their altered preference back to the original favorite.”
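The logic of the experiment resembles a simple reinforcement-learning loop in which stimulation acts as an extra reward that updates the value of whichever image was just chosen. The sketch below is a hypothetical model, not the paper’s protocol; the image names, learning rate, exploration rate, and reward magnitude are all assumptions:

```python
# Toy reinforcement-learning model of the preference-reversal experiment.
# Stimulation is modeled as a reward that updates the chosen image's value.
import random

random.seed(1)
values = {"star": 1.0, "ball": 0.2}   # the animal initially prefers "star"
ALPHA, EPSILON = 0.3, 0.2             # learning rate, exploration rate (assumed)

def choose():
    """Mostly pick the higher-valued image; occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(list(values))
    return max(values, key=values.get)

def run_block(stimulated_image, n_trials=200):
    """Deliver 'stimulation' (extra reward) whenever the target image is chosen."""
    for _ in range(n_trials):
        c = choose()
        reward = 1.5 if c == stimulated_image else 0.0
        values[c] += ALPHA * (reward - values[c])  # delta-rule value update

run_block("ball")                      # stimulate on the nonpreferred image
flipped = max(values, key=values.get)  # preference has shifted to "ball"
run_block("star")                      # now stimulate on "star" instead
restored = max(values, key=values.get) # preference flips back
print(flipped, restored)
```

Unrewarded choices decay in value while stimulated ones grow toward the reward, so the preference reverses and can be reversed again, as in the experiment.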
The study will be published online in the journal Current Biology on 16 June. “In scans we found that electrically stimulating this tiny brain area activated the brain’s entire reward system, just as it does spontaneously when a reward is received. This has important implications for research into disorders relating to the brain’s reward network, such as addiction or learning disabilities.”
Could this method be used in the future to manipulate our choices? “Theoretically, yes. But the ventral tegmental area is very deep in the brain. At this point, stimulating it can only be done invasively, by surgically placing electrodes — just as is currently done for deep brain stimulation to treat Parkinson’s or depression. Once non-invasive methods — light or ultrasound, for example — can be applied with a sufficiently high level of precision, they could potentially be used for correcting defects in the reward system, such as addiction and learning disabilities.”
- John T. Arsenault, Samy Rima, Heiko Stemmann, Wim Vanduffel. Role of the Primate Ventral Tegmental Area in Reinforcement and Motivation. Current Biology, 2014; DOI: 10.1016/j.cub.2014.04.044
Credit: Felipe Dana/Associated Press
All animals do the same thing to the food they eat — they break it down to extract fuel and building blocks for growing new tissue. But the metabolism of one species may be profoundly different from another’s. A sloth will generate just enough energy to hang from a tree, for example, while some birds can convert their food into a flight from Alaska to New Zealand.
For decades, scientists have wondered how our metabolism compares to that of other species. It’s been a hard question to tackle, because metabolism is complicated — something that anyone who’s stared at a textbook diagram knows all too well. As we break down our food, we produce thousands of small molecules, some of which we flush out of our bodies and some of which we depend on for our survival.
An international team of researchers has now carried out a detailed comparison of metabolism in humans and other mammals. As they report in the journal PLOS Biology, both our brains and our muscles turn out to be unusual, metabolically speaking. And it’s possible that their odd metabolism was part of what made us uniquely human.
When scientists first began to study metabolism, they could measure it only in simple ways. They might estimate how many calories an animal burned in a day, for example. If they were feeling particularly ambitious, they might try to estimate how many calories each organ in the animal’s body burned.
Those tactics were enough to reveal some striking things about metabolism. Compared with other animals, we humans have ravenous brains. Twenty percent of the calories we take in each day are consumed by our neurons as they send signals to one another.
Ten years ago, Philipp Khaitovich of the Max Planck Institute of Evolutionary Anthropology and his colleagues began to study human metabolism in a more detailed way. They started making a catalog of the many molecules produced as we break down food.
“We wanted to get as much data as possible, just to see what happened,” said Dr. Khaitovich.
To do so, the scientists obtained brain, muscle and kidney tissues from organ donors. They then extracted metabolic compounds like glucose from the samples and measured their concentrations. All told, they measured the levels of over 10,000 different molecules.
The scientists found that each tissue had a different metabolic fingerprint, with high levels of some molecules and low levels of others.
These distinctive fingerprints came as little surprise, since each tissue has a different job to carry out. Muscles need to burn energy to generate mechanical forces, for example, while kidney cells need to pull waste out of the bloodstream.
The scientists then carried out the same experiment on chimpanzees, monkeys and mice. They found that the metabolic fingerprint for a given tissue was usually very similar in closely related species. The same tissues in more distantly related species had fingerprints with less in common.
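One way to picture this comparison: treat each tissue-species pair as a vector of metabolite concentrations and measure similarity with a Pearson correlation. The numbers below are made up (the study measured over 10,000 molecules), but they illustrate why closely related species yield more similar fingerprints:

```python
# Sketch: a tissue's metabolic fingerprint as a vector of metabolite
# concentrations; cross-species similarity as a Pearson correlation.
# The concentrations below are invented for illustration.
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length concentration vectors."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical levels of five metabolites in one tissue, per species:
brain = {
    "human": [5.1, 2.3, 8.0, 1.2, 0.7],
    "chimp": [4.9, 2.1, 7.6, 1.3, 0.8],  # close relative: similar profile
    "mouse": [3.0, 4.0, 5.1, 2.8, 1.9],  # distant relative: less similar
}

close = pearson(brain["human"], brain["chimp"])
far = pearson(brain["human"], brain["mouse"])
print(round(close, 2), round(far, 2))
```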
But the scientists found two exceptions to this pattern.
The first exception turned up in the front of the brain. This region, called the prefrontal cortex, is important for figuring out how to reach long-term goals. Dr. Khaitovich’s team found that the way the human prefrontal cortex uses energy is quite distinct from that of other species; other tissues had comparable metabolic fingerprints across species, and even in other regions of the brain the scientists didn’t find such a drastic difference.
This result fit in nicely with findings by other scientists that the human prefrontal cortex expanded greatly over the past six million years of our evolution. Its expansion accounts for much of the extra demand our brains make for calories.
The evolution of our enormous prefrontal cortex also had a profound effect on our species. We use it for many of the tasks that only humans can perform, such as reflecting on ourselves, thinking about what others are thinking and planning for the future.
But the prefrontal cortex was not the only part of the human body that has experienced a great deal of metabolic evolution. Dr. Khaitovich and his colleagues found that the metabolic fingerprint of muscle is even more distinct in humans.
“Muscle was really off the charts,” Dr. Khaitovich said. “We didn’t expect to see that at all.”
It was possible that the peculiar metabolism in human muscle was just the result of our modern lifestyle — not an evolutionary shift in our species. Our high-calorie diet might change the way muscle cells generated energy. It was also possible that a sedentary lifestyle made muscles weaker, creating a smaller metabolic demand.
To test that possibility, Dr. Khaitovich’s team compared the strength of humans with that of our closest relatives. They found that chimpanzees and monkeys are far stronger, for their weight, than even university basketball players or professional climbers.
The scientists also tested their findings by putting monkeys on a couch-potato regime for a month to see if their muscles acquired a human metabolic fingerprint.
They barely changed.
Dr. Khaitovich suspects that the metabolic fingerprint of our muscles represents a genuine evolutionary change in our species.
Karen Isler and Carel van Schaik of the University of Zurich have argued that the gradual changes in human brains and muscles were intimately linked. To fuel a big brain, our ancestors had to sacrifice other tissues, including muscles.
Dr. Isler said that the new research fit their hypothesis nicely. “It looks quite convincing,” she said.
Daniel E. Lieberman, a professor of human evolutionary biology at Harvard, said he found Dr. Khaitovich’s study “very cool,” but didn’t think the results meant that brain growth came at the cost of strength. Instead, he suggested, our ancestors evolved muscles adapted for a new activity: long-distance walking and running.
“We have traded strength for endurance,” he said. And that endurance allowed our ancestors to gather more food, which could then fuel bigger brains.
“It may be that the human brain is bigger not in spite of brawn but rather because of brawn, albeit a very different kind,” he said.
JC e-mail 4892, February 11, 2014
The discovery may have important implications for understanding psychiatric disorders such as schizophrenia and autism
Scientists at King’s College London have identified, for the first time, a gene linking the thickness of the brain’s grey matter to intelligence. The study, published Tuesday in the journal Molecular Psychiatry, may help explain the biological mechanisms behind certain forms of intellectual impairment.
Grey matter was already known to play an important role in memory, attention, thought, language and consciousness. Earlier studies had also shown that the thickness of the cerebral cortex is related to intellectual ability, but no gene had been identified.
An international team of scientists, led by King’s College, analyzed DNA samples and MRI scans from 1,583 healthy 14-year-old adolescents, who also took a series of tests of verbal and non-verbal intelligence.
“We wanted to discover how structural differences in the brain relate to differences in intellectual ability. We identified a genetic variation related to synaptic plasticity, to how neurons communicate,” explains Sylvane Desrivières of the Institute of Psychiatry at King’s College London, the study’s lead author. “This may help us understand what happens at the neuronal level in certain forms of intellectual impairment, where the neurons’ ability to communicate is somehow compromised.”
She adds that it is important to note that intelligence is influenced by many genetic and environmental factors: “The gene we identified explains only a small proportion of the differences in intellectual ability and is by no means ‘the intelligence gene’.”
The researchers examined 54,000 possible variants involved in brain development. On average, adolescents carrying a particular genetic variant had a thinner cortex in the left cerebral hemisphere, particularly in the frontal and temporal lobes, and performed less well on tests of intellectual ability. The genetic variation affects the expression of the NPTN gene, which encodes a protein active at neuronal synapses and therefore affects how brain cells communicate.
To confirm their findings, the researchers studied the NPTN gene in mouse cells and in human brain cells. They found that the NPTN gene had different levels of activity in the left and right hemispheres of the brain, which may make the left hemisphere more sensitive to the effects of NPTN mutations. The results suggest that some differences in intellectual ability may result from reduced NPTN function in particular regions of the left hemisphere.
The genetic variation identified in this study accounts for only an estimated 0.5% of the total variation in intelligence. Even so, the findings may have important implications for understanding the biological mechanisms underlying several psychiatric disorders, such as schizophrenia and autism, in which cognitive ability is a core feature of the condition.
January 28, 2014
Source: Cell Press
Summary: New research suggests a surprising degree of similarity in the organization of regions of the brain that control language and complex thought processes in humans and monkeys. The study also revealed some key differences. The findings may provide valuable insights into the evolutionary processes that established our ties to other primates but also made us distinctly human.
[Figure: parcellation of the right ventrolateral frontal cortex. (A) The right vlFC ROI: bounded dorsally by the inferior frontal sulcus and, more posteriorly, PMv; anteriorly by the paracingulate sulcus; and ventrally by the lateral orbital sulcus and the border between the dorsal insula and the opercular cortex. (B) The 12-cluster parcellation: PMv subdivided into ventral and dorsal regions (6v and 6r); the IFJ area and areas 44d and 44v in the lateral pars opercularis; area 45 in the pars triangularis and adjacent operculum; IFS in the inferior frontal sulcus and dorsal pars triangularis; area 12/47 in the pars orbitalis; area Op in the deep frontal operculum; area 46; and lateral and medial frontal pole regions (FPl and FPm). Credit: Neuron, Neubert et al.]
New research suggests a surprising degree of similarity in the organization of regions of the brain that control language and complex thought processes in humans and monkeys. The study, publishing online January 28 in the Cell Press journal Neuron, also revealed some key differences. The findings may provide valuable insights into the evolutionary processes that established our ties to other primates but also made us distinctly human.
By using non-invasive MRI techniques in 25 people and 25 macaques, Dr. Neubert and his team compared ventrolateral frontal cortex connectivity and architecture in humans and monkeys. The investigators were surprised to find many similarities in the connectivity of these regions. This suggests that some uniquely human cognitive traits may rely on an evolutionarily conserved neural apparatus that initially supported different functions. Additional research may reveal how slight changes in connectivity accompanied or facilitated the development of distinctly human abilities.
The researchers also noted some key differences between monkeys and humans. For example, ventrolateral frontal cortex circuits in the two species differ in the way that they interact with brain areas involved with hearing.
“This could explain why monkeys perform very poorly in some auditory tasks and might suggest that we humans use auditory information in a different way when making decisions and selecting actions,” says Dr. Neubert.
A region in the human ventrolateral frontal cortex — called the lateral frontal pole — does not seem to have an equivalent area in the monkey. This area is involved with strategic planning, decision-making, and multi-tasking abilities.
“This might relate to humans being particularly proficient in tasks that require strategic planning and decision making as well as ‘multi-tasking’,” says Dr. Neubert.
Interestingly, some of the ventrolateral frontal cortex regions that were similar in humans and monkeys are thought to play roles in psychiatric disorders such as attention deficit hyperactivity disorder, obsessive compulsive disorder, and substance abuse. A better understanding of the networks that are altered in these disorders might lead to therapeutic insights.
- Franz-Xaver Neubert et al. Comparison of human ventral frontal cortex areas for cognitive control and language with areas in monkey frontal cortex. Neuron, Jan 28, 2014
Jan. 16, 2014 — A thickening of the brain cortex associated with regular meditation or other spiritual or religious practice could be the reason those activities guard against depression — particularly in people who are predisposed to the disease, according to new research led by Lisa Miller, professor and director of Clinical Psychology and director of the Spirituality Mind Body Institute at Teachers College, Columbia University.
The study, published online by JAMA Psychiatry, involved 103 adults at either high or low risk of depression, based on family history. The subjects were asked how highly they valued religion or spirituality. Brain MRIs showed thicker cortices in subjects who placed a high importance on religion or spirituality than those who did not. The relatively thicker cortex was found in precisely the same regions of the brain that had otherwise shown thinning in people at high risk for depression.
Although more research is necessary, the results suggest that spirituality or religion may protect against major depression by thickening the brain cortex and counteracting the cortical thinning that would normally occur with the disease. The study, published on Dec. 25, 2013, is the first investigation of the neural correlates of the protective effect of spirituality and religion against depression.
“The new study links this extremely large protective benefit of spirituality or religion to previous studies which identified large expanses of cortical thinning in specific regions of the brain in adult offspring of families at high risk for major depression,” Miller said.
Previous studies by Miller and the team published in the American Journal of Psychiatry (2012) showed a 90 percent decrease in major depression in adults who said they highly valued spirituality or religiosity and whose parents suffered from the disease. While regular attendance at church was not necessary, a strong personal importance placed on spirituality or religion was most protective against major depression in people who were at high familial risk.
- Lisa Miller, Ravi Bansal, Priya Wickramaratne, Xuejun Hao, Craig E. Tenke, Myrna M. Weissman, Bradley S. Peterson. Neuroanatomical Correlates of Religiosity and Spirituality. JAMA Psychiatry, 2013; DOI: 10.1001/jamapsychiatry.2013.3067
Suzana Herculano is the first Brazilian to speak at the prestigious TED conference
She will discuss the brain of 86 billion neurons (not 100 billion, as was long believed) and how humans came to differ from the other primates
Published: 24/05/13, 7:00 a.m.; updated: 24/05/13, 11:41 a.m.
Suzana Herculano-Houzel, professor at UFRJ’s Institute of Biomedical Sciences. Photo: Guito Moreto
A neuroscientist at UFRJ, Suzana Herculano-Houzel is the first Brazilian to take part in TED (Technology, Entertainment, Design), the prestigious conference series that brings together leading figures from the most diverse fields of knowledge to debate new ideas. Suzana will speak on June 12, under the theme “Listen to nature,” highlighting her distinctive discoveries about the human brain.
What will you talk about at TED?
I am going to talk about the human brain and show that it is not a special brain, an exception to the rule. Our research revealed that it is simply a large primate brain. What is remarkable is that we came to have an enormous brain, of a size no other primate has, not even the largest, because we invented the cooking of food and, with that, came to have an enormous number of neurons.
Was cooking fundamental to our becoming human?
Yes; we got around the energy limit imposed by a raw diet. And the neat, ironic implication is that, in doing so, we freed up time for the brain to devote to other things (beyond finding food), such as creating agriculture, civilizations, the refrigerator and electricity. To the point where obtaining cooked food and excess calories became so easy that we now have the opposite problem: we are eating too much. Hence the return to the salad.
If we fed orangutans and gorillas cooked food, would they become as intelligent as we are?
Yes, because they would no longer be limited by the small number of calories they can get from raw food. Of course, we made a cultural innovation when we invented cooking; there is a difference between giving an animal cooked food and the animal developing the culture of cooking. Even so, if they had access to cooked food at every meal, in 200,000 or 300,000 years they would have larger brains. With the diet they have today, a larger brain is not possible given their large bodies. It is one or the other.
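The energetic trade-off she describes can be put into rough numbers. Every figure below except the 86-billion neuron count is a hypothetical placeholder, chosen only to show the shape of the argument: cooking raises the caloric intake rate, which cuts the feeding time a large-brained body requires:

```python
# Back-of-envelope sketch of the raw-vs-cooked energy constraint.
# All parameters except NEURONS_BILLIONS are illustrative assumptions.
BODY_KCAL = 1500          # assumed daily cost of the body excluding neurons
KCAL_PER_BILLION = 6      # assumed daily cost per billion neurons
NEURONS_BILLIONS = 86     # Herculano-Houzel's count for the human brain

def feeding_hours(kcal_per_hour):
    """Hours of feeding per day needed to cover the whole energy budget."""
    total_kcal = BODY_KCAL + KCAL_PER_BILLION * NEURONS_BILLIONS
    return total_kcal / kcal_per_hour

raw, cooked = feeding_hours(200), feeding_hours(400)  # cooking ups intake rate
print(round(raw, 1), "hours raw vs", round(cooked, 1), "hours cooked")
```

Under these invented rates, a raw diet demands roughly twice the daily feeding time of a cooked one, which is the constraint the interview says cooking relaxed.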
We are not special at all. We are simply a primate that got around the energy rules and managed to pack more neurons into its brain in a way no other animal has. That is why we study the other animals, and not the other way around.
Do myths about the brain persist, such as the 100 billion neurons that your studies showed are actually 86 billion?
Yes, they live on, even within neuroscience. Our work is already widely cited as the reference, and things are changing. The best part is that this is thanks to homegrown Brazilian science, which I find wonderful. But it is a process, and many people still insist on the old number.
The new U.S. manual for the diagnosis of mental disorders (a reference for the whole world, including the WHO) was launched last week amid controversy. Experts feel it describes so many disorders that there is practically no room left for normality. What is your opinion?
I think this discussion is much needed, precisely so that we can recognize what counts as variation around the normal and what are the genuinely problematic, pathological extremes. So the debate is important and welcome at any time. But I also think a lot of wrong and sensationalist information is circulating, above all about attention deficit. The statistics vary greatly from country to country, sometimes because the number of doctors who recognize a child as having the disorder varies. And I think there is still an enormous problem, an enormous fear, of the stereotype of mental illness. To this day there is a crazy resistance to seeing a psychiatrist. On the contrary, I think we gain a great deal by recognizing that disorders exist and that they can be treated.
Is there still a lot of stigma?
The biggest problem today is that it is shameful to have a disorder of the brain; note that I am not even talking about mental disorders. Needing medication for the brain is seen as terrible. Yet we have so much to gain by recognizing problems and making diagnoses. The brain is so complex, with so much that can go wrong, that the astonishing thing is that it does not go wrong in everyone all the time. So it seems normal to me that a good part of the population should have some problem; it does not surprise me in the least. And once the problem is recognized and diagnosed, there is the option of treating it. If a treatment exists, why not use it?
O presidente dos EUA, Barack Obama, recentemente anunciou uma inédita iniciativa de reunir pesquisadores dos mais diversos centros para estudar exclusivamente o cérebro. O que podemos esperar de tamanho esforço científico?
Não só o cérebro, mas o cérebro em atividade. Obama quer ir além do que já tinham feito — estudar a função de diferentes áreas — e entender como se conectam, como falam umas com as outras, ter ideia desse funcionamento integrado, dessa interação. Essa é uma das grandes lacunas do conhecimento: entender como as várias partes do cérebro funcionam ao mesmo tempo. Não sabemos como o cérebro funciona como um todo; é uma das fronteiras finais do conhecimento.
Não sabemos como o cérebro funciona?
Como um todo, não. Sabemos o que as partes fazem, mas não sabemos como se dá a conversa entre elas. Não sabemos a origem da consciência, da sensação do “eu estou aqui agora”. Que áreas são fundamentais para isso? É esse tipo de conhecimento que se está buscando, do cérebro funcionando ao vivo e em cores, em tempo real.
O objetivo não é estudar doenças, então?
Não, o grande objetivo é estudar consciência, memória; entender como o cérebro reúne emoção e lógica, coisas que são fruto da ação coordenada de várias partes. Claro que desse conhecimento todo podem surgir implicações para o Alzheimer e outras doenças. Mas, na verdade, falar em doenças é uma roupagem usada pela divulgação do programa para o público assimilar melhor. Existe esse preconceito de que a ciência só vale quando resolve uma doença.
Leia mais sobre esse assunto em http://oglobo.globo.com/ciencia/a-mulher-que-encolheu-cerebro-humano-8482825#ixzz2UFWUvdYn © 1996 – 2013. Todos direitos reservados a Infoglobo Comunicação e Participações S.A. Este material não pode ser publicado, transmitido por broadcast, reescrito ou redistribuído sem autorização.
May 22, 2013 — Overexpression of a gene associated with schizophrenia causes classic symptoms of the disorder that are reversed when gene expression returns to normal, scientists report.
They genetically engineered mice so they could turn up levels of neuregulin-1 to mimic the high levels found in some patients, then return levels to normal, said Dr. Lin Mei, Director of the Institute of Molecular Medicine and Genetics at the Medical College of Georgia at Georgia Regents University.
They found that when neuregulin-1 levels were elevated, the mice were hyperactive, couldn't remember what they had just learned and couldn't ignore distracting background or white noise. When the researchers returned neuregulin-1 levels to normal in adult mice, the schizophrenia-like symptoms went away, said Mei, corresponding author of the study in the journal Neuron.
While schizophrenia is generally considered a developmental disease that surfaces in early adulthood, Mei and his colleagues found that even when they kept neuregulin-1 levels normal until adulthood, mice still exhibited schizophrenia-like symptoms once higher levels were expressed. Without intervention, they developed symptoms at about the same age humans do.
“This shows that high levels of neuregulin-1 are a cause of schizophrenia, at least in mice, because when you turn them down, the behavior deficit disappears,” Mei said. “Our data certainly suggests that we can treat this cause by bringing down excessive levels of neuregulin-1 or blocking its pathologic effects.”
Schizophrenia is a spectrum disorder with multiple causes — most of which are unknown — that tends to run in families, and high neuregulin-1 levels have been found in only a minority of patients. To reduce neuregulin-1 levels in those individuals likely would require development of small molecules that could, for example, block the gene’s signaling pathways, Mei said. Current therapies treat symptoms and generally focus on reducing the activity of two neurotransmitters since the bottom line is excessive communication between neurons.
The good news is that neuregulin-1 is relatively easy to measure, since blood levels appear to correlate well with brain levels. To genetically alter the mice, the researchers put a copy of the neuregulin-1 gene into mouse DNA and, to make sure they could control its expression, placed in front of it a binding element for a protein responsive to doxycycline, a stable analogue of the antibiotic tetracycline, which is infamous for staining the teeth of fetuses and babies.
The mice are born expressing high levels of neuregulin-1, and giving the antibiotic restores normal levels. “If you don’t feed the mice tetracycline, the neuregulin-1 levels are always high,” said Mei, noting that endogenous levels of the gene are not affected. High levels of neuregulin-1 appear to activate the kinase LIMK1, impairing release of the neurotransmitter glutamate and disrupting normal behavior. The LIMK1 connection identifies another target for intervention, Mei said.
Neuregulin-1 is essential for heart development as well as formation of myelin, the insulation around nerves. It’s among about 100 schizophrenia-associated genes identified through genome-wide association studies and has remained a consistent susceptibility gene using numerous other methods for examining the genetics of the disease. It’s also implicated in cancer.
Mei and his colleagues were the first to show neuregulin-1’s positive impact in the developed brain, reporting in Neuron in 2007 that it and its receptor ErbB4 help maintain a healthy balance of excitation and inhibition by releasing GABA, a major inhibitory neurotransmitter, at the site of inhibitory synapses, the communication paths between neurons. Years before, they showed the genes were also present at excitatory synapses, where they could likewise quash activation. In 2009, the MCG researchers provided additional evidence of the role of neuregulin-1 in schizophrenia by selectively deleting the gene for its receptor, ErbB4, creating another symptomatic mouse.
Schizophrenia affects about 1 percent of the population, causing hallucinations, depression and impaired thinking and social behavior. Babies born to mothers who develop a severe infection, such as influenza or pneumonia, during pregnancy have a significantly increased risk of schizophrenia.
- Dong-Min Yin, Yong-Jun Chen, Yi-Sheng Lu, Jonathan C. Bean, Anupama Sathyamurthy, Chengyong Shen, Xihui Liu, Thiri W. Lin, Clifford A. Smith, Wen-Cheng Xiong, Lin Mei. Reversal of Behavioral Deficits and Synaptic Dysfunction in Mice Overexpressing Neuregulin 1. Neuron, 2013; 78 (4): 644. DOI: 10.1016/j.neuron.2013.03.028
Gene activity changes accompany doglike behavior
Web edition: May 15, 2013
Image caption: Taming silver foxes alters their behavior. A new study links those behavior changes to changes in brain chemicals. Credit: Tom Reichner/Shutterstock
COLD SPRING HARBOR, N.Y. – Taming foxes changes not only the animals’ behavior but also their brain chemistry, a new study shows.
The finding could shed light on how the foxes’ genetic cousins, wolves, morphed into man’s best friend. Lenore Pipes of Cornell University presented the results May 10 at the Biology of Genomes conference.
The foxes she worked with come from a long line started in 1959 when a Russian scientist named Dmitry Belyaev attempted to recreate dog domestication, but using foxes instead of wolves. He bred silver foxes (Vulpes vulpes), which are actually a type of red fox with white-tipped black fur. Belyaev and his colleagues selected the least aggressive animals they could find at local fox farms and bred them. Each generation, the scientists picked the tamest animals to mate, creating ever friendlier foxes. Now, more than 50 years later, the foxes act like dogs, wagging their tails, jumping with excitement and leaping into the arms of caregivers for caresses.
At the same time, the scientists also bred the most aggressive foxes on the farms. The descendants of those foxes crouch, flatten their ears, growl, bare their teeth and lunge at people who approach their cages.
The foxes’ tame and aggressive behaviors are rooted in genetics, but scientists have not found DNA changes that account for the differences. Rather than search for changes in genes themselves, Pipes and her colleagues took an indirect approach, looking for differences in the activity of genes in the foxes’ brains.
The team collected two brain parts, the prefrontal cortex and amygdala, from a dozen aggressive foxes and a dozen tame ones. The prefrontal cortex, an area at the front of the brain, is involved in decision making and in controlling social behavior, among other tasks. The amygdala, a pair of almond-size regions on either side of the brain, helps process emotional information.
Pipes found that the activity of hundreds of genes in the two brain regions differed between the groups of affable and hostile foxes. For example, aggressive animals had increased activity of some genes for sensing dopamine. Pipes speculated that tame animals’ lower levels of dopamine sensors might make them less anxious.
The team had expected to find changes in many genes involved in serotonin signaling, a process targeted by some popular antidepressants such as Prozac. Tame foxes are known to have more serotonin in their brains. But only one gene for sensing serotonin had higher activity in the friendly animals.
In a different sort of analysis, Pipes discovered that all aggressive foxes carry one form of the GRM3 glutamate receptor gene, while a majority of the friendly foxes have a different variant of the gene. In people, genetic variants of GRM3 have been linked to schizophrenia, bipolar disorder and other mood disorders. Other genes involved in transmitting glutamate signals, which help regulate mood, had increased activity in tame foxes, Pipes said.
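The gene-by-gene comparison Pipes describes, screening for genes whose activity differs between tame and aggressive foxes, can be illustrated with a per-gene two-sample test. The sketch below is not the study's actual pipeline: the gene names, expression values, and threshold are invented for demonstration, and Welch's t-statistic is just one plausible choice of test.

```python
# Hypothetical sketch of a differential gene-activity screen.
# All gene names and expression values below are illustrative.
import math

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Toy expression levels (arbitrary units), 12 animals per group,
# matching the study's dozen tame and dozen aggressive foxes.
expression = {
    "DRD2": ([5.1, 5.3, 4.9, 5.2, 5.0, 5.4, 5.1, 5.2, 4.8, 5.3, 5.0, 5.1],   # tame
             [7.9, 8.1, 7.7, 8.0, 8.2, 7.8, 8.0, 7.9, 8.1, 7.8, 8.0, 8.2]),  # aggressive
    "ACTB": ([6.0, 6.1, 5.9, 6.0, 6.2, 5.8, 6.1, 6.0, 5.9, 6.1, 6.0, 5.9],
             [6.1, 5.9, 6.0, 6.2, 5.8, 6.0, 6.1, 5.9, 6.0, 6.1, 5.9, 6.0]),
}

for gene, (tame, aggressive) in expression.items():
    t = welch_t(tame, aggressive)
    # |t| > 3 is a rough flag here; a real analysis would compute p-values
    # and correct for testing hundreds of genes at once.
    flag = "differs" if abs(t) > 3 else "similar"
    print(f"{gene}: t = {t:.1f} ({flag})")
```

In a real screen the multiple-testing correction matters: with hundreds of genes tested, some large t-statistics will appear by chance alone.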
It is not clear whether similar brain chemical changes accompanied the transformation of wolves into dogs, said Adam Freedman, an evolutionary biologist at Harvard University. Even if dogs and wolves now have differing brain chemical levels, researchers can’t turn back time to watch the process unfold; they can only guess at how domestication happened. “We have to reconstruct an unobservable series of steps,” he said. Pipes’ study is an interesting example of what might have happened to dogs’ brains during domestication, he said.
Feb. 13, 2013 — A team of political scientists and neuroscientists has shown that liberals and conservatives use different parts of the brain when they make risky decisions, and these regions can be used to predict which political party a person prefers. The new study suggests that while genetics or parental influence may play a significant role, being a Republican or Democrat changes how the brain functions.
Dr. Darren Schreiber, a researcher in neuropolitics at the University of Exeter, has been working in collaboration with colleagues at the University of California, San Diego on research that explores the differences in the way the brain functions in American liberals and conservatives. The findings are published Feb. 13 in the journal PLOS ONE.
In a prior experiment, participants had their brain activity measured as they played a simple gambling game. Dr. Schreiber and his UC San Diego collaborators were able to look up the political party registration of the participants in public records. Using this new analysis of 82 people who performed the gambling task, the academics showed that Republicans and Democrats do not differ in the risks they take. However, there were striking differences in the participants’ brain activity during the risk-taking task.
Democrats showed significantly greater activity in the left insula, a region associated with social and self-awareness. Meanwhile Republicans showed significantly greater activity in the right amygdala, a region involved in the body’s fight-or-flight system. These results suggest that liberals and conservatives engage different cognitive processes when they think about risk.
In fact, brain activity in these two regions alone can be used to predict whether a person is a Democrat or Republican with 82.9% accuracy. By comparison, the longstanding traditional model in political science, which uses the party affiliation of a person’s mother and father to predict the child’s affiliation, is only accurate about 69.5% of the time. And another model based on the differences in brain structure distinguishes liberals from conservatives with only 71.6% accuracy.
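As an illustration of how activity in just two regions could feed a party-affiliation predictor, here is a minimal logistic-regression sketch. It is not the authors' model: the (insula, amygdala) activation values are fabricated and deliberately separable, so this toy classifier fits its training set perfectly rather than reproducing the study's 82.9% accuracy on real data.

```python
# Illustrative only: predicting party from two brain-activity features.
# Feature values are invented; the study's actual model is not shown here.
import math

def predict(w, x):
    """Logistic model: probability of 'Democrat' given features x."""
    z = w[0] + w[1] * x[0] + w[2] * x[1]
    return 1.0 / (1.0 + math.exp(-z))

def train(data, labels, lr=0.5, epochs=2000):
    """Fit weights by plain stochastic gradient descent on log-loss."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in zip(data, labels):
            err = predict(w, x) - y
            w[0] -= lr * err
            w[1] -= lr * err * x[0]
            w[2] -= lr * err * x[1]
    return w

# Toy (left insula, right amygdala) activations:
# 'Democrats' given higher insula activity, 'Republicans' higher amygdala.
X = [(0.9, 0.2), (0.8, 0.3), (0.7, 0.1), (0.2, 0.9), (0.3, 0.8), (0.1, 0.7)]
y = [1, 1, 1, 0, 0, 0]  # 1 = Democrat, 0 = Republican

w = train(X, y)
accuracy = sum((predict(w, xi) > 0.5) == bool(yi) for xi, yi in zip(X, y)) / len(X)
print(f"training accuracy: {accuracy:.0%}")
```

The published comparison (82.9% vs. 69.5% for parental affiliation vs. 71.6% for brain structure) would come from evaluating such a model out of sample, not on its own training data as this sketch does.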
The model also outperforms models based on differences in genes. Dr. Schreiber said: “Although genetics have been shown to contribute to differences in political ideology and strength of party politics, the portion of variation in political affiliation explained by activity in the amygdala and insula is significantly larger, suggesting that affiliating with a political party and engaging in a partisan environment may alter the brain, above and beyond the effect of heredity.”
These results may pave the way for new research on voter behaviour, yielding better understanding of the differences in how liberals and conservatives think. According to Dr. Schreiber: “The ability to accurately predict party politics using only brain activity while gambling suggests that investigating basic neural differences between voters may provide us with more powerful insights than the traditional tools of political science.”
- Darren Schreiber, Greg Fonzo, Alan N. Simmons, Christopher T. Dawes, Taru Flagan, James H. Fowler, Martin P. Paulus. Red Brain, Blue Brain: Evaluative Processes Differ in Democrats and Republicans. PLoS ONE, 2013; 8 (2): e52970. DOI: 10.1371/journal.pone.0052970
Dec. 19, 2012 — Over the last half decade, it has become increasingly clear that normal gastrointestinal (GI) bacteria play a variety of very important roles in the biology of humans and animals. Now Vic Norris of the University of Rouen, France, and coauthors propose yet another role for GI bacteria: that they exert some control over their hosts’ appetites. Their review was published online ahead of print in the Journal of Bacteriology.
This hypothesis is based in large part on observations of the number of roles bacteria are already known to play in host biology, as well as their relationship to the host system. “Bacteria both recognize and synthesize neuroendocrine hormones,” Norris et al. write. “This has led to the hypothesis that microbes within the gut comprise a community that forms a microbial organ interfacing with the mammalian nervous system that innervates the gastrointestinal tract.” (That nervous system innervating the GI tract is called the “enteric nervous system.” It contains roughly half a billion neurons, compared with 85 billion neurons in the central nervous system.)
“The gut microbiota respond both to the nutrients consumed by their hosts and to the state of their hosts as signaled by various hormones,” write Norris et al. That communication presumably goes both ways: the bacteria also generate compounds that are used for signaling within the human system, “including neurotransmitters such as GABA, amino acids such as tyrosine and tryptophan — which can be converted into the mood-determining molecules, dopamine and serotonin” — and much else, says Norris.
Furthermore, it is becoming increasingly clear that gut bacteria may play a role in diseases such as cancer, metabolic syndrome, and thyroid disease through their influence on host signaling pathways. They may even influence mood disorders, according to recent, pioneering studies, via actions on dopamine and peptides involved in appetite. The gut bacterium Campylobacter jejuni has been implicated in the induction of anxiety in mice, says Norris.
But do the gut flora in fact use their abilities to influence choice of food? The investigators propose a variety of experiments that could help answer this question, including epidemiological studies, and “experiments correlating the presence of particular bacterial metabolites with images of the activity of regions of the brain associated with appetite and pleasure.”
- V. Norris, F. Molina, A. T. Gewirtz. Hypothesis: bacteria control host appetites. Journal of Bacteriology, 2012; DOI: 10.1128/JB.01384-12
ScienceDaily (Nov. 16, 2012) — Researchers at Thomas Jefferson University and the University of Sao Paulo in Brazil analyzed the cerebral blood flow (CBF) of Brazilian mediums during the practice of psychography, described as a form of writing whereby a deceased person or spirit is believed to write through the medium’s hand. The new research revealed intriguing findings of decreased brain activity during the mediums’ dissociative state which generated complex written content. Their findings will appear in the November 16th edition of the online journal PLOS ONE.
The 10 mediums — five less expert and five experienced — were injected with a radioactive tracer to capture their brain activity during normal writing and during the practice of psychography which involves the subject entering a trance-like state. The subjects were scanned using SPECT (single photon emission computed tomography) to highlight the areas of the brain that are active and inactive during the practice.
“Spiritual experiences affect cerebral activity, this is known. But, the cerebral response to mediumship, the practice of supposedly being in communication with, or under the control of the spirit of a deceased person, has received little scientific attention, and from now on new studies should be conducted,” says Andrew Newberg, MD, director of Research at the Jefferson-Myrna Brind Center of Integrative Medicine and a nationally-known expert on spirituality and the brain, who collaborated with Julio F. P. Peres, Clinical Psychologist, PhD in Neuroscience and Behavior, Institute of Psychology at the University of Sao Paulo in Brazil, and colleagues on the research.
The mediums’ automatic writing experience ranged from 15 to 47 years, with up to 18 psychographies performed per month. All were right-handed, in good mental health, and not currently using any psychiatric drugs. All reported that during the study they were able to reach their usual trance-like state during the psychography task and were in their regular state of consciousness during the control task.
The researchers found that the experienced psychographers showed lower levels of activity in the left hippocampus (limbic system), right superior temporal gyrus, and the frontal lobe regions of the left anterior cingulate and right precentral gyrus during psychography compared to their normal (non-trance) writing. The frontal lobe areas are associated with reasoning, planning, generating language, movement, and problem solving, perhaps reflecting an absence of focus, self-awareness and consciousness during psychography, the researchers hypothesize.
Less expert psychographers showed just the opposite — increased levels of CBF in the same frontal areas during psychography compared to normal writing. The difference was significant compared to the experienced mediums. This finding may be related to their more purposeful attempt at performing the psychography. The absence of current mental disorders in the groups is in line with current evidence that dissociative experiences are common in the general population and not necessarily related to mental disorders, especially in religious/spiritual groups. Further research should address criteria for distinguishing between healthy and pathological dissociative expressions in the scope of mediumship.
The writing samples produced were also analyzed and it was found that the complexity scores for the psychographed content were higher than those for the control writing across the board. In particular, the more experienced mediums showed higher complexity scores, which typically would require more activity in the frontal and temporal lobes, but this was not the case. Content produced during psychographies involved ethical principles, the importance of spirituality, and bringing together science and spirituality.
Several possible hypotheses for these many differences have been considered. One speculation is that as frontal lobe activity decreases, the areas of the brain that support mediumistic writing are further disinhibited (similar to alcohol or drug use) so that the overall complexity can increase. In a similar manner, improvisational music performance is associated with lower levels of frontal lobe activity which allows for more creative activity. However, improvisational music performance and alcohol/drug consumption states are quite peculiar and distinct from psychography. “While the exact reason is at this point elusive, our study suggests there are neurophysiological correlates of this state,” says Newberg.
“This first-ever neuroscientific evaluation of mediumistic trance states reveals some exciting data to improve our understanding of the mind and its relationship with the brain. These findings deserve further investigation both in terms of replication and explanatory hypotheses,” states Newberg.
- Julio Fernando Peres, Alexander Moreira-Almeida, Leonardo Caixeta, Frederico Leao, Andrew Newberg. Neuroimaging during Trance State: A Contribution to the Study of Dissociation. PLoS ONE, 2012; 7 (11): e49360. DOI: 10.1371/journal.pone.0049360