Tag archive: neuroscience

Brain Cells Break Their Own DNA to Allow Memories to Form (IFL Science)

June 22, 2015 | by Justine Alford

photo credit: Courtesy of MIT Researchers 

Given the fundamental importance of our DNA, it is logical to assume that damage to it is undesirable and spells bad news; after all, we know that cancer can be caused by mutations that arise from such injury. But a surprising new study is turning that idea on its head, with the discovery that brain cells actually break their own DNA to enable us to learn and form memories.

While that may sound counterintuitive, it turns out that the damage is necessary to allow the expression of a set of genes, called early-response genes, which regulate various processes that are critical in the creation of long-lasting memories. These lesions are rectified pronto by repair systems, but interestingly, it seems that this ability deteriorates during aging, leading to a buildup of damage that could ultimately result in the degeneration of our brain cells.

This idea is supported by earlier work conducted by the same group, headed by Li-Huei Tsai, at the Massachusetts Institute of Technology (MIT) that discovered that the brains of mice engineered to develop a model of Alzheimer’s disease possessed a significant amount of DNA breaks, even before symptoms appeared. These lesions, which affected both strands of DNA, were observed in a region critical to learning and memory: the hippocampus.

To find out more about the possible consequences of such damage, the team grew neurons in a dish, exposed them to an agent that causes these so-called double strand breaks (DSBs), and then monitored gene expression levels. As described in Cell, they found that while the vast majority of genes affected by these breaks showed decreased expression, a small subset actually displayed increased expression. Importantly, these genes were involved in regulating neuronal activity, and they included the early-response genes.

Since the early-response genes are known to be rapidly expressed following neuronal activity, the team was keen to find out whether normal neuronal stimulation could also be inducing DNA breaks. The scientists therefore applied a substance to the cells that is known to strengthen the tiny gap between neurons across which information flows – the synapse – mimicking what happens when an organism is exposed to a new experience.

“Sure enough, we found that the treatment very rapidly increased the expression of those early response genes, but it also caused DNA double strand breaks,” Tsai said in a statement.

So what is the connection between these breaks and the apparent boost in early-response gene expression? After using computers to scrutinize the DNA sequences neighboring these genes, the researchers found that they were enriched with a pattern targeted by an architectural protein that, upon binding, distorts the DNA strands by introducing kinks. By preventing crucial interactions between distant DNA regions, these bends therefore act as a barrier to gene expression. The breaks, however, resolve these constraints, allowing expression to ensue.

These findings could have important implications because earlier work has demonstrated that aging is associated with a decline in the expression of genes involved in the processes of learning and memory formation. It therefore seems likely that the DNA repair system deteriorates with age, but at this stage it is unclear how these changes occur, so the researchers plan to design further studies to find out more.

New Vessels Found In The Human Body That Connect Immune System And Brain (IFLScience)

June 3, 2015 | by Stephen Luntz

photo credit: Topic / Shutterstock. It used to be thought that the lymphatic system stopped at the neck, but it has now been found to reach into the brain

In contradiction to decades of medical education, a direct connection has been reported between the brain and the immune system. Claims this radical always require plenty of testing, even after winning publication, but this could be big news for research into diseases like multiple sclerosis (MS) and Alzheimer’s.

It seems astonishing that, after centuries of dissection, a system of lymphatic vessels could have survived undetected. That, however, is exactly what Professor Jonathan Kipnis of the University of Virginia claims in Nature.

Old and new representations of the lymphatic system that carries immune cells around the body. Credit: University of Virginia Health System

“It changes entirely the way we perceive the neuro-immune interaction,” says Kipnis. “We always perceived it before as something esoteric that can’t be studied. But now we can ask mechanistic questions.”

MS is known to be an example of the immune system attacking the brain, although the reasons are poorly understood. The opportunity to study lymphatic vessels that link the brain to the immune system could transform our understanding of how these attacks occur, and what could stop them. The causes of Alzheimer’s disease are even more controversial, but may also have immune system origins, and the authors suggest protein accumulation is a result of the vessels failing to do their job.

Indeed, Kipnis claims, “We believe that for every neurological disease that has an immune component to it, these vessels may play a major role.”

The discovery originated when Dr. Antoine Louveau, a researcher in Kipnis’ lab, mounted the membranes that cover mouse brains, known as meninges, on a slide. In the dural sinuses, which drain blood from the brain, he noticed linear patterns in the arrangement of immune T-cells. “I called Jony [Kipnis] to the microscope and I said, ‘I think we have something,'” Louveau recalls.

Kipnis was skeptical, and now says, “I thought that these discoveries ended somewhere around the middle of the last century. But apparently they have not.” Extensive further research convinced him and a group of co-authors from some of Virginia’s most prestigious neuroscience institutes that the vessels are real, they carry white blood cells and they also exist in humans. The network, they report, “appears to start from both eyes and track above the olfactory bulb before aligning adjacent to the sinuses.”

Kipnis gives particular credit to colleague Dr. Tajie Harris, who enabled the team to image the vessels in action in live animals, confirming their function. Louveau also credits the discovery to fixing the meninges to a skullcap before dissecting, rather than the other way around. This, along with the network’s closeness to a blood vessel, is presumably why no one had observed it before.

The authors say the vessels “express all of the molecular hallmarks of lymphatic endothelial cells, are able to carry both fluid and immune cells from the cerebrospinal fluid, and are connected to the deep cervical lymph nodes.”

The authors add that the network bears many resemblances to the peripheral lymphatic system, but it “displays certain unique features,” including being “less complex [and] composed of narrower vessels.”

The discovery reinforces findings that immune cells are present even within healthy brains, a notion that was doubted until recently.

Meningeal lymphatic vessels in mice. Credit: Louveau et al., Nature.

An evolutionary approach reveals new clues toward understanding the roots of schizophrenia (AAAS)



Is mental illness simply the evolutionary toll humans have to pay in return for cognitive abilities unmatched by any other species? If so, why have often debilitating illnesses like schizophrenia persisted throughout human evolutionary history, when their effects can be so damaging to an individual’s chances of survival or reproductive success?

In a new study appearing in Molecular Biology and Evolution, Mount Sinai researcher Joel Dudley suggests that the very changes specific to human evolution may have come at a cost, contributing to the genetic architecture underlying schizophrenia traits in modern humans.

“We were intrigued by the fact that unlike many other mental traits, schizophrenia traits have not been observed in species other than humans, and schizophrenia has interesting and complex relationships with human intelligence,” said Dr. Joel Dudley, who led the study along with Dr. Panos Roussos. “The rapid increase in genomic data sequenced from large schizophrenia patient cohorts enabled us to investigate the molecular evolutionary history of schizophrenia in sophisticated new ways.”

The team examined the link between these regions and human-specific evolution in genomic segments called human accelerated regions, or HARs. HARs are short stretches of the genome that are conserved among non-human species but experienced faster mutation rates in humans. These regions are thought to regulate the level of gene expression rather than alter the genes themselves, which may make them an underexplored area of mental illness research.

The team’s research is the first study to sift through the human genome and identify a shared pattern between the locations of HARs and recently identified schizophrenia gene loci. To perform their work, they used data from the recently completed Psychiatric Genomics Consortium (PGC) study, which included 36,989 schizophrenia cases and 113,075 controls. It is the largest genome-wide association study ever performed on any psychiatric disease.

They found that schizophrenia loci were most strongly associated with genomic regions near HARs that are conserved in non-human primates, and that these HAR-associated schizophrenia loci are under stronger evolutionary selective pressure than other schizophrenia loci. Furthermore, these regions controlled genes expressed specifically in the prefrontal cortex, indicating that HARs may play an important role in regulating genes linked to schizophrenia. The strongest correlations were between HAR-associated schizophrenia loci and genes controlling GABA neurotransmission, brain development, synapse formation, and adhesion and signaling molecules.
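The core statistical move in a study like this is asking whether disease loci sit closer to a set of genomic regions (here, HARs) than chance would predict. A minimal sketch of that idea as a permutation test follows; the interval lists, window size, and genome length are illustrative assumptions, not the actual PGC/HAR pipeline:

```python
import random

def count_near(loci, regions, window):
    """Count loci falling within `window` bp of any region."""
    return sum(
        any(start - window <= pos <= end + window for start, end in regions)
        for pos in loci
    )

def enrichment_p(loci, regions, genome_len, window=10_000, n_perm=1000, seed=0):
    """Permutation p-value: how often do randomly placed loci land near
    the regions at least as often as the observed loci do?"""
    rng = random.Random(seed)
    observed = count_near(loci, regions, window)
    hits = 0
    for _ in range(n_perm):
        shuffled = [rng.randrange(genome_len) for _ in loci]
        if count_near(shuffled, regions, window) >= observed:
            hits += 1
    # Add-one correction avoids reporting an impossible p of exactly zero
    return observed, (hits + 1) / (n_perm + 1)
```

Real analyses additionally match permuted loci for confounders such as gene density and linkage disequilibrium; uniform shuffling is only the simplest null.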

This evolutionary approach provides new insights into schizophrenia, as well as genomic targets to prioritize in future studies and drug development. It also opens important new avenues for exploring the roles of HARs in other mental illnesses such as autism and bipolar disorder.

Common anticholinergic drugs like Benadryl linked to increased dementia risk (Harvard Health Blog)

POSTED JANUARY 28, 2015, 8:55 PM

Beverly Merz, Harvard Women’s Health Watch

One long-ago summer, I joined the legion of teens helping harvest our valley’s peach crop in western Colorado. My job was to select the best peaches from a bin, wrap each one in tissue, and pack it into a shipping crate. The peach fuzz that coated every surface of the packing shed made my nose stream and my eyelids swell. When I came home after my first day on the job, my mother was so alarmed she called the family doctor. Soon the druggist was at the door with a vial of Benadryl (diphenhydramine) tablets. The next morning I was back to normal and back on the job. Weeks later, when I collected my pay (including the ½-cent-per-crate bonus for staying until the end of the harvest), I thanked Benadryl.

Today, I’m thankful my need for that drug lasted only a few weeks. A report published online this week in JAMA Internal Medicine offers compelling evidence of a link between long-term use of anticholinergic medications like Benadryl and dementia.

Anticholinergic drugs block the action of acetylcholine. This substance transmits messages in the nervous system. In the brain, acetylcholine is involved in learning and memory. In the rest of the body, it stimulates muscle contractions. Anticholinergic drugs include some antihistamines, tricyclic antidepressants, medications to control overactive bladder, and drugs to relieve the symptoms of Parkinson’s disease.

What the study found

A team led by Shelley Gray, a pharmacist at the University of Washington’s School of Pharmacy, tracked nearly 3,500 men and women ages 65 and older who took part in Adult Changes in Thought (ACT), a long-term study conducted by the University of Washington and Group Health, a Seattle healthcare system. They used Group Health’s pharmacy records to determine all the drugs, both prescription and over-the-counter, that each participant took in the 10 years before starting the study. Participants’ health was tracked for an average of seven years. During that time, 800 of the volunteers developed dementia. When the researchers examined the use of anticholinergic drugs, they found that people who used these drugs were more likely to have developed dementia than those who didn’t use them. Moreover, dementia risk increased along with the cumulative dose. Taking an anticholinergic for the equivalent of three years or more was associated with a 54% higher dementia risk than taking the same dose for three months or less.
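The exposure measure behind the “three years versus three months” comparison is cumulative: the study summed each participant’s dispensings into total standardized daily doses (TSDD), where one TSDD is the minimum effective daily dose of a drug. A rough sketch of that bookkeeping, with hypothetical minimum-dose values and approximate exposure buckets:

```python
# Hypothetical minimum effective daily doses in mg -- illustrative only,
# not the values from the published analysis.
MIN_DAILY_DOSE = {"diphenhydramine": 25.0, "doxepin": 10.0, "oxybutynin": 5.0}

def total_standardized_daily_doses(records):
    """Sum each dispensing (drug, strength_mg, quantity) as a multiple
    of that drug's minimum effective daily dose."""
    return sum(
        (strength_mg * quantity) / MIN_DAILY_DOSE[drug]
        for drug, strength_mg, quantity in records
    )

def exposure_category(tsdd):
    """Bucket cumulative exposure, roughly from ~3 months up to ~3+ years."""
    if tsdd == 0:
        return "none"
    if tsdd <= 90:
        return "1-90 TSDD (~3 months or less)"
    if tsdd <= 365:
        return "91-365 TSDD"
    if tsdd <= 1095:
        return "366-1095 TSDD"
    return ">1095 TSDD (~3 years or more)"
```

The point of the standardization is that a month of a strong drug and several months of a weak one can land in the same exposure bucket.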

The ACT results add to mounting evidence that anticholinergics aren’t drugs to take long-term if you want to keep a clear head now and into old age. The body’s production of acetylcholine diminishes with age, so blocking its effects can deliver a double whammy to older people. It’s not surprising that problems with short-term memory, reasoning, and confusion lead the list of anticholinergic side effects, which also include drowsiness, dry mouth, urine retention, and constipation.

The University of Washington study is the first to include nonprescription drugs. It is also the first to eliminate the possibility that people were taking a tricyclic antidepressant to alleviate early symptoms of undiagnosed dementia; the risk associated with bladder medications was just as high.

“This study is another reminder to periodically evaluate all of the drugs you’re taking. Look at each one to determine if it’s really helping,” says Dr. Sarah Berry, a geriatrician and assistant professor of medicine at Harvard Medical School. “For instance, I’ve seen people who have been on anticholinergic medications for bladder control for years and they are completely incontinent. These drugs obviously aren’t helping.”

Many drugs have a stronger effect on older people than younger people. With age, the kidneys and liver clear drugs more slowly, so drug levels in the blood remain higher for a longer time. People also gain fat and lose muscle mass with age, both of which change the way that drugs are distributed to and broken down in body tissues. In addition, older people tend to take more prescription and over-the-counter medications, each of which has the potential to suppress or enhance the effectiveness of the others.

What should you do?

In 2008, Indiana University School of Medicine geriatrician Malaz Boustani developed the anticholinergic cognitive burden scale, which ranks these drugs according to the severity of their effects on the mind. It’s a good idea to steer clear of the drugs with high ACB scores, meaning those with scores of 3. “There are so many alternatives to these drugs,” says Dr. Berry. For example, selective serotonin re-uptake inhibitors (SSRIs) like citalopram (Celexa) or fluoxetine (Prozac) are good alternatives to tricyclic antidepressants. Newer antihistamines such as loratadine (Claritin) can replace diphenhydramine or chlorpheniramine (Chlor-Trimeton). Botox injections and cognitive behavioral training can alleviate urge incontinence.
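As a worked example of how a clinician or patient might use the ACB scale described above, here is a minimal sketch that totals a medication list’s burden and flags the score-3 drugs. The score table is abbreviated and illustrative; consult the published scale for actual values:

```python
# Illustrative ACB scores (the published scale assigns 1-3); these
# particular values are assumptions for the example, not medical advice.
ACB_SCORES = {
    "diphenhydramine": 3,
    "amitriptyline": 3,
    "loratadine": 1,
    "citalopram": 1,
}

def acb_burden(medications):
    """Total ACB score for a medication list; unknown drugs count as 0."""
    return sum(ACB_SCORES.get(drug.lower(), 0) for drug in medications)

def flag_high_risk(medications):
    """Return the drugs on the list with the maximum ACB score of 3."""
    return [d for d in medications if ACB_SCORES.get(d.lower(), 0) == 3]
```

A list mixing Benadryl (diphenhydramine) with a newer antihistamine would total 4 here, with only the diphenhydramine flagged for substitution.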

One of the best ways to make sure you’re taking the most effective drugs is to dump all your medications — prescription and nonprescription — into a bag and bring them to your next appointment with your primary care doctor.

Protein in coffee with effects like morphine discovered in Brazil (EFE)

Published January 25, 2015

Researchers at the University of Brasilia, or UnB, and at Brazil’s state-owned agriculture and livestock research company Embrapa have discovered a protein in coffee with effects similar to morphine, scientists said on Saturday.

A communique from Embrapa said that its Genetics and Biotechnology Resources Division and the UnB successfully “identified previously unknown fragments of protein – peptides – in coffee that have an effect similar to morphine, in other words they have an analgesic and sedative activity.”

Those peptides, the note said, “have a positive differential: their effects last longer in experiments with laboratory mice.”

The two institutions applied for patents to Brazilian regulators for the seven “opioid peptides” identified in the study.

The discovery of the molecules came about through the doctoral research of Felipe Vinecky of the Molecular Biology Department at UnB, who, in collaboration with Embrapa, was looking to combine coffee genes to improve the quality of the bean.

The studies also have the support of France’s Center for International Cooperation on Agricultural Research and Development, or CIRAD.

Another Weird Story: Intentional, Post-Intentional, and Unintentional Philosophy (The Cracked Egg)

JANUARY 18, 2015

I was a “2e” kid: gifted with ADHD but cursed with the power to ace standardized tests. I did so well on tests they enrolled me in a Hopkins study, but I couldn’t remember to brush my hair. As if that wasn’t enough, there were a lot of other unusual things going on, far too many to get into here. My brain constantly defied people’s expectations. It was never the same brain from day to day. I am, apparently, a real neuropsychiatric mystery, in both good and bad ways. I’m a walking, breathing challenge to people’s assumptions and perceptions. Just a few examples: the assumption that intelligence is a unitary phenomenon, and the perception that people who think like you are smarter than those who think differently. Even my reasons for defying expectations were misinterpreted. I hated the way people idolized individuality, because being different brought me only pain. People mistook me for trying to be different. Being different is a tragedy!

And it got weirder: I inherited the same sociocognitive tools as everyone else, so I made the same assumptions. Consequently, I defied even my own expectations. So I learned to mistrust my own perceptions, always looking over my shoulder, predicting my own behavior as if I were an outside observer. I literally had to re-engineer myself in order to function in society, and that was impossible to do without getting into some major philosophical questions. I freely admit that this process has taken me my entire life and only recently have I had any success. I am just now learning to function in society–I’m a cracked egg. Cracked once from outside, and once from inside. And just now growing up, a decade late.

So it’s no surprise that I’m so stuck on the question of what people’s brains are actually doing when they theorize.

I stumbled onto R. Scott Bakker’s theories after reading his philosophical thriller, Neuropath. Then I found his blog, and I was blown away that someone besides me was obsessed with the role of ingroup/outgroup dynamics in intellectual circles. As someone with no ingroup (at least not yet), it’s very refreshing. But what really blew my mind was that he had a theory of cognitive science that could explain many of my frustrating experiences: the Blind Brain Theory, or BBT.

The purpose of this post is not to explain BBT, so you’ll have to click the link if you want that. I’ll go more into depth on the specifics of BBT later, but for a ridiculously short summary: it’s a form of eliminativism. Eliminativism is the philosophical view that neuroscience reveals our traditional conceptions of the human being, like free will, mind, and meaning, to be radically mistaken. But BBT is unique among eliminativisms in its emphasis on neglect: the way in which blindness, or lack of information, actually *enables* our brains to solve problems, especially the problem of what we are. And from my perspective, that makes perfect sense.

BBT is a profoundly counterintuitive theory that cautions us against intuition itself. And ironically, it substantiates my skeptical intuitions.  In short, it shows I’m not the only one who has no clue what she’s doing. If BBT is correct, non-neurotypical individuals aren’t really “impaired.” They simply fit differently with other people. Fewer intersecting lines, that’s all. Bakker has developed his theory further since he published this paper, building on his notion of post-intentional theory (see here for a more general introduction). BBT has stirred up quite a lot of drama.

While we all argue over BBT, absorbed in defending our positions, I feel like an outsider, even among people who understand ingroups. Why? Because most of the people in the debate seem to be discussing something hypothetical, something academic. For me, as I’ve explained, the question of intentionality is a question of everyday life. So I can’t shirk my habit of wondering about biology: what’s going on in the brains of intentionalists? What’s going on in the brains of post-intentionalists? And what’s going on inside my own brain? Bakker would say this is precisely the sort of question a post-intentionalist would ask.

But what happens if the post-intentionalist has never done intentional philosophy? Allow me to explain, with a fictionalized example from my own experience. I use the term “intentional” in both an everyday and philosophical sense, interchangeably:

Intentional, Post-Intentional, and Unintentional Philosophy

Imagine you’re an ordinary person. You just want to get on with your life, but you have a terminal illness. It’s an extremely rare neuropsychiatric syndrome: in order to recover, you must solve an ancient philosophical question. You can’t just come up with any old answer. You actually have to prove you solved it and convince everyone alive; at the very least, you have to convince yourself that you could convince anyone whose counterargument could possibly sway you. You’re skeptical to the marrow, and very good at Googling.

Remember, this is a terminal illness, so you have limited time to solve the problem.

In college, philosophy professors said you were a brilliant student. Plus, you have a great imagination from always being forced to do bizarre things. So naturally, you think you can solve it.

But it takes more time than you thought it would. Years more time. Enough time that you turn into a mad hermit. Your life collapses around you and you’re left with no friends, family, or work. But your genes are really damn virulent, and they simply don’t contain the stop codons for self-termination, so you persist.

And finally, after many failed attempts, you cough up something that sticks. An intellectual hairball.

But then the unimaginable happens: you come across a horrifying argument. The argument goes that when it comes to philosophy, intention matters. If your “philosophy” is just a means to survive, it is not philosophy at all; only that which is meant as philosophy can be called philosophical. So therefore, your solution is not valid. It is not even wrong.

So, it’s back to the drawing board for you. You have to find a new solution that makes your intention irrelevant. A solution that satisfies both the intentional philosophers, who do philosophy because they want to, and the unintentional philosophers who do it because they are forced to.

And then you run across something called post-intentional philosophy. It seems like a solution, but…

But post-intentional philosophy, as you see, requires a history: namely, a history of pre-post-intentional philosophy. Or, to oversimplify, intentional philosophy! The kind people do on purpose, not with a gun to their head.

You know that problems cannot be solved from the same level of consciousness that created them, so you try to escape what intentional and post-intentional philosophy share: theory. You think you can tackle your problem by finding a way out of theory altogether. A way that allows for the existence of all sorts of brains generating all sorts of things, intentional, post-intentional, and unintentional. A nonphilosophy, not a Laruellian non-philosophy. That way must exist, otherwise your philosophy will leave your very existence a mystery!

What do you do?

Are Theory and Practice Separate? Separable? Or something completely different?

Philosophy is generally a debate, but as an unintentional thinker I can’t help but remain neutral on everything except responsiveness to reality (more on that coming later). In this section I am attempting neither to support nor to attack it, but to explore it.

Bakker’s heuristic brand of eliminativism appears to bank on the ability to distinguish between the general and the specific, the practical and the theoretical. Correct me if I am wrong.

As the case of the “unintentional philosopher” suggests, philosophers themselves are counterexamples to the robustness of this distinction, just as people with impaired intentional cognition offer counterexamples that question folk psychology. If BBT is empirically testable, the practice-vs-theory distinction must be empirically testable too. We should be able to study everyday cognition (“Square One”) independently of theoretical cognition (“Square Two”) and characterize the neurobiological relationship between the two as completely modular, somewhat modular, or somewhere in between. We should also be able to predict whether someone is an intentionalist or a post-intentionalist by observing their brains.

From a sociobiological perspective, one possibility is that Bakker is literally trying to hack philosophers’ brains: to separate the neural circuitry that connects philosophical cognition with daily functionality.

If that were the case, their disagreement would come as no surprise.

But my real point here, going back to my struggles with my unusual neurobiology, is that I am personally, neurologically, as close to “non-intentional” as people get. And that presents a problem for my ability to understand any of these philosophical distinctions regarding intentionality, post-intentionality, and so on. But just as a person with Asperger’s syndrome is forced to intellectually explore the social world, my relative deficit of intentionality has simultaneously made it unavoidable, even necessary, for me to explore intentionality. My point about theory and practice is to ask whether this state of affairs is “just my problem,” or whether it says something about the entire project of theory.

If nothing else, it certainly questions the assumption that the doctor is never the patient, that the post-intentional theorist is always, necessarily some sort of detached intellectual observer with no deviation from the intentional norm in his own neurobiology.

Come back later for a completely different view…

Do viruses make us smarter? (Science Daily)

Date: January 12, 2015

Source: Lund University

Summary: Inherited viruses that are millions of years old play an important role in building up the complex networks that characterize the human brain, researchers say. They have found that retroviruses seem to play a central role in the basic functions of the brain, more specifically in the regulation of which genes are to be expressed, and when.


A new study from Lund University in Sweden indicates that inherited viruses that are millions of years old play an important role in building up the complex networks that characterise the human brain.

Researchers have long been aware that endogenous retroviruses constitute around five per cent of our DNA. For many years, they were considered junk DNA of no real use, a side-effect of our evolutionary journey.

In the current study, Johan Jakobsson and his colleagues show that retroviruses seem to play a central role in the basic functions of the brain, more specifically in the regulation of which genes are to be expressed, and when. The findings indicate that, over the course of evolution, the viruses took an increasingly firm hold on the steering wheel in our cellular machinery. The reason the viruses are activated specifically in the brain is probably due to the fact that tumours cannot form in nerve cells, unlike in other tissues.

“We have been able to observe that these viruses are activated specifically in the brain cells and have an important regulatory role. We believe that the role of retroviruses can contribute to explaining why brain cells in particular are so dynamic and multifaceted in their function. It may also be the case that the viruses’ more or less complex functions in various species can help us to understand why we are so different,” says Johan Jakobsson, head of the research team for molecular neurogenetics at Lund University.

The article, based on studies of neural stem cells, shows that these cells use a particular molecular mechanism to control the activation processes of the retroviruses. The findings provide us with a complex insight into the innermost workings of the most basal functions of the nerve cells. At the same time, the results open up potential for new research paths concerning brain diseases linked to genetic factors.

“I believe that this can lead to new, exciting studies on the diseases of the brain. Currently, when we look for genetic factors linked to various diseases, we usually look for the genes we are familiar with, which make up a mere two per cent of the genome. Now we are opening up the possibility of looking at a much larger part of the genetic material which was previously considered unimportant. The image of the brain becomes more complex, but the area in which to search for errors linked to diseases with a genetic component, such as neurodegenerative diseases, psychiatric illness and brain tumours, also increases.”

Journal Reference:

  1. Liana Fasching, Adamandia Kapopoulou, Rohit Sachdeva, Rebecca Petri, Marie E. Jönsson, Christian Männe, Priscilla Turelli, Patric Jern, Florence Cammas, Didier Trono, Johan Jakobsson. TRIM28 Represses Transcription of Endogenous Retroviruses in Neural Progenitor Cells. Cell Reports, 2015; 10 (1): 20. DOI: 10.1016/j.celrep.2014.12.004

The Surprising Link Between Gut Bacteria And Anxiety (Huff Post)


Posted: 01/04/2015 10:05 am EST 


In recent years, neuroscientists have become increasingly interested in the idea that there may be a powerful link between the human brain and gut bacteria. And while a growing body of research has provided evidence of the brain-gut connection, most of these studies so far have been conducted on animals.

Now, promising new research from neurobiologists at Oxford University offers some preliminary evidence of a connection between gut bacteria and mental health in humans. The researchers found that supplements designed to boost healthy bacteria in the gastrointestinal tract (“prebiotics”) may have an anti-anxiety effect insofar as they alter the way that people process emotional information.

While probiotics consist of strains of good bacteria, prebiotics are carbohydrates that act as nourishment for those bacteria. With increasing evidence that gut bacteria may exert some influence on brain function and mental health, probiotics and prebiotics are being increasingly studied for the potential alleviation of anxiety and depression symptoms.

“Prebiotics are dietary fibers (short chains of sugar molecules) that good bacteria break down, and use to multiply,” the study’s lead author, Oxford psychiatrist and neurobiologist Dr. Philip Burnet, told The Huffington Post. “Prebiotics are ‘food’ for good bacteria already present in the gut. Taking prebiotics therefore increases the numbers of all species of good bacteria in the gut, which will theoretically have greater beneficial effects than [introducing] a single species.”

To test the efficacy of prebiotics in reducing anxiety, the researchers asked 45 healthy adults between the ages of 18 and 45 to take either a prebiotic or a placebo every day for three weeks. After the three weeks had passed, the participants completed several computer tests assessing how they processed emotional information, such as positively and negatively charged words.

The results of one of the tests revealed that subjects who had taken the prebiotic paid less attention to negative information and more attention to positive information, compared to the placebo group, suggesting that the prebiotic group had less anxiety when confronted with negative stimuli. This effect is similar to that which has been observed among individuals who have taken antidepressants or anti-anxiety medication.

The researchers also found that the subjects who took the prebiotics had lower levels of cortisol — a stress hormone which has been linked with anxiety and depression — in their saliva when they woke up in the morning.

While previous research has documented that altering gut bacteria has a similarly anxiety-reducing effect in mice, the new study is one of the first to examine this phenomenon in humans. As of now, research on humans is in its early stages. A study conducted last year at UCLA found that women who consumed probiotics through regularly eating yogurt exhibited altered brain function in both a resting state and when performing an emotion-recognition task.

“Time and time again, we hear from patients that they never felt depressed or anxious until they started experiencing problems with their gut,” Dr. Kirsten Tillisch, the study’s lead author, said in a statement. “Our study shows that the gut–brain connection is a two-way street.”

So are we moving towards a future in which mental illness can be treated (or at least managed) using targeted probiotic cocktails? Burnet says it’s possible, although such treatments are unlikely to replace conventional ones.

“I think pre/probiotics will only be used as ‘adjuncts’ to conventional treatments, and never as mono-therapies,” Burnet tells HuffPost. “It is likely that these compounds will help to manage mental illness… they may also be used when there are metabolic and/or nutritional complications in mental illness, which may be caused by long-term use of current drugs.”

The findings were published in the journal Psychopharmacology.

Gut microbiota influences blood-brain barrier permeability (Science Daily)

Date: November 19, 2014

Source: Karolinska Institutet

Summary: Our natural gut-residing microbes can influence the integrity of the blood-brain barrier, which protects the brain from harmful substances in the blood, a new study in mice shows. The blood-brain barrier is a highly selective barrier that prevents unwanted molecules and cells from entering the brain from the bloodstream.

Uptake of the substance Raclopride in the brain of germ-free versus conventional mice. Credit: Miklos Toth

A new study in mice, conducted by researchers at Sweden’s Karolinska Institutet together with colleagues in Singapore and the United States, shows that our natural gut-residing microbes can influence the integrity of the blood-brain barrier, which protects the brain from harmful substances in the blood. According to the authors, the findings provide experimental evidence that our indigenous microbes contribute to the mechanism that closes the blood-brain barrier before birth. The results also support previous observations that gut microbiota can impact brain development and function.

The blood-brain barrier is a highly selective barrier that prevents unwanted molecules and cells from entering the brain from the bloodstream. In the current study, being published in the journal Science Translational Medicine, the international interdisciplinary research team demonstrates that the transport of molecules across the blood-brain barrier can be modulated by gut microbes — which therefore play an important role in the protection of the brain.

The investigators reached this conclusion by comparing the integrity and development of the blood-brain barrier between two groups of mice: the first group was raised in an environment where they were exposed to normal bacteria, and the second (called germ-free mice) was kept in a sterile environment without any bacteria.

“We showed that the presence of the maternal gut microbiota during late pregnancy blocked the passage of labeled antibodies from the circulation into the brain parenchyma of the growing fetus,” says first author Dr. Viorica Braniste at the Department of Microbiology, Tumor and Cell Biology at Karolinska Institutet. “In contrast, in age-matched fetuses from germ-free mothers, these labeled antibodies easily crossed the blood-brain barrier and were detected within the brain parenchyma.”

The team also showed that the increased ‘leakiness’ of the blood-brain barrier, observed in germ-free mice from early life, was maintained into adulthood. Interestingly, this ‘leakiness’ could be abrogated if the mice were exposed to fecal transplantation of normal gut microbes. The precise molecular mechanisms remain to be identified. However, the team was able to show that so-called tight junction proteins, which are known to be important for the blood-brain barrier permeability, did undergo structural changes and had altered levels of expression in the absence of bacteria.

According to the researchers, the findings provide experimental evidence that alterations of our indigenous microbiota may have far-reaching consequences for the blood-brain barrier function throughout life.

“These findings further underscore the importance of the maternal microbes during early life and show that our bacteria are an integrated component of our body physiology,” says Professor Sven Pettersson, the principal investigator at the Department of Microbiology, Tumor and Cell Biology. “Given that the microbiome composition and diversity change over time, it is tempting to speculate that the blood-brain barrier integrity also may fluctuate depending on the microbiome. This knowledge may be used to develop new ways of opening the blood-brain barrier to increase the efficacy of brain cancer drugs, and to design treatment regimes that strengthen the integrity of the blood-brain barrier.”

Journal Reference:

  1. V. Braniste, M. Al-Asmakh, C. Kowal, F. Anuar, A. Abbaspour, M. Toth, A. Korecka, N. Bakocevic, N. L. Guan, P. Kundu, B. Gulyas, C. Halldin, K. Hultenby, H. Nilsson, H. Hebert, B. T. Volpe, B. Diamond, S. Pettersson. The gut microbiota influences blood-brain barrier permeability in mice. Science Translational Medicine, 2014; 6 (263): 263ra158. DOI: 10.1126/scitranslmed.3009759

Brain researchers pinpoint gateway to human memory (Science Daily)


November 26, 2014


DZNE – German Center for Neurodegenerative Diseases


An international team of researchers has determined, with a level of precision never achieved before, the location where memories are generated. To this end, the scientists used a particularly accurate type of magnetic resonance imaging technology.


Magnetic resonance imaging provides insights into the brain. Credit: DZNE/Guido Hennes

The human brain continuously collects information. However, we have only a basic understanding of how new experiences are converted into lasting memories. Now, an international team led by researchers of the University of Magdeburg and the German Center for Neurodegenerative Diseases (DZNE) has determined, with a level of precision never achieved before, the location where memories are generated. The team was able to pinpoint this location down to specific circuits of the human brain. To this end, the scientists used a particularly accurate type of magnetic resonance imaging (MRI) technology. The researchers hope that the results and methods of their study will help provide a better understanding of the effects Alzheimer’s disease has on the brain.

The findings are reported in Nature Communications.

For the recall of experiences and facts, various parts of the brain have to work together. Much of this interplay is still undetermined; however, it is known that memories are stored primarily in the cerebral cortex, and that the control center that generates memory content, and also retrieves it, lies in the brain’s interior: the hippocampus and the adjacent entorhinal cortex.

“It has been known for quite some time that these areas of the brain participate in the generation of memories. This is where information is collected and processed. Our study has refined our view of this situation,” explains Professor Emrah Düzel, site speaker of the DZNE in Magdeburg and director of the Institute of Cognitive Neurology and Dementia Research at the University of Magdeburg. “We have been able to localize the generation of human memories to certain neuronal layers within the hippocampus and the entorhinal cortex. We were able to determine which neuronal layer was active. This revealed whether information was directed into the hippocampus or whether it traveled from the hippocampus into the cerebral cortex. Previously used MRI techniques were not precise enough to capture this directional information. Hence, this is the first time we have been able to show where in the brain the doorway to memory is located.”

For this study, the scientists examined the brains of persons who had volunteered to participate in a memory test. The researchers used a special type of magnetic resonance imaging technology called “7 Tesla ultra-high field MRI.” This enabled them to determine the activity of individual brain regions with unprecedented accuracy.

A Precision Method for Research on Alzheimer’s

“This measuring technique allows us to track the flow of information inside the brain and examine the areas that are involved in the processing of memories in great detail,” comments Düzel. “As a result, we hope to gain new insights into how memory impairments arise that are typical for Alzheimer’s. Concerning dementia, is the information still intact at the gateway to memory? Do troubles arise later on, when memories are processed? We hope to answer such questions.”

Story Source:

The above story is based on materials provided by DZNE – German Center for Neurodegenerative Diseases. Note: Materials may be edited for content and length.

Journal Reference:

  1. Anne Maass, Hartmut Schütze, Oliver Speck, Andrew Yonelinas, Claus Tempelmann, Hans-Jochen Heinze, David Berron, Arturo Cardenas-Blanco, Kay H. Brodersen, Klaas Enno Stephan, Emrah Düzel. Laminar activity in the hippocampus and entorhinal cortex related to novelty and episodic encoding. Nature Communications, 2014; 5: 5547. DOI: 10.1038/ncomms6547

How the bacteria in our gut affect our cravings for food (Conversation)

November 6, 2014, 10.00pm EST

Vincent Ho

Gut bacteria can manufacture special proteins that are very similar to hunger-regulating hormones. Lighthunter/Shutterstock

We’ve long known that the gut is responsible for digesting food and expelling the waste. More recently, we realised the gut has many more important functions and acts as a type of mini-brain, affecting our mood and appetite. Now, new research suggests it might also play a role in our cravings for certain types of food.

How does the mini-brain work?

The gut mini-brain produces a wide range of hormones and contains many of the same neurotransmitters as the brain. The gut also contains neurons that are located in the walls of the gut in a distributed network known as the enteric nervous system. In fact, there are more of these neurons in the gut than in the entire spinal cord.

The enteric nervous system communicates to the brain via the brain-gut axis and signals flow in both directions. The brain-gut axis is thought to be involved in many regular functions and systems within the healthy body, including the regulation of eating.

Let’s consider what happens to the brain-gut axis when we eat a meal. When food arrives in the stomach, certain gut hormones are secreted. These activate signalling pathways from the gut to the brainstem and the hypothalamus to stop food consumption. Such hormones include the appetite-suppressing hormones peptide YY and cholecystokinin.

Gut hormones can bind and activate receptor targets in the brain directly but there is strong evidence that the vagus nerve plays a major role in brain-gut signalling. The vagus nerve acts as a major highway in the brain-gut axis, connecting the over 100 million neurons in the enteric nervous system to the medulla (located at the base of the brain).

Research has shown that vagus nerve blockade can lead to marked weight loss, while vagus nerve stimulation is known to trigger excessive eating in rats.

This brings us to the topic of food cravings. Scientists have largely debunked the myth that food cravings are our bodies’ way of letting us know that we need a specific type of nutrient. Instead, an emerging body of research suggests that our food cravings may actually be significantly shaped by the bacteria that we have inside our gut. In order to explore this further we will cover the role of gut microbes.

Gut microbiota

As many as 90% of our cells are bacterial. In fact, bacterial genes outnumber human genes by a factor of 100 to one.

The gut is an immensely complex microbial ecosystem with many different species of bacteria, some of which can live in an oxygen-free environment. An average person has approximately 1.5 kilograms of gut bacteria. The term “gut microbiota” is used to describe the bacterial collective.

We each have around 1.5kg of bacteria in our guts. Christopher Pooley, CC BY

Gut microbiota send signals to the brain via the brain-gut axis and can have dramatic effects on animal behaviour and health.

In one study, for example, mice that were genetically predisposed to obesity remained lean when they were raised in a sterile environment without gut microbiota. These germ-free mice were, however, transformed into obese mice when fed a faecal pellet that came from an obese mouse raised conventionally.

The role of gut microbiota in food cravings

There is growing evidence to support the role of gut microbiota in influencing why we crave certain foods.

We know that mice bred in germ-free environments prefer more sweets and have a greater number of sweet taste receptors in their gut compared to normal mice. Research has also found that people who are “chocolate desiring” have microbial breakdown products in their urine that are different from those of “chocolate indifferent” individuals, despite eating identical diets.

Many gut bacteria can manufacture special proteins (called peptides) that are very similar to hormones such as peptide YY and ghrelin, which regulate hunger. Humans and other animals have produced antibodies against these peptides. This raises the distinct possibility that microbes might be able to influence human eating behaviour directly, through peptides that mimic hunger-regulating hormones, or indirectly, through antibodies that interfere with appetite regulation.

Practical implications

There are substantial challenges to overcome before we can apply this knowledge about gut microbiota in a practical sense.

First, there is the challenge of collecting the gut microbes. Traditionally, these are collected from stool samples, but gut microbiota are known to vary between different regions of the gut, such as the small intestine and colon. Obtaining bacterial tissue through endoscopy or another invasive collection technique, in addition to stool samples, may give a more accurate representation of the gut microbiome.

Second, the type of sequencing that is currently used for gut microbiota screening is expensive and time-consuming. Advances will need to be made before this technology is in routine use.

Probably the greatest challenge in gut microbiota research is the establishment of a strong correlation between gut microbiota patterns and human disease. The science of gut microbiota is in its infancy and there needs to be much more research mapping out disease relationships.

Probiotics contain live microorganisms. Quanthem/Shutterstock

But there is reason to be hopeful. There is now strong interest in utilising both prebiotics and probiotics to alter our gut microbiome. Prebiotics are non-digestible carbohydrates that trigger the growth of beneficial gut bacteria, while probiotics are beneficial live microorganisms contained in foods and supplements.

Faecal transplantation is also now an accepted treatment for those patients that have a severe form of gut bacterial infection called Clostridium difficile, which has been unresponsive to antibiotics.

The use of such targeted strategies is likely to become increasingly common as we better understand how gut microbiota influence our bodily functions, including food cravings.

Ghost illusion created in the lab (Science Daily)

Date: November 6, 2014

Source: Ecole Polytechnique Fédérale de Lausanne

Summary: Patients suffering from neurological or psychiatric conditions have often reported ‘feeling a presence’ watching over them. Now, researchers have succeeded in recreating these ghostly illusions in the lab.

This image depicts a person experiencing the ghost illusion in the lab. Credit: Alain Herzog/EPFL

Ghosts exist only in the mind, and scientists know just where to find them, an EPFL study suggests. Patients suffering from neurological or psychiatric conditions have often reported feeling a strange “presence.” Now, EPFL researchers in Switzerland have succeeded in recreating this so-called ghost illusion in the laboratory.

On June 29, 1970, mountaineer Reinhold Messner had an unusual experience. Recounting his descent down the virgin summit of Nanga Parbat with his brother, freezing, exhausted, and oxygen-starved in the vast barren landscape, he recalls, “Suddenly there was a third climber with us… a little to my right, a few steps behind me, just outside my field of vision.”

It was invisible, but there. Stories like this have been reported countless times by mountaineers, explorers, and survivors, as well as by people who have been widowed, but also by patients suffering from neurological or psychiatric disorders. They commonly describe a presence that is felt but unseen, akin to a guardian angel or a demon. Inexplicable, illusory, and persistent.

Olaf Blanke’s research team at EPFL has now unveiled this ghost. The team was able to recreate the illusion of a similar presence in the laboratory and provide a simple explanation. They showed that the “feeling of a presence” actually results from an alteration of sensorimotor brain signals, which are involved in generating self-awareness by integrating information from our movements and our body’s position in space.

In their experiment, Blanke’s team interfered with the sensorimotor input of participants in such a way that their brains no longer identified such signals as belonging to their own body, but instead interpreted them as those of someone else. The work is published in Current Biology.

Generating a “Ghost”

The researchers first analyzed the brains of 12 patients with neurological disorders — mostly epilepsy — who have experienced this kind of “apparition.” MRI analysis of the patients’ brains revealed interference with three cortical regions: the insular cortex, parietal-frontal cortex, and the temporo-parietal cortex. These three areas are involved in self-awareness, movement, and the sense of position in space (proprioception). Together, they contribute to multisensory signal processing, which is important for the perception of one’s own body.

The scientists then carried out a “dissonance” experiment in which blindfolded participants performed movements with their hand in front of their body. Behind them, a robotic device reproduced their movements, touching them on the back in real time. The result was a kind of spatial discrepancy, but because of the synchronized movement of the robot, the participant’s brain was able to adapt and correct for it.

Next, the neuroscientists introduced a temporal delay between the participant’s movement and the robot’s touch. Under these asynchronous conditions, distorting temporal and spatial perception, the researchers were able to recreate the ghost illusion.

An “Unbearable” Experience

The participants were unaware of the experiment’s purpose. After about three minutes of the delayed touching, the researchers asked them what they felt. Instinctively, several subjects reported a strong “feeling of a presence,” even counting up to four “ghosts” where none existed. “For some, the feeling was even so strong that they asked to stop the experiment,” said Giulio Rognini, who led the study.

“Our experiment induced the sensation of a foreign presence in the laboratory for the first time. It shows that it can arise under normal conditions, simply through conflicting sensory-motor signals,” explained Blanke. “The robotic system mimics the sensations of some patients with mental disorders or of healthy individuals under extreme circumstances. This confirms that it is caused by an altered perception of their own bodies in the brain.”

A Deeper Understanding of Schizophrenia

In addition to explaining a phenomenon that is common to many cultures, the aim of this research is to better understand some of the symptoms of patients suffering from schizophrenia. Such patients often suffer from hallucinations or delusions associated with the presence of an alien entity whose voice they may hear or whose actions they may feel. Many scientists attribute these perceptions to a malfunction of brain circuits that integrate sensory information in relation to our body’s movements.

“Our brain possesses several representations of our body in space,” added Giulio Rognini. “Under normal conditions, it is able to assemble a unified perception of the self from these representations. But when the system malfunctions because of disease — or, in this case, a robot — this can sometimes create a second representation of one’s own body, which is no longer perceived as ‘me’ but as someone else, a ‘presence’.”

It is unlikely that these findings will stop anyone from believing in ghosts. However, for scientists, it’s still more evidence that they only exist in our minds.


Journal Reference:

  1. Olaf Blanke, Polona Pozeg, Masayuki Hara, Lukas Heydrich, Andrea Serino, Akio Yamamoto, Toshiro Higuchi, Roy Salomon, Margitta Seeck, Theodor Landis, Shahar Arzy, Bruno Herbelin, Hannes Bleuler, Giulio Rognini. Neurological and Robot-Controlled Induction of an Apparition. Current Biology, 2014; DOI: 10.1016/j.cub.2014.09.049

Direct brain interface between humans (Science Daily)

Date: November 5, 2014

Source: University of Washington

Summary: Researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

In this photo, UW students Darby Losey, left, and Jose Ceballos are positioned in two different buildings on campus as they would be during a brain-to-brain interface demonstration. The sender, left, thinks about firing a cannon at various points throughout a computer game. That signal is sent over the Web directly to the brain of the receiver, right, whose hand hits a touchpad to fire the cannon. Credit: Mary Levin, University of Washington

Sometimes, words just complicate things. What if our brains could communicate directly with each other, bypassing the need for language?

University of Washington researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

At the time of the first experiment in August 2013, the UW team was the first to demonstrate two human brains communicating in this way. The researchers then tested their brain-to-brain interface in a more comprehensive study, published Nov. 5 in the journal PLOS ONE.

“The new study brings our brain-to-brain interfacing paradigm from an initial demonstration to something that is closer to a deliverable technology,” said co-author Andrea Stocco, a research assistant professor of psychology and a researcher at UW’s Institute for Learning & Brain Sciences. “Now we have replicated our methods and know that they can work reliably with walk-in participants.”

Collaborator Rajesh Rao, a UW associate professor of computer science and engineering, is the lead author on this work.

The research team combined two kinds of noninvasive instruments and fine-tuned software to connect two human brains in real time. The process is fairly straightforward. One participant is hooked to an electroencephalography machine that reads brain activity and sends electrical pulses via the Web to the second participant, who is wearing a swim cap with a transcranial magnetic stimulation coil placed near the part of the brain that controls hand movements.

Using this setup, one person can send a command to move the hand of the other by simply thinking about that hand movement.
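The loop described above can be sketched in a few lines of code. Everything here is illustrative: the amplitude-based decoder, the 0.7 threshold, and the in-memory channel are hypothetical stand-ins for the real EEG classifier, the Web link, and the TMS hardware, none of which the article specifies.

```python
from collections import deque

FIRE_THRESHOLD = 0.7  # hypothetical decoder confidence cutoff


def classify_motor_imagery(eeg_window):
    """Hypothetical stand-in for the EEG decoder: returns a 0-1 confidence
    that the sender is imagining a hand movement. A real system would look
    at band-power changes over motor cortex, not raw amplitude."""
    return min(1.0, sum(abs(x) for x in eeg_window) / len(eeg_window))


def sender(eeg_windows, channel):
    """Sender side: whenever motor imagery is detected in an EEG window,
    push a 'fire' command onto the channel (standing in for the Web link)."""
    for window in eeg_windows:
        if classify_motor_imagery(window) > FIRE_THRESHOLD:
            channel.append("fire")


def receiver(channel, trigger_tms):
    """Receiver side: each 'fire' command triggers the TMS coil over motor
    cortex, twitching the hand onto the touchpad. Returns the pulse count."""
    pulses = 0
    while channel:
        if channel.popleft() == "fire":
            trigger_tms()
            pulses += 1
    return pulses


channel = deque()
sender([[0.9, 0.9], [0.1, 0.1], [1.0, 1.0]], channel)
pulses = receiver(channel, lambda: None)  # two windows cross the threshold
```

In the real experiment the two halves ran on separate machines half a mile apart; the split into independent sender and receiver loops mirrors that separation.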

The UW study involved three pairs of participants. Each pair included a sender and a receiver with different roles and constraints. They sat in separate buildings on campus about a half mile apart and were unable to interact with each other in any way — except for the link between their brains.

Each sender was in front of a computer game in which he or she had to defend a city by firing a cannon and intercepting rockets launched by a pirate ship. But because the senders could not physically interact with the game, the only way they could defend the city was by thinking about moving their hand to fire the cannon.

Across campus, each receiver sat wearing headphones in a dark room — with no ability to see the computer game — with the right hand positioned over the only touchpad that could actually fire the cannon. If the brain-to-brain interface was successful, the receiver’s hand would twitch, pressing the touchpad and firing the cannon that was displayed on the sender’s computer screen across campus.

Researchers found that accuracy varied among the pairs, ranging from 25 to 83 percent. Misses were mostly due to a sender failing to accurately execute the thought to send the “fire” command. The researchers were also able to quantify the exact amount of information that was transferred between the two brains.
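The article does not say how the information transfer was quantified. One standard way to put a number in bits on a binary fire/no-fire link is the mutual information between sent and received commands, sketched here purely as an illustration of the idea, not as the paper's actual method:

```python
from math import log2


def bits_per_trial(hits, misses, false_alarms, correct_rejects):
    """Mutual information I(sent; received), in bits per trial, for a binary
    command channel, estimated from trial counts. A noiseless channel yields
    1 bit per trial; a channel at chance yields 0."""
    n = hits + misses + false_alarms + correct_rejects
    # Empirical joint distribution over (sent, received):
    joint = [hits / n, misses / n, false_alarms / n, correct_rejects / n]
    p_sent = joint[0] + joint[1]  # P(command sent)
    p_recv = joint[0] + joint[2]  # P(hand fired)
    # Product of marginals, i.e. what the joint would be under independence:
    indep = [p_sent * p_recv, p_sent * (1 - p_recv),
             (1 - p_sent) * p_recv, (1 - p_sent) * (1 - p_recv)]
    # I = sum over cells of p(x,y) * log2(p(x,y) / (p(x) p(y)))
    return sum(j * log2(j / q) for j, q in zip(joint, indep) if j > 0)


bits_per_trial(5, 0, 0, 5)      # perfect channel: 1.0 bit per trial
bits_per_trial(25, 25, 25, 25)  # chance performance: 0.0 bits per trial
```

This also makes the accuracy figures easier to interpret: a pair at 25 percent accuracy may transmit essentially no information, while a pair at 83 percent transmits a substantial fraction of a bit per trial.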

Another research team from the company Starlab in Barcelona, Spain, recently published results in the same journal showing direct communication between two human brains, but that study only tested one sender brain instead of different pairs of study participants and was conducted offline instead of in real time over the Web.

Now, with a new $1 million grant from the W.M. Keck Foundation, the UW research team is taking the work a step further in an attempt to decode and transmit more complex brain processes.

With the new funding, the research team will expand the types of information that can be transferred from brain to brain, including more complex visual and psychological phenomena such as concepts, thoughts and rules.

They’re also exploring how to influence brain waves that correspond with alertness or sleepiness. Eventually, for example, the brain of a sleepy airplane pilot dozing off at the controls could stimulate the copilot’s brain to become more alert.

The project could also eventually lead to “brain tutoring,” in which knowledge is transferred directly from the brain of a teacher to a student.

“Imagine someone who’s a brilliant scientist but not a brilliant teacher. Complex knowledge is hard to explain — we’re limited by language,” said co-author Chantel Prat, a faculty member at the Institute for Learning & Brain Sciences and a UW assistant professor of psychology.

Other UW co-authors are Joseph Wu of computer science and engineering; Devapratim Sarma and Tiffany Youngquist of bioengineering; and Matthew Bryan, formerly of the UW.

The research published in PLOS ONE was initially funded by the U.S. Army Research Office and the UW, with additional support from the Keck Foundation.

Journal Reference:

  1. Rajesh P. N. Rao, Andrea Stocco, Matthew Bryan, Devapratim Sarma, Tiffany M. Youngquist, Joseph Wu, Chantel S. Prat. A Direct Brain-to-Brain Interface in Humans. PLoS ONE, 2014; 9 (11): e111332. DOI: 10.1371/journal.pone.0111332

How the brain leads us to believe we have sharp vision (Science Daily)

Date: October 17, 2014

Source: Bielefeld University

Summary: We assume that we can see the world around us in sharp detail. In fact, our eyes can only process a fraction of our surroundings precisely. In a series of experiments, psychologists have been investigating how the brain fools us into believing that we see in sharp detail.

The thumbnail at the end of an outstretched arm: This is the area that the eye actually can see in sharp detail. Researchers have investigated why the rest of the world also appears to be uniformly detailed. Credit: Bielefeld University

We assume that we can see the world around us in sharp detail. In fact, our eyes can only process a fraction of our surroundings precisely. In a series of experiments, psychologists at Bielefeld University have been investigating how the brain fools us into believing that we see in sharp detail. The results have been published in the Journal of Experimental Psychology: General. The central finding is that our nervous system uses past visual experiences to predict how blurred objects would look in sharp detail.

“In our study we are dealing with the question of why we believe that we see the world uniformly detailed,” says Dr. Arvid Herwig from the Neuro-Cognitive Psychology research group of the Faculty of Psychology and Sports Science. The group is also affiliated to the Cluster of Excellence Cognitive Interaction Technology (CITEC) of Bielefeld University and is led by Professor Dr. Werner X. Schneider.

Only the fovea, the central area of the retina, can process objects precisely. We should therefore only be able to see a small area of our environment in sharp detail, an area about the size of a thumbnail at the end of an outstretched arm. In contrast, all visual impressions that fall outside the fovea on the retina become progressively coarser. Nevertheless, we commonly have the impression that we see large parts of our environment in sharp detail.

Herwig and Schneider have been getting to the bottom of this phenomenon with a series of experiments. Their approach presumes that people learn through countless eye movements over a lifetime to connect the coarse impressions of objects outside the fovea to the detailed visual impressions after the eye has moved to the object of interest. For example, the coarse visual impression of a football (blurred image of a football) is connected to the detailed visual impression after the eye has moved. If a person sees a football out of the corner of her eye, her brain will compare this current blurred picture with memorised images of blurred objects. If the brain finds an image that fits, it will replace the coarse image with a precise image from memory. This blurred visual impression is replaced before the eye moves. The person thus thinks that she already sees the ball clearly, although this is not the case.
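The replacement mechanism proposed above amounts to a learned lookup: coarse peripheral impressions are paired with the sharp foveal views seen after past saccades, and the best-matching stored pair supplies the predicted detail. The sketch below is a toy illustration of that idea; the feature vectors, the squared-distance match, and the class names are all assumptions, not anything from the study.

```python
def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))


class SaccadeMemory:
    """Toy model of the proposed learning: each eye movement links a coarse
    peripheral impression to the detailed foveal impression seen afterwards."""

    def __init__(self):
        self.pairs = []  # list of (blurred_features, sharp_label)

    def learn(self, blurred, sharp):
        """Store one blurred-to-sharp association from a completed saccade."""
        self.pairs.append((blurred, sharp))

    def predict(self, blurred):
        """Before the next saccade: return the sharp impression whose stored
        coarse counterpart best matches the current peripheral blur."""
        return min(self.pairs, key=lambda p: distance(p[0], blurred))[1]


memory = SaccadeMemory()
memory.learn([0.2, 0.8], "football")   # hypothetical coarse features
memory.learn([0.9, 0.1], "thumbnail")
memory.predict([0.25, 0.7])            # nearest stored blur wins: "football"
```

On this picture, the experiment's object swaps during saccades work by inserting mislabeled pairs into exactly this kind of memory, which is why the participants' peripheral judgments shifted after only a few minutes.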

The psychologists have been using eye-tracking experiments to test their approach. With this technique, eye movements are measured precisely by a special camera that records 1,000 images per second. In their experiments, the scientists recorded fast ballistic eye movements (saccades) of test persons. Though most of the participants did not realise it, certain objects were changed during eye movements. The aim was for the test persons to learn new connections between visual stimuli from inside and outside the fovea, in other words between detailed and coarse impressions. Afterwards, the participants were asked to judge visual characteristics of objects outside the area of the fovea. The results showed that the connection between a coarse and a detailed visual impression formed after just a few minutes: the coarse visual impressions became similar to the newly learnt detailed visual impressions.

“The experiments show that our perception depends in large measure on stored visual experiences in our memory,” says Arvid Herwig. According to Herwig and Schneider, these experiences serve to predict the effect of future actions (“What would the world look like after a further eye movement”). In other words: “We do not see the actual world, but our predictions.”

Journal Reference:

  1. Arvid Herwig, Werner X. Schneider. Predicting object features across saccades: Evidence from object recognition and visual search. Journal of Experimental Psychology: General, 2014; 143 (5): 1903 DOI: 10.1037/a0036781

Scientists find ‘hidden brain signatures’ of consciousness in vegetative state patients (Science Daily)

Date: October 16, 2014

Source: University of Cambridge

Summary: Scientists in Cambridge have found hidden signatures in the brains of people in a vegetative state, which point to networks that could support consciousness even when a patient appears to be unconscious and unresponsive. The study could help doctors identify patients who are aware despite being unable to communicate.

These images show brain networks in two behaviorally similar vegetative patients (left and middle), but one of whom imagined playing tennis (middle panel), alongside a healthy adult (right panel). Credit: Srivas Chennu

Scientists in Cambridge have found hidden signatures in the brains of people in a vegetative state, which point to networks that could support consciousness even when a patient appears to be unconscious and unresponsive. The study could help doctors identify patients who are aware despite being unable to communicate.

There has been a great deal of interest recently in how much patients in a vegetative state following severe brain injury are aware of their surroundings. Although unable to move and respond, some of these patients are able to carry out tasks such as imagining playing a game of tennis. Using a functional magnetic resonance imaging (fMRI) scanner, which measures brain activity, researchers have previously been able to record activity in the pre-motor cortex, the part of the brain which deals with movement, in apparently unconscious patients asked to imagine playing tennis.

Now, a team of researchers led by scientists at the University of Cambridge and the MRC Cognition and Brain Sciences Unit, Cambridge, have used high-density electroencephalographs (EEG) and a branch of mathematics known as ‘graph theory’ to study networks of activity in the brains of 32 patients diagnosed as vegetative and minimally conscious and compare them to healthy adults. The findings of the research are published today in the journal PLOS Computational Biology. The study was funded mainly by the Wellcome Trust, the National Institute of Health Research Cambridge Biomedical Research Centre and the Medical Research Council (MRC).
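
In graph-theoretic terms, an analysis like the one described treats EEG electrodes as nodes and strong statistical coupling between their signals as edges; simple metrics such as node degree then summarize how richly connected the network is. The toy illustration below shows only that general idea; the connectivity matrix, the threshold, and the metric choice are invented for demonstration and are not taken from the study:

```python
# Toy EEG connectivity: entry [i][j] is the coupling strength between
# electrodes i and j (symmetric; self-coupling on the diagonal ignored).
connectivity = [
    [0.0, 0.8, 0.3, 0.7],
    [0.8, 0.0, 0.6, 0.2],
    [0.3, 0.6, 0.0, 0.9],
    [0.7, 0.2, 0.9, 0.0],
]
THRESHOLD = 0.5  # keep only strong couplings as graph edges

def degrees(matrix, threshold):
    """Node degree: the number of strong connections per electrode."""
    return [sum(1 for j, w in enumerate(row) if j != i and w > threshold)
            for i, row in enumerate(matrix)]

print(degrees(connectivity, THRESHOLD))  # [2, 2, 2, 2]
```

A sparser, more fragmented degree profile would correspond to the impaired networks the researchers report in most vegetative patients, while a profile resembling healthy controls would flag possible covert awareness.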

The researchers showed that the rich and diversely connected networks that support awareness in the healthy brain are typically — but importantly, not always — impaired in patients in a vegetative state. Some vegetative patients had well-preserved brain networks that look similar to those of healthy adults — these patients were those who had shown signs of hidden awareness by following commands such as imagining playing tennis.

Dr Srivas Chennu from the Department of Clinical Neurosciences at the University of Cambridge says: “Understanding how consciousness arises from the interactions between networks of brain regions is an elusive but fascinating scientific question. But for patients diagnosed as vegetative and minimally conscious, and their families, this is far more than just an academic question — it takes on a very real significance. Our research could improve clinical assessment and help identify patients who might be covertly aware despite being uncommunicative.”

The findings could help researchers develop a relatively simple way of identifying which patients might be aware whilst in a vegetative state. Unlike the ‘tennis test’, which can be a difficult task for patients and requires expensive and often unavailable fMRI scanners, this new technique uses EEG and could therefore be administered at a patient’s bedside. However, the tennis test is stronger evidence that the patient is indeed conscious, to the extent that they can follow commands using their thoughts. The researchers believe that a combination of such tests could help improve accuracy in the prognosis for a patient.

Dr Tristan Bekinschtein from the MRC Cognition and Brain Sciences Unit and the Department of Psychology, University of Cambridge, adds: “Although there are limitations to how predictive our test would be if used in isolation, combined with other tests it could help in the clinical assessment of patients. If a patient’s ‘awareness’ networks are intact, then we know that they are likely to be aware of what is going on around them. But unfortunately, they also suggest that vegetative patients with severely impaired networks at rest are unlikely to show any signs of consciousness.”

Journal Reference:

  1. Chennu S, Finoia P, Kamau E, Allanson J, Williams GB, et al. Spectral Signatures of Reorganised Brain Networks in Disorders of Consciousness. PLOS Computational Biology, 2014; 10 (10): e1003887 DOI: 10.1371/journal.pcbi.1003887

Amputees discern familiar sensations across prosthetic hand (Science Daily)

Date: October 8, 2014

Source: Case Western Reserve University

Summary: Patients connected to a new prosthetic system said they ‘felt’ their hands for the first time since they lost them in accidents. In the ensuing months, they began feeling sensations that were familiar and were able to control their prosthetic hands with more — well — dexterity.

Medical researchers are helping restore the sense of touch in amputees. Credit: Image courtesy of Case Western Reserve University

Even before he lost his right hand to an industrial accident 4 years ago, Igor Spetic had family open his medicine bottles. Cotton balls give him goose bumps.

Now, blindfolded during an experiment, he feels his arm hairs rise when a researcher brushes the back of his prosthetic hand with a cotton ball.

Spetic, of course, can’t feel the ball. But a computer sends patterns of electric signals into the nerves in his arm and on to his brain, which tell him otherwise. “I knew immediately it was cotton,” he said.

That’s one of several types of sensation Spetic, of Madison, Ohio, can feel with the prosthetic system being developed by Case Western Reserve University and the Louis Stokes Cleveland Veterans Affairs Medical Center.

Spetic was excited just to “feel” again, and quickly received an unexpected benefit. The phantom pain he’d suffered, which he’s described as a vice crushing his closed fist, subsided almost completely. A second patient, who had less phantom pain after losing his right hand and much of his forearm in an accident, said his, too, is nearly gone.

Despite having phantom pain, both men said that the first time they were connected to the system and received the electrical stimulation, was the first time they’d felt their hands since their accidents. In the ensuing months, they began feeling sensations that were familiar and were able to control their prosthetic hands with more — well — dexterity.


“The sense of touch is one of the ways we interact with objects around us,” said Dustin Tyler, an associate professor of biomedical engineering at Case Western Reserve and director of the research. “Our goal is not just to restore function, but to build a reconnection to the world. This is long-lasting, chronic restoration of sensation over multiple points across the hand.”

“The work reactivates areas of the brain that produce the sense of touch,” said Tyler, who is also associate director of the Advanced Platform Technology Center at the Cleveland VA. “When the hand is lost, the inputs that switched on these areas were lost.”

How the system works and the results will be published online in the journal Science Translational Medicine Oct. 8.

“The sense of touch actually gets better,” said Keith Vonderhuevel, of Sidney, Ohio, who lost his hand in 2005 and had the system implanted in January 2013. “They change things on the computer to change the sensation.

“One time,” he said, “it felt like water running across the back of my hand.”

The system, which is limited to the lab at this point, uses electrical stimulation to give the sense of feeling. But there are key differences from other reported efforts.

First, the nerves that used to relay the sense of touch to the brain are stimulated by contact points on cuffs that encircle major nerve bundles in the arm, not by electrodes inserted through the protective nerve membranes.

Surgeons Michael W Keith, MD and J. Robert Anderson, MD, from Case Western Reserve School of Medicine and Cleveland VA, implanted three electrode cuffs in Spetic’s forearm, enabling him to feel 19 distinct points; and two cuffs in Vonderhuevel’s upper arm, enabling him to feel 16 distinct locations.

Second, when they began the study, the sensation Spetic felt when a sensor was touched was a tingle. To provide more natural sensations, the research team has developed algorithms that convert the input from sensors taped to a patient’s hand into varying patterns and intensities of electrical signals. The sensors themselves aren’t sophisticated enough to discern textures; they detect only pressure.

The different signal patterns, passed through the cuffs, are read as different stimuli by the brain. The scientists continue to fine-tune the patterns, and Spetic and Vonderhuevel appear to be becoming more attuned to them.
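
The conversion described above, from a sensor's pressure reading into a patterned electrical stimulus, can be sketched as a simple mapping in which firmer touch yields stronger and faster pulses. Everything in this sketch (the function name, scaling constants, units, and value ranges) is a hypothetical illustration and not the lab's actual algorithm:

```python
def pressure_to_stimulus(pressure, max_pressure=10.0):
    """Map a sensor pressure reading (arbitrary units) to a stimulation
    pattern: stronger touch -> higher pulse intensity and pulse rate.
    All constants here are illustrative, not published parameters."""
    level = max(0.0, min(pressure / max_pressure, 1.0))  # normalize to 0..1
    return {
        "intensity_mA": round(0.5 + 1.5 * level, 3),  # hypothetical range
        "pulse_rate_hz": round(10 + 90 * level),      # hypothetical range
    }

light = pressure_to_stimulus(1.0)   # a gentle touch
firm = pressure_to_stimulus(9.0)    # a hard squeeze
print(light, firm)
```

Varying which nerve-cuff contact receives the pattern, and how the intensity and rate evolve over time, is what lets the brain read different patterns as different textures and locations.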

Third, the system has worked for 2½ years in Spetic and 1½ years in Vonderhuevel. Other research has reported sensation lasting one month and, in some cases, the ability to feel began to fade over weeks.

A blindfolded Vonderhuevel has held grapes or cherries in his prosthetic hand — the signals enabling him to gauge how tightly he’s squeezing — and pulled out the stems.

“When the sensation’s on, it’s not too hard,” he said. “When it’s off, you make a lot of grape juice.”

Different signal patterns interpreted as sandpaper, a smooth surface and a ridged surface enabled a blindfolded Spetic to discern each as they were applied to his hand. And when researchers touched two different locations with two different textures at the same time, he could discern the type and location of each.

Tyler believes that everyone creates a map of sensations from their life history that enables them to correlate an input to a given sensation.

“I don’t presume the stimuli we’re giving is hitting the spots on the map exactly, but they’re familiar enough that the brain identifies what it is,” he said.

Because of Vonderhuevel’s and Spetic’s continuing progress, Tyler is hopeful the method can lead to a lifetime of use. He is optimistic his team can develop a system a patient could use at home within five years.

In addition to hand prosthetics, Tyler believes the technology can be used to help those using prosthetic legs receive input from the ground and adjust to gravel or uneven surfaces. Beyond that, the neural interfacing and new stimulation techniques may be useful in controlling tremors, deep brain stimulation and more.

Journal Reference:

  1. D. W. Tan, M. A. Schiefer, M. W. Keith, J. R. Anderson, J. Tyler, D. J. Tyler. A neural interface provides long-term stable natural touch perception. Science Translational Medicine, 2014; 6 (257): 257ra138 DOI: 10.1126/scitranslmed.3008669

*   *   *

Mind-controlled prosthetic arms that work in daily life are now a reality (Science Daily)

Date: October 8, 2014

Source: Chalmers University of Technology

Summary: For the first time, robotic prostheses controlled via implanted neuromuscular interfaces have become a clinical reality. A novel osseointegrated (bone-anchored) implant system gives patients new opportunities in their daily life and professional activities.

For the first time, robotic prostheses controlled via implanted neuromuscular interfaces have become a clinical reality. Credit: Image courtesy of Chalmers University of Technology

For the first time, robotic prostheses controlled via implanted neuromuscular interfaces have become a clinical reality. A novel osseointegrated (bone-anchored) implant system gives patients new opportunities in their daily life and professional activities.

In January 2013 a Swedish arm amputee was the first person in the world to receive a prosthesis with a direct connection to bone, nerves and muscles. An article about this achievement and its long-term stability will now be published in the journal Science Translational Medicine.

“Going beyond the lab to allow the patient to face real-world challenges is the main contribution of this work,” says Max Ortiz Catalan, research scientist at Chalmers University of Technology and leading author of the publication.

“We have used osseointegration to create a long-term stable fusion between man and machine, where we have integrated them at different levels. The artificial arm is directly attached to the skeleton, thus providing mechanical stability. Then the human’s biological control system, that is nerves and muscles, is also interfaced to the machine’s control system via neuromuscular electrodes. This creates an intimate union between the body and the machine; between biology and mechatronics.”

The direct skeletal attachment is created by what is known as osseointegration, a technology in limb prostheses pioneered by associate professor Rickard Brånemark and his colleagues at Sahlgrenska University Hospital. Rickard Brånemark led the surgical implantation and collaborated closely with Max Ortiz Catalan and Professor Bo Håkansson at Chalmers University of Technology on this project.

The patient’s arm was amputated over ten years ago. Before the surgery, his prosthesis was controlled via electrodes placed over the skin. Robotic prostheses can be very advanced, but such a control system makes them unreliable and limits their functionality, and patients commonly reject them as a result.

Now, the patient has been given a control system that is directly connected to his own. He has a physically challenging job as a truck driver in northern Sweden, and since the surgery he has found that he can cope with all the situations he faces: everything from clamping his trailer load and operating machinery, to unpacking eggs and tying his children’s skates, regardless of the environmental conditions (read more about the benefits of the new technology below).

The patient is also one of the first in the world to take part in an effort to achieve long-term sensation via the prosthesis. Because the implant is a bidirectional interface, it can also be used to send signals in the opposite direction — from the prosthetic arm to the brain. This is the researchers’ next step, to clinically implement their findings on sensory feedback.

“Reliable communication between the prosthesis and the body has been the missing link for the clinical implementation of neural control and sensory feedback, and this is now in place,” says Max Ortiz Catalan. “So far we have shown that the patient has a long-term stable ability to perceive touch in different locations in the missing hand. Intuitive sensory feedback and control are crucial for interacting with the environment, for example to reliably hold an object despite disturbances or uncertainty. Today, no patient walks around with a prosthesis that provides such information, but we are working towards changing that in the very short term.”

The researchers plan to treat more patients with the novel technology later this year.

“We see this technology as an important step towards more natural control of artificial limbs,” says Max Ortiz Catalan. “It is the missing link for allowing sophisticated neural interfaces to control sophisticated prostheses. So far, this has only been possible in short experiments within controlled environments.”

More about: How the technology works

The new technology is based on the OPRA treatment (osseointegrated prosthesis for the rehabilitation of amputees), where a titanium implant is surgically inserted into the bone and becomes fixated to it by a process known as osseointegration (Osseo = bone). A percutaneous component (abutment) is then attached to the titanium implant to serve as a metallic bone extension, where the prosthesis is then fixated. Electrodes are implanted in nerves and muscles as the interfaces to the biological control system. These electrodes record signals which are transmitted via the osseointegrated implant to the prostheses, where the signals are finally decoded and translated into motions.

More about: Benefits of the new technology, compared to socket prostheses

Direct skeletal attachment by osseointegration means:

  • Increased range of motion since there are no physical limitations by the socket — the patient can move the remaining joints freely
  • Elimination of sores and pain caused by the constant pressure from the socket
  • Stable and easy attachment/detachment
  • Increased sensory feedback due to the direct transmission of forces and vibrations to the bone (osseoperception)
  • The prosthesis can be worn all day, every day
  • No socket adjustments required (there is no socket)

Implanting electrodes in nerves and muscles means that:

  • Due to the intimate connection, the patients can control the prosthesis with less effort and more precisely, and can thus handle smaller and more delicate items.
  • The close proximity between source and electrode also prevents activity from other muscles from interfering (cross-talk), so that the patient can move the arm to any position and still maintain control of the prosthesis.
  • More motor signals can be obtained from muscles and nerves, so that more movements can be intuitively controlled in the prosthesis.
  • After the first fitting of the controller, little or no recalibration is required because there is no need to reposition the electrodes on every occasion the prosthesis is worn (as opposed to superficial electrodes).
  • Since the electrodes are implanted rather than placed over the skin, control is not affected by environmental conditions (cold and heat) that change the skin state, or by limb motions that displace the skin over the muscles. The control is also resilient to electromagnetic interference (noise from other electric devices or power lines) as the electrodes are shielded by the body itself.
  • Electrodes in the nerves can be used to send signals to the brain as sensations coming from the prostheses.

Journal Reference:

  1. M. Ortiz-Catalan, B. Hakansson, R. Branemark. An osseointegrated human-machine gateway for long-term sensory feedback and motor control of artificial limbs. Science Translational Medicine, 2014; 6 (257): 257re6 DOI: 10.1126/scitranslmed.3008933

Consciousness may persist for up to three minutes after death, study says (O Globo)

Scientists interviewed patients who were clinically dead but returned to life


A scene from the Rede Globo telenovela “Amor Eterno Amor” depicts the kind of near-death experience studied by the scientists at the University of Southampton. Photo: Reprodução

RIO – That tunnel with a bright light at the end, and the sense of peace described in films and by people who claim to have had near-death experiences, may be real. In the largest study ever conducted on the subject, scientists at the University of Southampton say they have shown that human consciousness persists for at least three minutes after biological death. During that interval, patients could witness, and later remember, events such as leaving their own bodies and movement around the hospital room.

Over four years, the specialists examined more than two thousand people who suffered cardiac arrest in 15 hospitals in the United Kingdom, the United States and Austria. About 16% survived, and of these, more than 40% described some kind of “awareness” during the time they were clinically dead, before their hearts started beating again.

The most striking case was that of a man who remembered leaving his body entirely and watching his own resuscitation from the corner of the room. Despite being unconscious and “dead” for three minutes, the patient described in detail the actions of the nursing staff and the sounds of the machines.

“We know the brain can’t function when the heart has stopped beating. But in this case, conscious awareness appears to have continued for up to three minutes into the period when the heart wasn’t beating, even though the brain typically shuts down within 20 to 30 seconds after the heart has stopped,” researcher Sam Parnia explained to the British newspaper The Telegraph.

Of the 2,060 cardiac arrest patients studied, 330 survived and 140 said they had experienced some kind of awareness while being resuscitated. Although many could not remember specific details, some accounts coincided. One in five said they had felt an unusual sense of peacefulness, while nearly a third said time had slowed down or sped up.

Some remembered seeing a bright light, a golden flash or the sun shining. Others reported feelings of fear, of drowning or of being dragged through deep water. About 13% said they had felt separated from their bodies.

According to Parnia, many more people may have experiences when they are close to death, but the drugs or sedatives used during resuscitation can affect memory:

“Estimates suggest that millions of people have had vivid experiences in relation to death. Many assumed they were hallucinations or illusions, but the accounts seem to correspond to real events. And a larger proportion of people may have vivid death experiences but not remember them because of the effects of brain injury or sedatives on memory circuits.”


Read more:

Near-death experiences? Results of the world’s largest medical study of the human mind and consciousness at time of death (Science Daily)

Date: October 7, 2014

Source: University of Southampton

Summary: The results of a four-year international study of 2060 cardiac arrest cases across 15 hospitals conclude the following. The themes relating to the experience of death appear far broader than what has been understood so far, or what has been described as so-called near-death experiences. In some cases of cardiac arrest, memories of visual awareness compatible with so-called out-of-body experiences may correspond with actual events. A higher proportion of people may have vivid death experiences, but do not recall them due to the effects of brain injury or sedative drugs on memory circuits. Widely used yet scientifically imprecise terms such as near-death and out-of-body experiences may not be sufficient to describe the actual experience of death. The recalled experience surrounding death merits a genuine investigation without prejudice.

The results of a four-year international study of 2060 cardiac arrest cases across 15 hospitals are in. Among those who reported a perception of awareness and completed further interviews, 46 per cent experienced a broad range of mental recollections in relation to death that were not compatible with the commonly used term of near death experiences. Credit: © sudok1 / Fotolia

The results of a four-year international study of 2060 cardiac arrest cases across 15 hospitals conclude the following. The themes relating to the experience of death appear far broader than what has been understood so far, or what has been described as so-called near-death experiences. In some cases of cardiac arrest, memories of visual awareness compatible with so-called out-of-body experiences may correspond with actual events. A higher proportion of people may have vivid death experiences, but do not recall them due to the effects of brain injury or sedative drugs on memory circuits. Widely used yet scientifically imprecise terms such as near-death and out-of-body experiences may not be sufficient to describe the actual experience of death.

Recollections in relation to death, so-called out-of-body experiences (OBEs) or near-death experiences (NDEs), are an often spoken about phenomenon which have frequently been considered hallucinatory or illusory in nature; however, objective studies on these experiences are limited.

In 2008, a large-scale study involving 2060 patients from 15 hospitals in the United Kingdom, United States and Austria was launched. The AWARE (AWAreness during REsuscitation) study, sponsored by the University of Southampton in the UK, examined the broad range of mental experiences in relation to death. Researchers also tested the validity of conscious experiences using objective markers for the first time in a large study to determine whether claims of awareness compatible with out-of-body experiences correspond with real or hallucinatory events.

Results of the study have been published in the journal Resuscitation.

Dr Sam Parnia, Assistant Professor of Critical Care Medicine and Director of Resuscitation Research at The State University of New York at Stony Brook, USA, and the study’s lead author, explained: “Contrary to perception, death is not a specific moment but a potentially reversible process that occurs after any severe illness or accident causes the heart, lungs and brain to cease functioning. If attempts are made to reverse this process, it is referred to as ‘cardiac arrest’; however, if these attempts do not succeed it is called ‘death’. In this study we wanted to go beyond the emotionally charged yet poorly defined term of NDEs to explore objectively what happens when we die.”

Thirty-nine per cent of patients who survived cardiac arrest and were able to undergo structured interviews described a perception of awareness, but interestingly did not have any explicit recall of events.

“This suggests more people may have mental activity initially but then lose their memories after recovery, either due to the effects of brain injury or sedative drugs on memory recall,” explained Dr Parnia, who was an Honorary Research Fellow at the University of Southampton when he started the AWARE study.

Among those who reported a perception of awareness and completed further interviews, 46 per cent experienced a broad range of mental recollections in relation to death that were not compatible with the commonly used term of NDEs. These included fearful and persecutory experiences. Only 9 per cent had experiences compatible with NDEs, and 2 per cent exhibited full awareness compatible with OBEs, with explicit recall of ‘seeing’ and ‘hearing’ events.

One case was validated and timed using auditory stimuli during cardiac arrest. Dr Parnia concluded: “This is significant, since it has often been assumed that experiences in relation to death are likely hallucinations or illusions, occurring either before the heart stops or after the heart has been successfully restarted, but not an experience corresponding with ‘real’ events when the heart isn’t beating. In this case, consciousness and awareness appeared to occur during a three-minute period when there was no heartbeat. This is paradoxical, since the brain typically ceases functioning within 20-30 seconds of the heart stopping and doesn’t resume again until the heart has been restarted. Furthermore, the detailed recollections of visual awareness in this case were consistent with verified events.

“Thus, while it was not possible to absolutely prove the reality or meaning of patients’ experiences and claims of awareness, (due to the very low incidence (2 per cent) of explicit recall of visual awareness or so called OBE’s), it was impossible to disclaim them either and more work is needed in this area. Clearly, the recalled experience surrounding death now merits further genuine investigation without prejudice.”

Further studies are also needed to explore whether awareness (explicit or implicit) may lead to long term adverse psychological outcomes including post-traumatic stress disorder.

Dr Jerry Nolan, Editor-in-Chief of Resuscitation, stated: “The AWARE study researchers are to be congratulated on the completion of a fascinating study that will open the door to more extensive research into what happens when we die.”

Journal Reference:

  1. Parnia S, et al. AWARE—AWAreness during REsuscitation—A prospective study. Resuscitation, 2014 DOI: 10.1016/j.resuscitation.2014.09.004

How learning to talk is in the genes (Science Daily)

Date: September 16, 2014

Source: University of Bristol

Summary: Researchers have found evidence that genetic factors may contribute to the development of language during infancy. Scientists discovered a significant link between genetic changes near the ROBO2 gene and the number of words spoken by children in the early stages of language development.

Researchers have found evidence that genetic factors may contribute to the development of language during infancy. Credit: © witthaya / Fotolia

Researchers have found evidence that genetic factors may contribute to the development of language during infancy.

Scientists from the Medical Research Council (MRC) Integrative Epidemiology Unit at the University of Bristol worked with colleagues around the world to discover a significant link between genetic changes near the ROBO2 gene and the number of words spoken by children in the early stages of language development.

Children produce their first words at about 10 to 15 months of age, and their vocabulary expands as they grow: from around 50 words at 15 to 18 months, to 200 words at 18 to 30 months, 14,000 words at six years old, and over 50,000 words by the time they leave secondary school.

The researchers found the genetic link during the ages of 15 to 18 months when toddlers typically communicate with single words only before their linguistic skills advance to two-word combinations and more complex grammatical structures.

The results, published in Nature Communications today [16 Sept], shed further light on a specific genetic region on chromosome 3, which has been previously implicated in dyslexia and speech-related disorders.

The ROBO2 gene contains the instructions for making the ROBO2 protein. This protein directs chemicals in brain cells and other neuronal cell formations that may help infants to develop language but also to produce sounds.

The ROBO2 protein also closely interacts with other ROBO proteins that have previously been linked to problems with reading and the storage of speech sounds.

Dr Beate St Pourcain, who jointly led the research with Professor Davey Smith at the MRC Integrative Epidemiology Unit, said: “This research helps us to better understand the genetic factors which may be involved in the early language development in healthy children, particularly at a time when children speak with single words only, and strengthens the link between ROBO proteins and a variety of linguistic skills in humans.”

Dr Claire Haworth, one of the lead authors, based at the University of Warwick, commented: “In this study we found that results using DNA confirm those we get from twin studies about the importance of genetic influences for language development. This is good news as it means that current DNA-based investigations can be used to detect most of the genetic factors that contribute to these early language skills.”

The study was carried out by an international team of scientists from the EArly Genetics and Lifecourse Epidemiology Consortium (EAGLE) and involved data from over 10,000 children.

Journal Reference:
  1. Beate St Pourcain, Rolieke A.M. Cents, Andrew J.O. Whitehouse, Claire M.A. Haworth, Oliver S.P. Davis, Paul F. O’Reilly, Susan Roulstone, Yvonne Wren, Qi W. Ang, Fleur P. Velders, David M. Evans, John P. Kemp, Nicole M. Warrington, Laura Miller, Nicholas J. Timpson, Susan M. Ring, Frank C. Verhulst, Albert Hofman, Fernando Rivadeneira, Emma L. Meaburn, Thomas S. Price, Philip S. Dale, Demetris Pillas, Anneli Yliherva, Alina Rodriguez, Jean Golding, Vincent W.V. Jaddoe, Marjo-Riitta Jarvelin, Robert Plomin, Craig E. Pennell, Henning Tiemeier, George Davey Smith. Common variation near ROBO2 is associated with expressive vocabulary in infancy. Nature Communications, 2014; 5: 4831 DOI: 10.1038/ncomms5831

Nudge: The gentle science of good governance (New Scientist)

25 June 2013

Magazine issue 2922

Not long before David Cameron became UK prime minister, he famously prescribed some holiday reading for his colleagues: a book modestly entitled Nudge.

Cameron wasn’t the only world leader to find it compelling. US president Barack Obama soon appointed one of its authors, Cass Sunstein, a social scientist at the University of Chicago, to a powerful position in the White House. And thus the nudge bandwagon began rolling. It has been picking up speed ever since (see “Nudge power: Big government’s little pushes”).

So what’s the big idea? We don’t always do what’s best for ourselves, thanks to cognitive biases and errors that make us deviate from rational self-interest. The premise of Nudge is that subtly offsetting or exploiting these biases can help people to make better choices.

If you live in the US or UK, you’re likely to have been nudged towards a certain decision at some point. You probably didn’t notice. That’s deliberate: nudging is widely assumed to work best when people aren’t aware of it. But that stealth breeds suspicion: people recoil from the idea that they are being stealthily manipulated.

There are other grounds for suspicion. It sounds glib: a neat term for a slippery concept. You could argue that it is a way for governments to avoid taking decisive action. Or you might be concerned that it lets them push us towards a convenient choice, regardless of what we really want.

These don’t really hold up. Our distaste for being nudged is understandable, but is arguably just another cognitive bias, given that our behaviour is constantly being discreetly influenced by others. What’s more, interventions only qualify as nudges if they don’t create concrete incentives in any particular direction. So the choice ultimately remains a free one.

Nudging is a less blunt instrument than regulation or tax. It should supplement rather than supplant these, and nudgers must be held accountable. But broadly speaking, anyone who believes in evidence-based policy should try to overcome their distaste and welcome governance based on behavioural insights and controlled trials, rather than carrot-and-stick wishful thinking. Perhaps we just need a nudge in the right direction.

Brain circuit differences reflect divisions in social status (Science Daily)

Date: September 2, 2014

Source: University of Oxford

Summary: Life at opposite ends of primate social hierarchies is linked to specific brain networks, research has shown. The more dominant you are, the bigger some brain regions are. If your social position is more subordinate, other brain regions are bigger.


Group of young barbary macaques (stock image). Credit: © scphoto48 / Fotolia

Life at opposite ends of primate social hierarchies is linked to specific brain networks, a new Oxford University study has shown.

The importance of social rank is something we all learn at an early age. In non-human primates, social dominance influences access to food and mates. In humans, social hierarchies influence our performance everywhere from school to the workplace and have a direct influence on our well-being and mental health. Life on the lowest rung can be stressful, but life at the top also requires careful acts of balancing and coalition forming. However, we know very little about the relationship between these social ranks and brain function.

The new research, conducted at the University of Oxford, reveals differences between individual primates’ brains that depend on their social status. The more dominant you are, the bigger some brain regions are. If your social position is more subordinate, other brain regions are bigger. Additionally, the way the brain regions interact with each other is also associated with social status. The pattern of results suggests that successful behaviour at each end of the social scale makes specialised demands of the brain.

The research, led by Dr MaryAnn Noonan of the Decision and Action Laboratory at the University of Oxford, determined the position of 25 macaque monkeys in their social hierarchy and then analysed non-invasive scans of their brains that had been collected as part of other ongoing University research programs. The findings, published September 2 in the open-access journal PLOS Biology, show that brain regions in one neural circuit are larger in more dominant animals. The regions composing this circuit are the amygdala, raphe nucleus and hypothalamus. Previous research has shown that the amygdala is involved in learning, and processing social and emotional information. The raphe nucleus and hypothalamus are involved in controlling neurotransmitters and neurohormones, such as serotonin and oxytocin. The MRI scans also revealed that another circuit of brain regions, which collectively can be called the striatum, was larger in more subordinate animals. The striatum is known to play a complex but important role in learning the value of our choices and actions.

The study also reports that the brain’s activity, not just its structure, varies with position in the social hierarchy. The researchers found that the strength with which activity in some of these areas was coupled together was also related to social status. Collectively, these results mean that social status is not only reflected in the brain’s hardware, it is also related to differences in the brain’s software, or communication patterns.

Finally, the size of another set of brain regions correlated not only with social status but also with the size of the animal’s social group. The macaque groups ranged in size between one and seven. The research showed that grey matter in regions involved in social cognition, such as the mid-superior temporal sulcus and rostral prefrontal cortex, correlated with both group size and social status. Previous research has shown that these regions are important for a variety of social behaviours, such as interpreting facial expressions or physical gestures, understanding the intentions of others and predicting their behaviour.

“This finding may reflect the fact that social status in macaques depends not only on the outcome of competitive social interactions but on social bonds formed that promote coalitions,” says Matthew Rushworth, the head of the Decision and Action Laboratory in Oxford. “The correlation with social group size and social status suggests this set of brain regions may coordinate behaviour that bridges these two social variables.”

The results suggest that just as animals assign value to environmental stimuli they may also assign values to themselves — ‘self-values’. Social rank is likely to be an important determinant of such self-values. We already know that some of the brain regions identified in the current study track the value of objects in our environment and so may also play a key role in monitoring longer-term values associated with an individual’s status.

The reasons behind the identified brain differences remain unclear, particularly whether they are present at birth or result from social differences. Dr Noonan said: “One possibility is that the demands of a life in a particular social position use certain brain regions more frequently and as a result those areas expand to step up to the task. Alternatively, it is possible that people born with brains organised in a particular way tend towards certain social positions. In all likelihood, both of these mechanisms will work together to produce behaviour appropriate for the social context.”

Social status also changes over time and in different contexts. Dr Noonan added: “While we might be top-dog in one circle of friends, at work we might be more of a social climber. The fluidity of our social position and how our brains adapt our behavior to succeed in each context is the next exciting direction for this area of research.”


Journal Reference:

  1. MaryAnn P. Noonan, Jerome Sallet, Rogier B. Mars, Franz X. Neubert, Jill X. O’Reilly, Jesper L. Andersson, Anna S. Mitchell, Andrew H. Bell, Karla L. Miller, Matthew F. S. Rushworth. A Neural Circuit Covarying with Social Hierarchy in Macaques. PLoS Biology, 2014; 12 (9): e1001940 DOI: 10.1371/journal.pbio.1001940

Your Brain on Metaphors (The Chronicle of Higher Education)

September 1, 2014

Neuroscientists test the theory that your body shapes your ideas


Chronicle Review illustration by Scott Seymour

The player kicked the ball.
The patient kicked the habit.
The villain kicked the bucket.

The verbs are the same.
The syntax is identical.
Does the brain notice, or care,
that the first is literal, the second
metaphorical, the third idiomatic?

It sounds like a question that only a linguist could love. But neuroscientists have been trying to answer it using exotic brain-scanning technologies. Their findings have varied wildly, in some cases contradicting one another. If they make progress, the payoff will be big. Their findings will enrich a theory that aims to explain how wet masses of neurons can understand anything at all. And they may drive a stake into the widespread assumption that computers will inevitably become conscious in a humanlike way.

The hypothesis driving their work is that metaphor is central to language. Metaphor used to be thought of as merely poetic ornamentation, aesthetically pretty but otherwise irrelevant. “Love is a rose, but you better not pick it,” sang Neil Young in 1977, riffing on the timeworn comparison between a sexual partner and a pollinating perennial. For centuries, metaphor was just the place where poets went to show off.

But in their 1980 book, Metaphors We Live By, the linguist George Lakoff (at the University of California at Berkeley) and the philosopher Mark Johnson (now at the University of Oregon) revolutionized linguistics by showing that metaphor is actually a fundamental constituent of language. For example, they showed that in the seemingly literal statement “He’s out of sight,” the visual field is metaphorized as a container that holds things. The visual field isn’t really a container, of course; one simply sees objects or not. But the container metaphor is so ubiquitous that it wasn’t even recognized as a metaphor until Lakoff and Johnson pointed it out.

From such examples they argued that ordinary language is saturated with metaphors. Our eyes point to where we’re going, so we tend to speak of future time as being “ahead” of us. When things increase, they tend to go up relative to us, so we tend to speak of stocks “rising” instead of getting more expensive. “Our ordinary conceptual system is fundamentally metaphorical in nature,” they wrote.


Metaphors do differ across languages, but that doesn’t affect the theory. For example, in Aymara, spoken in Bolivia and Chile, speakers refer to past experiences as being in front of them, on the theory that past events are “visible” and future ones are not. However, the difference between behind and ahead is relatively unimportant compared with the central fact that space is being used as a metaphor for time. Lakoff argues that it is impossible—not just difficult, but impossible—for humans to talk about time and many other fundamental aspects of life without using metaphors to do it.

Lakoff and Johnson’s program is as anti-Platonic as it’s possible to get. It undermines the argument that human minds can reveal transcendent truths about reality in transparent language. They argue instead that human cognition is embodied—that human concepts are shaped by the physical features of human brains and bodies. “Our physiology provides the concepts for our philosophy,” Lakoff wrote in his introduction to Benjamin Bergen’s 2012 book, Louder Than Words: The New Science of How the Mind Makes Meaning. Marianna Bolognesi, a linguist at the International Center for Intercultural Exchange, in Siena, Italy, puts it this way: “The classical view of cognition is that language is an independent system made with abstract symbols that work independently from our bodies. This view has been challenged by the embodied account of cognition which states that language is tightly connected to our experience. Our bodily experience.”

Modern brain-scanning technologies make it possible to test such claims empirically. “That would make a connection between the biology of our bodies on the one hand, and thinking and meaning on the other hand,” says Gerard Steen, a professor of linguistics at VU University Amsterdam. Neuroscientists have been stuffing volunteers into fMRI scanners and having them read sentences that are literal, metaphorical, and idiomatic.

Neuroscientists agree on what happens with literal sentences like “The player kicked the ball.” The brain reacts as if it were carrying out the described actions. This is called “simulation.” Take the sentence “Harry picked up the glass.” “If you can’t imagine picking up a glass or seeing someone picking up a glass,” Lakoff wrote in a paper with Vittorio Gallese, a professor of human physiology at the University of Parma, in Italy, “then you can’t understand that sentence.” Lakoff argues that the brain understands sentences not just by analyzing syntax and looking up neural dictionaries, but also by igniting its memories of kicking and picking up.

But what about metaphorical sentences like “The patient kicked the habit”? An addiction can’t literally be struck with a foot. Does the brain simulate the action of kicking anyway? Or does it somehow automatically substitute a more literal verb, such as “stopped”? This is where functional MRI can help, because it can watch to see if the brain’s motor cortex lights up in areas related to the leg and foot.

The evidence says it does. “When you read action-related metaphors,” says Valentina Cuccio, a philosophy postdoc at the University of Palermo, in Italy, “you have activation of the motor area of the brain.” In a 2011 paper in the Journal of Cognitive Neuroscience, Rutvik Desai, an associate professor of psychology at the University of South Carolina, and his colleagues presented fMRI evidence that brains do in fact simulate metaphorical sentences that use action verbs. When reading both literal and metaphorical sentences, their subjects’ brains activated areas associated with control of action. “The understanding of sensory-motor metaphors is not abstracted away from their sensory-motor origins,” the researchers concluded.

Textural metaphors, too, appear to be simulated. That is, the brain processes “She’s had a rough time” by simulating the sensation of touching something rough. Krish Sathian, a professor of neurology, rehabilitation medicine, and psychology at Emory University, says, “For textural metaphor, you would predict on the Lakoff and Johnson account that it would recruit activity- and texture-selective somatosensory cortex, and that indeed is exactly what we found.”

But idioms are a major sticking point. Idioms are usually thought of as dead metaphors, that is, as metaphors that are so familiar that they have become clichés. What does the brain do with “The villain kicked the bucket” (“The villain died”)? What about “The students toed the line” (“The students conformed to the rules”)? Does the brain simulate the verb phrases, or does it treat them as frozen blocks of abstract language? And if it simulates them, what actions does it imagine? If the brain understands language by simulating it, then it should do so even when sentences are not literal.

The findings so far have been contradictory. Lisa Aziz-Zadeh, of the University of Southern California, and her colleagues reported in 2006 that idioms such as “biting off more than you can chew” did not activate the motor cortex. Ana Raposo, then at the University of Cambridge, and her colleagues reported the same in 2009. On the other hand, Véronique Boulenger, of the Laboratoire Dynamique du Langage, in Lyon, France, reported in the same year that they did, at least for leg and arm verbs.

In 2013, Desai and his colleagues tried to settle the problem of idioms. They first hypothesized that the inconsistent results come from differences of methodology. “Imaging studies of embodiment in figurative language have not compared idioms and metaphors,” they wrote in a report. “Some have mixed idioms and metaphors together, and in some cases, ‘idiom’ is used to refer to familiar metaphors.” Lera Boroditsky, an associate professor of psychology at the University of California at San Diego, agrees. “The field is new. The methods need to stabilize,” she says. “There are many different kinds of figurative language, and they may be importantly different from one another.”

Not only that, the nitty-gritty differences of procedure may be important. “All of these studies are carried out with different kinds of linguistic stimuli with different procedures,” Cuccio says. “So, for example, sometimes you have an experiment in which the person can read the full sentence on the screen. There are other experiments in which participants read the sentence just word by word, and this makes a difference.”

To try to clear things up, Desai and his colleagues presented subjects inside fMRI machines with an assorted set of metaphors and idioms. They concluded that in a sense, everyone was right. The more idiomatic the metaphor was, the less the motor system got involved: “When metaphors are very highly conventionalized, as is the case for idioms, engagement of sensory-motor systems is minimized or very brief.”

But George Lakoff thinks the problem of idioms can’t be settled so easily. The people who do fMRI studies are fine neuroscientists but not linguists, he says. “They don’t even know what the problem is most of the time. The people doing the experiments don’t know the linguistics.”

That is to say, Lakoff explains, their papers assume that every brain processes a given idiom the same way. Not true. Take “kick the bucket.” Lakoff offers a theory of what it means using a scene from Young Frankenstein. “Mel Brooks is there and they’ve got the patient dying,” he says. “The bucket is a slop bucket at the edge of the bed, and as he dies, his foot goes out in rigor mortis and the slop bucket goes over and they all hold their nose. OK. But what’s interesting about this is that the bucket starts upright and it goes down. It winds up empty. This is a metaphor—that you’re full of life, and life is a fluid. You kick the bucket, and it goes over.”

That’s a useful explanation of a rather obscure idiom. But it turns out that when linguists ask people what they think the metaphor means, they get different answers. “You say, ‘Do you have a mental image? Where is the bucket before it’s kicked?’ ” Lakoff says. “Some people say it’s upright. Some people say upside down. Some people say you’re standing on it. Some people have nothing. You know! There isn’t a systematic connection across people for this. And if you’re averaging across subjects, you’re probably not going to get anything.”

Similarly, Lakoff says, when linguists ask people to write down the idiom “toe the line,” half of them write “tow the line.” That yields a different mental simulation. And different mental simulations will activate different areas of the motor cortex—in this case, scrunching feet up to a line versus using arms to tow something heavy. Therefore, fMRI results could show different parts of different subjects’ motor cortexes lighting up to process “toe the line.” In that case, averaging subjects together would be misleading.

Furthermore, Lakoff questions whether functional MRI can really see what’s going on with language at the neural level. “How many neurons are there in one pixel or one voxel?” he says. “About 125,000. They’re one point in the picture.” MRI lacks the necessary temporal resolution, too. “What is the time course of that fMRI? It could be between one and five seconds. What is the time course of the firing of the neurons? A thousand times faster. So basically, you don’t know what’s going on inside of that voxel.” What it comes down to is that language is a wretchedly complex thing and our tools aren’t yet up to the job.
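Taking the article’s figures at face value, a quick back-of-the-envelope sketch shows how much detail a single voxel reading averages away (using the 1-second lower bound of the time course is our assumption):

```python
# Back-of-the-envelope arithmetic behind Lakoff's resolution objection,
# using only the figures quoted above (illustrative, not a claim about fMRI physics).
neurons_per_voxel = 125_000   # "How many neurons are there in one voxel? About 125,000."
fmri_window_ms = 1_000        # fMRI time course: 1-5 seconds; take the 1 s lower bound
spike_window_ms = 1           # neural firing is "a thousand times faster"

# One voxel value therefore summarizes this many neuron-timestep events:
events_per_reading = neurons_per_voxel * (fmri_window_ms // spike_window_ms)
print(f"~{events_per_reading:,} neuron-timesteps collapsed into a single voxel value")
```

On these numbers, each voxel value compresses on the order of a hundred million neuron-level events into a single data point, which is the substance of the objection.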

Nonetheless, the work supports a radically new conception of how a bunch of pulsing cells can understand anything at all. In a 2012 paper, Lakoff offered an account of how metaphors arise out of the physiology of neural firing, based on the work of a student of his, Srini Narayanan, who is now a faculty member at Berkeley. As children grow up, they are repeatedly exposed to basic experiences such as temperature and affection simultaneously when, for example, they are cuddled. The neural structures that record temperature and affection are repeatedly co-activated, leading to an increasingly strong neural linkage between them.

However, since the brain is always computing temperature but not always computing affection, the relationship between those neural structures is asymmetric. When they form a linkage, Lakoff says, “the one that spikes first and most regularly is going to get strengthened in its direction, and the other one is going to get weakened.” Lakoff thinks the asymmetry gives rise to a metaphor: Affection is Warmth. Because of the neural asymmetry, it doesn’t go the other way around: Warmth is not Affection. Feeling warm during a 100-degree day, for example, does not make one feel loved. The metaphor originates from the asymmetry of the neural firing. Lakoff is now working on a book on the neural theory of metaphor.
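A toy simulation can illustrate the claimed mechanism. This is our own sketch, not Narayanan’s or Lakoff’s actual model: a “temperature” unit fires on every experience, an “affection” unit only sometimes, and an asymmetric Hebbian-style update strengthens the outgoing link of the unit that fires first and most regularly:

```python
import random

random.seed(0)

w_temp_to_affection = 0.0  # link strengthened when temperature precedes affection
w_affection_to_temp = 0.0  # the reverse link
lr = 0.05                  # learning rate (arbitrary toy value)

for _ in range(1000):
    temperature_active = True                  # the brain is always computing temperature
    affection_active = random.random() < 0.3   # affection only sometimes (e.g., cuddling)
    if affection_active:
        # Co-activation: the unit that "spikes first and most regularly"
        # (temperature) gets its outgoing connection strengthened more.
        w_temp_to_affection += lr
        w_affection_to_temp += lr * 0.2
    else:
        # Temperature fires alone: the reverse link weakens slightly.
        w_affection_to_temp = max(0.0, w_affection_to_temp - lr * 0.01)

# The resulting asymmetry mirrors "Affection is Warmth" without "Warmth is Affection".
print(w_temp_to_affection > w_affection_to_temp)
```

After many experiences, the temperature-to-affection weight dominates the reverse one, so activating “warmth” concepts primes “affection” far more than the other way around, which is the directionality the theory needs.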

If cognition is embodied, that raises problems for artificial intelligence. Since computers don’t have bodies, let alone sensations, what are the implications of these findings for their becoming conscious—that is, achieving strong AI? Lakoff is uncompromising: “It kills it.” Of Ray Kurzweil’s singularity thesis, he says, “I don’t believe it for a second.” Computers can run models of neural processes, he says, but absent bodily experience, those models will never actually be conscious.

On the other hand, roboticists such as Rodney Brooks, an emeritus professor at the Massachusetts Institute of Technology, have suggested that computers could be provided with bodies. For example, they could be given control of robots stuffed with sensors and actuators. Brooks pondered Lakoff’s ideas in his 2002 book, Flesh and Machines, and supposed, “For anything to develop the same sorts of conceptual understanding of the world as we do, it will have to develop the same sorts of metaphors, rooted in a body, that we humans do.”

But Lera Boroditsky wonders if giving computers humanlike bodies would only reproduce human limitations. “If you’re not bound by limitations of memory, if you’re not bound by limitations of physical presence, I think you could build a very different kind of intelligence system,” she says. “I don’t know why we have to replicate our physical limitations in other systems.”

What’s emerging from these studies isn’t just a theory of language or of metaphor. It’s a nascent theory of consciousness. Any algorithmic system faces the problem of bootstrapping itself from computing to knowing, from bit-shuffling to caring. Igniting previously stored memories of bodily experiences seems to be one way of getting there. And so may be the ability to create asymmetric neural linkages that say this is like (but not identical to) that. In an age of brain scanning as well as poetry, that’s where metaphor gets you.

Michael Chorost is the author of Rebuilt: How Becoming Part Computer Made Me More Human (Houghton Mifflin, 2005) and World Wide Mind: The Coming Integration of Humanity, Machines, and the Internet (Free Press, 2011).

Inside the teenage brain: New studies explain risky behavior (Science Daily)

Date: August 27, 2014

Source: Florida State University

Summary: It’s common knowledge that teenage boys seem predisposed to risky behaviors. Now, a series of new studies is shedding light on specific brain mechanisms that help to explain what might be going on inside juvenile male brains.

Young man (stock image). Credit: © iko / Fotolia

It’s common knowledge that teenage boys seem predisposed to risky behaviors. Now, a series of new studies is shedding light on specific brain mechanisms that help to explain what might be going on inside juvenile male brains.

Florida State University College of Medicine Neuroscientist Pradeep Bhide brought together some of the world’s foremost researchers in a quest to explain why teenagers — boys, in particular — often behave erratically.

The result is a series of 19 studies that approached the question from multiple scientific domains, including psychology, neurochemistry, brain imaging, clinical neuroscience and neurobiology. The studies are published in a special volume of Developmental Neuroscience, “Teenage Brains: Think Different?”

“Psychologists, psychiatrists, educators, neuroscientists, criminal justice professionals and parents are engaged in a daily struggle to understand and solve the enigma of teenage risky behaviors,” Bhide said. “Such behaviors impact not only the teenagers who obviously put themselves at serious and lasting risk but also families and societies in general.

“The emotional and economic burdens of such behaviors are quite huge. The research described in this book offers clues to what may cause such maladaptive behaviors and how one may be able to devise methods of countering, avoiding or modifying these behaviors.”

An example of findings published in the book that provide new insights about the inner workings of a teenage boy’s brain:

• Unlike children or adults, teenage boys show enhanced activity in the part of the brain that controls emotions when confronted with a threat. Magnetic resonance scanner readings in one study revealed that the level of activity in the limbic brain of adolescent males reacting to threat, even when they’ve been told not to respond to it, was strikingly different from that in adult men.

• Using brain activity measurements, another team of researchers found that teenage boys were mostly immune to the threat of punishment but hypersensitive to the possibility of large gains from gambling. The results question the effectiveness of punishment as a deterrent for risky or deviant behavior in adolescent boys.

• Another study demonstrated that a molecule known to be vital in developing fear of dangerous situations is less active in adolescent male brains. These findings point towards neurochemical differences between teenage and adult brains, which may underlie the complex behaviors exhibited by teenagers.

“The new studies illustrate the neurobiological basis of some of the more unusual but well-known behaviors exhibited by our teenagers,” Bhide said. “Stress, hormonal changes, complexities of psycho-social environment and peer-pressure all contribute to the challenges of assimilation faced by teenagers.

“These studies attempt to isolate, examine and understand some of these potential causes of a teenager’s complex conundrum. The research sheds light on how we may be able to better interact with teenagers at home or outside the home, how to design educational strategies and how best to treat or modify a teenager’s maladaptive behavior.”

Bhide conceived and edited “Teenage Brains: Think Different?” His co-editors were Barry Kasofsky and B.J. Casey, both of Weill Medical College at Cornell University. The book was published by Karger Medical and Scientific Publisher of Basel, Switzerland. More information on the book can be found at:

The table of contents to the special journal volume can be found at:

Stefano Mancuso, pioneer in the study of plant neurobiology (La Vanguardia)

Victor-M Amela, Ima Sanchís, Lluís Amiguet

“Plants have neurons; they are intelligent beings”

29/12/2010 – 02:03




Plant brain

Thanks to our friends at Redes, Eduard Punset’s programme, tireless seekers of any scientific knowledge that expands the limits of what we know, of who we are and of the role we play in this soup of universes, we discovered Mancuso, who explains that plants, watched in time-lapse, behave as if they had a brain: they have neurons, they communicate through chemical signals, they make decisions, and they are altruistic and manipulative. “Five years ago it was impossible to speak of the behaviour of plants; today we can begin to speak of their intelligence”… We may soon begin to speak of their feelings. Mancuso will be on Redes on the 2nd. Don’t miss it.


Plants are intelligent organisms, but they move and make decisions on a longer timescale than humans do.

I suspected as much.

Today we know that they have families and relatives, and that they recognize their kin. They behave completely differently depending on whether their neighbours are relatives or strangers. If they are relatives they do not compete: through their roots, they divide the territory equitably.

Can a tree deliberately send sap to a small plant?

Yes. Plants need light to live, and many years may pass before a seedling reaches the light; in the meantime, it is nourished by trees of its own species.


Parental care is otherwise seen only in highly evolved animals, and it is remarkable to find it in plants.

So they communicate.

Yes. In a forest, all the plants are in underground communication through their roots. They also produce volatile molecules that warn distant plants about what is happening.

For example?

When a plant is attacked by a pathogen, it immediately produces volatile molecules that can travel kilometres, warning all the others to prepare their defences.

¿Qué defensas?

They produce chemical compounds that make them indigestible, and they can be very aggressive. Ten years ago in Botswana, 200,000 antelope were introduced into a large park and began feeding heavily on the acacias. Within a few weeks many had died, and after six months more than 10,000 were dead, and nobody understood why. Today we know it was the plants.

Too much grazing pressure.

Yes. The plants raised the concentration of tannins in their leaves to such a degree that the leaves became poisonous.

Are plants also empathetic toward other beings?

That is hard to say, but one thing is certain: plants can manipulate animals. During pollination they produce nectar and other substances to attract insects. Orchids produce flowers that closely resemble the females of certain insects, which, deceived, come to them. And some argue that even human beings are manipulated by plants.

…?

All the drugs humans use (coffee, tobacco, opium, marijuana…) come from plants. But why would plants produce a substance that makes humans dependent? Because that way we propagate them. Plants use humans as transport. There is research on this.


If plants disappeared from the planet tomorrow, all life would be extinct within a month, because there would be no food and no oxygen. All the oxygen we breathe comes from them. But if we disappeared, nothing would happen. We depend on plants; plants do not depend on us. And whoever is dependent is in the inferior position, isn't that so?

Plants are far more sensitive. When something changes in the environment, since they cannot flee, they must be able to sense even the slightest change well in advance in order to adapt.

And how do they perceive?

Each root tip can continuously and simultaneously sense at least fifteen different physical and chemical parameters (temperature, light, gravity, the presence of nutrients, oxygen).

That is the great discovery, and it is yours.

At each root tip there are cells similar to our neurons, and their function is the same: to transmit signals via electrical impulses, just like our brain. A single plant can have millions of root tips, each with its own small community of cells, and they work as a network, like the internet.

You have found the plant brain.

Yes, its computing zone. The question is how to measure their intelligence. But of one thing we are sure: they are very intelligent; their capacity to solve problems and to adapt is great. Today, 99.6% of everything alive on the planet is plant matter.

…And we know only 10% of them.

And within that percentage we have all our food and medicine. What might there be in the remaining 90%? Every day, hundreds of unknown plant species go extinct. Perhaps some held the key to an important cure; we will never know. We must protect plants for the sake of our own survival.

What moves you about plants?

Some behaviors are very moving. All plants sleep, wake up, and seek the light with their leaves; they have an activity similar to that of animals. I filmed the growth of some sunflowers, and you can see very clearly how they play with one another.


Yes, they exhibit the typical play behavior seen in so many animals. We took one of those small plants and grew it alone. As an adult it had behavioral problems: it struggled to turn in search of the sun; it lacked the learning that comes through play. Seeing these things is moving.
