Tag archive: Brain

How big science failed to unlock the mysteries of the human brain (MIT Technology Review)

technologyreview.com

Large, expensive efforts to map the brain started a decade ago but have largely fallen short. It’s a good reminder of just how complex this organ is.

Emily Mullin

August 25, 2021


In September 2011, a group of neuroscientists and nanoscientists gathered at a picturesque estate in the English countryside for a symposium meant to bring their two fields together. 

At the meeting, Columbia University neurobiologist Rafael Yuste and Harvard geneticist George Church made a not-so-modest proposal: to map the activity of the entire human brain at the level of individual neurons and detail how those cells form circuits. That knowledge could be harnessed to treat brain disorders like Alzheimer’s, autism, schizophrenia, depression, and traumatic brain injury. And it would help answer one of the great questions of science: How does the brain bring about consciousness? 

Yuste, Church, and their colleagues drafted a proposal that would later be published in the journal Neuron. Their ambition was extreme: “a large-scale, international public effort, the Brain Activity Map Project, aimed at reconstructing the full record of neural activity across complete neural circuits.” Like the Human Genome Project a decade earlier, they wrote, the brain project would lead to “entirely new industries and commercial ventures.” 

New technologies would be needed to achieve that goal, and that’s where the nanoscientists came in. At the time, researchers could record activity from just a few hundred neurons at once—but with around 86 billion neurons in the human brain, it was akin to “watching a TV one pixel at a time,” Yuste recalled in 2017. The researchers proposed tools to measure “every spike from every neuron” in an attempt to understand how the firing of these neurons produced complex thoughts. 

The audacious proposal intrigued the Obama administration and laid the foundation for the multi-year Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, announced in April 2013. President Obama called it the “next great American project.” 

But it wasn’t the first audacious brain venture. In fact, a few years earlier, Henry Markram, a neuroscientist at the École Polytechnique Fédérale de Lausanne in Switzerland, had set an even loftier goal: to make a computer simulation of a living human brain. Markram wanted to build a fully digital, three-dimensional model at the resolution of the individual cell, tracing all of those cells’ many connections. “We can do it within 10 years,” he boasted during a 2009 TED talk.

In January 2013, a few months before the American project was announced, the EU awarded Markram $1.3 billion to build his brain model. The US and EU projects sparked similar large-scale research efforts in countries including Japan, Australia, Canada, China, South Korea, and Israel. A new era of neuroscience had begun. 

An impossible dream?

A decade later, the US project is winding down, and the EU project faces its deadline to build a digital brain. So how did it go? Have we begun to unwrap the secrets of the human brain? Or have we spent a decade and billions of dollars chasing a vision that remains as elusive as ever? 

From the beginning, both projects had critics.

EU scientists worried about the costs of the Markram scheme and thought it would squeeze out other neuroscience research. And even at the original 2011 meeting in which Yuste and Church presented their ambitious vision, many of their colleagues argued it simply wasn’t possible to map the complex firings of billions of human neurons. Others said it was feasible but would cost too much money and generate more data than researchers would know what to do with. 

In a blistering article appearing in Scientific American in 2013, Partha Mitra, a neuroscientist at the Cold Spring Harbor Laboratory, warned against the “irrational exuberance” behind the Brain Activity Map and questioned whether its overall goal was meaningful. 

Even if it were possible to record all spikes from all neurons at once, he argued, a brain doesn’t exist in isolation: in order to properly connect the dots, you’d need to simultaneously record external stimuli that the brain is exposed to, as well as the behavior of the organism. And he reasoned that we need to understand the brain at a macroscopic level before trying to decode what the firings of individual neurons mean.  

Others had concerns about the impact of centralizing control over these fields. Cornelia Bargmann, a neuroscientist at Rockefeller University, worried that it would crowd out research spearheaded by individual investigators. (Bargmann was soon tapped to co-lead the BRAIN Initiative’s working group.)


While the US initiative sought input from scientists to guide its direction, the EU project was decidedly more top-down, with Markram at the helm. But as Noah Hutton documents in his 2020 film In Silico, Markram’s grand plans soon unraveled. As an undergraduate studying neuroscience, Hutton had been assigned to read Markram’s papers and was impressed by his proposal to simulate the human brain; when he started making documentary films, he decided to chronicle the effort. He soon realized, however, that the billion-dollar enterprise was characterized more by infighting and shifting goals than by breakthrough science.

In Silico shows Markram as a charismatic leader who needed to make bold claims about the future of neuroscience to attract the funding to carry out his particular vision. But the project was troubled from the outset by a major issue: there isn’t a single, agreed-upon theory of how the brain works, and not everyone in the field agreed that building a simulated brain was the best way to study it. It didn’t take long for those differences to arise in the EU project. 

In 2014, hundreds of experts across Europe penned a letter citing concerns about oversight, funding mechanisms, and transparency in the Human Brain Project. The scientists felt Markram’s aim was premature and too narrow and would exclude funding for researchers who sought other ways to study the brain. 

“What struck me was, if he was successful and turned it on and the simulated brain worked, what have you learned?” Terry Sejnowski, a computational neuroscientist at the Salk Institute who served on the advisory committee for the BRAIN Initiative, told me. “The simulation is just as complicated as the brain.” 

The Human Brain Project’s board of directors voted to change its organization and leadership in early 2015, replacing a three-member executive committee led by Markram with a 22-member governing board. Christoph Ebell, a Swiss entrepreneur with a background in science diplomacy, was appointed executive director. “When I took over, the project was at a crisis point,” he says. “People were openly wondering if the project was going to go forward.”

But a few years later he was out too, after a “strategic disagreement” with the project’s host institution. The project is now focused on providing a new computational research infrastructure to help neuroscientists store, process, and analyze large amounts of data—unsystematic data collection has been an issue for the field—and develop 3D brain atlases and software for creating simulations.

The US BRAIN Initiative, meanwhile, underwent its own changes. Early on, in 2014, responding to the concerns of scientists and acknowledging the limits of what was possible, it evolved into something more pragmatic, focusing on developing technologies to probe the brain. 

New day

Those changes have finally started to produce results—even if they weren’t the ones that the founders of each of the large brain projects had originally envisaged. 

Last year, the Human Brain Project released a 3D digital map that integrates different aspects of human brain organization at the millimeter and micrometer level. It’s essentially a Google Earth for the brain. 

And earlier this year Alipasha Vaziri, a neuroscientist funded by the BRAIN Initiative, and his team at Rockefeller University reported in a preprint paper that they’d simultaneously recorded the activity of more than a million neurons across the mouse cortex. It’s the largest recording of animal cortical activity yet made, if far from listening to all 86 billion neurons in the human brain as the original Brain Activity Map hoped.

The US effort has also shown some progress in its attempt to build new tools to study the brain. It has speeded the development of optogenetics, an approach that uses light to control neurons, and its funding has led to new high-density silicon electrodes capable of recording from hundreds of neurons simultaneously. And it has arguably accelerated the development of single-cell sequencing. In September, researchers using these advances will publish a detailed classification of cell types in the mouse and human motor cortexes—the biggest single output from the BRAIN Initiative to date.

While these are all important steps forward, though, they’re far from the initial grand ambitions. 

Lasting legacy

We are now heading into the last phase of these projects—the EU effort will conclude in 2023, while the US initiative is expected to have funding through 2026. What happens in these next years will determine just how much impact they’ll have on the field of neuroscience.

When I asked Ebell what he sees as the biggest accomplishment of the Human Brain Project, he didn’t name any one scientific achievement. Instead, he pointed to EBRAINS, a platform launched in April of this year to help neuroscientists work with neurological data, perform modeling, and simulate brain function. It offers researchers a wide range of data and connects many of the most advanced European lab facilities, supercomputing centers, clinics, and technology hubs in one system. 

“If you ask me ‘Are you happy with how it turned out?’ I would say yes,” Ebell said. “Has it led to the breakthroughs that some have expected in terms of gaining a completely new understanding of the brain? Perhaps not.” 

Katrin Amunts, a neuroscientist at the University of Düsseldorf, who has been the Human Brain Project’s scientific research director since 2016, says that while Markram’s dream of simulating the human brain hasn’t been realized yet, it is getting closer. “We will use the last three years to make such simulations happen,” she says. But it won’t be a big, single model—instead, several simulation approaches will be needed to understand the brain in all its complexity. 

Meanwhile, the BRAIN Initiative has provided more than 900 grants to researchers so far, totaling around $2 billion. The National Institutes of Health is projected to spend nearly $6 billion on the project by the time it concludes. 

For the final phase of the BRAIN Initiative, scientists will attempt to understand how brain circuits work by diagramming connected neurons. But claims for what can be achieved are far more restrained than in the project’s early days. The researchers now realize that understanding the brain will be an ongoing task—it’s not something that can be finalized by a project’s deadline, even if that project meets its specific goals.

“With a brand-new tool or a fabulous new microscope, you know when you’ve got it. If you’re talking about understanding how a piece of the brain works or how the brain actually does a task, it’s much more difficult to know what success is,” says Eve Marder, a neuroscientist at Brandeis University. “And success for one person would be just the beginning of the story for another person.” 

Yuste and his colleagues were right that new tools and techniques would be needed to study the brain in a more meaningful way. Now, scientists will have to figure out how to use them. But instead of answering the question of consciousness, developing these methods has, if anything, only opened up more questions about the brain—and shown just how complex it is. 

“I have to be honest,” says Yuste. “We had higher hopes.”

Emily Mullin is a freelance journalist based in Pittsburgh who focuses on biotechnology.

Israeli Archaeologists Present Groundbreaking Universal Theory of Human Evolution (Haaretz)

Tel Aviv University archaeologists Miki Ben-Dor and Ran Barkai proffer novel hypothesis, showing how the greed of Homo erectus set us careening down an anomalous evolutionary path

Ruth Schuster, Feb. 25, 2021

Why the human brain evolved as it did never has been plausibly explained. Apparently, not since the first life-form billions of years ago did a single species gain dominance over all others – until we came along. Now, in a groundbreaking paper, two Israeli researchers propose that our anomalous evolution was propelled by the very mass extinctions we helped cause. Or: As we sawed off the culinary branches from which we swung, we had to get ever more inventive in order to survive.

As ambling, slow-to-reproduce large animals diminished and gradually went extinct, we were forced to resort to smaller, nimbler animals that flee as a strategy to escape predation. To catch them, we had to get smarter, nimbler and faster, according to the universal theory of human evolution proposed by researchers Miki Ben-Dor and Prof. Ran Barkai of Tel Aviv University, in a paper published in the journal Quaternary.

In fact, the great African megafauna began to decline about 4.6 million years ago. But our story begins with Homo habilis, which lived about 2.6 million years ago and apparently used crude stone tools to help it eat flesh, and with Homo erectus, which thronged Africa and expanded to Eurasia about 2 million years ago. The thing is, erectus wasn’t an omnivore: it was a carnivore, Ben-Dor explains to Haaretz.

“Eighty percent of mammals are omnivores but still specialize in a narrow food range. If anything, it seems Homo erectus was a hyper-carnivore,” he observes.

And in the last couple of million years, our brains grew threefold to a maximum cranial capacity of about 1,500 cubic centimeters (cc), a size achieved about 300,000 years ago. We also gradually but consistently ramped up in technology and culture – until the Neolithic revolution and the advent of the sedentary lifestyle, when our brains shrank to about 1,300 to 1,400 cc, but more on that anomaly later.

The hypothesis suggested by Ben-Dor and Barkai – that we ate our way to our present physical, cultural and ecological state – is an original unifying explanation for the behavioral, physiological and cultural evolution of the human species.

Out of chaos

Evolution is chaotic. Charles Darwin came up with the theory of the survival of the fittest, and nobody has a better suggestion yet, but mutations aren’t “planned.” Bodies aren’t “designed,” if we leave genetic engineering out of it. The point is, evolution isn’t linear but chaotic, and that should theoretically apply to humans too.

Hence, it is strange that certain changes in the course of millions of years of human history, including the expansion of our brain, tool manufacture techniques and use of fire, for example, were uncharacteristically progressive, say Ben-Dor and Barkai.

“Uncharacteristically progressive” means that certain traits such as brain size, or cultural developments such as fire usage, evolved in one direction over a long time, in the direction of escalation. That isn’t what chaos is expected to produce over vast spans of time, Barkai explains to Haaretz: it is bizarre. Very few parameters behave like that.

So their discovery of a correlation between the contraction of the average weight of African animals, the extinction of megafauna and the development of the human brain is intriguing.

From mammoth marrow to joint of rat

To be clear, just this month a new paper posited that the late Quaternary extinction of megafauna, in the last few tens of thousands of years, wasn’t entirely the fault of humanity. In North America specifically, it was due primarily to climate change, with the late-arriving humans apparently providing the coup de grâce to some species.

In the Old World, however, a human role is clearer. African megafauna apparently began to decline 4.6 million years ago, but during the Pleistocene (2.6 million to 11,600 years ago) the size of African animals trended sharply down, in what the authors term an abrupt reversal from a continuous growth trend of 65 million years (i.e., since the dinosaurs almost died out).

When Homo erectus the carnivore began to roam Africa around 2 million years ago, land mammals averaged nearly 500 kilograms. Barkai’s team and others have demonstrated that hominins ate elephants and large animals when they could. In fact, originally Africa had six elephant species (today there are two: the bush elephant and forest elephant). By the end of the Pleistocene, by which time all hominins other than modern humans were extinct too, that average weight of the African animal had shrunk by more than 90 percent.

And during the Pleistocene, as the African animals shrank, the Homo genus grew taller and more gracile, and our stone tool technology improved (which in no way diminished our affection for archaic implements like the hand ax or chopper, both of which remained in use for more than a million years, even as more sophisticated technologies were developed).

If we started some 3.3 million years ago with large, crude stone hammers that may have been used to bang big animals on the head or break bones to get at the marrow, over the epochs we invented the spear for remote slaughter. By about 80,000 years ago, the bow and arrow was making its appearance, which was more suitable for bringing down small fry like small deer and birds. Over a million years ago, we began to use fire, and later achieved better control of it, meaning the ability to ignite it at will. Later we domesticated the dog from the wolf, and it would help us hunt smaller, fleet animals.

Why did the earliest humans hunt large animals anyway? Wouldn’t a peeved elephant be more dangerous than a rat? Arguably, but catching one elephant is easier than catching a large number of rats. And megafauna had more fat.

A modern human can only derive up to about 50 percent of calories from lean meat (protein): past a certain point, our livers can’t digest more protein. We need energy from carbs or fat, but before developing agriculture about 10,000 years ago, a key source of calories had to be animal fat.

Big animals have a lot of fat. Small animals don’t. In Africa and Europe, and in Israel too, the researchers found a significant decline in the prevalence of animals weighing over 200 kilograms correlated to an increase in the volume of the human brain. Thus, Ben-Dor and Barkai deduce that the declining availability of large prey seems to have been a key element in the natural selection from Homo erectus onward. Catching one elephant is more efficient than catching 1,000 rabbits, but if we must catch 1,000 rabbits, improved cunning, planning and tools are in order.

Say it with fat

Our changing hunting habits would have had cultural impacts too, Ben-Dor and Barkai posit. “Cultural evolution in archaeology usually refers to objects, such as stone tools,” Ben-Dor tells Haaretz. But cultural evolution also refers to learned behavior, such as our choice of which animals to hunt, and how.

Thus, they posit, our hunting conundrum may have also been a key element to that enigmatic human characteristic: complex language. When language began, with what ancestor of Homo sapiens, if any before us, is hotly debated.

Ben-Dor, an economist by training prior to obtaining a Ph.D. in archaeology, believes it began early. “We just need to follow the money. When speaking of evolution, one must follow the energy. Language is energetically costly. Speaking requires devotion of part of the brain, which is costly. Our brain consumes huge amounts of energy. It’s an investment, and language has to produce enough benefit to make it worthwhile. What did language bring us? It had to be more energetically efficient hunting.”

Domestication of the dog also requires resources and, therefore, also had to bring sufficient compensation in the form of more efficient hunting of smaller animals, he points out. That may help explain the fact that Neolithic humans not only embraced the dog but ate it too, going by archaeological evidence of butchered dogs.

At the end of the day, wherever we went, humans devastated the local ecologies, given enough time.

There is a lot of thinking about the Neolithic agricultural revolution. Some think grain farming was driven by the desire to make beer. Given residue analysis indicating that it’s been around for over 10,000 years, that theory isn’t as far-fetched as one might think. Ben-Dor and Barkai suggest that once we could grow our own food and husband herbivores, the megafauna almost entirely gone, hunting for them became too energy-costly. So we had to use our large brains to develop agriculture.

And as the hunter-gathering lifestyle gave way to permanent settlement, our brain size decreased.

Note, Ben-Dor adds, that the brains of wolves, which have to hunt to survive, are larger than the brains of their domesticated descendants, dogs. We did promise more on that. That was it. Also: The chimpanzee brain has remained stable for 7 million years, since the split with the Homo line, Barkai points out.

“Why does any of this matter?” Ben-Dor asks. “People think humans reached this condition because it was ‘meant to be.’ But in the Earth’s 4.5 billion years, there have been billions of species. They rose and fell. What’s the probability that we would take over the world? It’s an accident of nature. It never happened before that one species achieved dominance over all, and now it’s all over. How did that happen? This is the answer: A non-carnivore entered the niche of carnivore, and ate out its niche. We can’t eat that much protein: we need fat too. Because we needed the fat, we began with the big animals. We hunted the prime adult animals which have more fat than the kiddies and the old. We wiped out the prime adults who were crucial to survival of species. Because of our need for fat, we wiped out the animals we depended on. And this required us to keep getting smarter and smarter, and thus we took over the world.”

Why did humans evolve such large brains? Because smarter people have more friends (The Conversation)

June 19, 2017 10.01am EDT

Humans are the only ultrasocial creature on the planet. We have outcompeted, interbred or even killed off all other hominin species. We cohabit in cities of tens of millions of people and, despite what the media tell us, violence between individuals is extremely rare. This is because we have an extremely large, flexible and complex “social brain”.

To truly understand how the brain maintains our human intellect, we would need to know about the state of all 86 billion neurons and their 100 trillion interconnections, as well as the varying strengths with which they are connected, and the state of more than 1,000 proteins that exist at each connection point. Neurobiologist Steven Rose suggests that even this is not enough – we would still need to know how these connections have evolved over a person’s lifetime and even the social context in which they had occurred. It may take centuries just to figure out basic neuronal connectivity.
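To get a feel for the scale being described, here is a back-of-envelope calculation using only the figures quoted above. It is a rough illustration of the counting, not a claim about what would actually need to be measured.

```python
# Rough arithmetic from the figures quoted above: counting just one state
# variable per protein per connection already gives an enormous number.
neurons = 86e9                    # 86 billion neurons
connections = 100e12              # 100 trillion interconnections
proteins_per_connection = 1000    # "more than 1,000 proteins" per connection

protein_states = connections * proteins_per_connection
print(f"{neurons:.0e} neurons, {connections:.0e} connections")
print(f"at least {protein_states:.0e} protein states to track")  # ~1e17
```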

Many people assume that our brain operates like a powerful computer. But Robert Epstein, a psychologist at the American Institute for Behavioural Research and Technology, says this is just shoddy thinking and is holding back our understanding of the human brain. Because, while humans start with senses, reflexes and learning mechanisms, we are not born with any of the information, rules, algorithms or other key design elements that allow computers to behave somewhat intelligently. For instance, computers store exact copies of data that persist for long periods of time, even when the power is switched off. Our brains, meanwhile, are capable of creating false data or false memories, and they only maintain our intellect as long as we remain alive.

We are organisms, not computers

Of course, we can see many advantages in having a large brain. In my recent book on human evolution I suggest it firstly allows humans to exist in a group size of about 150. This builds resilience to environmental changes by increasing and diversifying food production and sharing.

 

As our ancestors got smarter, they became capable of living in larger and larger groups. Mark Maslin, Author provided

A social brain also allows specialisation of skills so individuals can concentrate on supporting childbirth, tool-making, fire setting, hunting or resource allocation. Humans have no natural weapons, but working in large groups and having tools allowed us to become the apex predator, hunting animals as large as mammoths to extinction.

Our social groups are large and complex, but this creates high stress levels for individuals because the rewards in terms of food, safety and reproduction are so great. Hence, Oxford anthropologist Robin Dunbar argues our huge brain is primarily developed to keep track of rapidly changing relationships. It takes a huge amount of cognitive ability to exist in large social groups, and if you fall out of the group you lose access to food and mates and are unlikely to reproduce and pass on your genes.
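A rough way to see why Dunbar's argument is so demanding in practice is to count how quickly relationships multiply with group size. The pairwise-count framing below is our illustration, not Dunbar's own calculation.

```python
# Even counting only pairwise ties, the number of relationships to track
# grows roughly with the square of group size.
def pairwise_ties(group_size: int) -> int:
    """Number of distinct pairs in a group of the given size."""
    return group_size * (group_size - 1) // 2

for size in (5, 50, 150):
    print(f"group of {size:3d}: {pairwise_ties(size):6d} pairwise relationships")
```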

 

Great. But what about your soap opera knowledge? ronstik / shutterstock

My undergraduates come to university thinking they are extremely smart as they can do differential equations and understand the use of split infinitives. But I point out to them that almost anyone walking down the street has the capacity to hold the moral and ethical dilemmas of at least five soap operas in their head at any one time. And that is what being smart really means. It is the detailed knowledge of society and the need to track and control the ever changing relationship between people around us that has created our huge complex brain.

It seems our brains could be even more flexible than we previously thought. Recent genetic evidence suggests the modern human brain is more malleable and is modelled more by the surrounding environment than that of chimpanzees. The anatomy of the chimpanzee brain is strongly controlled by its genes, whereas the modern human brain is extensively shaped by the environment, no matter what the genetics.

This means the human brain is pre-programmed to be extremely flexible; its cerebral organisation is adjusted by the environment and society in which it is raised. So each new generation’s brain structure can adapt to the new environmental and social challenges without the need to physically evolve.

 

Evolution at work. OtmarW / shutterstock

This may also explain why we all complain that we do not understand the next generation, as their brains are wired differently, having grown up in a different physical and social environment. An example of this is the ease with which the latest generation interacts with technology, almost as if they had co-evolved with it.

So next time you turn on a computer, just remember how big and complex your brain is – to keep track of your friends and enemies.

How the brain leads us to believe we have sharp vision (Science Daily)

Date: October 17, 2014

Source: Bielefeld University

Summary: We assume that we can see the world around us in sharp detail. In fact, our eyes can only process a fraction of our surroundings precisely. In a series of experiments, psychologists have been investigating how the brain fools us into believing that we see in sharp detail.

The thumbnail at the end of an outstretched arm: This is the area that the eye actually can see in sharp detail. Researchers have investigated why the rest of the world also appears to be uniformly detailed. Credit: Bielefeld University

We assume that we can see the world around us in sharp detail. In fact, our eyes can only process a fraction of our surroundings precisely. In a series of experiments, psychologists at Bielefeld University have been investigating how the brain fools us into believing that we see in sharp detail. The results have been published in the scientific magazine Journal of Experimental Psychology: General. Its central finding is that our nervous system uses past visual experiences to predict how blurred objects would look in sharp detail.

“In our study we are dealing with the question of why we believe that we see the world uniformly detailed,” says Dr. Arvid Herwig from the Neuro-Cognitive Psychology research group of the Faculty of Psychology and Sports Science. The group is also affiliated to the Cluster of Excellence Cognitive Interaction Technology (CITEC) of Bielefeld University and is led by Professor Dr. Werner X. Schneider.

Only the fovea, the central area of the retina, can process objects precisely. We should therefore only be able to see a small area of our environment in sharp detail. This area is about the size of a thumbnail at the end of an outstretched arm. In contrast, all visual impressions that occur outside the fovea on the retina become progressively coarser. Nevertheless, we commonly have the impression that we see large parts of our environment in sharp detail.

Herwig and Schneider have been getting to the bottom of this phenomenon with a series of experiments. Their approach presumes that people learn through countless eye movements over a lifetime to connect the coarse impressions of objects outside the fovea to the detailed visual impressions after the eye has moved to the object of interest. For example, the coarse visual impression of a football (blurred image of a football) is connected to the detailed visual impression after the eye has moved. If a person sees a football out of the corner of her eye, her brain will compare this current blurred picture with memorised images of blurred objects. If the brain finds an image that fits, it will replace the coarse image with a precise image from memory. This blurred visual impression is replaced before the eye moves. The person thus thinks that she already sees the ball clearly, although this is not the case.
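A toy sketch of the matching idea described above (our illustration, not the authors' actual model): a coarse peripheral impression is compared against remembered coarse-to-detailed pairings, and the closest match supplies the predicted detailed percept. All objects and feature values here are made up.

```python
# Minimal sketch: look up the stored percept whose remembered coarse
# features are closest to the current blurred glimpse.
import numpy as np

# Hypothetical "memory": coarse feature vectors learned for familiar objects.
memory = {
    "football":   np.array([0.9, 0.2, 0.1]),
    "balloon":    np.array([0.7, 0.6, 0.3]),
    "streetlamp": np.array([0.1, 0.1, 0.9]),
}

def predict_detailed_percept(coarse_input):
    """Return the stored percept whose remembered coarse features are closest."""
    return min(memory, key=lambda k: np.linalg.norm(memory[k] - coarse_input))

# A blurred glimpse from the corner of the eye, as a crude feature vector.
glimpse = np.array([0.85, 0.25, 0.15])
print(predict_detailed_percept(glimpse))   # -> "football"
```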

The psychologists have been using eye-tracking experiments to test their approach. Using the eye-tracking technique, eye movements are measured accurately with a special camera that records 1,000 images per second. In their experiments, the scientists recorded fast ballistic eye movements (saccades) of test persons. Though most of the participants did not realise it, certain objects were changed during eye movement. The aim was that the test persons learn new connections between visual stimuli from inside and outside the fovea, in other words between detailed and coarse impressions. Afterwards, the participants were asked to judge visual characteristics of objects outside the area of the fovea. The result showed that the connection between a coarse and a detailed visual impression occurred after just a few minutes. The coarse visual impressions became similar to the newly learnt detailed visual impressions.

“The experiments show that our perception depends in large measure on stored visual experiences in our memory,” says Arvid Herwig. According to Herwig and Schneider, these experiences serve to predict the effect of future actions (“What would the world look like after a further eye movement”). In other words: “We do not see the actual world, but our predictions.”


Journal Reference:

  1. Arvid Herwig, Werner X. Schneider. Predicting object features across saccades: Evidence from object recognition and visual search. Journal of Experimental Psychology: General, 2014; 143 (5): 1903 DOI: 10.1037/a0036781

Scientists find ‘hidden brain signatures’ of consciousness in vegetative state patients (Science Daily)

Date: October 16, 2014

Source: University of Cambridge

Summary: Scientists in Cambridge have found hidden signatures in the brains of people in a vegetative state, which point to networks that could support consciousness even when a patient appears to be unconscious and unresponsive. The study could help doctors identify patients who are aware despite being unable to communicate.

These images show brain networks in two behaviorally similar vegetative patients (left and middle), but one of whom imagined playing tennis (middle panel), alongside a healthy adult (right panel). Credit: Srivas Chennu

Scientists in Cambridge have found hidden signatures in the brains of people in a vegetative state, which point to networks that could support consciousness even when a patient appears to be unconscious and unresponsive. The study could help doctors identify patients who are aware despite being unable to communicate.

There has been a great deal of interest recently in how much patients in a vegetative state following severe brain injury are aware of their surroundings. Although unable to move and respond, some of these patients are able to carry out tasks such as imagining playing a game of tennis. Using a functional magnetic resonance imaging (fMRI) scanner, which measures brain activity, researchers have previously been able to record activity in the pre-motor cortex, the part of the brain which deals with movement, in apparently unconscious patients asked to imagine playing tennis.

Now, a team of researchers led by scientists at the University of Cambridge and the MRC Cognition and Brain Sciences Unit, Cambridge, have used high-density electroencephalographs (EEG) and a branch of mathematics known as ‘graph theory’ to study networks of activity in the brains of 32 patients diagnosed as vegetative and minimally conscious and compare them to healthy adults. The findings of the research are published today in the journal PLOS Computational Biology. The study was funded mainly by the Wellcome Trust, the National Institute of Health Research Cambridge Biomedical Research Centre and the Medical Research Council (MRC).
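For readers unfamiliar with the method, the sketch below shows, in minimal form, what a graph-theoretical description of EEG connectivity can look like: a channel-by-channel coherence matrix is thresholded into a network and summarised with standard graph metrics. The data, threshold, and choice of metrics are illustrative assumptions, not the pipeline used in the study.

```python
# Minimal sketch: turn an EEG connectivity matrix into a graph and compute
# two common graph-theory summaries of its organisation.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Hypothetical coherence matrix between 32 EEG channels (values in [0, 1]).
n_channels = 32
coherence = rng.uniform(0.0, 1.0, size=(n_channels, n_channels))
coherence = (coherence + coherence.T) / 2          # make it symmetric
np.fill_diagonal(coherence, 0.0)                   # no self-connections

# Keep only the strongest connections (an arbitrary illustrative threshold).
adjacency = (coherence > 0.75).astype(int)
G = nx.from_numpy_array(adjacency)

# How clustered the network is, and how efficiently any two channels
# can "reach" each other through the network.
clustering = nx.average_clustering(G)
efficiency = nx.global_efficiency(G)

print(f"average clustering: {clustering:.3f}")
print(f"global efficiency:  {efficiency:.3f}")
```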

The researchers showed that the rich and diversely connected networks that support awareness in the healthy brain are typically — but importantly, not always — impaired in patients in a vegetative state. Some vegetative patients had well-preserved brain networks that look similar to those of healthy adults — these patients were those who had shown signs of hidden awareness by following commands such as imagining playing tennis.

Dr Srivas Chennu from the Department of Clinical Neurosciences at the University of Cambridge says: “Understanding how consciousness arises from the interactions between networks of brain regions is an elusive but fascinating scientific question. But for patients diagnosed as vegetative and minimally conscious, and their families, this is far more than just an academic question — it takes on a very real significance. Our research could improve clinical assessment and help identify patients who might be covertly aware despite being uncommunicative.”

The findings could help researchers develop a relatively simple way of identifying which patients might be aware whilst in a vegetative state. Unlike the ‘tennis test’, which can be a difficult task for patients and requires expensive and often unavailable fMRI scanners, this new technique uses EEG and could therefore be administered at a patient’s bedside. However, the tennis test is stronger evidence that the patient is indeed conscious, to the extent that they can follow commands using their thoughts. The researchers believe that a combination of such tests could help improve accuracy in the prognosis for a patient.

Dr Tristan Bekinschtein from the MRC Cognition and Brain Sciences Unit and the Department of Psychology, University of Cambridge, adds: “Although there are limitations to how predictive our test would be if used in isolation, combined with other tests it could help in the clinical assessment of patients. If a patient’s ‘awareness’ networks are intact, then we know that they are likely to be aware of what is going on around them. But unfortunately, they also suggest that vegetative patients with severely impaired networks at rest are unlikely to show any signs of consciousness.”


Journal Reference:

  1. Chennu S, Finoia P, Kamau E, Allanson J, Williams GB, et al. Spectral Signatures of Reorganised Brain Networks in Disorders of Consciousness. PLOS Computational Biology, 2014; 10 (10): e1003887 DOI: 10.1371/journal.pcbi.1003887

Clouds in the Head: New Model of Brain’s Thought Processes (Science Daily)

May 21, 2013 — A new model of the brain’s thought processes explains the apparently chaotic activity patterns of individual neurons. They do not correspond to a simple stimulus/response linkage, but arise from the networking of different neural circuits. Scientists funded by the Swiss National Science Foundation (SNSF) propose that the field of brain research should expand its focus.

A new model of the brain’s thought processes explains the apparently chaotic activity patterns of individual neurons. They do not correspond to a simple stimulus/response linkage, but arise from the networking of different neural circuits. (Credit: iStockphoto/Sebastian Kaulitzki)

Many brain researchers cannot see the forest for the trees. When they use electrodes to record the activity patterns of individual neurons, the patterns often appear chaotic and difficult to interpret. “But when you zoom out from looking at individual cells, and observe a large number of neurons instead, their global activity is very informative,” says Mattia Rigotti, a scientist at Columbia University and New York University who is supported by the SNSF and the Janggen-Pöhn-Stiftung. Publishing in Nature together with colleagues from the United States, he has shown that these difficult-to-interpret patterns in particular are especially important for complex brain functions.

What goes on in the heads of monkeys

The researchers have focussed their attention on the activity patterns of 237 neurons that had been recorded some years previously using electrodes implanted in the frontal lobes of two rhesus monkeys. At that time, the monkeys had been taught to recognise images of different objects on a screen. Around one third of the observed neurons demonstrated activity that Rigotti describes as “mixed selectivity.” A mixed selective neuron does not always respond to the same stimulus (the flowers or the sailing boat on the screen) in the same way. Rather, its response differs as it also takes account of the activity of other neurons. The cell adapts its response according to what else is going on in the monkey’s brain.
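A minimal numerical illustration of the distinction (ours, not the paper's analysis): a "pure" selective cell responds to the image alone, while a mixed-selective cell responds to a nonlinear combination of the image and the task context. The labels and response values are invented.

```python
# Pure selectivity: response depends only on the stimulus.
# Mixed selectivity: response depends on a nonlinear combination of
# stimulus and context, so the same image evokes different responses.
import numpy as np

stimuli  = np.array([0, 0, 1, 1])   # 0 = flowers, 1 = sailing boat (hypothetical labels)
contexts = np.array([0, 1, 0, 1])   # 0 = task A, 1 = task B (hypothetical contexts)

pure_selective  = 2.0 * stimuli                         # cares only about the image
mixed_selective = 2.0 * stimuli * contexts + contexts   # depends on image AND context

for s, c, p, m in zip(stimuli, contexts, pure_selective, mixed_selective):
    print(f"stimulus={s} context={c}  pure={p:.1f}  mixed={m:.1f}")
```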

Chaotic patterns revealed in context

Just as individual computers are networked in cloud computing to pool processing and storage capacity, networking plays a key role in the complex cognitive processes that take place in the prefrontal cortex. The denser the network in the brain – in other words, the greater the proportion of mixed selectivity in the neurons’ activity patterns – the better the monkeys were able to recall the images on the screen, as Rigotti’s analysis demonstrates. Given that the brain and cognitive capabilities of rhesus monkeys are similar to those of humans, mixed-selective neurons should also be important in our own brains. For him, this is reason enough that brain research should no longer be satisfied with simple activity patterns alone, but should also consider the apparently chaotic patterns that can only be revealed in context.

Journal Reference:

  1. Mattia Rigotti, Omri Barak, Melissa R. Warden, Xiao-Jing Wang, Nathaniel D. Daw, Earl K. Miller, Stefano Fusi. The importance of mixed selectivity in complex cognitive tasks. Nature, 2013; DOI: 10.1038/nature12160

Mind Over Matter? Core Body Temperature Controlled by the Brain (Science Daily)

Apr. 8, 2013 — A team of researchers led by Associate Professor Maria Kozhevnikov from the Department of Psychology at the National University of Singapore (NUS) Faculty of Arts and Social Sciences showed, for the first time, that it is possible for core body temperature to be controlled by the brain. The scientists found that core body temperature increases can be achieved using certain meditation techniques (g-tummo) which could help in boosting immunity to fight infectious diseases or immunodeficiency.

Meditation. (Credit: © Yuri Arcurs / Fotolia)

Published in science journal PLOS ONE in March 2013, the study documented reliable core body temperature increases for the first time in Tibetan nuns practising g-tummo meditation. Previous studies on g-tummo meditators showed only increases in peripheral body temperature in the fingers and toes. The g-tummo meditative practice controls “inner energy” and is considered by Tibetan practitioners as one of the most sacred spiritual practices in the region. Monasteries maintaining g-tummo traditions are very rare and are mostly located in the remote areas of eastern Tibet.

The researchers collected data during the unique ceremony in Tibet, where nuns were able to raise their core body temperature and dry up wet sheets wrapped around their bodies in the cold Himalayan weather (-25 degrees Celsius) while meditating. Using electroencephalography (EEG) recordings and temperature measures, the team observed increases in core body temperature up to 38.3 degrees Celsius. A second study was conducted with Western participants who used a breathing technique of the g-tummo meditative practice, and they were also able to increase their core body temperature, within limits.

Applications of the research findings

The findings from the study showed that specific aspects of the meditation techniques can be used by non-meditators to regulate their body temperature through breathing and mental imagery. The techniques could potentially allow practitioners to adapt to and function in cold environments, improve resistance to infections, boost cognitive performance by speeding up response time and reduce performance problems associated with decreased body temperature.

The two aspects of g-tummo meditation that lead to temperature increases are “vase breath” and concentrative visualisation. “Vase breath” is a specific breathing technique which causes thermogenesis, which is a process of heat production. The other technique, concentrative visualisation, involves focusing on a mental imagery of flames along the spinal cord in order to prevent heat losses. Both techniques work in conjunction leading to elevated temperatures up to the moderate fever zone.

Assoc Prof Kozhevnikov explained, “Practicing vase breathing alone is a safe technique to regulate core body temperature in a normal range. The participants whom I taught this technique to were able to elevate their body temperature, within limits, and reported feeling more energised and focused. With further research, non-Tibetan meditators could use vase breathing to improve their health and regulate cognitive performance.”

Further research into controlling body temperature

Assoc Prof Kozhevnikov will continue to explore the effects of guided imagery on neurocognitive and physiological aspects. She is currently training a group of people to regulate their body temperature using vase breathing, which has potential applications in the field of medicine. Furthermore, the use of guided mental imagery in conjunction with vase breathing may lead to higher body temperature increases and better health.

Journal Reference:

  1. Maria Kozhevnikov, James Elliott, Jennifer Shephard, Klaus Gramann. Neurocognitive and Somatic Components of Temperature Increases during g-Tummo Meditation: Legend and Reality. PLoS ONE, 2013; 8 (3): e58244 DOI: 10.1371/journal.pone.0058244

Red Brain, Blue Brain: Republicans and Democrats Process Risk Differently, Research Finds (Science Daily)

Feb. 13, 2013 — A team of political scientists and neuroscientists has shown that liberals and conservatives use different parts of the brain when they make risky decisions, and these regions can be used to predict which political party a person prefers. The new study suggests that while genetics or parental influence may play a significant role, being a Republican or Democrat changes how the brain functions.

Republicans and Democrats differ in the neural mechanisms activated while performing a risk-taking task. Republicans more strongly activate their right amygdala, associated with orienting attention to external cues. Democrats have higher activity in their left posterior insula, associated with perceptions of internal physiological states. This activation also borders the temporal-parietal junction, and therefore may reflect a difference in internal physiological drive as well as the perception of the internal state and drive of others. (Credit: From: Darren Schreiber, Greg Fonzo, Alan N. Simmons, Christopher T. Dawes, Taru Flagan, James H. Fowler, Martin P. Paulus. Red Brain, Blue Brain: Evaluative Processes Differ in Democrats and Republicans. PLoS ONE, 2013; 8 (2): e52970 DOI: 10.1371/journal.pone.0052970)

Dr. Darren Schreiber, a researcher in neuropolitics at the University of Exeter, has been working in collaboration with colleagues at the University of California, San Diego on research that explores the differences in the way the brain functions in American liberals and conservatives. The findings are published Feb. 13 in the journal PLOS ONE.

In a prior experiment, participants had their brain activity measured as they played a simple gambling game. Dr. Schreiber and his UC San Diego collaborators were able to look up the political party registration of the participants in public records. Using this new analysis of 82 people who performed the gambling task, the academics showed that Republicans and Democrats do not differ in the risks they take. However, there were striking differences in the participants’ brain activity during the risk-taking task.

Democrats showed significantly greater activity in the left insula, a region associated with social and self-awareness. Meanwhile Republicans showed significantly greater activity in the right amygdala, a region involved in the body’s fight-or-flight system. These results suggest that liberals and conservatives engage different cognitive processes when they think about risk.

In fact, brain activity in these two regions alone can be used to predict whether a person is a Democrat or Republican with 82.9% accuracy. By comparison, the longstanding traditional model in political science, which uses the party affiliation of a person’s mother and father to predict the child’s affiliation, is only accurate about 69.5% of the time. And another model based on the differences in brain structure distinguishes liberals from conservatives with only 71.6% accuracy.

The model also outperforms models based on differences in genes. Dr. Schreiber said: “Although genetics have been shown to contribute to differences in political ideology and strength of party politics, the portion of variation in political affiliation explained by activity in the amygdala and insula is significantly larger, suggesting that affiliating with a political party and engaging in a partisan environment may alter the brain, above and beyond the effect of heredity.”
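To make the kind of model being compared here concrete, the sketch below fits a simple two-feature classifier of the sort that could, in principle, predict party affiliation from insula and amygdala activity. It uses entirely synthetic data and standard scikit-learn tools; it is not the study's analysis and will not reproduce its 82.9% figure.

```python
# Minimal sketch: cross-validated logistic regression on two synthetic
# "activation" features, standing in for left insula and right amygdala.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 82  # same participant count as the study; the data are entirely made up

# Synthetic feature vectors: [left insula activity, right amygdala activity]
democrats   = rng.normal(loc=[1.0, 0.0], scale=0.7, size=(n // 2, 2))
republicans = rng.normal(loc=[0.0, 1.0], scale=0.7, size=(n - n // 2, 2))

X = np.vstack([democrats, republicans])
y = np.array([0] * (n // 2) + [1] * (n - n // 2))  # 0 = Democrat, 1 = Republican

clf = LogisticRegression()
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.1%}")
```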

These results may pave the way for new research on voter behaviour, yielding better understanding of the differences in how liberals and conservatives think. According to Dr. Schreiber: “The ability to accurately predict party politics using only brain activity while gambling suggests that investigating basic neural differences between voters may provide us with more powerful insights than the traditional tools of political science.”

Journal Reference:

  1. Darren Schreiber, Greg Fonzo, Alan N. Simmons, Christopher T. Dawes, Taru Flagan, James H. Fowler, Martin P. Paulus. Red Brain, Blue Brain: Evaluative Processes Differ in Democrats and Republicans. PLoS ONE, 2013; 8 (2): e52970 DOI: 10.1371/journal.pone.0052970

Books Change How a Child’s Brain Grows (Wired)

By Moheb Costandi, ScienceNOW – October 18, 2012

Image: Peter Dedina/Flickr

NEW ORLEANS, LOUISIANA — Books and educational toys can make a child smarter, but they also influence how the brain grows, according to new research presented here on Sunday at the annual meeting of the Society for Neuroscience. The findings point to a “sensitive period” early in life during which the developing brain is strongly influenced by environmental factors.

Studies comparing identical and nonidentical twins show that genes play an important role in the development of the cerebral cortex, the thin, folded structure that supports higher mental functions. But less is known about how early life experiences influence how the cortex grows.

To investigate, neuroscientist Martha Farah of the University of Pennsylvania and her colleagues recruited 64 children from a low-income background and followed them from birth through to late adolescence. They visited the children’s homes at 4 and 8 years of age to evaluate their environment, noting factors such as the number of books and educational toys in their houses, and how much warmth and support they received from their parents.

More than 10 years after the second home visit, the researchers used MRI to obtain detailed images of the participants’ brains. They found that the level of mental stimulation a child receives in the home at age 4 predicted the thickness of two regions of the cortex in late adolescence, such that more stimulation was associated with a thinner cortex. One region, the lateral inferior temporal gyrus, is involved in complex visual skills such as word recognition.

Home environment at age 8 had a smaller impact on development of these brain regions, whereas other factors, such as the mother’s intelligence and the degree and quality of her care, had no such effect.

Previous work has shown that adverse experiences, such as childhood neglect, abuse, and poverty, can stunt the growth of the brain. The new findings highlight the sensitivity of the growing brain to environmental factors, Farah says, and provide strong evidence that subtle variations in early life experience can affect the brain throughout life.

As the brain develops, it produces more synapses, or neuronal connections, than are needed, she explains. Underused connections are later eliminated, and this elimination process, called synaptic pruning, is highly dependent upon experience. The findings suggest that mental stimulation in early life increases the extent to which synaptic pruning occurs in the lateral temporal lobe. Synaptic pruning reduces the volume of tissue in the cortex. This makes the cortex thinner, but it also makes information processing more efficient.

“This is a first look at how nurture influences brain structure later in life,” Farah reported at the meeting. “As with all observational studies, we can’t really speak about causality, but it seems likely that cognitive stimulation experienced early in life led to changes in cortical thickness.”

She adds, however, that the research is still in its infancy, and that more work is needed to gain a better understanding of exactly how early life experiences impact brain structure and function.

The findings add to the growing body of evidence that early life is a period of “extreme vulnerability,” says psychiatrist Jay Giedd, head of the brain imaging unit in the Child Psychiatry Branch at the National Institute of Mental Health in Bethesda, Maryland. But early life, he says, also offers a window of opportunity during which the effects of adversity can be offset. Parents can help young children develop their cognitive skills by providing a stimulating environment.

Irony Seen Through the Eye of MRI (Science Daily)

ScienceDaily (Aug. 3, 2012) — In the cognitive sciences, the capacity to interpret the intentions of others is called “Theory of Mind” (ToM). This faculty is involved in the understanding of language, in particular by bridging the gap between the meaning of the words that make up a statement and the meaning of the statement as a whole.

In recent years, researchers have identified the neural network dedicated to ToM, but no one had yet demonstrated that this set of neurons is specifically activated by the process of understanding of an utterance. This has now been accomplished: a team from L2C2 (Laboratoire sur le Langage, le Cerveau et la Cognition, Laboratory on Language, the Brain and Cognition, CNRS / Université Claude Bernard-Lyon 1) has shown that the activation of the ToM neural network increases when an individual is reacting to ironic statements.

Published in Neuroimage, these findings represent an important breakthrough in the study of Theory of Mind and linguistics, shedding light on the mechanisms involved in interpersonal communication.

In our communications with others, we are constantly thinking beyond the basic meaning of words. For example, if asked, “Do you have the time?” one would not simply reply, “Yes.” The gap between what is said and what it means is the focus of a branch of linguistics called pragmatics. In this science, “Theory of Mind” (ToM) gives listeners the capacity to fill this gap. In order to decipher the meaning and intentions hidden behind what is said, even in the most casual conversation, ToM relies on a variety of verbal and non-verbal elements: the words used, their context, intonation, “body language,” etc.

Within the past 10 years, researchers in cognitive neuroscience have identified a neural network dedicated to ToM that includes specific areas of the brain: the right and left temporal parietal junctions, the medial prefrontal cortex and the precuneus. To identify this network, the researchers relied primarily on non-verbal tasks based on the observation of others’ behavior[1]. Today, researchers at L2C2 (Laboratoire sur le Langage, le Cerveau et la Cognition, Laboratory on Language, the Brain and Cognition, CNRS / Université Claude Bernard-Lyon 1) have established, for the first time, the link between this neural network and the processing of implicit meanings.

To identify this link, the team focused their attention on irony. An ironic statement usually means the opposite of what is said. In order to detect irony in a statement, the mechanisms of ToM must be brought into play. In their experiment, the researchers prepared 20 short narratives in two versions, one literal and one ironic. Each story contained a key sentence that, depending on the version, yielded an ironic or literal meaning. For example, in one of the stories an opera singer exclaims after a premiere, “Tonight we gave a superb performance.” Depending on whether the performance was in fact very bad or very good, the statement is or is not ironic.

The team then carried out functional magnetic resonance imaging (fMRI) analyses on 20 participants who were asked to read 18 of the stories, chosen at random, in either their ironic or literal version. The participants were not aware that the test concerned the perception of irony. The researchers had predicted that the participants’ ToM neural networks would show increased activity in reaction to the ironic sentences, and that was precisely what they observed: as each key sentence was read, the network activity was greater when the statement was ironic. This shows that this network is directly involved in the processes of understanding irony, and, more generally, in the comprehension of language.

Next, the L2C2 researchers hope to expand their research on the ToM network in order to determine, for example, whether test participants would be able to perceive irony if this network were artificially inactivated.

Note:

[1] For example, Grèzes, Frith & Passingham (J. Neuroscience, 2004) showed a series of short (3.5 second) films in which actors came into a room and lifted boxes. Some of the actors were instructed to act as though the boxes were heavier (or lighter) than they actually were. Having thus set up deceptive situations, the experimenters asked the participants to determine if they had or had not been deceived by the actors in the films. The films containing feigned actions elicited increased activity in the rTPJ (right temporal parietal junction) compared with those containing unfeigned actions.

Journal Reference:

  1. Nicola Spotorno, Eric Koun, Jérôme Prado, Jean-Baptiste Van Der Henst, Ira A. Noveck. Neural evidence that utterance-processing entails mentalizing: The case of irony. NeuroImage, 2012; 63 (1): 25 DOI: 10.1016/j.neuroimage.2012.06.046

Brain Imaging Can Predict How Intelligent You Are: ‘Global Brain Connectivity’ Explains 10 Percent of Variance in Individual Intelligence (Science Daily)

ScienceDaily (Aug. 1, 2012) — When it comes to intelligence, what factors distinguish the brains of exceptionally smart humans from those of average humans?

New research suggests as much as 10 percent of individual variances in human intelligence can be predicted based on the strength of neural connections between the lateral prefrontal cortex and other regions of the brain. (Credit: WUSTL Image / Michael Cole)

As science has long suspected, overall brain size matters somewhat, accounting for about 6.7 percent of individual variation in intelligence. More recent research has pinpointed the brain’s lateral prefrontal cortex, a region just behind the temple, as a critical hub for high-level mental processing, with activity levels there predicting another 5 percent of variation in individual intelligence.

Now, new research from Washington University in St. Louis suggests that another 10 percent of individual differences in intelligence can be explained by the strength of neural pathways connecting the left lateral prefrontal cortex to the rest of the brain.

Published in the Journal of Neuroscience, the findings establish “global brain connectivity” as a new approach for understanding human intelligence.
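For readers unsure what "explains 10 percent of the variance" means in practice: a predictor that correlates with intelligence scores at roughly r = 0.32 accounts for about r² ≈ 0.10 of their variance. The synthetic example below, our illustration rather than the study's data, makes that relationship explicit.

```python
# Synthetic illustration of "variance explained": build a fake predictor
# whose correlation with scores is about 0.32, then check that r**2 ~ 0.10.
import numpy as np

rng = np.random.default_rng(1)
n = 200

connectivity = rng.normal(size=n)                           # hypothetical connectivity scores
noise = rng.normal(size=n)
iq = 0.32 * connectivity + np.sqrt(1 - 0.32**2) * noise     # built-in correlation ~0.32

r = np.corrcoef(connectivity, iq)[0, 1]
print(f"correlation r = {r:.2f}, variance explained r^2 = {r**2:.1%}")
```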

“Our research shows that connectivity with a particular part of the prefrontal cortex can predict how intelligent someone is,” suggests lead author Michael W. Cole, PhD, a postdoctoral research fellow in cognitive neuroscience at Washington University.

The study is the first to provide compelling evidence that neural connections between the lateral prefrontal cortex and the rest of the brain make a unique and powerful contribution to the cognitive processing underlying human intelligence, says Cole, whose research focuses on discovering the cognitive and neural mechanisms that make human behavior uniquely flexible and intelligent.

“This study suggests that part of what it means to be intelligent is having a lateral prefrontal cortex that does its job well; and part of what that means is that it can effectively communicate with the rest of the brain,” says study co-author Todd Braver, PhD, professor of psychology in Arts & Sciences and of neuroscience and radiology in the School of Medicine. Braver is a co-director of the Cognitive Control and Psychopathology Lab at Washington University, in which the research was conducted.

One possible explanation of the findings, the research team suggests, is that the lateral prefrontal region is a “flexible hub” that uses its extensive brain-wide connectivity to monitor and influence other brain regions in a goal-directed manner.

“There is evidence that the lateral prefrontal cortex is the brain region that ‘remembers’ (maintains) the goals and instructions that help you keep doing what is needed when you’re working on a task,” Cole says. “So it makes sense that having this region communicating effectively with other regions (the ‘perceivers’ and ‘doers’ of the brain) would help you to accomplish tasks intelligently.”

While other regions of the brain make their own special contribution to cognitive processing, it is the lateral prefrontal cortex that helps coordinate these processes and maintain focus on the task at hand, in much the same way that the conductor of a symphony monitors and tweaks the real-time performance of an orchestra.

“We’re suggesting that the lateral prefrontal cortex functions like a feedback control system that is used often in engineering, that it helps implement cognitive control (which supports fluid intelligence), and that it doesn’t do this alone,” Cole says.

The findings are based on an analysis of functional magnetic resonance brain images captured as study participants rested passively and also when they were engaged in a series of mentally challenging tasks associated with fluid intelligence, such as indicating whether a currently displayed image was the same as one displayed three images ago.

Previous findings relating lateral prefrontal cortex activity to challenging task performance were supported. Connectivity was then assessed while participants rested, and performance on additional tests of fluid intelligence and cognitive control, collected outside the brain scanner, was related to the estimated connectivity.

Results indicate that levels of global brain connectivity with a part of the left lateral prefrontal cortex serve as a strong predictor of both fluid intelligence and cognitive control abilities.

Although much remains to be learned about how these neural connections contribute to fluid intelligence, new models of brain function suggested by this research could have important implications for the future understanding — and perhaps augmentation — of human intelligence.

The findings also may offer new avenues for understanding how breakdowns in global brain connectivity contribute to the profound cognitive control deficits seen in schizophrenia and other mental illnesses, Cole suggests.

Other co-authors include Tal Yarkoni, PhD, a postdoctoral fellow in the Department of Psychology and Neuroscience at the University of Colorado at Boulder; Grega Repovs, PhD, professor of psychology at the University of Ljubljana, Slovenia; and Alan Anticevic, an associate research scientist in psychiatry at Yale University School of Medicine.

Funding from the National Institute of Mental Health supported the study (National Institutes of Health grants MH66088, NR012081, MH66078, MH66078-06A1W1, and 1K99MH096801).