Blood (red; artificially coloured) pools in the brain of a person who has had a stroke, a common cause of coma. Credit: Zephyr/Science Photo Library
At least one-quarter of people who have severe brain injuries and cannot respond physically to commands are actually conscious, according to the first international study of its kind [1].
Although these people could not, say, give a thumbs-up when prompted, they nevertheless repeatedly showed brain activity when asked to imagine themselves moving or exercising.
“This is one of the very big landmark studies” in the field of coma and other consciousness disorders, says Daniel Kondziella, a neurologist at Rigshospitalet, the teaching hospital for Copenhagen University.
The results mean that a substantial number of people with brain injuries who seem unresponsive can hear things going on around them and might even be able to use brain–computer interfaces (BCIs) to communicate, says study leader Nicholas Schiff, a neurologist at Weill Cornell Medicine in New York City. BCIs are devices implanted into a person’s head that capture brain activity, decode it and translate it into commands that can, for instance, move a computer cursor. “We should be allocating resources to go out and find these people and help them,” Schiff says. The work was published today in The New England Journal of Medicine [1].
Scanning the brain
The study included 353 people with brain injuries caused by events such as physical trauma, heart attacks or strokes. Of these, 241 could not react to any of a battery of standard bedside tests for responsiveness, including one that asks for a thumbs-up; the other 112 could.
Everyone enrolled in the study underwent one or both of two types of brain scan. The first was functional magnetic resonance imaging (fMRI), which measures mental activity indirectly by detecting the oxygenation of blood in the brain. The second was electroencephalography (EEG), which uses an electrode-covered cap on a person’s scalp to measure brain-wave activity directly. During each scan, people were told to imagine themselves playing tennis or opening and closing their hand. The commands were repeated continuously for 15–30 seconds, then there was a pause; the exercise was then repeated for six to eight command sessions.
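The study’s actual statistical pipeline isn’t described here, but the block design lends itself to a simple illustration. The sketch below is a minimal, hypothetical Python example with synthetic numbers: it compares simulated brain activity during the “imagine the task” blocks with the pauses, and counts a participant as showing task-related activity only if the effect recurs across most of the six to eight sessions. The sampling rate, block lengths and consistency threshold are assumptions for illustration, not parameters from the study.

```python
# Minimal sketch of scoring a block-design scan: does activity during the
# "imagine the task" blocks consistently exceed the rest periods across sessions?
# Synthetic data and a crude criterion -- a stand-in for the study's real analysis.
import numpy as np

rng = np.random.default_rng(0)

N_SESSIONS = 8            # the study repeated the exercise for six to eight sessions
FS = 100                  # assumed samples per second
TASK_S, REST_S = 20, 20   # roughly 15-30 s of commands, then a pause (assumed)

def simulate_session(task_responsive: bool):
    """Return simulated band-power samples for one command block and one rest block."""
    rest = rng.normal(1.0, 0.2, REST_S * FS)
    boost = 0.3 if task_responsive else 0.0   # imagined movement raises power
    task = rng.normal(1.0 + boost, 0.2, TASK_S * FS)
    return task, rest

def consistently_active(task_responsive: bool, threshold: float = 0.1) -> bool:
    """Flag a participant only if task > rest shows up in most sessions."""
    hits = 0
    for _ in range(N_SESSIONS):
        task, rest = simulate_session(task_responsive)
        if task.mean() - rest.mean() > threshold:
            hits += 1
    return hits >= int(0.75 * N_SESSIONS)     # arbitrary consistency bar

print("participant with simulated task response:", consistently_active(True))
print("participant with no simulated task response:", consistently_active(False))
```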
Of the physically unresponsive people, about 25% showed brain activity across the entire exam for either EEG or fMRI. The medical name for being able to respond mentally but not physically is cognitive motor dissociation. The 112 people in the study who were classified as responsive did a bit better on the brain-activity tests, but not much: only about 38% showed consistent activity. This is probably because the tests set a high bar, Schiff says. “I’ve been in the MRI, and I’ve done this experiment, and it’s hard,” he adds.
This isn’t the first time a study has found cognitive motor dissociation in people with brain injuries who were physically unresponsive. For instance, in a 2019 paper, 15% of the 104 people undergoing testing displayed this behaviour [2]. The latest study, however, is larger and is the first multicentre investigation of its kind. Tests were run at six medical facilities in four countries: Belgium, France, the United Kingdom and the United States.
The 25% of unresponsive people who showed brain activity tended to be younger than those who did not, to have injuries that were from physical trauma and to have been living with their injuries for longer than the others. Kondziella cautions that further investigating these links would require repeat assessments of individuals over weeks or months. “We know very little about consciousness-recovery trajectories over time and across different brain injuries,” he says.
Room for improvement
But the study has some limitations. For example, the medical centres did not all use the same number or set of tasks during the EEG or fMRI scans, or the same number of electrodes during EEG sessions, which could skew results.
In the end, however, with such a high bar set for registering brain activity, the study probably underestimates the proportion of physically unresponsive people who are conscious, Schiff says. Kondziella agrees. Rates of cognitive motor dissociation were highest for people tested with both EEG and fMRI, he points out, so if both methods had been used with every person in the study, the overall rates might have been even higher.
However, the kinds of test used are logistically and computationally challenging, “which is why really only a handful or so of centres worldwide are able to adopt these techniques”, Kondziella says.
Schiff stresses that it’s important to be able to identify people with brain injuries who seem unresponsive but are conscious. “There are going to be people we can help get out of this condition,” he says, perhaps by using BCIs or other treatments, or simply continuing to provide medical care. Knowing that someone is conscious can change how families and medical teams make decisions about life support and treatment. “It makes a difference every time you find out that somebody is responsive,” he says.
Patient One was 24 years old and pregnant with her third child when she was taken off life support. It was 2014. A couple of years earlier, she had been diagnosed with a disorder that caused an irregular heartbeat, and during her two previous pregnancies she had suffered seizures and fainting spells. Four weeks into her third pregnancy, she collapsed on the floor of her home. Her mother, who was with her, called 911. By the time an ambulance arrived, Patient One had been unconscious for more than 10 minutes. Paramedics found that her heart had stopped.
After being driven to a hospital where she couldn’t be treated, Patient One was taken to the emergency department at the University of Michigan. There, medical staff had to shock her chest three times with a defibrillator before they could restart her heart. She was placed on an external ventilator and pacemaker, and transferred to the neurointensive care unit, where doctors monitored her brain activity. She was unresponsive to external stimuli, and had a massive swelling in her brain. After she lay in a deep coma for three days, her family decided it was best to take her off life support. It was at that point – after her oxygen was turned off and nurses pulled the breathing tube from her throat – that Patient One became one of the most intriguing scientific subjects in recent history.
For several years, Jimo Borjigin, a professor of neurology at the University of Michigan, had been troubled by the question of what happens to us when we die. She had read about the near-death experiences of certain cardiac-arrest survivors who had undergone extraordinary psychic journeys before being resuscitated. Sometimes, these people reported travelling outside of their bodies towards overwhelming sources of light where they were greeted by dead relatives. Others spoke of coming to a new understanding of their lives, or encountering beings of profound goodness. Borjigin didn’t believe the content of those stories was true – she didn’t think the souls of dying people actually travelled to an afterworld – but she suspected something very real was happening in those patients’ brains. In her own laboratory, she had discovered that rats undergo a dramatic storm of many neurotransmitters, including serotonin and dopamine, after their hearts stop and their brains lose oxygen. She wondered if humans’ near-death experiences might spring from a similar phenomenon, and if it was occurring even in people who couldn’t be revived.
Dying seemed like such an important area of research – we all do it, after all – that Borjigin assumed other scientists had already developed a thorough understanding of what happens to the brain in the process of death. But when she looked at the scientific literature, she found little enlightenment. “To die is such an essential part of life,” she told me recently. “But we knew almost nothing about the dying brain.” So she decided to go back and figure out what had happened inside the brains of people who died at the University of Michigan neurointensive care unit. Among them was Patient One.
At the time Borjigin began her research into Patient One, the scientific understanding of death had reached an impasse. Since the 1960s, advances in resuscitation had helped to revive thousands of people who might otherwise have died. About 10% or 20% of those people brought with them stories of near-death experiences in which they felt their souls or selves departing from their bodies. A handful of those patients even claimed to witness, from above, doctors’ attempts to resuscitate them. According to several international surveys and studies, one in 10 people claims to have had a near-death experience involving cardiac arrest, or a similar experience in circumstances where they may have come close to death. That’s roughly 800 million souls worldwide who may have dipped a toe in the afterlife.
As remarkable as these near-death experiences sounded, they were consistent enough that some scientists began to believe there was truth to them: maybe people really did have minds or souls that existed separately from their living bodies. In the 1970s, a small network of cardiologists, psychiatrists, medical sociologists and social psychologists in North America and Europe began investigating whether near-death experiences proved that dying is not the end of being, and that consciousness can exist independently of the brain. The field of near-death studies was born.
Over the next 30 years, researchers collected thousands of case reports of people who had had near-death experiences. Meanwhile, new technologies and techniques were helping doctors revive more and more people who, in earlier periods of history, would have almost certainly been permanently deceased. “We are now at the point where we have both the tools and the means to scientifically answer the age-old question: What happens when we die?” wrote Sam Parnia, an accomplished resuscitation specialist and one of the world’s leading experts on near-death experiences, in 2006. Parnia himself was devising an international study to test whether patients could have conscious awareness even after they were found clinically dead.
But by 2015, experiments such as Parnia’s had yielded ambiguous results, and the field of near-death studies was not much closer to understanding death than it had been when it was founded four decades earlier. That’s when Borjigin, together with several colleagues, took the first close look at the record of electrical activity in the brain of Patient One after she was taken off life support. What they discovered – in results reported for the first time last year – was almost entirely unexpected, and has the potential to rewrite our understanding of death.
“I believe what we found is only the tip of a vast iceberg,” Borjigin told me. “What’s still beneath the surface is a full account of how dying actually takes place. Because there’s something happening in there, in the brain, that makes no sense.”
For all that science has learned about the workings of life, death remains among the most intractable of mysteries. “At times I have been tempted to believe that the creator has eternally intended this department of nature to remain baffling, to prompt our curiosities and hopes and suspicions all in equal measure,” the philosopher William James wrote in 1909.
The first time that the question Borjigin began asking in 2015 was posed – about what happens to the brain during death – was a quarter of a millennium earlier. Around 1740, a French military physician reviewed the case of a famous apothecary who, after a “malign fever” and several blood-lettings, fell unconscious and thought he had travelled to the Kingdom of the Blessed. The physician speculated that the apothecary’s experience had been caused by a surge of blood to the brain. But between that early report and the mid-20th century, scientific interest in near-death experiences remained sporadic.
In 1892, the Swiss climber and geologist Albert Heim collected the first systematic accounts of near-death experiences from 30 fellow climbers who had suffered near-fatal falls. In many cases, the climbers underwent a sudden review of their entire past, heard beautiful music, and “fell in a superbly blue heaven containing roseate cloudlets”, Heim wrote. “Then consciousness was painlessly extinguished, usually at the moment of impact.” There were a few more attempts to do research in the early 20th century, but little progress was made in understanding near-death experiences scientifically. Then, in 1975, an American medical student named Raymond Moody published a book called Life After Life.
In his book, Moody distilled the reports of 150 people who had had intense, life-altering experiences in the moments surrounding a cardiac arrest. Although the reports varied, he found that they often shared one or more common features or themes. The narrative arc of the most detailed of those reports – departing the body and travelling through a long tunnel, having an out-of-body experience, encountering spirits and a being of light, one’s whole life flashing before one’s eyes, and returning to the body from some outer limit – became so canonical that the art critic Robert Hughes could refer to it years later as “the familiar kitsch of near-death experience”. Moody’s book became an international bestseller.
In 1976, the New York Times reported on the burgeoning scientific interest in “life after death” and the “emerging field of thanatology”. The following year, Moody and several fellow thanatologists founded an organisation that became the International Association for Near-Death Studies. In 1981, they printed the inaugural issue of Vital Signs, a magazine for the general reader that was largely devoted to stories of near-death experiences. The following year they began producing the field’s first peer-reviewed journal, which became the Journal of Near-Death Studies. The field was growing, and taking on the trappings of scientific respectability. Reviewing its rise in 1988, the British Journal of Psychiatry captured the field’s animating spirit: “A grand hope has been expressed that, through NDE research, new insights can be gained into the ageless mystery of human mortality and its ultimate significance, and that, for the first time, empirical perspectives on the nature of death may be achieved.”
But near-death studies was already splitting into several schools of belief, whose tensions continue to this day. One influential camp was made up of spiritualists, some of them evangelical Christians, who were convinced that near-death experiences were genuine sojourns in the land of the dead and divine. As researchers, the spiritualists’ aim was to collect as many reports of near-death experience as possible, and to proselytise society about the reality of life after death. Moody was their most important spokesman; he eventually claimed to have had multiple past lives and built a “psychomanteum” in rural Alabama where people could attempt to summon the spirits of the dead by gazing into a dimly lit mirror.
The second, and largest, faction of near-death researchers were the parapsychologists, those interested in phenomena that seemed to undermine the scientific orthodoxy that the mind could not exist independently of the brain. These researchers, who were by and large trained scientists following well established research methods, tended to believe that near-death experiences offered evidence that consciousness could persist after the death of the individual. Many of them were physicians and psychiatrists who had been deeply affected after hearing the near-death stories of patients they had treated in the ICU. Their aim was to find ways to test their theories of consciousness empirically, and to turn near-death studies into a legitimate scientific endeavour.
Finally, there emerged the smallest contingent of near-death researchers, who could be labelled the physicalists. These were scientists, many of whom studied the brain, who were committed to a strictly biological account of near-death experiences. Like dreams, the physicalists argued, near-death experiences might reveal psychological truths, but they did so through hallucinatory fictions that emerged from the workings of the body and the brain. (Indeed, many of the states reported by near-death experiencers can apparently be achieved by taking a hero’s dose of ketamine.) Their basic premise was: no functioning brain means no consciousness, and certainly no life after death. Their task, which Borjigin took up in 2015, was to discover what was happening during near-death experiences on a fundamentally physical level.
Slowly, the spiritualists left the field of research for the loftier domains of Christian talk radio, and the parapsychologists and physicalists started bringing near-death studies closer to the scientific mainstream. Between 1975, when Moody published Life After Life, and 1984, only 17 articles in the PubMed database of scientific publications mentioned near-death experiences. In the following decade, there were 62. In the most recent 10-year span, there were 221. Those articles have appeared everywhere from the Canadian Urological Association Journal to the esteemed pages of The Lancet.
Today, there is a widespread sense throughout the community of near-death researchers that we are on the verge of great discoveries. Charlotte Martial, a neuroscientist at the University of Liège in Belgium who has done some of the best physicalist work on near-death experiences, hopes we will soon develop a new understanding of the relationship between the internal experience of consciousness and its outward manifestations, for example in coma patients. “We really are in a crucial moment where we have to disentangle consciousness from responsiveness, and maybe question every state that we consider unconscious,” she told me. Parnia, the resuscitation specialist, who studies the physical processes of dying but is also sympathetic to a parapsychological theory of consciousness, has a radically different take on what we are poised to find out. “I think in 50 or 100 years’ time we will have discovered the entity that is consciousness,” he told me. “It will be taken for granted that it wasn’t produced by the brain, and it doesn’t die when you die.”
If the field of near-death studies is at the threshold of new discoveries about consciousness and death, it is in large part because of a revolution in our ability to resuscitate people who have suffered cardiac arrest. Lance Becker has been a leader in resuscitation science for more than 30 years. When he was a young doctor attempting to revive people through CPR in the mid-1980s, senior physicians would often step in to declare his patients dead. “At a certain point, they would just say, ‘OK, that’s enough. Let’s stop. This is unsuccessful. Time of death: 1.37pm,’” he recalled recently. “And that would be the last thing. And one of the things running through my head as a young doctor was, ‘Well, what really happened at 1.37?’”
In a medical setting, “clinical death” is said to occur at the moment the heart stops pumping blood, and the pulse stops. This is widely known as cardiac arrest. (It is different from a heart attack, in which there is a blockage in a heart that’s still pumping.) Loss of oxygen to the brain and other organs generally follows within seconds or minutes, although the complete cessation of activity in the heart and brain – which is often called “flatlining” or, in the case of the latter, “brain death” – may not occur for many minutes or even hours.
For almost all people at all times in history, cardiac arrest was basically the end of the line. That began to change in 1960, when the combination of mouth-to-mouth ventilation, chest compressions and external defibrillation known as cardiopulmonary resuscitation, or CPR, was formalised. Shortly thereafter, a massive campaign was launched to educate clinicians and the public on CPR’s basic techniques, and soon people were being revived in previously unthinkable, if still modest, numbers.
As more and more people were resuscitated, scientists learned that, even in its acute final stages, death is not a point, but a process. After cardiac arrest, blood and oxygen stop circulating through the body, cells begin to break down, and normal electrical activity in the brain gets disrupted. But the organs don’t fail irreversibly right away, and the brain doesn’t necessarily cease functioning altogether. There is often still the possibility of a return to life. In some cases, cell death can be stopped or significantly slowed, the heart can be restarted, and brain function can be restored. In other words, the process of death can be reversed.
It is no longer unheard of for people to be revived even six hours after being declared clinically dead. In 2011, Japanese doctors reported the case of a young woman who was found in a forest one morning after an overdose stopped her heart the previous night; using advanced technology to circulate blood and oxygen through her body, the doctors were able to revive her more than six hours later, and she was able to walk out of the hospital after three weeks of care. In 2019, a British woman named Audrey Schoeman, who was caught in a snowstorm, spent six hours in cardiac arrest before doctors brought her back to life with no evident brain damage.
“I don’t think there’s ever been a more exciting time for the field,” Becker told me. “We’re discovering new drugs, we’re discovering new devices, and we’re discovering new things about the brain.”
The brain – that’s the tricky part. In January 2021, as the Covid-19 pandemic was surging toward what would become its deadliest week on record, Netflix released a documentary series called Surviving Death. In the first episode, some of near-death studies’ most prominent parapsychologists presented the core of their arguments for why they believe near-death experiences show that consciousness exists independently of the brain. “When the heart stops, within 20 seconds or so, you get flatlining, which means no brain activity,” Bruce Greyson, an emeritus professor of psychiatry at the University of Virginia and one of the founding members of the International Association for Near-Death Studies, says in the documentary. “And yet,” he goes on to claim, “people have near-death experiences when they’ve been (quote) ‘flatlined’ for longer than that.”
That is a key tenet of the parapsychologists’ arguments: if there is consciousness without brain activity, then consciousness must dwell somewhere beyond the brain. Some of the parapsychologists speculate that it is a “non-local” force that pervades the universe, like electromagnetism. This force is received by the brain, but is not generated by it, the way a television receives a broadcast.
In order for this argument to hold, something else has to be true: near-death experiences have to happen during death, after the brain shuts down. To prove this, parapsychologists point to a number of rare but astounding cases known as “veridical” near-death experiences, in which patients seem to report details from the operating room that they could have known only if they had conscious awareness during the time that they were clinically dead. Dozens of such reports exist. One of the most famous is about a woman who apparently travelled so far outside her body that she was able to spot a shoe on a window ledge in another part of the hospital where she went into cardiac arrest; the shoe was later reportedly found by a nurse.
At the very least, Parnia and his colleagues have written, such phenomena are “inexplicable through current neuroscientific models”. Unfortunately for the parapsychologists, however, none of the reports of post-death awareness holds up to strict scientific scrutiny. “There are many claims of this kind, but in my long decades of research into out-of-body and near-death experiences I never met any convincing evidence that this is true,” Sue Blackmore, a well-known researcher into parapsychology who had her own near-death experience as a young woman in 1970, has written.
The case of the shoe, Blackmore pointed out, relied solely on the report of the nurse who claimed to have found it. That’s far from the standard of proof the scientific community would require to accept a claim as radical as the idea that consciousness can travel beyond the body and exist after death. In other cases, there’s not enough evidence to prove that the experiences reported by cardiac arrest survivors happened when their brains were shut down, as opposed to in the period before or after they supposedly “flatlined”. “So far, there is no sufficiently rigorous, convincing empirical evidence that people can observe their surroundings during a near-death experience,” Charlotte Martial, the University of Liège neuroscientist, told me.
The parapsychologists tend to push back by arguing that even if each of the cases of veridical near-death experiences leaves room for scientific doubt, surely the accumulation of dozens of these reports must count for something. But that argument can be turned on its head: if there are so many genuine instances of consciousness surviving death, then why should it have so far proven impossible to catch one empirically?
Perhaps the story to be written about near-death experiences is not that they prove consciousness is radically different from what we thought it was. Instead, it is that the process of dying is far stranger than scientists ever suspected. The spiritualists and parapsychologists are right to insist that something deeply weird is happening to people when they die, but they are wrong to assume it is happening in the next life rather than this one. At least, that is the implication of what Jimo Borjigin found when she investigated the case of Patient One.
In the moments after Patient One was taken off oxygen, there was a surge of activity in her dying brain. Areas that had been nearly silent while she was on life support suddenly thrummed with high-frequency electrical signals called gamma waves. In particular, the parts of the brain that scientists consider a “hot zone” for consciousness became dramatically alive. In one section, the signals remained detectable for more than six minutes. In another, they were 11 to 12 times higher than they had been before Patient One’s ventilator was removed.
“As she died, Patient One’s brain was functioning in a kind of hyperdrive,” Borjigin told me. For about two minutes after her oxygen was cut off, there was an intense synchronisation of her brain waves, a state associated with many cognitive functions, including heightened attention and memory. The synchronisation dampened for about 18 seconds, then intensified again for more than four minutes. It faded for a minute, then came back for a third time.
In those same periods of dying, different parts of Patient One’s brain were suddenly in close communication with each other. The most intense connections started immediately after her oxygen stopped, and lasted for nearly four minutes. There was another burst of connectivity more than five minutes and 20 seconds after she was taken off life support. In particular, areas of her brain associated with processing conscious experience – areas that are active when we move through the waking world, and when we have vivid dreams – were communicating with those involved in memory formation. So were parts of the brain associated with empathy. Even as she slipped irrevocably deeper into death, something that looked astonishingly like life was taking place over several minutes in Patient One’s brain.
Those glimmers and flashes of something like life contradict the expectations of almost everyone working in the field of resuscitation science and near-death studies. The predominant belief – expressed by Greyson, the psychiatrist and co-founder of the International Association for Near-Death Studies, in the Netflix series Surviving Death – was that as soon as oxygen stops going to the brain, neurological activity falls precipitously. Although a few earlier instances of brain waves had been reported in dying human brains, nothing as detailed and complex as what occurred in Patient One had ever been detected.
Given the levels of activity and connectivity in particular regions of her dying brain, Borjigin believes it’s likely that Patient One had a profound near-death experience with many of its major features: out-of-body sensations, visions of light, feelings of joy or serenity, and moral re-evaluations of one’s life. Of course, Patient One did not recover, so no one can prove that the extraordinary happenings in her dying brain had experiential counterparts. Greyson and one of the other grandees of near-death studies, a Dutch cardiologist named Pim van Lommel, have asserted that Patient One’s brain activity can shed no light on near-death experiences because her heart hadn’t fully flatlined, but that is a self-defeating argument: there is no rigorous empirical evidence that near-death experiences occur in people whose hearts have completely stopped.
At the very least, Patient One’s brain activity – and the activity in the dying brain of another patient Borjigin studied, a 77-year-old woman known as Patient Three – seems to close the door on the argument that the brain always and nearly immediately ceases to function in a coherent manner in the moments after clinical death. “The brain, contrary to everybody’s belief, is actually super active during cardiac arrest,” Borjigin said. Death may be far more alive than we ever thought possible.
Borjigin believes that understanding the dying brain is one of the “holy grails” of neuroscience. “The brain is so resilient, the heart is so resilient, that it takes years of abuse to kill them,” she pointed out. “Why then, without oxygen, can a perfectly healthy person die within 30 minutes, irreversibly?” Although most people would take that result for granted, Borjigin thinks that, on a physical level, it actually makes little sense.
Borjigin hopes that understanding the neurophysiology of death can help us to reverse it. She already has brain activity data from dozens of deceased patients that she is waiting to analyse. But because of the paranormal stigma associated with near-death studies, she says, few research agencies want to grant her funding. “Consciousness is almost a dirty word amongst funders,” she added. “Hardcore scientists think research into it should belong to maybe theology, philosophy, but not in hardcore science. Other people ask, ‘What’s the use? The patients are gonna die anyway, so why study that process? There’s nothing you can do about it.’”
Evidence is already emerging that even total brain death may someday be reversible. In 2019, scientists at Yale University harvested the brains of pigs that had been decapitated in a commercial slaughterhouse four hours earlier. Then they perfused the brains for six hours with a special cocktail of drugs and synthetic blood. Astoundingly, some of the cells in the brains began to show metabolic activity again, and some of the synapses even began firing. The pigs’ brain scans didn’t show the widespread electrical activity that we typically associate with sentience or consciousness. But the fact that there was any activity at all suggests the frontiers of life may one day extend much, much farther into the realms of death than most scientists currently imagine.
Other serious avenues of research into near-death experience are ongoing. Martial and her colleagues at the University of Liège are working on many issues relating to near-death experiences. One is whether people with a history of trauma, or with more creative minds, tend to have such experiences at higher rates than the general population. Another is the evolutionary biology of near-death experiences. Why, evolutionarily speaking, should we have such experiences at all? Martial and her colleagues speculate that it may be a form of the phenomenon known as thanatosis, in which creatures throughout the animal kingdom feign death to escape mortal dangers. Other researchers have proposed that the surge of electrical activity in the moments after cardiac arrest is just the final seizure of a dying brain, or have hypothesised that it’s a last-ditch attempt by the brain to restart itself, like jump-starting the engine on a car.
Meanwhile, in parts of the culture where enthusiasm is reserved not for scientific discovery in this world, but for absolution or benediction in the next, the spiritualists, along with sundry other kooks and grifters, are busily peddling their tales of the afterlife. Forget the proverbial tunnel of light: in America in particular, a pipeline of money has been discovered from death’s door, through Christian media, to the New York Times bestseller list and thence to the fawning, gullible armchairs of the nation’s daytime talk shows. First stop, paradise; next stop, Dr Oz.
But there is something that binds many of these people – the physicalists, the parapsychologists, the spiritualists – together. It is the hope that by transcending the current limits of science and of our bodies, we will achieve not a deeper understanding of death, but a longer and more profound experience of life. That, perhaps, is the real attraction of the near-death experience: it shows us what is possible not in the next world, but in this one.
James Watson, the 1962 Nobel laureate, recently asserted that he was “inherently gloomy about the prospect of Africa” and its citizens because “all our social policies are based on the fact that their intelligence is the same as ours whereas all the testing says not really.”
Dr. Watson’s remarks created a huge stir because they implied that blacks were genetically inferior to whites, and the controversy resulted in his resignation as chancellor of Cold Spring Harbor Laboratory. But was he right? Is there a genetic difference between blacks and whites that condemns blacks in perpetuity to be less intelligent?
The first notable public airing of the scientific question came in a 1969 article in The Harvard Educational Review by Arthur Jensen, a psychologist at the University of California, Berkeley. Dr. Jensen maintained that a 15-point difference in I.Q. between blacks and whites was mostly due to a genetic difference between the races that could never be erased. But his argument gave a misleading account of the evidence. And others who later made the same argument (Richard Herrnstein and Charles Murray in “The Bell Curve,” in 1994, for example, and, just recently, William Saletan in a series of articles on Slate) have made the same mistake.
In fact, the evidence heavily favors the view that race differences in I.Q. are environmental in origin, not genetic.
The hereditarians begin with the assertion that 60 percent to 80 percent of variation in I.Q. is genetically determined. However, most estimates of heritability have been based almost exclusively on studies of middle-class groups. For the poor, a group that includes a substantial proportion of minorities, heritability of I.Q. is very low, in the range of 10 percent to 20 percent, according to recent research by Eric Turkheimer at the University of Virginia. This means that for the poor, improvements in environment have great potential to bring about increases in I.Q.
In any case, the degree of heritability of a characteristic tells us nothing about how much the environment can affect it. Even when a trait is highly heritable (think of the height of corn plants), modifiability can also be great (think of the difference growing conditions can make).
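The corn analogy can be made concrete with a toy simulation. The short Python sketch below uses entirely invented numbers: two plots grown from the same seed stock, one of them under poor conditions. Within each plot nearly all of the height variation is “genetic”, yet the entire gap between the plots is environmental, which is the distinction between heritability and modifiability being drawn here.

```python
# Toy illustration of "highly heritable, yet highly modifiable":
# the same gene pool grown under rich and poor conditions. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

genes = rng.normal(0.0, 10.0, n)           # genetic contribution to height (cm)
noise_rich = rng.normal(0.0, 2.0, n)       # small non-genetic wobble
noise_poor = rng.normal(0.0, 2.0, n)

plot_rich = 150 + genes + noise_rich       # good growing conditions
plot_poor = 150 + genes + noise_poor - 30  # poor conditions: uniform 30 cm penalty

def within_plot_heritability(heights, genetic_part):
    """Share of the variance within a plot that is attributable to genes."""
    return np.var(genetic_part) / np.var(heights)

print(f"heritability, rich plot: {within_plot_heritability(plot_rich, genes):.2f}")
print(f"heritability, poor plot: {within_plot_heritability(plot_poor, genes):.2f}")
print(f"gap between plot means:  {plot_rich.mean() - plot_poor.mean():.1f} cm (entirely environmental)")
```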
Nearly all the evidence suggesting a genetic basis for the I.Q. differential is indirect. There is, for example, the evidence that brain size is correlated with intelligence, and that blacks have smaller brains than whites. But the brain size difference between men and women is substantially greater than that between blacks and whites, yet men and women score the same, on average, on I.Q. tests. Likewise, a group of people in a community in Ecuador have a genetic anomaly that produces extremely small head sizes and hence brain sizes. Yet their intelligence is as high as that of their unaffected relatives.
Why rely on such misleading and indirect findings when we have much more direct evidence about the basis for the I.Q. gap? About 25 percent of the genes in the American black population are European, meaning that the genes of any individual can range from 100 percent African to mostly European. If European intelligence genes are superior, then blacks who have relatively more European genes ought to have higher I.Q.’s than those who have more African genes. But it turns out that skin color and “negroidness” of features, both measures of the degree of a black person’s European ancestry, are only weakly associated with I.Q. (even though we might well expect a moderately high association due to the social advantages of such features).
During World War II, both black and white American soldiers fathered children with German women. Thus some of these children had 100 percent European heritage and some had substantial African heritage. Tested in later childhood, the German children of the white fathers were found to have an average I.Q. of 97, and those of the black fathers had an average of 96.5, a trivial difference.
If European genes conferred an advantage, we would expect that the smartest blacks would have substantial European heritage. But when a group of investigators sought out the very brightest black children in the Chicago school system and asked them about the race of their parents and grandparents, these children were found to have no greater degree of European ancestry than blacks in the population at large.
Most tellingly, blood-typing tests have been used to assess the degree to which black individuals have European genes. The blood group assays show no association between degree of European heritage and I.Q. Similarly, the blood groups most closely associated with high intellectual performance among blacks are no more European in origin than other blood groups.
The closest thing to direct evidence that the hereditarians have is a study from the 1970s showing that black children who had been adopted by white parents had lower I.Q.’s than those of mixed-race children adopted by white parents. But, as the researchers acknowledged, the study had many flaws; for instance, the black children had been adopted at a substantially later age than the mixed-race children, and later age at adoption is associated with lower I.Q.
A superior adoption study, and one not discussed by the hereditarians, was carried out at Arizona State University by the psychologist Elsie Moore, who looked at black and mixed-race children adopted by middle-class families, either black or white, and found no difference in I.Q. between the black and mixed-race children. Most telling is Dr. Moore’s finding that children adopted by white families had I.Q.’s 13 points higher than those of children adopted by black families. The environments that even middle-class black children grow up in are not as favorable for the development of I.Q. as those of middle-class whites.
Important recent psychological research helps to pinpoint just what factors shape differences in I.Q. scores. Joseph Fagan of Case Western Reserve University and Cynthia Holland of Cuyahoga Community College tested blacks and whites on their knowledge of, and their ability to learn and reason with, words and concepts. The whites had substantially more knowledge of the various words and concepts, but when participants were tested on their ability to learn new words, either from dictionary definitions or by learning their meaning in context, the blacks did just as well as the whites.
Whites showed better comprehension of sayings, better ability to recognize similarities and better facility with analogies when solutions required knowledge of words and concepts that were more likely to be known to whites than to blacks. But when these kinds of reasoning were tested with words and concepts known equally well to blacks and whites, there were no differences. Within each race, prior knowledge predicted learning and reasoning, but between the races it was prior knowledge only that differed.
What do we know about the effects of environment?
That environment can markedly influence I.Q. is demonstrated by the so-called Flynn Effect. James Flynn, a philosopher and I.Q. researcher in New Zealand, has established that in the Western world as a whole, I.Q. increased markedly from 1947 to 2002. In the United States alone, it went up by 18 points. Our genes could not have changed enough over such a brief period to account for the shift; it must have been the result of powerful social factors. And if such factors could produce changes over time for the population as a whole, they could also produce big differences between subpopulations at any given time.
In fact, we know that the I.Q. difference between black and white 12-year-olds has dropped to 9.5 points from 15 points in the last 30 years, a period that was more favorable for blacks in many ways than the preceding era. Black progress on the National Assessment of Educational Progress shows equivalent gains. Reading and math improvement has been modest for whites but substantial for blacks.
Most important, we know that interventions at every age from infancy to college can reduce racial gaps in both I.Q. and academic achievement, sometimes by substantial amounts in surprisingly little time. This mutability is further evidence that the I.Q. difference has environmental, not genetic, causes. And it should encourage us, as a society, to see that all children receive ample opportunity to develop their minds.
Richard E. Nisbett, a professor of psychology at the University of Michigan, is the author of “The Geography of Thought: How Asians and Westerners Think Differently and Why.”
Large, expensive efforts to map the brain started a decade ago but have largely fallen short. It’s a good reminder of just how complex this organ is.
Emily Mullin
August 25, 2021
In September 2011, a group of neuroscientists and nanoscientists gathered at a picturesque estate in the English countryside for a symposium meant to bring their two fields together.
At the meeting, Columbia University neurobiologist Rafael Yuste and Harvard geneticist George Church made a not-so-modest proposal: to map the activity of the entire human brain at the level of individual neurons and detail how those cells form circuits. That knowledge could be harnessed to treat brain disorders like Alzheimer’s, autism, schizophrenia, depression, and traumatic brain injury. And it would help answer one of the great questions of science: How does the brain bring about consciousness?
Yuste, Church, and their colleagues drafted a proposal that would later be published in the journal Neuron. Their ambition was extreme: “a large-scale, international public effort, the Brain Activity Map Project, aimed at reconstructing the full record of neural activity across complete neural circuits.” Like the Human Genome Project a decade earlier, they wrote, the brain project would lead to “entirely new industries and commercial ventures.”
New technologies would be needed to achieve that goal, and that’s where the nanoscientists came in. At the time, researchers could record activity from just a few hundred neurons at once—but with around 86 billion neurons in the human brain, it was akin to “watching a TV one pixel at a time,” Yuste recalled in 2017. The researchers proposed tools to measure “every spike from every neuron” in an attempt to understand how the firing of these neurons produced complex thoughts.
But it wasn’t the first audacious brain venture. In fact, a few years earlier, Henry Markram, a neuroscientist at the École Polytechnique Fédérale de Lausanne in Switzerland, had set an even loftier goal: to make a computer simulation of a living human brain. Markram wanted to build a fully digital, three-dimensional model at the resolution of the individual cell, tracing all of those cells’ many connections. “We can do it within 10 years,” he boasted during a 2009 TED talk.
In January 2013, a few months before the American project was announced, the EU awarded Markram $1.3 billion to build his brain model. The US and EU projects sparked similar large-scale research efforts in countries including Japan, Australia, Canada, China, South Korea, and Israel. A new era of neuroscience had begun.
An impossible dream?
A decade later, the US project is winding down, and the EU project faces its deadline to build a digital brain. So how did it go? Have we begun to unwrap the secrets of the human brain? Or have we spent a decade and billions of dollars chasing a vision that remains as elusive as ever?
From the beginning, both projects had critics.
EU scientists worried about the costs of the Markram scheme and thought it would squeeze out other neuroscience research. And even at the original 2011 meeting in which Yuste and Church presented their ambitious vision, many of their colleagues argued it simply wasn’t possible to map the complex firings of billions of human neurons. Others said it was feasible but would cost too much money and generate more data than researchers would know what to do with.
In a blistering article appearing in Scientific American in 2013, Partha Mitra, a neuroscientist at the Cold Spring Harbor Laboratory, warned against the “irrational exuberance” behind the Brain Activity Map and questioned whether its overall goal was meaningful.
Even if it were possible to record all spikes from all neurons at once, he argued, a brain doesn’t exist in isolation: in order to properly connect the dots, you’d need to simultaneously record external stimuli that the brain is exposed to, as well as the behavior of the organism. And he reasoned that we need to understand the brain at a macroscopic level before trying to decode what the firings of individual neurons mean.
Others had concerns about the impact of centralizing control over these fields. Cornelia Bargmann, a neuroscientist at Rockefeller University, worried that it would crowd out research spearheaded by individual investigators. (Bargmann was soon tapped to co-lead the BRAIN Initiative’s working group.)
While the US initiative sought input from scientists to guide its direction, the EU project was decidedly more top-down, with Markram at the helm. But as Noah Hutton documents in his 2020 film In Silico, Markram’s grand plans soon unraveled. As an undergraduate studying neuroscience, Hutton had been assigned to read Markram’s papers and was impressed by his proposal to simulate the human brain; when he started making documentary films, he decided to chronicle the effort. He soon realized, however, that the billion-dollar enterprise was characterized more by infighting and shifting goals than by breakthrough science.
In Silico shows Markram as a charismatic leader who needed to make bold claims about the future of neuroscience to attract the funding to carry out his particular vision. But the project was troubled from the outset by a major issue: there isn’t a single, agreed-upon theory of how the brain works, and not everyone in the field agreed that building a simulated brain was the best way to study it. It didn’t take long for those differences to arise in the EU project.
In 2014, hundreds of experts across Europe penned a letter citing concerns about oversight, funding mechanisms, and transparency in the Human Brain Project. The scientists felt Markram’s aim was premature and too narrow and would exclude funding for researchers who sought other ways to study the brain.
“What struck me was, if he was successful and turned it on and the simulated brain worked, what have you learned?” Terry Sejnowski, a computational neuroscientist at the Salk Institute who served on the advisory committee for the BRAIN Initiative, told me. “The simulation is just as complicated as the brain.”
The Human Brain Project’s board of directors voted to change its organization and leadership in early 2015, replacing a three-member executive committee led by Markram with a 22-member governing board. Christoph Ebell, a Swiss entrepreneur with a background in science diplomacy, was appointed executive director. “When I took over, the project was at a crisis point,” he says. “People were openly wondering if the project was going to go forward.”
But a few years later he was out too, after a “strategic disagreement” with the project’s host institution. The project is now focused on providing a new computational research infrastructure to help neuroscientists store, process, and analyze large amounts of data—unsystematic data collection has been an issue for the field—and develop 3D brain atlases and software for creating simulations.
The US BRAIN Initiative, meanwhile, underwent its own changes. Early on, in 2014, responding to the concerns of scientists and acknowledging the limits of what was possible, it evolved into something more pragmatic, focusing on developing technologies to probe the brain.
New day
Those changes have finally started to produce results—even if they weren’t the ones that the founders of each of the large brain projects had originally envisaged.
Earlier this year Alipasha Vaziri, a neuroscientist funded by the BRAIN Initiative, and his team at Rockefeller University reported in a preprint paper that they’d simultaneously recorded the activity of more than a million neurons across the mouse cortex. It’s the largest recording of animal cortical activity yet made, if far from listening to all 86 billion neurons in the human brain as the original Brain Activity Map hoped.
The US effort has also shown some progress in its attempt to build new tools to study the brain. It has speeded the development of optogenetics, an approach that uses light to control neurons, and its funding has led to new high-density silicon electrodes capable of recording from hundreds of neurons simultaneously. And it has arguably accelerated the development of single-cell sequencing. In September, researchers using these advances will publish a detailed classification of cell types in the mouse and human motor cortexes—the biggest single output from the BRAIN Initiative to date.
While these are all important steps forward, though, they’re far from the initial grand ambitions.
Lasting legacy
We are now heading into the last phase of these projects—the EU effort will conclude in 2023, while the US initiative is expected to have funding through 2026. What happens in these next years will determine just how much impact they’ll have on the field of neuroscience.
When I asked Ebell what he sees as the biggest accomplishment of the Human Brain Project, he didn’t name any one scientific achievement. Instead, he pointed to EBRAINS, a platform launched in April of this year to help neuroscientists work with neurological data, perform modeling, and simulate brain function. It offers researchers a wide range of data and connects many of the most advanced European lab facilities, supercomputing centers, clinics, and technology hubs in one system.
“If you ask me ‘Are you happy with how it turned out?’ I would say yes,” Ebell said. “Has it led to the breakthroughs that some have expected in terms of gaining a completely new understanding of the brain? Perhaps not.”
Katrin Amunts, a neuroscientist at the University of Düsseldorf, who has been the Human Brain Project’s scientific research director since 2016, says that while Markram’s dream of simulating the human brain hasn’t been realized yet, it is getting closer. “We will use the last three years to make such simulations happen,” she says. But it won’t be a big, single model—instead, several simulation approaches will be needed to understand the brain in all its complexity.
Meanwhile, the BRAIN Initiative has provided more than 900 grants to researchers so far, totaling around $2 billion. The National Institutes of Health is projected to spend nearly $6 billion on the project by the time it concludes.
For the final phase of the BRAIN Initiative, scientists will attempt to understand how brain circuits work by diagramming connected neurons. But claims for what can be achieved are far more restrained than in the project’s early days. The researchers now realize that understanding the brain will be an ongoing task—it’s not something that can be finalized by a project’s deadline, even if that project meets its specific goals.
“With a brand-new tool or a fabulous new microscope, you know when you’ve got it. If you’re talking about understanding how a piece of the brain works or how the brain actually does a task, it’s much more difficult to know what success is,” says Eve Marder, a neuroscientist at Brandeis University. “And success for one person would be just the beginning of the story for another person.”
Yuste and his colleagues were right that new tools and techniques would be needed to study the brain in a more meaningful way. Now, scientists will have to figure out how to use them. But instead of answering the question of consciousness, developing these methods has, if anything, only opened up more questions about the brain—and shown just how complex it is.
“I have to be honest,” says Yuste. “We had higher hopes.”
Emily Mullin is a freelance journalist based in Pittsburgh who focuses on biotechnology.
Tel Aviv University archaeologists Miki Ben-Dor and Ran Barkai proffer novel hypothesis, showing how the greed of Homo erectus set us careening down an anomalous evolutionary path
Why the human brain evolved as it did never has been plausibly explained. Apparently, not since the first life-form billions of years ago did a single species gain dominance over all others – until we came along. Now, in a groundbreaking paper, two Israeli researchers propose that our anomalous evolution was propelled by the very mass extinctions we helped cause. Or: As we sawed off the culinary branches from which we swung, we had to get ever more inventive in order to survive.
As ambling, slow-to-reproduce large animals diminished and gradually went extinct, we were forced to resort to smaller, nimbler animals that flee as a strategy to escape predation. To catch them, we had to get smarter, nimbler and faster, according to the universal theory of human evolution proposed by researchers Miki Ben-Dor and Prof. Ran Barkai of Tel Aviv University, in a paper published in the journal Quaternary.
In fact, the great African megafauna began to decline about 4.6 million years ago. But our story begins with Homo habilis, which lived about 2.6 million years ago and apparently used crude stone tools to help it eat flesh, and with Homo erectus, which thronged Africa and expanded to Eurasia about 2 million years ago. The thing is, erectus wasn’t an omnivore: it was a carnivore, Ben-Dor explains to Haaretz.
“Eighty percent of mammals are omnivores but still specialize in a narrow food range. If anything, it seems Homo erectus was a hyper-carnivore,” he observes.
And in the last couple of million years, our brains grew threefold, reaching a maximum cranial capacity of about 1,500 cubic centimeters roughly 300,000 years ago. We also gradually but consistently ramped up in technology and culture – until the Neolithic revolution and the advent of the sedentary lifestyle, when our brains shrank to about 1,400 to 1,300 cc, but more on that anomaly later.
The hypothesis suggested by Ben-Dor and Barkai – that we ate our way to our present physical, cultural and ecological state – is an original unifying explanation for the behavioral, physiological and cultural evolution of the human species.
Out of chaos
Evolution is chaotic. Charles Darwin came up with the theory of the survival of the fittest, and nobody has a better suggestion yet, but mutations aren’t “planned.” Bodies aren’t “designed,” if we leave genetic engineering out of it. The point is, evolution isn’t linear but chaotic, and that should theoretically apply to humans too.
Hence, it is strange that certain changes in the course of millions of years of human history, including the expansion of our brain, tool-manufacturing techniques and the use of fire, were uncharacteristically progressive, say Ben-Dor and Barkai.
“Uncharacteristically progressive” means that certain traits such as brain size, or cultural developments such as fire usage, evolved in one direction over a long time, in the direction of escalation. That isn’t what chaos is expected to produce over vast spans of time, Barkai explains to Haaretz: it is bizarre. Very few parameters behave like that.
So their discovery of a correlation between the shrinking average weight of African animals, the extinction of megafauna and the growth of the human brain is intriguing.
From mammoth marrow to joint of rat
To be clear, just this month a new paper posited that the late Quaternary extinction of megafauna, in the last few tens of thousands of years, wasn’t entirely the fault of humanity. In North America specifically, it was due primarily to climate change, with the late-arriving humans apparently providing the coup de grâce to some species.
In the Old World, however, a human role is clearer. African megafauna apparently began to decline 4.6 million years ago, but during the Pleistocene (2.6 million to 11,600 years ago) the size of African animals trended sharply down, in what the authors term an abrupt reversal from a continuous growth trend of 65 million years (i.e., since the dinosaurs almost died out).
When Homo erectus the carnivore began to roam Africa around 2 million years ago, land mammals averaged nearly 500 kilograms. Barkai’s team and others have demonstrated that hominins ate elephants and other large animals when they could. In fact, Africa originally had six elephant species (today there are two: the bush elephant and the forest elephant). By the end of the Pleistocene, by which time all hominins other than modern humans were also extinct, the average weight of African land mammals had shrunk by more than 90 percent.
And during the Pleistocene, as the African animals shrank, the Homo genus grew taller and more gracile, and our stone tool technology improved (which in no way diminished our affection for archaic implements like the hand ax or chopper, both of which remained in use for more than a million years, even as more sophisticated technologies were developed).
If we started some 3.3 million years ago with large, crude stone hammers that may have been used to bang big animals on the head or break bones to get at the marrow, over the epochs we invented the spear for killing at a distance. By about 80,000 years ago the bow and arrow was making its appearance, better suited to bringing down small fry such as small deer and birds. Over a million years ago we began to use fire, and later achieved better control of it – meaning the ability to ignite it at will. Later still we domesticated the dog from the wolf, and it would help us hunt smaller, fleeter animals.
Why did the earliest humans hunt large animals anyway? Wouldn’t a peeved elephant be more dangerous than a rat? Arguably, but catching one elephant is easier than catching a large number of rats. And megafauna had more fat.
A modern human can only derive up to about 50 percent of calories from lean meat (protein): past a certain point, our livers can’t digest more protein. We need energy from carbs or fat, but before developing agriculture about 10,000 years ago, a key source of calories had to be animal fat.
Big animals have a lot of fat. Small animals don’t. In Africa and Europe, and in Israel too, the researchers found that a significant decline in the prevalence of animals weighing over 200 kilograms correlated with an increase in the volume of the human brain. Thus, Ben-Dor and Barkai deduce that the declining availability of large prey seems to have been a key selective pressure from Homo erectus onward. Catching one elephant is more efficient than catching 1,000 rabbits, but if we must catch 1,000 rabbits, improved cunning, planning and tools are in order.
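As a rough illustration of that protein ceiling, the arithmetic looks something like the sketch below (a back-of-envelope calculation only; the 2,500 kcal daily requirement and the standard energy densities of protein and fat are textbook approximations, not figures from Ben-Dor and Barkai’s paper):

```python
# Back-of-envelope sketch of the "protein ceiling" argument.
# Assumed values (not from the paper): ~2,500 kcal daily requirement,
# ~4 kcal per gram of protein, ~9 kcal per gram of fat.

DAILY_KCAL = 2500          # assumed daily energy requirement
PROTEIN_CAP = 0.50         # at most ~50% of calories from protein (per the article)
KCAL_PER_G_PROTEIN = 4
KCAL_PER_G_FAT = 9

max_protein_kcal = DAILY_KCAL * PROTEIN_CAP
fat_or_carb_kcal = DAILY_KCAL - max_protein_kcal   # must come from fat (or, much later, carbs)

print(f"Max calories from lean meat: {max_protein_kcal:.0f} kcal "
      f"(~{max_protein_kcal / KCAL_PER_G_PROTEIN:.0f} g of protein)")
print(f"Calories still needed from fat: {fat_or_carb_kcal:.0f} kcal "
      f"(~{fat_or_carb_kcal / KCAL_PER_G_FAT:.0f} g of fat per day)")
```

On those assumed numbers, a hunter would need well over a hundred grams of fat a day on top of the lean meat – far easier to obtain from one fat-rich elephant than from a pile of lean rabbits.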
Say it with fat
Our changing hunting habits would have had cultural impacts too, Ben-Dor and Barkai posit. “Cultural evolution in archaeology usually refers to objects, such as stone tools,” Ben-Dor tells Haaretz. But cultural evolution also refers to learned behavior, such as our choice of which animals to hunt, and how.
Thus, they posit, our hunting conundrum may also have been a key driver of that enigmatic human characteristic: complex language. When language began – and with which ancestor of Homo sapiens, if any before us – is hotly debated.
Ben-Dor, an economist by training prior to obtaining a Ph.D. in archaeology, believes it began early. “We just need to follow the money. When speaking of evolution, one must follow the energy. Language is energetically costly. Speaking requires devotion of part of the brain, which is costly. Our brain consumes huge amounts of energy. It’s an investment, and language has to produce enough benefit to make it worthwhile. What did language bring us? It had to be more energetically efficient hunting.”
Domestication of the dog also requires resources and, therefore, also had to bring sufficient compensation in the form of more efficient hunting of smaller animals, he points out. That may help explain the fact that Neolithic humans not only embraced the dog but ate it too, going by archaeological evidence of butchered dogs.
At the end of the day, wherever we went, humans devastated the local ecologies, given enough time.
There is a lot of thinking about the Neolithic agricultural revolution. Some think grain farming was driven by the desire to make beer; given residue analysis indicating that beer has been around for over 10,000 years, that theory isn’t as far-fetched as one might think. Ben-Dor and Barkai suggest that, with the megafauna almost entirely gone, hunting what remained became too energy-costly, so we had to use our large brains to develop agriculture – growing our own food and husbanding herbivores.
And as the hunter-gathering lifestyle gave way to permanent settlement, our brain size decreased.
Note, Ben-Dor adds, that the brains of wolves, which have to hunt to survive, are larger than the brains of domesticated wolves – that is, dogs. We did promise more on the shrinking-brain anomaly; that was it. Also: the chimpanzee brain has remained stable for 7 million years, since the split with the Homo line, Barkai points out.
“Why does any of this matter?” Ben-Dor asks. “People think humans reached this condition because it was ‘meant to be.’ But in the Earth’s 4.5 billion years, there have been billions of species. They rose and fell. What’s the probability that we would take over the world? It’s an accident of nature. It never happened before that one species achieved dominance over all, and now it’s all over. How did that happen? This is the answer: A non-carnivore entered the niche of carnivore, and ate out its niche. We can’t eat that much protein: we need fat too. Because we needed the fat, we began with the big animals. We hunted the prime adult animals which have more fat than the kiddies and the old. We wiped out the prime adults who were crucial to survival of species. Because of our need for fat, we wiped out the animals we depended on. And this required us to keep getting smarter and smarter, and thus we took over the world.”
Humans are the only ultrasocial creature on the planet. We have outcompeted, interbred or even killed off all other hominin species. We cohabit in cities of tens of millions of people and, despite what the media tell us, violence between individuals is extremely rare. This is because we have an extremely large, flexible and complex “social brain”.
To truly understand how the brain maintains our human intellect, we would need to know about the state of all 86 billion neurons and their 100 trillion interconnections, as well as the varying strengths with which they are connected, and the state of more than 1,000 proteins that exist at each connection point. Neurobiologist Steven Rose suggests that even this is not enough – we would still need to know how these connections have evolved over a person’s lifetime and even the social context in which they had occurred. It may take centuries just to figure out basic neuronal connectivity.
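To get a feel for the scale Rose is pointing at, here is a deliberately crude estimate of what even a single static snapshot of those numbers would involve (a sketch only; the four-bytes-per-value storage assumption is mine, not Rose’s):

```python
# Crude storage estimate for one static snapshot of the quantities quoted above.
# Assumption (for illustration only): each quantity is stored as a 4-byte number.

NEURONS = 86e9               # neurons, from the article
SYNAPSES = 100e12            # interconnections, from the article
PROTEINS_PER_SYNAPSE = 1000  # proteins at each connection point, from the article
BYTES_PER_VALUE = 4          # assumed

# One value per neuron state, plus one strength and ~1,000 protein states per synapse.
values = NEURONS + SYNAPSES * (1 + PROTEINS_PER_SYNAPSE)
petabytes = values * BYTES_PER_VALUE / 1e15

print(f"~{values:.1e} values, roughly {petabytes:.0f} petabytes for a single snapshot")
```

And that snapshot says nothing about how the connections change over a lifetime, or about the social context Rose insists on, which is the real point of his objection.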
Many people assume that our brain operates like a powerful computer. But Robert Epstein, a psychologist at the American Institute for Behavioural Research and Technology, says this is just shoddy thinking and is holding back our understanding of the human brain. Because, while humans start with senses, reflexes and learning mechanisms, we are not born with any of the information, rules, algorithms or other key design elements that allow computers to behave somewhat intelligently. For instance, computers store exact copies of data that persist for long periods of time, even when the power is switched off. Our brains, meanwhile, are capable of creating false data or false memories, and they only maintain our intellect as long as we remain alive.
We are organisms, not computers
Of course, we can see many advantages in having a large brain. In my recent book on human evolution, I suggest that, first, it allows humans to exist in groups of about 150. This builds resilience to environmental change by increasing and diversifying food production and sharing.
As our ancestors got smarter, they became capable of living in larger and larger groups. Credit: Mark Maslin, Author provided
A social brain also allows specialisation of skills so individuals can concentrate on supporting childbirth, tool-making, fire setting, hunting or resource allocation. Humans have no natural weapons, but working in large groups and having tools allowed us to become the apex predator, hunting animals as large as mammoths to extinction.
Our social groups are large and complex, but this creates high stress levels for individuals because the rewards in terms of food, safety and reproduction are so great. Hence, Oxford anthropologist Robin Dunbar argues our huge brain is primarily developed to keep track of rapidly changing relationships. It takes a huge amount of cognitive ability to exist in large social groups, and if you fall out of the group you lose access to food and mates and are unlikely to reproduce and pass on your genes.
Great. But what about your soap opera knowledge? Credit: ronstik/Shutterstock
My undergraduates come to university thinking they are extremely smart because they can do differential equations and understand the use of split infinitives. But I point out to them that almost anyone walking down the street has the capacity to hold the moral and ethical dilemmas of at least five soap operas in their head at any one time. And that is what being smart really means. It is the detailed knowledge of society, and the need to track and control the ever-changing relationships between the people around us, that has created our huge, complex brain.
It seems our brains could be even more flexible than we previously thought. Recent genetic evidence suggests the modern human brain is more malleable, and is shaped more by the surrounding environment, than that of chimpanzees. The anatomy of the chimpanzee brain is strongly controlled by its genes, whereas the modern human brain is extensively shaped by the environment, whatever the genetics.
This means the human brain is pre-programmed to be extremely flexible; its cerebral organisation is adjusted by the environment and society in which it is raised. So each new generation’s brain structure can adapt to the new environmental and social challenges without the need to physically evolve.
Evolution at work. Credit: OtmarW/Shutterstock
This may also explain why we all complain that we do not understand the next generation: their brains are wired differently, having grown up in a different physical and social environment. An example is the ease with which the latest generation interacts with technology, almost as if they had co-evolved with it.
So next time you turn on a computer, just remember how big and complex your brain is – and that it got that way to keep track of your friends and enemies.
Summary: We assume that we can see the world around us in sharp detail. In fact, our eyes can only process a fraction of our surroundings precisely. In a series of experiments, psychologists have been investigating how the brain fools us into believing that we see in sharp detail.
The thumbnail at the end of an outstretched arm: This is the area that the eye actually can see in sharp detail. Researchers have investigated why the rest of the world also appears to be uniformly detailed. Credit: Bielefeld University
We assume that we can see the world around us in sharp detail. In fact, our eyes can only process a fraction of our surroundings precisely. In a series of experiments, psychologists at Bielefeld University have been investigating how the brain fools us into believing that we see in sharp detail. The results have been published in the Journal of Experimental Psychology: General. The central finding is that our nervous system uses past visual experiences to predict how blurred objects would look in sharp detail.
“In our study we are dealing with the question of why we believe that we see the world uniformly detailed,” says Dr. Arvid Herwig from the Neuro-Cognitive Psychology research group of the Faculty of Psychology and Sports Science. The group is also affiliated to the Cluster of Excellence Cognitive Interaction Technology (CITEC) of Bielefeld University and is led by Professor Dr. Werner X. Schneider.
Only the fovea, the central area of the retina, can process objects precisely. We should therefore only be able to see a small area of our environment in sharp detail – an area about the size of a thumbnail at the end of an outstretched arm. In contrast, all visual impressions that fall on the retina outside the fovea become progressively coarser. Nevertheless, we commonly have the impression that we see large parts of our environment in sharp detail.
Herwig and Schneider have been getting to the bottom of this phenomenon with a series of experiments. Their approach presumes that people learn through countless eye movements over a lifetime to connect the coarse impressions of objects outside the fovea to the detailed visual impressions after the eye has moved to the object of interest. For example, the coarse visual impression of a football (blurred image of a football) is connected to the detailed visual impression after the eye has moved. If a person sees a football out of the corner of her eye, her brain will compare this current blurred picture with memorised images of blurred objects. If the brain finds an image that fits, it will replace the coarse image with a precise image from memory. This blurred visual impression is replaced before the eye moves. The person thus thinks that she already sees the ball clearly, although this is not the case.
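One toy way to picture the proposed mechanism is a simple nearest-neighbour lookup from coarse peripheral impressions to previously stored detailed ones. This is a conceptual sketch only, not the authors’ model; the three-number “images” below are invented purely to make the example run.

```python
import numpy as np

# Conceptual sketch: match a blurred peripheral impression against stored
# (blurred template -> detailed impression) pairs learned from past eye movements.
# The feature vectors here are invented for illustration.
memory = {
    "football":   (np.array([0.9, 0.1, 0.2]), "detailed football image"),
    "face":       (np.array([0.2, 0.8, 0.3]), "detailed face image"),
    "coffee mug": (np.array([0.1, 0.2, 0.9]), "detailed mug image"),
}

def predict_detailed(blurred_input):
    """Return the stored detailed impression whose blurred template is nearest."""
    label, (template, detailed) = min(
        memory.items(),
        key=lambda item: np.linalg.norm(item[1][0] - blurred_input),
    )
    return label, detailed

label, prediction = predict_detailed(np.array([0.85, 0.15, 0.25]))
print(label, "->", prediction)   # football -> detailed football image
```

In this picture, the learning probed by the experiments below is the updating of those stored pairs after each eye movement.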
The psychologists have been using eye-tracking experiments to test their approach. In eye tracking, eye movements are measured accurately with a specialised camera that records 1,000 images per second. In their experiments, the scientists recorded the fast, ballistic eye movements (saccades) of test persons. Though most of the participants did not realise it, certain objects were changed during the eye movement. The aim was for the test persons to learn new connections between visual stimuli from inside and outside the fovea – in other words, between detailed and coarse impressions. Afterwards, the participants were asked to judge visual characteristics of objects outside the foveal area. The result: the connection between a coarse and a detailed visual impression was learned after just a few minutes, and the coarse visual impressions became similar to the newly learnt detailed impressions.
“The experiments show that our perception depends in large measure on stored visual experiences in our memory,” says Arvid Herwig. According to Herwig and Schneider, these experiences serve to predict the effect of future actions (“What would the world look like after a further eye movement”). In other words: “We do not see the actual world, but our predictions.”
Journal Reference:
Arvid Herwig, Werner X. Schneider. Predicting object features across saccades: Evidence from object recognition and visual search. Journal of Experimental Psychology: General, 2014; 143 (5): 1903 DOI: 10.1037/a0036781
Summary: Scientists in Cambridge have found hidden signatures in the brains of people in a vegetative state, which point to networks that could support consciousness even when a patient appears to be unconscious and unresponsive. The study could help doctors identify patients who are aware despite being unable to communicate.
These images show brain networks in two behaviorally similar vegetative patients (left and middle panels), only one of whom (middle panel) showed brain activity when asked to imagine playing tennis, alongside a healthy adult (right panel). Credit: Srivas Chennu
Scientists in Cambridge have found hidden signatures in the brains of people in a vegetative state, which point to networks that could support consciousness even when a patient appears to be unconscious and unresponsive. The study could help doctors identify patients who are aware despite being unable to communicate.
There has been a great deal of interest recently in how much patients in a vegetative state following severe brain injury are aware of their surroundings. Although unable to move and respond, some of these patients are able to carry out tasks such as imagining playing a game of tennis. Using a functional magnetic resonance imaging (fMRI) scanner, which measures brain activity, researchers have previously been able to record activity in the pre-motor cortex, the part of the brain which deals with movement, in apparently unconscious patients asked to imagine playing tennis.
Now, a team of researchers led by scientists at the University of Cambridge and the MRC Cognition and Brain Sciences Unit, Cambridge, have used high-density electroencephalographs (EEG) and a branch of mathematics known as ‘graph theory’ to study networks of activity in the brains of 32 patients diagnosed as vegetative and minimally conscious and compare them to healthy adults. The findings of the research are published today in the journal PLOS Computational Biology. The study was funded mainly by the Wellcome Trust, the National Institute of Health Research Cambridge Biomedical Research Centre and the Medical Research Council (MRC).
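To give a flavour of what “graph theory applied to EEG” means in practice, the sketch below builds a network from a channel-by-channel connectivity matrix (here just random numbers) and computes two standard graph measures. It is illustrative only and is not the pipeline used in the PLOS Computational Biology study.

```python
import numpy as np
import networkx as nx

# Illustrative only: a random symmetric matrix standing in for estimated
# coupling strengths between pairs of EEG channels in a real recording.
rng = np.random.default_rng(0)
n_channels = 32
conn = rng.random((n_channels, n_channels))
conn = (conn + conn.T) / 2        # make symmetric
np.fill_diagonal(conn, 0)         # no self-connections

# Keep only the strongest 10% of connections, then build a graph.
threshold = np.percentile(conn, 90)
adjacency = (conn >= threshold).astype(int)
G = nx.from_numpy_array(adjacency)

# Two common graph-theoretic summaries of how richly a network is organised.
print("mean clustering coefficient:", nx.average_clustering(G))
print("connected components:", nx.number_connected_components(G))
```

In the study’s framing, a “well-preserved” network is one whose summaries of this kind resemble those of healthy adults.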
The researchers showed that the rich and diversely connected networks that support awareness in the healthy brain are typically — but importantly, not always — impaired in patients in a vegetative state. Some vegetative patients had well-preserved brain networks that look similar to those of healthy adults — these patients were those who had shown signs of hidden awareness by following commands such as imagining playing tennis.
Dr Srivas Chennu from the Department of Clinical Neurosciences at the University of Cambridge says: “Understanding how consciousness arises from the interactions between networks of brain regions is an elusive but fascinating scientific question. But for patients diagnosed as vegetative and minimally conscious, and their families, this is far more than just an academic question — it takes on a very real significance. Our research could improve clinical assessment and help identify patients who might be covertly aware despite being uncommunicative.”
The findings could help researchers develop a relatively simple way of identifying which patients might be aware whilst in a vegetative state. Unlike the ‘tennis test’, which can be a difficult task for patients and requires expensive and often unavailable fMRI scanners, this new technique uses EEG and could therefore be administered at a patient’s bedside. However, the tennis test is stronger evidence that the patient is indeed conscious, to the extent that they can follow commands using their thoughts. The researchers believe that a combination of such tests could help improve accuracy in the prognosis for a patient.
Dr Tristan Bekinschtein from the MRC Cognition and Brain Sciences Unit and the Department of Psychology, University of Cambridge, adds: “Although there are limitations to how predictive our test would be if used in isolation, combined with other tests it could help in the clinical assessment of patients. If a patient’s ‘awareness’ networks are intact, then we know that they are likely to be aware of what is going on around them. But unfortunately, the findings also suggest that vegetative patients with severely impaired networks at rest are unlikely to show any signs of consciousness.”
Journal Reference:
Chennu S, Finoia P, Kamau E, Allanson J, Williams GB, et al. Spectral Signatures of Reorganised Brain Networks in Disorders of Consciousness. PLOS Computational Biology, 2014; 10 (10): e1003887 DOI: 10.1371/journal.pcbi.1003887
May 21, 2013 — A new model of the brain’s thought processes explains the apparently chaotic activity patterns of individual neurons. They do not correspond to a simple stimulus/response linkage, but arise from the networking of different neural circuits. Scientists funded by the Swiss National Science Foundation (SNSF) propose that the field of brain research should expand its focus.
A new model of the brain’s thought processes explains the apparently chaotic activity patterns of individual neurons. They do not correspond to a simple stimulus/response linkage, but arise from the networking of different neural circuits. (Credit: iStockphoto/Sebastian Kaulitzki)
Many brain researchers cannot see the forest for the trees. When they use electrodes to record the activity patterns of individual neurons, the patterns often appear chaotic and difficult to interpret. “But when you zoom out from looking at individual cells, and observe a large number of neurons instead, their global activity is very informative,” says Mattia Rigotti, a scientist at Columbia University and New York University who is supported by the SNSF and the Janggen-Pöhn-Stiftung. Publishing in Nature together with colleagues from the United States, he has shown that these difficult-to-interpret patterns in particular are especially important for complex brain functions.
What goes on in the heads of monkeys
The researchers have focussed their attention on the activity patterns of 237 neurons that had been recorded some years previously using electrodes implanted in the frontal lobes of two rhesus monkeys. At that time, the monkeys had been taught to recognise images of different objects on a screen. Around one third of the observed neurons demonstrated activity that Rigotti describes as “mixed selectivity.” A mixed selective neuron does not always respond to the same stimulus (the flowers or the sailing boat on the screen) in the same way. Rather, its response differs as it also takes account of the activity of other neurons. The cell adapts its response according to what else is going on in the monkey’s brain.
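The distinction can be pictured with a toy pair of firing-rate functions (the numbers are invented purely for illustration, not data from the study): a “purely selective” cell responds to one task variable alone, while a “mixed selective” cell responds to a nonlinear combination of variables, so its answer to the very same image depends on the rest of the task context.

```python
import numpy as np

# Two task variables: which image is on screen (0 or 1) and the task phase (0 or 1).
images = np.array([0, 0, 1, 1])
phases = np.array([0, 1, 0, 1])

# A "purely selective" neuron responds to the image alone.
pure = 5 + 10 * images

# A "mixed selective" neuron responds to a nonlinear combination of both variables,
# so its response to the same image changes with everything else going on.
mixed = 5 + 10 * images + 8 * phases + 12 * images * phases

for img, ph, p, m in zip(images, phases, pure, mixed):
    print(f"image={img} phase={ph}  pure={int(p)} Hz  mixed={int(m)} Hz")
```

Looked at one variable at a time, the mixed cell’s responses appear erratic; the structure only emerges when the other variables are taken into account, which is the article’s point about “chaotic” single-cell recordings.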
Chaotic patterns revealed in context
Just as individual computers are networked in cloud computing to pool processing and storage capacity, networked links between neurons play a key role in the complex cognitive processes that take place in the prefrontal cortex. The denser the network in the brain – in other words, the greater the proportion of mixed selectivity in the neurons’ activity patterns – the better the monkeys were able to recall the images on the screen, as Rigotti’s analysis demonstrated. Given that the brain and cognitive capabilities of rhesus monkeys are similar to those of humans, mixed selective neurons should also be important in our own brains. For Rigotti, this is reason enough for brain research to stop settling for the simple activity patterns alone and to also consider the apparently chaotic patterns whose meaning is revealed only in context.
Journal Reference:
Mattia Rigotti, Omri Barak, Melissa R. Warden, Xiao-Jing Wang, Nathaniel D. Daw, Earl K. Miller, Stefano Fusi. The importance of mixed selectivity in complex cognitive tasks. Nature, 2013; DOI: 10.1038/nature12160
Apr. 8, 2013 — A team of researchers led by Associate Professor Maria Kozhevnikov from the Department of Psychology at the National University of Singapore (NUS) Faculty of Arts and Social Sciences showed, for the first time, that it is possible for core body temperature to be controlled by the brain. The scientists found that core body temperature increases can be achieved using certain meditation techniques (g-tummo) which could help in boosting immunity to fight infectious diseases or immunodeficiency.
Published in the science journal PLOS ONE in March 2013, the study documented, for the first time, reliable core body temperature increases in Tibetan nuns practising g-tummo meditation. Previous studies of g-tummo meditators showed only increases in peripheral body temperature, in the fingers and toes. The g-tummo meditative practice controls “inner energy” and is considered by Tibetan practitioners to be one of the most sacred spiritual practices in the region. Monasteries maintaining g-tummo traditions are very rare and are mostly located in the remote areas of eastern Tibet.
The researchers collected data during a unique ceremony in Tibet, where nuns were able to raise their core body temperature and dry wet sheets wrapped around their bodies in the cold Himalayan weather (-25 degrees Celsius) while meditating. Using electroencephalography (EEG) recordings and temperature measures, the team observed core body temperature rise to as high as 38.3 degrees Celsius. A second study was conducted with Western participants who used a breathing technique from the g-tummo meditative practice; they, too, were able to increase their core body temperature, within limits.
Applications of the research findings
The findings from the study showed that specific aspects of the meditation techniques can be used by non-meditators to regulate their body temperature through breathing and mental imagery. The techniques could potentially allow practitioners to adapt to and function in cold environments, improve resistance to infections, boost cognitive performance by speeding up response time and reduce performance problems associated with decreased body temperature.
The two aspects of g-tummo meditation that lead to temperature increases are “vase breath” and concentrative visualisation. “Vase breath” is a specific breathing technique which causes thermogenesis, which is a process of heat production. The other technique, concentrative visualisation, involves focusing on a mental imagery of flames along the spinal cord in order to prevent heat losses. Both techniques work in conjunction leading to elevated temperatures up to the moderate fever zone.
Assoc Prof Kozhevnikov explained, “Practicing vase breathing alone is a safe technique to regulate core body temperature in a normal range. The participants whom I taught this technique to were able to elevate their body temperature, within limits, and reported feeling more energised and focused. With further research, non-Tibetan meditators could use vase breathing to improve their health and regulate cognitive performance.”
Further research into controlling body temperature
Assoc Prof Kozhevnikov will continue to explore the effects of guided imagery on neurocognitive and physiological aspects. She is currently training a group of people to regulate their body temperature using vase breathing, which has potential applications in the field of medicine. Furthermore, the use of guided mental imagery in conjunction with vase breathing may lead to higher body temperature increases and better health.
Journal Reference:
Maria Kozhevnikov, James Elliott, Jennifer Shephard, Klaus Gramann. Neurocognitive and Somatic Components of Temperature Increases during g-Tummo Meditation: Legend and Reality. PLoS ONE, 2013; 8 (3): e58244 DOI: 10.1371/journal.pone.0058244
Feb. 13, 2013 — A team of political scientists and neuroscientists has shown that liberals and conservatives use different parts of the brain when they make risky decisions, and these regions can be used to predict which political party a person prefers. The new study suggests that while genetics or parental influence may play a significant role, being a Republican or Democrat changes how the brain functions.
Republicans and Democrats differ in the neural mechanisms activated while performing a risk-taking task. Republicans more strongly activate their right amygdala, associated with orienting attention to external cues. Democrats have higher activity in their left posterior insula, associated with perceptions of internal physiological states. This activation also borders the temporal-parietal junction, and therefore may reflect a difference in internal physiological drive as well as the perception of the internal state and drive of others. (Credit: From: Darren Schreiber, Greg Fonzo, Alan N. Simmons, Christopher T. Dawes, Taru Flagan, James H. Fowler, Martin P. Paulus. Red Brain, Blue Brain: Evaluative Processes Differ in Democrats and Republicans. PLoS ONE, 2013; 8 (2): e52970 DOI: 10.1371/journal.pone.0052970)
Dr. Darren Schreiber, a researcher in neuropolitics at the University of Exeter, has been working in collaboration with colleagues at the University of California, San Diego on research that explores the differences in the way the brain functions in American liberals and conservatives. The findings are published Feb. 13 in the journal PLOS ONE.
In a prior experiment, participants had their brain activity measured as they played a simple gambling game. Dr. Schreiber and his UC San Diego collaborators were able to look up the political party registration of the participants in public records. Using this new analysis of 82 people who performed the gambling task, the academics showed that Republicans and Democrats do not differ in the risks they take. However, there were striking differences in the participants’ brain activity during the risk-taking task.
Democrats showed significantly greater activity in the left insula, a region associated with social and self-awareness. Meanwhile Republicans showed significantly greater activity in the right amygdala, a region involved in the body’s fight-or-flight system. These results suggest that liberals and conservatives engage different cognitive processes when they think about risk.
In fact, brain activity in these two regions alone can be used to predict whether a person is a Democrat or Republican with 82.9% accuracy. By comparison, the longstanding traditional model in political science, which uses the party affiliation of a person’s mother and father to predict the child’s affiliation, is only accurate about 69.5% of the time. And another model based on the differences in brain structure distinguishes liberals from conservatives with only 71.6% accuracy.
The model also outperforms models based on differences in genes. Dr. Schreiber said: “Although genetics have been shown to contribute to differences in political ideology and strength of party politics, the portion of variation in political affiliation explained by activity in the amygdala and insula is significantly larger, suggesting that affiliating with a political party and engaging in a partisan environment may alter the brain, above and beyond the effect of heredity.”
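Predicting party registration from activity in two regions amounts to a simple two-feature classifier. The sketch below uses synthetic numbers and an off-the-shelf logistic regression from scikit-learn purely to show the shape of such an analysis; it is not the model, the features, or the data from the PLOS ONE paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in features for 82 participants:
# column 0 ~ right amygdala activity, column 1 ~ left posterior insula activity.
rng = np.random.default_rng(42)
n = 82
party = rng.integers(0, 2, size=n)                  # 0 = Democrat, 1 = Republican
amygdala = party + rng.normal(0, 0.8, size=n)       # higher, on average, for label 1
insula = (1 - party) + rng.normal(0, 0.8, size=n)   # higher, on average, for label 0
X = np.column_stack([amygdala, insula])

# Cross-validated accuracy of the two-feature classifier.
clf = LogisticRegression()
accuracy = cross_val_score(clf, X, party, cv=5).mean()
print(f"cross-validated accuracy on synthetic data: {accuracy:.1%}")
```

The comparison in the article – 82.9 percent versus 69.5 percent for the parental-affiliation model – is this kind of head-to-head between predictive models built on different inputs.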
These results may pave the way for new research on voter behaviour, yielding better understanding of the differences in how liberals and conservatives think. According to Dr. Schreiber: “The ability to accurately predict party politics using only brain activity while gambling suggests that investigating basic neural differences between voters may provide us with more powerful insights than the traditional tools of political science.”
Journal Reference:
Darren Schreiber, Greg Fonzo, Alan N. Simmons, Christopher T. Dawes, Taru Flagan, James H. Fowler, Martin P. Paulus. Red Brain, Blue Brain: Evaluative Processes Differ in Democrats and Republicans. PLoS ONE, 2013; 8 (2): e52970 DOI: 10.1371/journal.pone.0052970
NEW ORLEANS, LOUISIANA — Books and educational toys can make a child smarter, but they also influence how the brain grows, according to new research presented here on Sunday at the annual meeting of the Society for Neuroscience. The findings point to a “sensitive period” early in life during which the developing brain is strongly influenced by environmental factors.
Studies comparing identical and nonidentical twins show that genes play an important role in the development of the cerebral cortex, the thin, folded structure that supports higher mental functions. But less is known about how early life experiences influence how the cortex grows.
To investigate, neuroscientist Martha Farah of the University of Pennsylvania and her colleagues recruited 64 children from a low-income background and followed them from birth through to late adolescence. They visited the children’s homes at 4 and 8 years of age to evaluate their environment, noting factors such as the number of books and educational toys in their houses, and how much warmth and support they received from their parents.
More than 10 years after the second home visit, the researchers used MRI to obtain detailed images of the participants’ brains. They found that the level of mental stimulation a child receives in the home at age 4 predicted the thickness of two regions of the cortex in late adolescence, such that more stimulation was associated with a thinner cortex. One region, the lateral inferior temporal gyrus, is involved in complex visual skills such as word recognition.
Home environment at age 8 had a smaller impact on development of these brain regions, whereas other factors, such as the mother’s intelligence and the degree and quality of her care, had no such effect.
Previous work has shown that adverse experiences, such as childhood neglect, abuse, and poverty, can stunt the growth of the brain. The new findings highlight the sensitivity of the growing brain to environmental factors, Farah says, and provide strong evidence that subtle variations in early life experience can affect the brain throughout life.
As the brain develops, it produces more synapses, or neuronal connections, than are needed, she explains. Underused connections are later eliminated, and this elimination process, called synaptic pruning, is highly dependent upon experience. The findings suggest that mental stimulation in early life increases the extent to which synaptic pruning occurs in the lateral temporal lobe. Synaptic pruning reduces the volume of tissue in the cortex. This makes the cortex thinner, but it also makes information processing more efficient.
“This is a first look at how nurture influences brain structure later in life,” Farah reported at the meeting. “As with all observational studies, we can’t really speak about causality, but it seems likely that cognitive stimulation experienced early in life led to changes in cortical thickness.”
She adds, however, that the research is still in its infancy, and that more work is needed to gain a better understanding of exactly how early life experiences impact brain structure and function.
The findings add to the growing body of evidence that early life is a period of “extreme vulnerability,” says psychiatrist Jay Giedd, head of the brain imaging unit in the Child Psychiatry Branch at the National Institute of Mental Health in Bethesda, Maryland. But early life, he says, also offers a window of opportunity during which the effects of adversity can be offset. Parents can help young children develop their cognitive skills by providing a stimulating environment.
ScienceDaily (Aug. 3, 2012) — In the cognitive sciences, the capacity to interpret the intentions of others is called “Theory of Mind” (ToM). This faculty is involved in the understanding of language, in particular by bridging the gap between the meaning of the words that make up a statement and the meaning of the statement as a whole.
In recent years, researchers have identified the neural network dedicated to ToM, but no one had yet demonstrated that this set of neurons is specifically activated by the process of understanding of an utterance. This has now been accomplished: a team from L2C2 (Laboratoire sur le Langage, le Cerveau et la Cognition, Laboratory on Language, the Brain and Cognition, CNRS / Université Claude Bernard-Lyon 1) has shown that the activation of the ToM neural network increases when an individual is reacting to ironic statements.
Published in Neuroimage, these findings represent an important breakthrough in the study of Theory of Mind and linguistics, shedding light on the mechanisms involved in interpersonal communication.
In our communications with others, we are constantly thinking beyond the basic meaning of words. For example, if asked, “Do you have the time?” one would not simply reply, “Yes.” The gap between what is said and what it means is the focus of a branch of linguistics called pragmatics. In this science, “Theory of Mind” (ToM) gives listeners the capacity to fill this gap. In order to decipher the meaning and intentions hidden behind what is said, even in the most casual conversation, ToM relies on a variety of verbal and non-verbal elements: the words used, their context, intonation, “body language,” etc.
Within the past 10 years, researchers in cognitive neuroscience have identified a neural network dedicated to ToM that includes specific areas of the brain: the right and left temporal parietal junctions, the medial prefrontal cortex and the precuneus. To identify this network, the researchers relied primarily on non-verbal tasks based on the observation of others’ behavior[1]. Today, researchers at L2C2 (Laboratoire sur le Langage, le Cerveau et la Cognition, Laboratory on Language, the Brain and Cognition, CNRS / Université Claude Bernard-Lyon 1) have established, for the first time, the link between this neural network and the processing of implicit meanings.
To identify this link, the team focused their attention on irony. An ironic statement usually means the opposite of what is said. In order to detect irony in a statement, the mechanisms of ToM must be brought into play. In their experiment, the researchers prepared 20 short narratives in two versions, one literal and one ironic. Each story contained a key sentence that, depending on the version, yielded an ironic or literal meaning. For example, in one of the stories an opera singer exclaims after a premiere, “Tonight we gave a superb performance.” Depending on whether the performance was in fact very bad or very good, the statement is or is not ironic.
The team then carried out functional magnetic resonance imaging (fMRI) analyses on 20 participants who were asked to read 18 of the stories, chosen at random, in either their ironic or literal version. The participants were not aware that the test concerned the perception of irony. The researchers had predicted that the participants’ ToM neural networks would show increased activity in reaction to the ironic sentences, and that was precisely what they observed: as each key sentence was read, the network activity was greater when the statement was ironic. This shows that this network is directly involved in the processes of understanding irony, and, more generally, in the comprehension of language.
Next, the L2C2 researchers hope to expand their research on the ToM network in order to determine, for example, whether test participants would be able to perceive irony if this network were artificially inactivated.
Note:
[1] For example, Grèzes, Frith & Passingham (J. Neuroscience, 2004) showed a series of short (3.5 second) films in which actors came into a room and lifted boxes. Some of the actors were instructed to act as though the boxes were heavier (or lighter) than they actually were. Having thus set up deceptive situations, the experimenters asked the participants to determine if they had or had not been deceived by the actors in the films. The films containing feigned actions elicited increased activity in the rTPJ (right temporal parietal junction) compared with those containing unfeigned actions.
Journal Reference:
Nicola Spotorno, Eric Koun, Jérôme Prado, Jean-Baptiste Van Der Henst, Ira A. Noveck. Neural evidence that utterance-processing entails mentalizing: The case of irony. NeuroImage, 2012; 63 (1): 25 DOI: 10.1016/j.neuroimage.2012.06.046
ScienceDaily (Aug. 1, 2012) — When it comes to intelligence, what factors distinguish the brains of exceptionally smart humans from those of average humans?
New research suggests as much as 10 percent of individual variances in human intelligence can be predicted based on the strength of neural connections between the lateral prefrontal cortex and other regions of the brain. (Credit: WUSTL Image / Michael Cole)
As science has long suspected, overall brain size matters somewhat, accounting for about 6.7 percent of individual variation in intelligence. More recent research has pinpointed the brain’s lateral prefrontal cortex, a region just behind the temple, as a critical hub for high-level mental processing, with activity levels there predicting another 5 percent of variation in individual intelligence.
Now, new research from Washington University in St. Louis suggests that another 10 percent of individual differences in intelligence can be explained by the strength of neural pathways connecting the left lateral prefrontal cortex to the rest of the brain.
Published in the Journal of Neuroscience, the findings establish “global brain connectivity” as a new approach for understanding human intelligence.
“Our research shows that connectivity with a particular part of the prefrontal cortex can predict how intelligent someone is,” suggests lead author Michael W. Cole, PhD, a postdoctoral research fellow in cognitive neuroscience at Washington University.
The study is the first to provide compelling evidence that neural connections between the lateral prefrontal cortex and the rest of the brain make a unique and powerful contribution to the cognitive processing underlying human intelligence, says Cole, whose research focuses on discovering the cognitive and neural mechanisms that make human behavior uniquely flexible and intelligent.
“This study suggests that part of what it means to be intelligent is having a lateral prefrontal cortex that does its job well; and part of what that means is that it can effectively communicate with the rest of the brain,” says study co-author Todd Braver, PhD, professor of psychology in Arts & Sciences and of neuroscience and radiology in the School of Medicine. Braver is a co-director of the Cognitive Control and Psychopathology Lab at Washington University, in which the research was conducted.
One possible explanation of the findings, the research team suggests, is that the lateral prefrontal region is a “flexible hub” that uses its extensive brain-wide connectivity to monitor and influence other brain regions in a goal-directed manner.
“There is evidence that the lateral prefrontal cortex is the brain region that ‘remembers’ (maintains) the goals and instructions that help you keep doing what is needed when you’re working on a task,” Cole says. “So it makes sense that having this region communicating effectively with other regions (the ‘perceivers’ and ‘doers’ of the brain) would help you to accomplish tasks intelligently.”
While other regions of the brain make their own special contribution to cognitive processing, it is the lateral prefrontal cortex that helps coordinate these processes and maintain focus on the task at hand, in much the same way that the conductor of a symphony monitors and tweaks the real-time performance of an orchestra.
“We’re suggesting that the lateral prefrontal cortex functions like a feedback control system that is used often in engineering, that it helps implement cognitive control (which supports fluid intelligence), and that it doesn’t do this alone,” Cole says.
The findings are based on an analysis of functional magnetic resonance brain images captured as study participants rested passively and also when they were engaged in a series of mentally challenging tasks associated with fluid intelligence, such as indicating whether a currently displayed image was the same as one displayed three images ago.
Previous findings relating lateral prefrontal cortex activity to challenging task performance were supported. Connectivity was then assessed while participants rested, and their performance on additional tests of fluid intelligence and cognitive control, collected outside the brain scanner, was related to the estimated connectivity.
Results indicate that levels of global brain connectivity with a part of the left lateral prefrontal cortex serve as a strong predictor of both fluid intelligence and cognitive control abilities.
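“Serves as a strong predictor” is, concretely, a statement about variance explained – roughly 10 percent here, versus about 6.7 percent for overall brain size and 5 percent for local prefrontal activity. A minimal sketch of how such a figure is computed for a single predictor (synthetic numbers, not the study’s data):

```python
import numpy as np

# Synthetic example: one connectivity measure and one intelligence score per subject.
rng = np.random.default_rng(1)
n = 100
connectivity = rng.normal(size=n)
intelligence = 0.33 * connectivity + rng.normal(size=n)   # a deliberately weak relationship

# For a single predictor, variance explained is the squared Pearson correlation.
r = np.corrcoef(connectivity, intelligence)[0, 1]
print(f"r = {r:.2f}, variance explained = {r ** 2:.1%}")
```

The study itself works with far richer measures of whole-brain connectivity, but the “percent of variance” language reduces to this same quantity.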
Although much remains to be learned about how these neural connections contribute to fluid intelligence, new models of brain function suggested by this research could have important implications for the future understanding — and perhaps augmentation — of human intelligence.
The findings also may offer new avenues for understanding how breakdowns in global brain connectivity contribute to the profound cognitive control deficits seen in schizophrenia and other mental illnesses, Cole suggests.
Other co-authors include Tal Yarkoni, PhD, a postdoctoral fellow in the Department of Psychology and Neuroscience at the University of Colorado at Boulder; Grega Repovs, PhD, professor of psychology at the University of Ljubljana, Slovenia; and Alan Anticevic, an associate research scientist in psychiatry at Yale University School of Medicine.
Funding from the National Institute of Mental Health supported the study (National Institutes of Health grants MH66088, NR012081, MH66078, MH66078-06A1W1, and 1K99MH096801).