
Excessive empathy can impair understanding of others (Science Daily)

April 28, 2016
Julius-Maximilians-Universität Würzburg, JMU

People who empathize easily with others do not necessarily understand them well. To the contrary: Excessive empathy can even impair understanding as a new study conducted by psychologists from Würzburg and Leipzig has established.

Imagine your best friend tells you that his girlfriend has just proposed “staying friends.” Now you have to accomplish two things: firstly, you have to grasp that this nice-sounding proposition actually means that she wants to break up with him, and secondly, you should feel with your friend and comfort him.

Whether empathy and understanding other people’s mental states (mentalising) — i.e. the ability to understand what others know, plan and want — are interrelated has recently been examined by the psychologists Anne Böckler, Philipp Kanske, Mathis Trautwein, Franca Parianen-Lesemann and Tania Singer.

Anne Böckler has been a junior professor at the University of Würzburg’s Institute of Psychology since October 2015. Previously, the post-doc had worked in the Department of Social Neurosciences at the Max Planck Institute of Human Cognitive and Brain Sciences in Leipzig where she conducted the study together with her co-workers. In the scientific journal Social Cognitive and Affective Neuroscience, the scientists present the results of their work.

“Successful social interaction is based on our ability to feel with others and to understand their thoughts and intentions,” Anne Böckler explains. She says that it had been unclear previously whether and to what extent these two skills were interrelated — that is, whether people who empathise easily with others are also capable of grasping their thoughts and intentions. According to the junior professor, the scientists also looked into the question of whether the neuronal networks responsible for these abilities interact.

Answers can be gleaned from the study conducted by Anne Böckler, Philipp Kanske and their colleagues at the Max Planck Institute in Leipzig within the scope of a large-scale study led by Tania Singer which included some 200 participants. The study enabled the scientists to show that people who tend to be empathic do not necessarily understand other people well at a cognitive level. Hence, social skills seem to be based on multiple abilities that are rather independent of one another.

The study also delivered new insight as to how the different networks in the brain are orchestrated, revealing that networks crucial for empathy and cognitive perspective-taking interact with one another. In highly emotional moments — for example when somebody talks about the death of a close person — activation of the insula, which forms part of the empathy-relevant network, can have an inhibiting effect in some people on brain areas important for taking someone else’s perspective. And this in turn can cause excessive empathy to impair social understanding.

The participants in the study watched a number of video sequences in which the narrator was more or less emotional. Afterwards, they rated how they felt and how much compassion they felt for the person in the film. Then they answered questions about the video — for example, what the persons could have thought, known or intended. Having thus identified persons with a high level of empathy, the psychologists examined how these people were represented among the participants who had performed well or poorly in the test of cognitive perspective-taking — and vice versa.

Using functional magnetic resonance imaging, the scientists observed which areas of the brain were active at what time.

The authors believe that the results of this study are important both for neuroscience and clinical applications. For example, they suggest that training aimed at improving social skills should foster the willingness to empathise and the ability to understand others at the cognitive level and take their perspective selectively and separately from one another. The group in the Department of Social Neurosciences in Leipzig is currently working on exactly this topic within the scope of the ReSource project, namely how to specifically train different social skills.

Journal Reference:

  1. Artyom Zinchenko, Philipp Kanske, Christian Obermeier, Erich Schröger, Sonja A. Kotz. Emotion and goal-directed behavior: ERP evidence on cognitive and emotional conflict. Social Cognitive and Affective Neuroscience, 2015; 10 (11): 1577. DOI: 10.1093/scan/nsv050

The Boy Whose Brain Could Unlock Autism (Matter)


Autism changed Henry Markram’s family. Now his Intense World theory could transform our understanding of the condition.

SOMETHING WAS WRONG with Kai Markram. At five days old, he seemed like an unusually alert baby, picking his head up and looking around long before his sisters had done. By the time he could walk, he was always in motion and required constant attention just to ensure his safety.

“He was super active, batteries running nonstop,” says his sister, Kali. And it wasn’t just boyish energy: When his parents tried to set limits, there were tantrums—not just the usual kicking and screaming, but biting and spitting, with a disproportionate and uncontrollable ferocity; and not just at age two, but at three, four, five and beyond. Kai was also socially odd: Sometimes he was withdrawn, but at other times he would dash up to strangers and hug them.

Things only got more bizarre over time. No one in the Markram family can forget the 1999 trip to India, when they joined a crowd gathered around a snake charmer. Without warning, Kai, who was five at the time, darted out and tapped the deadly cobra on its head.

Coping with such a child would be difficult for any parent, but it was especially frustrating for his father, one of the world’s leading neuroscientists. Henry Markram is the man behind Europe’s $1.3 billion Human Brain Project, a gargantuan research endeavor to build a supercomputer model of the brain. Markram knows as much about the inner workings of our brains as anyone on the planet, yet he felt powerless to tackle Kai’s problems.

“As a father and a neuroscientist, you realize that you just don’t know what to do,” he says. In fact, Kai’s behavior—which was eventually diagnosed as autism—has transformed his father’s career, and helped him build a radical new theory of autism: one that upends the conventional wisdom. And, ironically, his sideline may pay off long before his brain model is even completed.

IMAGINE BEING BORN into a world of bewildering, inescapable sensory overload, like a visitor from a much darker, calmer, quieter planet. Your mother’s eyes: a strobe light. Your father’s voice: a growling jackhammer. That cute little onesie everyone thinks is so soft? Sandpaper with diamond grit. And what about all that cooing and affection? A barrage of chaotic, indecipherable input, a cacophony of raw, unfilterable data.

Just to survive, you’d need to be excellent at detecting any pattern you could find in the frightful and oppressive noise. To stay sane, you’d have to control as much as possible, developing a rigid focus on detail, routine and repetition. Systems in which specific inputs produce predictable outputs would be far more attractive than human beings, with their mystifying and inconsistent demands and their haphazard behavior.

This, Markram and his wife, Kamila, argue, is what it’s like to be autistic.

They call it the “intense world” syndrome.

The behavior that results is not due to cognitive deficits—the prevailing view in autism research circles today—but the opposite, they say. Rather than being oblivious, autistic people take in too much and learn too fast. While they may appear bereft of emotion, the Markrams insist they are actually overwhelmed not only by their own emotions, but by the emotions of others.

Consequently, the brain architecture of autism is not just defined by its weaknesses, but also by its inherent strengths. The developmental disorder now believed to affect around 1 percent of the population is not characterized by lack of empathy, the Markrams claim. Social difficulties and odd behavior result from trying to cope with a world that’s just too much.

After years of research, the couple came up with their label for the theory during a visit to the remote area where Henry Markram was born, in the South African part of the Kalahari desert. He says “intense world” was Kamila’s phrase; she says she can’t recall who hit upon it. But he remembers sitting in the rust-colored dunes, watching the unusual swaying yellow grasses while contemplating what it must be like to be inescapably flooded by sensation and emotion.

That, he thought, is what Kai experiences. The more he investigated the idea of autism not as a deficit of memory, emotion and sensation, but an excess, the more he realized how much he himself had in common with his seemingly alien son.

HENRY MARKRAM IS TALL, with intense blue eyes, sandy hair and the air of unmistakable authority that goes with the job of running a large, ambitious, well-funded research project. It’s hard to see what he might have in common with a troubled, autistic child. He rises most days at 4 a.m. and works for a few hours in his family’s spacious apartment in Lausanne before heading to the institute, where the Human Brain Project is based. “He sleeps about four or five hours,” says Kamila. “That’s perfect for him.”

As a small child, Markram says, he “wanted to know everything.” But his first few years of high school were mostly spent “at the bottom of the F class.” A Latin teacher inspired him to pay more attention to his studies, and when a beloved uncle became profoundly depressed and died young—he was only in his 30s, but “just went downhill and gave up”—Markram turned a corner. He’d recently been given an assignment about brain chemistry, which got him thinking. “If chemicals and the structure of the brain can change and then I change, who am I? It’s a profound question. So I went to medical school and wanted to become a psychiatrist.”

Markram attended the University of Cape Town, but in his fourth year of medical school, he took a fellowship in Israel. “It was like heaven,” he says, “It was all the toys that I ever could dream of to investigate the brain.” He never returned to med school, and married his first wife, Anat, an Israeli, when he was 26. Soon, they had their first daughter, Linoy, now 24, then a second, Kali, now 23. Kai came four years afterwards.

During graduate research at the Weizmann Institute in Israel, Markram made his first important discovery, elucidating a key relationship between two neurotransmitters involved in learning, acetylcholine and glutamate. The work was important and impressive—especially so early in a scientist’s career—but it was what he did next that really made his name.

During a postdoc with Nobel laureate Bert Sakmann at Germany’s Max Planck Institute, Markram showed how brain cells that “fire together, wire together” actually do so. That cells which fire together wire together had been a basic tenet of neuroscience since the 1940s — but no one had been able to figure out how the process actually worked.

By studying the precise timing of electrical signaling between neurons, Markram demonstrated that firing in specific patterns increases the strength of the synapses linking cells, while missing the beat weakens them. This simple mechanism allows the brain to learn, forging connections both literally and figuratively between various experiences and sensations—and between cause and effect.
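The timing rule described above is now known as spike-timing-dependent plasticity. A minimal sketch of the idea, with illustrative parameter values that are not taken from Markram's experiments, might look like this:

```python
import math

# Toy spike-timing-dependent plasticity (STDP) rule.
# dt_ms = t_post - t_pre: if the presynaptic cell fires just BEFORE the
# postsynaptic cell (dt_ms > 0), the synapse strengthens; if it fires
# just AFTER (dt_ms < 0), the synapse weakens. Constants are illustrative.
A_PLUS, A_MINUS = 0.05, 0.055   # learning rates for strengthening/weakening
TAU = 20.0                      # decay time constant, in milliseconds

def stdp_delta(dt_ms: float) -> float:
    """Weight change produced by a single pre/post spike pair."""
    if dt_ms > 0:     # pre before post: potentiation
        return A_PLUS * math.exp(-dt_ms / TAU)
    elif dt_ms < 0:   # post before pre: depression
        return -A_MINUS * math.exp(dt_ms / TAU)
    return 0.0

# A synapse whose input repeatedly "hits the beat" grows stronger...
w = 0.5
for _ in range(10):
    w += stdp_delta(5.0)    # pre fires 5 ms before post each time

# ...while one that consistently misses the beat weakens.
v = 0.5
for _ in range(10):
    v += stdp_delta(-5.0)   # pre fires 5 ms after post each time
```

In this sketch the exponential makes closely timed spike pairs matter far more than distant ones, which is the "fine temporal distinction" the patch-clamp measurements captured.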

Measuring these fine temporal distinctions was also a technical triumph. Sakmann won his 1991 Nobel for developing the required “patch clamp” technique, which measures the tiny changes in electrical activity inside nerve cells. To patch just one neuron, you first harvest a sliver of brain, about 1/3 of a millimeter thick and containing around 6 million neurons, typically from a freshly guillotined rat.

To keep the tissue alive, you bubble it in oxygen, and bathe the slice of brain in a laboratory substitute for cerebrospinal fluid. Under a microscope, using a minuscule glass pipette, you carefully pierce a single cell. The technique is similar to injecting a sperm into an egg for in vitro fertilization—except that neurons are hundreds of times smaller than eggs.

It requires steady hands and exquisite attention to detail. Markram’s ultimate innovation was to build a machine that could study 12 such carefully prepared cells simultaneously, measuring their electrical and chemical interactions. Researchers who have done it say you can sometimes go a whole day without getting one right—but Markram became a master.

Still, there was a problem. He seemed to go from one career peak to another—a Fulbright at the National Institutes of Health, tenure at Weizmann, publication in the most prestigious journals—but at the same time it was becoming clear that something was not right in his youngest child’s head. He studied the brain all day, but couldn’t figure out how to help Kai learn and cope. As he told a New York Times reporter earlier this year, “You know how powerless you feel. You have this child with autism and you, even as a neuroscientist, really don’t know what to do.”

AT FIRST, MARKRAM THOUGHT Kai had attention deficit/ hyperactivity disorder (ADHD): Once Kai could move, he never wanted to be still. “He was running around, very difficult to control,” Markram says. As Kai grew, however, he began melting down frequently, often for no apparent reason. “He became more particular, and he started to become less hyperactive but more behaviorally difficult,” Markram says. “Situations were very unpredictable. He would have tantrums. He would be very resistant to learning and to any kind of instruction.”

Preventing Kai from harming himself by running into the street or following other capricious impulses was a constant challenge. Even just trying to go to the movies became an ordeal: Kai would refuse to enter the cinema, or would hold his hands tightly over his ears.

However, Kai also loved to hug people, even strangers, which is one reason it took years to get a diagnosis. That warmth made many experts rule out autism. Only after multiple evaluations was Kai finally diagnosed with Asperger syndrome, a type of autism that includes social difficulties and repetitive behaviors, but not lack of speech or profound intellectual disability.

“We went all over the world and had him tested, and everybody had a different interpretation,” Markram says. As a scientist who prizes rigor, this infuriated him. He’d left medical school to pursue neuroscience because he disliked psychiatry’s vagueness. “I was very disappointed in how psychiatry operates,” he says.

Over time, trying to understand Kai became Markram’s obsession.

It drove what he calls his “impatience” to model the brain: He felt neuroscience was too piecemeal and could not progress without bringing more data together. “I wasn’t satisfied with understanding fragments of things in the brain; we have to understand everything,” he says. “Every molecule, every gene, every cell. You can’t leave anything out.”

This impatience also made him decide to study autism, beginning by reading every study and book he could get his hands on. At the time, in the 1990s, the condition was getting increased attention. The diagnosis had only been introduced into the psychiatric bible, then the DSM-III, in 1980. The 1988 Dustin Hoffman film Rain Man, about an autistic savant, brought the idea that autism was both a disability and a source of quirky intelligence into the popular imagination.

The dark days of the mid–20th century, when autism was thought to be caused by unloving “refrigerator mothers” who icily rejected their infants, were long past. However, while experts now agree that the condition is neurological, its causes remain unknown.

The most prominent theory suggests that autism results from problems with the brain’s social regions, which result in a deficit of empathy. This “theory of mind” concept was developed by Uta Frith, Alan Leslie, and Simon Baron-Cohen in the 1980s. They found that autistic children are late to develop the ability to distinguish between what they know themselves and what others know — something that other children learn early on.

In a now famous experiment, children watched two puppets, “Sally” and “Anne.” Sally has a marble, which she places in a basket and then leaves. While she’s gone, Anne moves Sally’s marble into a box. By age four or five, normal children can predict that Sally will look for the marble in the basket first because she doesn’t know that Anne moved it. But until they are much older, most autistic children say that Sally will look in the box because they know it’s there. While typical children automatically adopt Sally’s point of view and know she was out of the room when Anne hid the marble, autistic children have much more difficulty thinking this way.
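The logic of the false-belief test can be made concrete with a small sketch: each agent's belief about the marble's location is updated only by events she actually observes. The names and data structures here are purely illustrative, not part of the original experiment:

```python
# Toy model of the Sally-Anne false-belief task.
# Beliefs update only for agents who witness an event, so an agent who
# is out of the room keeps an outdated ("false") belief.

world = {"marble": "basket"}                    # actual state of the world
beliefs = {"Sally": "basket", "Anne": "basket"}  # what each agent believes

def move(item: str, place: str, observers: list[str]) -> None:
    """Move an item; only the listed observers update their beliefs."""
    world[item] = place
    for agent in observers:
        beliefs[agent] = place

# Sally leaves the room; Anne moves the marble while Sally is away,
# so only Anne observes the move.
move("marble", "box", observers=["Anne"])

# A child who passes the task answers from Sally's belief state,
# not from the true state of the world:
print(beliefs["Sally"])   # "basket" -- where Sally will look first
print(world["marble"])    # "box"    -- where the marble really is
```

Passing the task amounts to querying `beliefs["Sally"]` rather than `world["marble"]`; answering "box" corresponds to conflating one's own knowledge with Sally's.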

The researchers linked this “mind blindness”—a failure of perspective-taking—to their observation that autistic children don’t engage in make-believe. Instead of pretending together, autistic children focus on objects or systems—spinning tops, arranging blocks, memorizing symbols, or becoming obsessively involved with mechanical items like trains and computers.

This apparent social indifference was viewed as central to the condition. Unfortunately, the theory also seemed to imply that autistic people are uncaring because they don’t easily recognize that other people exist as intentional agents who can be loved, thwarted or hurt. But while the Sally-Anne experiment shows that autistic people have difficulty knowing that other people have different perspectives—what researchers call cognitive empathy or “theory of mind”—it doesn’t show that they don’t care when someone is hurt or feeling pain, whether emotional or physical. In terms of caring—technically called affective empathy—autistic people aren’t necessarily impaired.

Sadly, however, the two different kinds of empathy are combined in one English word. And so, since the 1980s, this idea that autistic people “lack empathy” has taken hold.

“When we looked at the autism field we couldn’t believe it,” Markram says. “Everybody was looking at it as if they have no empathy, no theory of mind. And actually Kai, as awkward as he was, saw through you. He had a much deeper understanding of what really was your intention.” And he wanted social contact.

 The obvious thought was: Maybe Kai’s not really autistic? But by the time Markram was fully up to speed in the literature, he was convinced that Kai had been correctly diagnosed. He’d learned enough to know that the rest of his son’s behavior was too classically autistic to be dismissed as a misdiagnosis, and there was no alternative condition that explained as much of his behavior and tendencies. And accounts by unquestionably autistic people, like bestselling memoirist and animal scientist Temple Grandin, raised similar challenges to the notion that autistic people could never really see beyond themselves.

Markram began to do autism work himself as visiting professor at the University of California, San Francisco in 1999. Colleague Michael Merzenich, a neuroscientist, proposed that autism is caused by an imbalance between inhibitory and excitatory neurons. A failure of inhibitions that tamp down impulsive actions might explain behavior like Kai’s sudden move to pat the cobra. Markram started his research there.

MARKRAM MET HIS second wife, Kamila Senderek, at a neuroscience conference in Austria in 2000. He was already separated from Anat. “It was love at first sight,” Kamila says.

Her parents left communist Poland for West Germany when she was five. When she met Markram, she was pursuing a master’s in neuroscience at the Max Planck Institute. When Markram moved to Lausanne to start the Human Brain Project, she began studying there as well.

Tall like her husband, with straight blonde hair and green eyes, Kamila wears a navy twinset and jeans when we meet in her open-plan office overlooking Lake Geneva. There, in addition to autism research, she runs the world’s fourth largest open-access scientific publishing firm, Frontiers, with a network of over 35,000 scientists serving as editors and reviewers. She laughs when I observe a lizard tattoo on her ankle, a remnant of an adolescent infatuation with The Doors.

When asked whether she had ever worried about marrying a man whose child had severe behavioral problems, she responds as though the question never occurred to her. “I knew about the challenges with Kai,” she says, “Back then, he was quite impulsive and very difficult to steer.”

The first time they spent a day together, Kai was seven or eight. “I probably had some blue marks and bites on my arms because he was really quite something. He would just go off and do something dangerous, so obviously you would have to get in rescue mode,” she says, noting that he’d sometimes walk directly into traffic. “It was difficult to manage the behavior,” she shrugs, “But if you were nice with him then he was usually nice with you as well.”

“Kamila was amazing with Kai,” says Markram, “She was much more systematic and could lay out clear rules. She helped him a lot. We never had that thing that you see in the movies where they don’t like their stepmom.”

At the Swiss Federal Institute of Technology in Lausanne (EPFL), the couple soon began collaborating on autism research. “Kamila and I spoke about it a lot,” Markram says, adding that they were both “frustrated” by the state of the science and at not being able to help more. Their now-shared parental interest fused with their scientific drives.

They started by studying the brain at the circuitry level. Markram assigned a graduate student, Tania Rinaldi Barkat, to look for the best animal model, since such research cannot be done on humans.

Barkat happened to drop by Kamila’s office while I was there, a decade after she had moved on to other research. She greeted her former colleagues enthusiastically.

She started her graduate work with the Markrams by searching the literature for prospective animal models. They agreed that the one most like human autism involved rats prenatally exposed to an epilepsy drug called valproic acid (VPA; brand name, Depakote). Like other “autistic” rats, VPA rats show aberrant social behavior and increased repetitive behaviors like excessive self-grooming.

But more significant is that when pregnant women take high doses of VPA, which is sometimes necessary for seizure control, studies have found that the risk of autism in their children increases sevenfold. One 2005 study found that close to 9 percent of these children have autism.

Because VPA has a link to human autism, it seemed plausible that its cellular effects in animals would be similar. A neuroscientist who has studied VPA rats once told me, “I see it not as a model, but as a recapitulation of the disease in other species.”

Barkat got to work. Earlier research showed that the timing and dose of exposure was critical: Different timing could produce opposite symptoms, and large doses sometimes caused physical deformities. The “best” time to cause autistic symptoms in rats is embryonic day 12, so that’s when Barkat dosed them.

At first, the work was exasperating. For two years, Barkat studied inhibitory neurons from the VPA rat cortex, using the same laborious patch-clamping technique perfected by Markram years earlier. If these cells were less active, that would confirm the imbalance that Merzenich had theorized.

She went through the repetitious preparation, making delicate patches to study inhibitory networks. But after two years of this technically demanding, sometimes tedious, and time-consuming work, Barkat had nothing to show for it.

“I just found no difference at all,” she told me, “It looked completely normal.” She continued to patch cell after cell, going through the exacting procedure endlessly—but still saw no abnormalities. At least she was becoming proficient at the technique, she told herself.

Markram was ready to give up, but Barkat demurred, saying she would like to shift her focus from inhibitory to excitatory VPA cell networks. It was there that she struck gold.

 “There was a difference in the excitability of the whole network,” she says, reliving her enthusiasm. The networked VPA cells responded nearly twice as strongly as normal—and they were hyper-connected. If a normal cell had connections to ten other cells, a VPA cell connected with twenty. Nor were they under-responsive. Instead, they were hyperactive, which isn’t necessarily a defect: A more responsive, better-connected network learns faster.

But what did this mean for autistic people? While Barkat was investigating the cortex, Kamila Markram had been observing the rats’ behavior, noting high levels of anxiety as compared to normal rats. “It was pretty much a gold mine then,” Markram says. The difference was striking. “You could basically see it with the eye. The VPAs were different and they behaved differently,” Markram says. They were quicker to get frightened, and faster at learning what to fear, but slower to discover that a once-threatening situation was now safe.

While ordinary rats get scared of an electrified grid where they are shocked when a particular tone sounds, VPA rats come to fear not just that tone, but the whole grid and everything connected with it—like colors, smells, and other clearly distinguishable beeps.

“The fear conditioning was really hugely amplified,” Markram says. “We then looked at the cell response in the amygdala and again they were hyper-reactive, so it made a beautiful story.”

THE MARKRAMS RECOGNIZED the significance of their results. Hyper-responsive sensory, memory and emotional systems might explain both autistic talents and autistic handicaps, they realized. After all, the problem with VPA rats isn’t that they can’t learn—it’s that they learn too quickly, with too much fear, and irreversibly.

They thought back to Kai’s experiences: how he used to cover his ears and resist going to the movies, hating the loud sounds; his limited diet and apparent terror of trying new foods.

“He remembers exactly where he sat at exactly what restaurant one time when he tried for hours to get himself to eat a salad,” Kamila says, recalling that she’d promised him something he’d really wanted if he did so. Still, he couldn’t make himself try even the smallest piece of lettuce. That was clearly overgeneralization of fear.

The Markrams reconsidered Kai’s meltdowns, too, wondering if they’d been prompted by overwhelming experiences. They saw that identifying Kai’s specific sensitivities preemptively might prevent tantrums by allowing him to leave upsetting situations or by mitigating his distress before it became intolerable. The idea of an intense world had immediate practical implications.

 The amygdala.

The VPA data also suggested that autism isn’t limited to a single brain network. In VPA rat brains, both the amygdala and the cortex had proved hyper-responsive to external stimuli. So maybe, the Markrams decided, autistic social difficulties aren’t caused by social-processing defects; perhaps they are the result of total information overload.

CONSIDER WHAT IT MIGHT FEEL like to be a baby in a world of relentless and unpredictable sensation. An overwhelmed infant might, not surprisingly, attempt to escape. Kamila compares it to being sleepless, jetlagged, and hung over, all at once. “If you don’t sleep for a night or two, everything hurts. The lights hurt. The noises hurt. You withdraw,” she says.

Unlike adults, however, babies can’t flee. All they can do is cry and rock, and, later, try to avoid touch, eye contact, and other powerful experiences. Autistic children might revel in patterns and predictability just to make sense of the chaos.

At the same time, if infants withdraw to try to cope, they will miss what’s known as a “sensitive period”—a developmental phase when the brain is particularly responsive to, and rapidly assimilates, certain kinds of external stimulation. That can cause lifelong problems.

Language learning is a classic example: If babies aren’t exposed to speech during their first three years, their verbal abilities can be permanently stunted. Historically, this created a spurious link between deafness and intellectual disability: Before deaf babies were taught sign language at a young age, they would often have lasting language deficits. Their problem wasn’t defective “language areas,” though—it was that they had been denied linguistic stimuli at a critical time. (Incidentally, the same phenomenon accounts for why learning a second language is easy for small children and hard for virtually everyone else.)

This has profound implications for autism. If autistic babies tune out when overwhelmed, their social and language difficulties may arise not from damaged brain regions, but because critical data is drowned out by noise or missed due to attempts to escape at a time when the brain actually needs this input.

The intense world could also account for the tragic similarities between autistic children and abused and neglected infants. Severely maltreated children often rock, avoid eye contact, and have social problems—just like autistic children. These parallels led to decades of blaming the parents of autistic children, including the infamous “refrigerator mother.” But if those behaviors are coping mechanisms, autistic people might engage in them not because of maltreatment, but because ordinary experience is overwhelming or even traumatic.

The Markrams teased out further implications: Social problems may not be a defining or even fixed feature of autism. Early intervention to reduce or moderate the intensity of an autistic child’s environment might allow their talents to be protected while their autism-related disabilities are mitigated or, possibly, avoided.

The VPA model also captures other paradoxical autistic traits. For example, while oversensitivities are most common, autistic people are also frequently under-reactive to pain. The same is true of VPA rats. In addition, one of the most consistent findings in autism is abnormal brain growth, particularly in the cortex. There, studies find an excess of circuits called mini-columns, which can be seen as the brain’s microprocessors. VPA rats also exhibit this excess.

Moreover, extra minicolumns have been found in autopsies of scientists who were not known to be autistic, suggesting that this brain organization can appear without social problems and alongside exceptional intelligence.

Like a high-performance engine, the autistic brain may only work properly under specific conditions. But under those conditions, such machines can vastly outperform others—like a Ferrari compared to a Ford.

THE MARKRAMS’ FIRST PUBLICATION of their intense world research appeared in 2007: a paper on the VPA rat in the Proceedings of the National Academy of Sciences. This was followed by an overview in Frontiers in Neuroscience. The next year, at the Society for Neuroscience (SFN), the field’s biggest meeting, a symposium was held on the topic. In 2010, they updated and expanded their ideas in a second Frontiers paper.

Since then, more than three dozen papers have been published by other groups on VPA rodents, replicating and extending the Markrams’ findings. At this year’s SFN, at least five new studies were presented on VPA autism models. The sensory aspects of autism have long been neglected, but the intense world theory and the VPA rats are bringing them to the fore.

Nevertheless, reaction from colleagues in the field has been cautious. One exception is Laurent Mottron, professor of psychiatry and head of autism research at the University of Montreal. He was the first to highlight perceptual differences as critical in autism—even before the Markrams. Only a minority of researchers even studied sensory issues before him. Almost everyone else focused on social problems.

But when Mottron first proposed that autism is linked with what he calls “enhanced perceptual functioning,” he, like most experts, viewed this as the consequence of a deficit. The idea was that the apparently superior perception exhibited by some autistic people is caused by problems with higher-level brain functioning—and it had historically been dismissed as mere “splinter skills,” not a sign of genuine intelligence. Autistic savants had earlier been known as “idiot savants,” the implication being that, unlike “real” geniuses, they didn’t have any creative control of their exceptional minds. Mottron described it this way in a review paper: “[A]utistics were not displaying atypical perceptual strengths but a failure to form global or high level representations.”

However, Mottron’s research led him to see this view as incorrect. His own and other studies showed superior performance by autistic people not only in “low level” sensory tasks, like better detection of musical pitch and greater ability to perceive certain visual information, but also in cognitive tasks like pattern finding in visual IQ tests.

In fact, it has long been clear that detecting and manipulating complex systems is an autistic strength—so much so that the autistic genius has become a Silicon Valley stereotype. In May, for example, the German software firm SAP announced plans to hire 650 autistic people because of their exceptional abilities. Mathematics, musical virtuosity, and scientific achievement all require understanding and playing with systems, patterns, and structure. Both autistic people and their family members are over-represented in these fields, which suggests genetic influences.

“Our points of view are in different areas [of research], but we arrive at ideas that are really consistent,” says Mottron of the Markrams and their intense world theory. (He also notes that while they study cell physiology, he images actual human brains.)

Because Henry Markram came from outside the field and has an autistic son, Mottron adds, “He could have an original point of view and not be influenced by all the clichés,” particularly those that saw talents as defects. “I’m very much in sympathy with what they do,” he says, although he is not convinced that they have proven all the details.

Mottron’s support is unsurprising, of course, because the intense world dovetails with his own findings. But even one of the creators of the “theory of mind” concept finds much of it plausible.

Simon Baron-Cohen, who directs the Autism Research Centre at Cambridge University, told me, “I am open to the idea that the social deficits in autism—like problems with the cognitive aspects of empathy, which is also known as ‘theory of mind’—may be upstream from a more basic sensory abnormality.” In other words, the Markrams’ physiological model could be the cause, and the social deficits he studies, the effect. He adds that the VPA rat is an “interesting” model. However, he also notes that most autism is not caused by VPA and that it’s possible that sensory and social defects co-occur, rather than one causing the other.

His collaborator, Uta Frith, professor of cognitive development at University College London, is not convinced. “It just doesn’t do it for me,” she says of the intense world theory. “I don’t want to say it’s rubbish,” she says, “but I think they try to explain too much.”

AMONG AFFECTED FAMILIES, by contrast, the response has often been rapturous. “There are elements of the intense world theory that better match up with autistic experience than most of the previously discussed theories,” says Ari Ne’eman, president of the Autistic Self Advocacy Network. “The fact that there’s more emphasis on sensory issues is very true to life.” Ne’eman and other autistic people fought to get sensory problems added to the diagnosis in DSM-5 — the first time the symptoms have been so recognized, and another sign of the growing receptiveness to theories like intense world.

Steve Silberman, who is writing a history of autism titled NeuroTribes: Thinking Smarter About People Who Think Differently, says, “We had 70 years of autism research [based] on the notion that autistic people have brain deficits. Instead, the intense world postulates that autistic people feel too much and sense too much. That’s valuable, because I think the deficit model did tremendous injury to autistic people and their families, and also misled science.”

Priscilla Gilman, the mother of an autistic child, is also enthusiastic. Her memoir, The Anti-Romantic Child, describes her son’s diagnostic odyssey. Before Benjamin was in preschool, Gilman took him to the Yale Child Study Center for a full evaluation. At the time, he did not display any classic signs of autism, but he did seem to be a candidate for hyperlexia—at age two-and-a-half, he could read aloud from his mother’s doctoral dissertation with perfect intonation and fluency. Like other autistic talents, hyperlexia is often dismissed as a “splinter” strength.

At that time, Yale experts ruled autism out, telling Gilman that Benjamin “is not a candidate because he is too ‘warm’ and too ‘related,’” she recalls. Kai Markram’s hugs had similarly been seen as disqualifying. At twelve years of age, however, Benjamin was officially diagnosed with Autism Spectrum Disorder.

According to the intense world perspective, however, warmth isn’t incompatible with autism. What looks like antisocial behavior results from being too affected by others’ emotions—the opposite of indifference.

Indeed, research on typical children and adults finds that too much distress can dampen ordinary empathy as well. When someone else’s pain becomes too unbearable to witness, even typical people withdraw and try to soothe themselves first rather than helping—exactly like autistic people. It’s just that autistic people become distressed more easily, and so their reactions appear atypical.

“The overwhelmingness of understanding how people feel can lead to either what is perceived as inappropriate emotional response, or to what is perceived as shutting down, which people see as lack of empathy,” says Emily Willingham. Willingham is a biologist and the mother of an autistic child; she also suspects that she herself has Asperger syndrome. But rather than being unemotional, she says, autistic people are “taking it all in like a tsunami of emotion that they feel on behalf of others. Going internal is protective.”

At least one study supports this idea, showing that while autistic people score lower on cognitive tests of perspective-taking—recall Anne, Sally, and the missing marble—they are more affected than typical folks by other people’s feelings. “I have three children, and my autistic child is my most empathetic,” Priscilla Gilman says, adding that when her mother first read about the intense world, she said, “This explains Benjamin.”

Benjamin’s hypersensitivities are also clearly linked to his superior perception. “He’ll sometimes say, ‘Mommy, you’re speaking in the key of D, could you please speak in the key of C? It’s easier for me to understand you and pay attention.’”

Because he has musical training and a high IQ, Benjamin can use his own sense of “absolute pitch”—the ability to name a note without hearing another for comparison—to define the problem he’s having. But many autistic people can’t verbalize their needs like this. Kai, too, is highly sensitive to vocal intonation, preferring his favorite teacher because, he explains, she “speaks soft,” even when she’s displeased. But even at 19, he isn’t able to articulate the specifics any better than that.

ON A RECENT VISIT to Lausanne, Kai wears a sky blue hoodie, his gray Chuck Taylor–style sneakers carefully unlaced at the top. “My rapper sneakers,” he says, smiling. He speaks Hebrew and English and lives with his mother in Israel, attending a school for people with learning disabilities near Rehovot. His manner is unselfconscious, though sometimes he scowls abruptly without explanation. But when he speaks, it is obvious that he wants to connect, even when he can’t answer a question. Asked if he thinks he sees things differently than others do, he says, “I feel them different.”

He waits in the Markrams’ living room as they prepare to take him out for dinner. Henry’s aunt and uncle are here, too. They’ve been living with the family to help care for its newest additions: nine-month-old Charlotte and Olivia, who is one-and-a-half years old.

“It’s our big patchwork family,” says Kamila, noting that when they visit Israel, they typically stay with Henry’s ex-wife’s family, and that she stays with them in Lausanne. They all travel constantly, which has created a few problems now and then. None of them will ever forget a tantrum Kai had when he was younger, which got him barred from a KLM flight. A delay upset him so much that he kicked, screamed, and spat.

Now, however, he rarely melts down. A combination of family and school support, an antipsychotic medication that he’s been taking recently, and increased understanding of his sensitivities has mitigated the disabilities Kai associated with his autism.

 “I was a bad boy. I always was hitting and doing a lot of trouble,” Kai says of his past. “I was really bad because I didn’t know what to do. But I grew up.” His relatives nod in agreement. Kai has made tremendous strides, though his parents still think that his brain has far greater capacity than is evident in his speech and schoolwork.

As the Markrams see it, if autism results from a hyper-responsive brain, the most sensitive brains are actually the most likely to be disabled by our intense world. But if autistic people can learn to filter the blizzard of data, especially early in life, then those most vulnerable to the most severe autism might prove to be the most gifted of all.

Markram sees this in Kai. “It’s not a mental retardation,” he says. “He’s handicapped, absolutely, but something is going crazy in his brain. It’s a hyper disorder. It’s like he’s got an amplification of many of my quirks.”

One of these involves an insistence on timeliness. “If I say that something has to happen,” he says, “I can become quite difficult. It has to happen at that time.”

He adds, “For me it’s an asset, because it means that I deliver. If I say I’ll do something, I do it.” For Kai, however, anticipation and planning run wild. When he travels, he obsesses about every move, over and over, long in advance. “He will sit there and plan, okay, when he’s going to get up. He will execute. You know he will get on that plane come hell or high water,” Markram says. “But he actually loses the entire day. It’s like an extreme version of my quirks, where for me they are an asset and for him they become a handicap.”

If this is true, autistic people have incredible unrealized potential. If Kai’s brain is even more finely tuned than his father’s, it might give him the capacity to be even more brilliant. Consider Markram’s visual skills. Like Temple Grandin, whose first autism memoir was titled Thinking In Pictures, he has stunning visual abilities. “I see what I think,” he says, adding that when he considers a scientific or mathematical problem, “I can see how things are supposed to look. If it’s not there, I can actually simulate it forward in time.”

At the offices of Markram’s Human Brain Project, visitors are given a taste of what it might feel like to inhabit such a mind. In a small screening room furnished with sapphire-colored, tulip-shaped chairs, I’m handed 3-D glasses. The instant the lights dim, I’m zooming through a brightly colored forest of neurons so detailed and thick that they appear to be velvety, inviting to the touch.

The simulation feels so real and enveloping that it is hard to pay attention to the narration, which includes mind-blowing facts about the project. But it is also dizzying, overwhelming. If this is just a smidgen of what ordinary life is like for Kai, it’s easier to see how hard his early life must have been. That’s the paradox about autism and empathy. The problem may not be that autistic people can’t understand typical people’s points of view—but that typical people can’t imagine autism.

Critics of the intense world theory are dismayed and put off by this idea of hidden talent in the most severely disabled. They see it as wishful thinking, offering false hope to parents who want to see their children in the best light and to autistic people who want to fight the stigma of autism. In some types of autism, they say, intellectual disability is just that.

“The maxim is, ‘If you’ve seen one person with autism, you’ve seen one person with autism,’” says Matthew Belmonte, an autism researcher affiliated with the Groden Center in Rhode Island. The assumption should be that autistic people have intelligence that may not be easily testable, he says, but it can still be highly variable.

He adds, “Biologically, autism is not a unitary condition. Asking at the biological level ‘What causes autism?’ makes about as much sense as asking a mechanic ‘Why does my car not start?’ There are many possible reasons.” Belmonte believes that the intense world may account for some forms of autism, but not others.

Kamila, however, insists that the data suggests that the most disabled are also the most gifted. “If you look from the physiological or connectivity point of view, those brains are the most amplified.”

The question, then, is how to unleash that potential.

“I hope we give hope to others,” she says, while acknowledging that intense-world adherents don’t yet know how or even if the right early intervention can reduce disability.

The secret-ability idea also worries autistic leaders like Ne’eman, who fear that it contains the seeds of a different stigma. “We agree that autistic people do have a number of cognitive advantages and it’s valuable to do research on that,” he says. But, he stresses, “People have worth regardless of whether they have special abilities. If society accepts us only because we can do cool things every so often, we’re not exactly accepted.”

THE MARKRAMS ARE NOW EXPLORING whether providing a calm, predictable early environment—one aimed at reducing overload and surprise—can help VPA rats, soothing social difficulties while nurturing enhanced learning. New research suggests that autism can be detected in two-month-old babies, so the treatment implications are tantalizing.

So far, Kamila says, the data looks promising. Unexpected novelty seems to make the rats worse—while the patterned, repetitive, and safe introduction of new material seems to cause improvement.

In humans, the idea would be to keep the brain’s circuitry calm when it is most vulnerable, during those critical periods in infancy and toddlerhood. “With this intensity, the circuits are going to lock down and become rigid,” says Markram. “You want to avoid that, because to undo it is very difficult.”

For autistic children, intervening early might mean improvements in learning language and socializing. While it’s already clear that early interventions can reduce autistic disability, they typically don’t integrate intense-world insights. The behavioral approach that is most popular—Applied Behavior Analysis—rewards compliance with “normal” behavior, rather than seeking to understand what drives autistic actions and attacking the disabilities at their inception.

Research shows, in fact, that everyone learns best when receiving just the right dose of challenge—not so little that they’re bored, not so much that they’re overwhelmed; not in the comfort zone, and not in the panic zone, either. That sweet spot may be different in autism. But according to the Markrams, it is different in degree, not kind.

Markram suggests providing a gentle, predictable environment. “It’s almost like the fourth trimester,” he says.

“To prevent the circuits from becoming locked into fearful states or behavioral patterns, you need a filtered environment from as early as possible,” Markram explains. “I think that if you can avoid that, then those circuits would get locked into having the flexibility that comes with security.”

Creating this special cocoon could involve using things like headphones to block excess noise, gradually increasing exposure and, as much as possible, sticking with routines and avoiding surprise. If parents and educators get it right, he concludes, “I think they’ll be geniuses.”

IN SCIENCE, CONFIRMATION BIAS is always the unseen enemy. Having a dog in the fight means you may bend the rules to favor it, whether deliberately or simply because you’re wired to ignore inconvenient truths. In fact, the entire scientific method can be seen as a series of attempts to drive out bias: The double-blind controlled trial exists because both patients and doctors tend to see what they want to see—improvement.

At the same time, the best scientists are driven by passions that cannot be anything but deeply personal. The Markrams are open about the fact that their subjective experience with Kai influences their work.

But that doesn’t mean that they disregard the scientific process. The couple could easily deal with many of the intense world critiques by simply arguing that their theory only applies to some cases of autism. That would make it much more difficult to disprove. But that’s not the route they’ve chosen to take. In their 2010 paper, they list a series of possible findings that would invalidate the intense world, including discovering human cases where the relevant brain circuits are not hyper-reactive, or discovering that such excessive responsiveness doesn’t lead to deficiencies in memory, perception, or emotion. So far, however, the known data has been supportive.

But whether or not the intense world accounts for all or even most cases of autism, the theory already presents a major challenge to the idea that the condition is primarily a lack of empathy, or a social disorder. Intense world theory confronts the stigmatizing stereotypes that have framed autistic strengths as defects, or at least as less significant because of associated weaknesses.

And Henry Markram, by trying to take his son Kai’s perspective—and even by identifying so closely with it—has already done autistic people a great service, demonstrating the kind of compassion that people on the spectrum are supposed to lack. If the intense world does prove correct, we’ll all have to think about autism, and even about typical people’s reactions to the data overload endemic in modern life, very differently.

From left: Kamila, Henry, Kai, and Anat

This story was written by Maia Szalavitz, edited by Mark Horowitz, fact-checked by Kyla Jones, and copy-edited by Tim Heffernan, with photography by Darrin Vanselow and an audiobook narrated by Jack Stewart.


Study suggests different written languages are equally efficient at conveying meaning (Eureka/University of Southampton)





A study led by the University of Southampton has found there is no difference in the time it takes people from different countries to read and process different languages.

The research, published in the journal Cognition, finds the same amount of time is needed for a person from, for example, China to read and understand a text in Mandarin as it takes a person from Britain to read and understand a text in English – assuming both are reading their native language.

Professor of Experimental Psychology at Southampton, Simon Liversedge, says: “It has long been argued by some linguists that all languages have common or universal underlying principles, but it has been hard to find robust experimental evidence to support this claim. Our study goes at least part way to addressing this – by showing there is universality in the way we process language during the act of reading. It suggests no one form of written language is more efficient in conveying meaning than another.”

The study, carried out by the University of Southampton (UK), Tianjin Normal University (China) and the University of Turku (Finland), compared the way three groups of people in the UK, China and Finland read their own languages.

The 25 participants in each group – one group for each country – were given eight short texts to read which had been carefully translated into the three different languages. A rigorous translation process was used to make the texts as closely comparable across languages as possible. English, Finnish and Mandarin were chosen because of the stark differences they display in their written form – with great variation in visual presentation of words, for example alphabetic vs. logographic(1), spaced vs. unspaced, agglutinative(2) vs. non-agglutinative.

The researchers used sophisticated eye-tracking equipment to assess the cognitive processes of the participants in each group as they read. The equipment was set up identically in each country to measure eye movement patterns of the individual readers – recording how long they spent looking at each word, sentence or paragraph.

The results of the study showed significant and substantial differences between the three language groups in relation to the nature of eye movements of the readers and how long participants spent reading each individual word or phrase. For example, the Finnish participants spent longer concentrating on some words compared to the English readers. However, most importantly and despite these differences, the time it took for the readers of each language to read each complete sentence or paragraph was the same.

Professor Liversedge says: “This finding suggests that despite very substantial differences in the written form of different languages, at a basic propositional level, it takes humans the same amount of time to process the same information regardless of the language it is written in.

“We have shown it doesn’t matter whether a native Chinese reader is processing Chinese, or a Finnish native reader is reading Finnish, or an English native reader is processing English, in terms of comprehending the basic propositional content of the language, one language is as good as another.”

The study authors believe more research would be needed to fully understand if true universality of language exists, but that their study represents a good first step towards demonstrating that there is universality in the process of reading.


Notes for editors:

1) Logographic language systems use signs or characters to represent words or phrases.

2) An agglutinative language tends to express concepts in complex words consisting of many sub-units strung together.

3) The paper Universality in eye movements and reading: A trilingual investigation (Simon P. Liversedge, Denis Drieghe, Xin Li, Guoli Yan, Xuejun Bai, Jukka Hyönä) is published in the journal Cognition.


Semantically speaking: Does meaning structure unite languages? (Eureka/Santa Fe Institute)


Humans’ common cognitive abilities and language dependence may provide an underlying semantic order to the world’s languages


We create words to label people, places, actions, thoughts, and more so we can express ourselves meaningfully to others. Do humans’ shared cognitive abilities and dependence on languages naturally provide a universal means of organizing certain concepts? Or do environment and culture influence each language uniquely?

Using a new methodology that measures how closely words’ meanings are related within and between languages, an international team of researchers has revealed that for many universal concepts, the world’s languages feature a common structure of semantic relatedness.

“Before this work, little was known about how to measure [a culture’s sense of] the semantic nearness between concepts,” says co-author and Santa Fe Institute Professor Tanmoy Bhattacharya. “For example, are the concepts of sun and moon close to each other, as they are both bright blobs in the sky? How about sand and sea, as they occur close by? Which of these pairs is the closer? How do we know?”

Translation, the mapping of relative word meanings across languages, would provide clues. But examining the problem with scientific rigor called for an empirical means to denote the degree of semantic relatedness between concepts.

To get reliable answers, Bhattacharya needed to fully quantify a comparative method that is commonly used to infer linguistic history qualitatively. (He and collaborators had previously developed this quantitative method to study changes in sounds of words as languages evolve.)

“Translation uncovers a disagreement between two languages on how concepts are grouped under a single word,” says co-author and Santa Fe Institute and Oxford researcher Hyejin Youn. “Spanish, for example, groups ‘fire’ and ‘passion’ under ‘incendio,’ whereas Swahili groups ‘fire’ with ‘anger’ (but not ‘passion’).”

To quantify the problem, the researchers chose a few basic concepts that we see in nature (sun, moon, mountain, fire, and so on). Each concept was translated from English into 81 diverse languages, then back into English. Based on these translations, a weighted network was created. The structure of the network was used to compare languages’ ways of partitioning concepts.
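The translation procedure described above can be sketched as a small graph computation: concepts become nodes, and an edge gains weight each time a language groups two concepts under a single word. The word groupings below are invented placeholders for illustration, not the study’s actual data, and the clustering here is a simple connected-components pass rather than the researchers’ full network analysis.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical back-translation data: for each language, which English
# concepts ended up sharing a single word after round-trip translation.
# (Invented examples; the study used 81 real languages.)
shared_words = [
    {"sea", "lake"},        # language A uses one word for both
    {"sea", "water"},       # language B
    {"sun", "moon"},        # language C
    {"stone", "mountain"},  # language D
    {"sea", "lake"},        # language E (same polysemy as A)
]

# Build a weighted network: edge weight = number of languages in which
# two concepts were grouped under the same word.
weights = defaultdict(int)
for group in shared_words:
    for a, b in combinations(sorted(group), 2):
        weights[(a, b)] += 1

def clusters(weights, min_weight=1):
    """Keep edges at or above min_weight and return the connected
    components of the remaining graph, largest first (union-find)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (a, b), w in weights.items():
        if w >= min_weight:
            parent[find(a)] = find(b)

    comps = defaultdict(set)
    for node in parent:
        comps[find(node)].add(node)
    return sorted(comps.values(), key=len, reverse=True)

print(clusters(weights))
```

With these toy inputs, “sea,” “lake,” and “water” fall into one cluster, echoing the water theme the researchers report; raising `min_weight` prunes weak, idiosyncratic groupings.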

The team found that the translated concepts consistently formed three theme clusters in a network, densely connected within themselves and weakly to one another: water, solid natural materials, and earth and sky.

“For the first time, we now have a method to quantify how universal these relations are,” says Bhattacharya. “What is universal – and what is not – about how we group clusters of meanings teaches us a lot about psycholinguistics, the conceptual structures that underlie language use.”

The researchers hope to expand this study’s domain, adding more concepts, then investigating how the universal structure they reveal underlies meaning shift.

Their research was published today in PNAS.

Extreme weather: Is it all in your mind? (USA Today)

Thomas M. Kostigen, Special for USA TODAY, 9:53 a.m. EDT, October 17, 2015

Weather is not as objective an occurrence as it might seem. People’s perceptions of what makes weather extreme are influenced by where they live, their income, and their political views, a new study finds.

There is a difference in both seeing and believing in extreme weather events, according to the study in the journal Environmental Sociology.

“Odds were higher among younger, female, more educated, and Democratic respondents to perceive effects from extreme weather than older, male, less educated, and Republican respondents,” said the study’s author, Matthew Cutler of the University of New Hampshire.

There were other correlations, too. For example, people with lower incomes had higher perceptions of extreme weather than people who earned more. Those who live in more vulnerable areas, as might be expected, interpret the effects of weather differently when the costs to their homes and communities are highest.

The causes and frequency of extreme weather events are an under-explored area from a sociological perspective. A better understanding is important to building more resilient and adaptive communities. After all, why prepare or take safety precautions if you believe the weather isn’t going to be all that bad or occur all that often?

The U.S. Climate Extremes Index, compiled by the National Oceanic and Atmospheric Administration (NOAA), shows a significant rise in extreme weather events since the 1970s, the most back-to-back years of extremes over the past decade since 1910, and all-time record-high levels clocked in 1998 and 2012.

“Some recent research has demonstrated linkages between objectively measured weather, or climate anomalies, and public concern or beliefs about climate change,” Cutler notes. “But the factors influencing perceptions of extreme or unusual weather events have received less attention.”

Indeed, there is a faction of the public that debates how much the climate is changing and which factors are responsible for such consequences as global warming.

Weather, on the other hand, is a different order of things: it is typically defined in the here and now or in the immediate future. It also is largely confined, because of its variability, to local or regional areas. Moreover, weather is something we usually experience directly.

Climate is a more abstract concept, typically defined as atmospheric conditions over a 30-year period.

When weather isn’t experiential, reports are relied upon to gauge extremes. This is when beliefs become more muddied.

“The patterns found in this research provide evidence that individuals experience extreme weather in the context of their social circumstances and thus perceive the impacts of extreme weather through the lens of cultural and social influences. In other words, it is not simply a matter of seeing to believe, but rather an emergent process of both seeing and believing — individuals experiencing extreme weather and interpreting the impacts against the backdrop of social and economic circumstances central to and surrounding their lives,” Cutler concludes.

Sophocles said, “What people believe prevails over the truth.” The consequences of disbelief come at a price in the context of extreme weather, however, as damage, injury, and death are often the results.

Too many times do we hear about people being unprepared for storms, ignoring officials’ warnings, failing to evacuate, or engaging in reckless behavior during weather extremes.

There is a need to draw a more complete picture of “weather prejudice,” as I’ll call it, in order to render more practical advice about preparing, surviving, and recovering from what is indisputable: extreme weather disasters to come.

Thomas M. Kostigen is the founder of and a New York Times bestselling author and journalist. He is the National Geographic author of “The Extreme Weather Survival Guide: Understand, Prepare, Survive, Recover” and the NG Kids book, “Extreme Weather: Surviving Tornadoes, Tsunamis, Hailstorms, Thundersnow, Hurricanes and More!” Follow him @weathersurvival, or email

What Concepts and Emotions Are (and Aren’t) (Knowledge Ecology)

August 1, 2015

Adam Robbert

Lisa Feldman Barrett has an interesting piece up in yesterday’s New York Times that I think is worth some attention here. Barrett is the director of the Interdisciplinary Affective Science Laboratory, where she studies the nature of emotional experience. Here is the key part of the article, describing her latest findings:

The Interdisciplinary Affective Science Laboratory (which I direct) collectively analyzed brain-imaging studies published from 1990 to 2011 that examined fear, sadness, anger, disgust and happiness. We divided the human brain virtually into tiny cubes, like 3-D pixels, and computed the probability that studies of each emotion found an increase in activation in each cube.

Overall, we found that no brain region was dedicated to any single emotion. We also found that every alleged “emotion” region of the brain increased its activity during nonemotional thoughts and perceptions as well . . .

Emotion words like “anger,” “happiness” and “fear” each name a population of diverse biological states that vary depending on the context. When you’re angry with your co-worker, sometimes your heart rate will increase, other times it will decrease and still other times it will stay the same. You might scowl, or you might smile as you plot your revenge. You might shout or be silent. Variation is the norm.
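The cube-by-cube analysis Barrett describes reduces, in outline, to counting: for each emotion, the probability assigned to a cube is simply the fraction of studies that reported increased activation there. The sketch below uses invented cube coordinates and study data purely to illustrate that computation, not the lab’s actual pipeline.

```python
# Hypothetical meta-analysis data: for each emotion, a list of studies,
# each study being the set of brain "cubes" (3-D pixel coordinates)
# where it reported increased activation. All numbers are invented.
studies = {
    "fear": [
        {(1, 2, 3), (4, 5, 6)},  # cubes activated in study 1
        {(1, 2, 3)},             # study 2
        {(7, 8, 9)},             # study 3
    ],
    "anger": [
        {(1, 2, 3)},
        {(4, 5, 6), (7, 8, 9)},
    ],
}

def activation_probability(study_list):
    """Fraction of studies reporting activation in each cube."""
    counts = {}
    for cubes in study_list:
        for cube in cubes:
            counts[cube] = counts.get(cube, 0) + 1
    n = len(study_list)
    return {cube: c / n for cube, c in counts.items()}

fear_map = activation_probability(studies["fear"])
print(fear_map)  # e.g. cube (1, 2, 3) appears in 2 of the 3 fear studies
```

A cube "dedicated" to fear would show a probability near 1.0 for fear and near 0.0 for everything else; the lab’s finding was that no cube behaves that way.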

This highly distributed, variable, and contextual description of emotions matches up quite well with what scientists have found to be true of conceptualization—namely, that it is a situated process drawn from a plurality of bodily forces. For instance, compare Barrett’s findings above to what I wrote about concepts in my paper on concepts and capacities from June (footnote references are in the paper):

In short, concepts are flexible and distributed modes of bodily organization grounded in modality-specific regions of the brain;[1] they comprise semantic knowledge embodied in perception and action;[2] and they underwrite the organization of sensory experience and guide action within an environment.[3] Concepts are tools for constructing in the mind new pathways of relationship and discrimination, for shaping the body, and for attuning it to contrast. Such pathways are recruited in an ecologically specific way as part of the dynamic bringing-to-apprehension of phenomena.

I think the parallel is clear enough, and we would do well to adopt this more ecological view of emotions and concepts into our thinking. The empirical data is giving us a strong argument for talking about the ecological basis of emotion and conceptuality, a basis that continues to grow stronger by the day.

Can the Bacteria in Your Gut Explain Your Mood? (New York Times)

Eighteen vials were rocking back and forth on a squeaky mechanical device the shape of a butcher scale, and Mark Lyte was beside himself with excitement. ‘‘We actually got some fresh yesterday — freshly frozen,’’ Lyte said to a lab technician. Each vial contained a tiny nugget of monkey feces that were collected at the Harlow primate lab near Madison, Wis., the day before and shipped to Lyte’s lab on the Texas Tech University Health Sciences Center campus in Abilene, Tex.

Lyte’s interest was not in the feces per se but in the hidden form of life they harbor. The digestive tube of a monkey, like that of all vertebrates, contains vast quantities of what biologists call gut microbiota. The genetic material of these trillions of microbes, as well as others living elsewhere in and on the body, is collectively known as the microbiome. Taken together, these bacteria can weigh as much as six pounds, and they make up a sort of organ whose functions have only begun to reveal themselves to science. Lyte has spent his career trying to prove that gut microbes communicate with the nervous system using some of the same neurochemicals that relay messages in the brain.

Inside a closet-size room at his lab that afternoon, Lyte hunched over to inspect the vials, whose samples had been spun down in a centrifuge to a radiant, golden broth. Lyte, 60, spoke fast and emphatically. ‘‘You wouldn’t believe what we’re extracting out of poop,’’ he told me. ‘‘We found that the guys here in the gut make neurochemicals. We didn’t know that. Now, if they make this stuff here, does it have an influence there? Guess what? We make the same stuff. Maybe all this communication has an influence on our behavior.’’

Since 2007, when scientists announced plans for a Human Microbiome Project to catalog the micro-organisms living in our body, the profound appreciation for the influence of such organisms has grown rapidly with each passing year. Bacteria in the gut produce vitamins and break down our food; their presence or absence has been linked to obesity, inflammatory bowel disease and the toxic side effects of prescription drugs. Biologists now believe that much of what makes us human depends on microbial activity. The two million unique bacterial genes found in each human microbiome can make the 23,000 genes in our cells seem paltry, almost negligible, by comparison. ‘‘It has enormous implications for the sense of self,’’ Tom Insel, the director of the National Institute of Mental Health, told me. ‘‘We are, at least from the standpoint of DNA, more microbial than human. That’s a phenomenal insight and one that we have to take seriously when we think about human development.’’

Given the extent to which bacteria are now understood to influence human physiology, it is hardly surprising that scientists have turned their attention to how bacteria might affect the brain. Micro-organisms in our gut secrete a profound number of chemicals, and researchers like Lyte have found that among those chemicals are the same substances used by our neurons to communicate and regulate mood, like dopamine, serotonin and gamma-aminobutyric acid (GABA). These, in turn, appear to play a role in intestinal disorders, which coincide with high levels of major depression and anxiety. Last year, for example, a group in Norway examined feces from 55 people and found certain bacteria were more likely to be associated with depressive patients.

At the time of my visit to Lyte’s lab, he was nearly six months into an experiment that he hoped would better establish how certain gut microbes influenced the brain, functioning, in effect, as psychiatric drugs. He was currently compiling a list of the psychoactive compounds found in the feces of infant monkeys. Once that was established, he planned to transfer the microbes found in one newborn monkey’s feces into another’s intestine, so that the recipient would end up with a completely new set of microbes — and, if all went as predicted, an altered course of neurodevelopment. The experiment reflected an intriguing hypothesis. Anxiety, depression and several pediatric disorders, including autism and hyperactivity, have been linked with gastrointestinal abnormalities. Microbial transplants were not invasive brain surgery, and that was the point: Changing a patient’s bacteria might be difficult but it still seemed more straightforward than altering his genes.

When Lyte began his work on the link between microbes and the brain three decades ago, it was dismissed as a curiosity. By contrast, last September, the National Institute of Mental Health awarded four grants worth up to $1 million each to spur new research on the gut microbiome’s role in mental disorders, affirming the legitimacy of a field that had long struggled to attract serious scientific credibility. Lyte and one of his longtime colleagues, Christopher Coe, at the Harlow primate lab, received one of the four. ‘‘What Mark proposed going back almost 25 years now has come to fruition,’’ Coe told me. ‘‘Now what we’re struggling to do is to figure out the logic of it.’’ It seems plausible, if not yet proved, that we might one day use microbes to diagnose neurodevelopmental disorders, treat mental illnesses and perhaps even fix them in the brain.

In 2011, a team of researchers at University College Cork, in Ireland, and McMaster University, in Ontario, published a study in Proceedings of the National Academy of Sciences that has become one of the best-known experiments linking bacteria in the gut to the brain. Laboratory mice were dropped into tall, cylindrical columns of water in what is known as a forced-swim test, which measures over six minutes how long the mice swim before they realize that they can neither touch the bottom nor climb out, and instead collapse into a forlorn float. Researchers use the amount of time a mouse floats as a way to measure what they call ‘‘behavioral despair.’’ (Antidepressant drugs, like Zoloft and Prozac, were initially tested using this forced-swim test.)

For several weeks, the team, led by John Cryan, the neuroscientist who designed the study, fed a small group of healthy rodents a broth infused with Lactobacillus rhamnosus, a common bacterium that is found in humans and also used to ferment milk into probiotic yogurt. Lactobacilli are one of the dominant organisms babies ingest as they pass through the birth canal. Recent studies have shown that mice stressed during pregnancy pass on lowered levels of the bacterium to their pups. This type of bacteria is known to release immense quantities of GABA; as an inhibitory neurotransmitter, GABA calms nervous activity, which explains why the most common anti-anxiety drugs, like Valium and Xanax, work by targeting GABA receptors.

Cryan found that the mice that had been fed the bacteria-laden broth kept swimming longer and spent less time in a state of immobilized woe. ‘‘They behaved as if they were on Prozac,’’ he said. ‘‘They were more chilled out and more relaxed.’’ The results suggested that the bacteria were somehow altering the neural chemistry of mice.

Until he joined his colleagues at Cork 10 years ago, Cryan thought about microbiology in terms of pathology: the neurological damage created by diseases like syphilis or H.I.V. ‘‘There are certain fields that just don’t seem to interact well,’’ he said. ‘‘Microbiology and neuroscience, as whole disciplines, don’t tend to have had much interaction, largely because the brain is somewhat protected.’’ He was referring to the fact that the brain is anatomically isolated, guarded by a blood-brain barrier that allows nutrients in but keeps out pathogens and inflammation, the immune system’s typical response to germs. Cryan’s study added to the growing evidence that signals from beneficial bacteria nonetheless find a way through the barrier. Somehow — though his 2011 paper could not pinpoint exactly how — micro-organisms in the gut tickle a sensory nerve ending in the fingerlike protrusion lining the intestine and carry that electrical impulse up the vagus nerve and into the deep-brain structures thought to be responsible for elemental emotions like anxiety. Soon after that, Cryan and a co-author, Ted Dinan, published a theory paper in Biological Psychiatry calling these potentially mind-altering microbes ‘‘psychobiotics.’’

It has long been known that much of our supply of neurochemicals — an estimated 50 percent of the dopamine, for example, and a vast majority of the serotonin — originates in the intestine, where these chemical signals regulate appetite, feelings of fullness and digestion. But only in recent years has mainstream psychiatric research given serious consideration to the role microbes might play in creating those chemicals. Lyte’s own interest in the question dates back to his time as a postdoctoral fellow at the University of Pittsburgh in 1985, when he found himself immersed in an emerging field with an unwieldy name: psychoneuroimmunology, or PNI, for short. The central theory, quite controversial at the time, suggested that stress worsened disease by suppressing our immune system.

By 1990, at a lab in Mankato, Minn., Lyte distilled the theory into three words, which he wrote on a chalkboard in his office: Stress->Immune->Disease. In the course of several experiments, he homed in on a paradox. When he dropped an intruder mouse in the cage of an animal that lived alone, the intruder ramped up its immune system — a boost, he suspected, intended to fight off germ-ridden bites or scratches. Surprisingly, though, this did not stop infections. It instead had the opposite effect: Stressed animals got sick. Lyte walked up to the board and scratched a line through the word ‘‘Immune.’’ Stress, he suspected, directly affected the bacterial bugs that caused infections.

To test how micro-organisms reacted to stress, he filled petri plates with a bovine-serum-based medium and laced the dishes with a strain of bacterium. In some, he dropped norepinephrine, a neurochemical that mammals produce when stressed. The next day, he snapped a Polaroid. The results were visible and obvious: The control plates were nearly barren, but those with the norepinephrine bloomed with bacteria that filigreed in frostlike patterns. Bacteria clearly responded to stress.

Then, to see if bacteria could induce stress, Lyte fed white mice a liquid solution of Campylobacter jejuni, a bacterium that can cause food poisoning in humans but generally doesn’t prompt an immune response in mice. To the trained eye, his treated mice were as healthy as the controls. But when he ran them through a plexiglass maze raised several feet above the lab floor, the bacteria-fed mice were less likely to venture out on the high, unprotected ledges of the maze. In human terms, they seemed anxious. Without the bacteria, they walked the narrow, elevated planks.

Credit: Illustration by Andrew Rae 

Each of these results was fascinating, but Lyte had a difficult time finding microbiology journals that would publish either. ‘‘It was so anathema to them,’’ he told me. When the mouse study finally appeared in the journal Physiology & Behavior in 1998, it garnered little attention. And yet as Stephen Collins, a gastroenterologist at McMaster University, told me, those first papers contained the seeds of an entire new field of research. ‘‘Mark showed, quite clearly, in elegant studies that are not often cited, that introducing a pathological bacterium into the gut will cause a change in behavior.’’

Lyte went on to show how stressful conditions for newborn cattle worsened deadly E. coli infections. In another experiment, he fed mice lean ground hamburger that appeared to improve memory and learning — a conceptual proof that by changing diet, he could change gut microbes and change behavior. After accumulating nearly a decade’s worth of evidence, in July 2008, he flew to Washington to present his research. He was a finalist for the National Institutes of Health’s Pioneer Award, a $2.5 million grant for so-called blue-sky biomedical research. Finally, it seemed, his time had come. When he got up to speak, Lyte described a dialogue between the bacterial organ and our central nervous system. At the two-minute mark, a prominent scientist in the audience did a spit take.

‘‘Dr. Lyte,’’ he later asked at a question-and-answer session, ‘‘if what you’re saying is right, then why is it when we give antibiotics to patients to kill bacteria, they are not running around crazy on the wards?’’

Lyte knew it was a dismissive question. And when he lost out on the grant, it confirmed to him that the scientific community was still unwilling to imagine that any part of our neural circuitry could be influenced by single-celled organisms. Lyte published his theory in Medical Hypotheses, a low-ranking journal that served as a forum for unconventional ideas. The response, predictably, was underwhelming. ‘‘I had people call me crazy,’’ he said.

But by 2011 — when he published a second theory paper in Bioessays, proposing that probiotic bacteria could be tailored to treat specific psychological diseases — the scientific community had become much more receptive to the idea. A Canadian team, led by Stephen Collins, had demonstrated that antibiotics could be linked to less cautious behavior in mice, and only a few months before Lyte, Sven Pettersson, a microbiologist at the Karolinska Institute in Stockholm, published a landmark paper in Proceedings of the National Academy of Sciences that showed that mice raised without microbes spent far more time running around outside than healthy mice in a control group; without the microbes, the mice showed less apparent anxiety and were more daring. In Ireland, Cryan published his forced-swim-test study on psychobiotics. There was now a groundswell of new research. In short order, an implausible idea had become a hypothesis in need of serious validation.

Late last year, Sarkis Mazmanian, a microbiologist at the California Institute of Technology, gave a presentation at the Society for Neuroscience, ‘‘Gut Microbes and the Brain: Paradigm Shift in Neuroscience.’’ Someone had inadvertently dropped a question mark from the end, so the speculation appeared to be a definitive statement of fact. But if anyone has a chance of delivering on that promise, it’s Mazmanian, whose research has moved beyond the basic neurochemicals to focus on a broader class of molecules called metabolites: small, equally druglike chemicals that are produced by micro-organisms. Using high-powered computational tools, he also hopes to move beyond the suggestive correlations that have typified psychobiotic research to date, and instead make decisive discoveries about the mechanisms by which microbes affect brain function.

Two years ago, Mazmanian published a study in the journal Cell with Elaine Hsiao, then a graduate student at his lab and now a neuroscientist at Caltech, that made a provocative link between a single molecule and behavior. Their research found that mice exhibiting abnormal communication and repetitive behaviors, like obsessively burying marbles, were mollified when they were given one of two strains of the bacterium Bacteroides fragilis.

The study added to a working hypothesis in the field that microbes don’t just affect the permeability of the barrier around the brain but also influence the intestinal lining, which normally prevents certain bacteria from leaking out and others from getting in. When the intestinal barrier was compromised in his model, normally ‘‘beneficial’’ bacteria and the toxins they produce seeped into the bloodstream and raised the possibility they could slip past the blood-brain barrier. As one of his colleagues, Michael Fischbach, a microbiologist at the University of California, San Francisco, said: ‘‘The scientific community has a way of remaining skeptical until every last arrow has been drawn, until the entire picture is colored in. Other scientists drew the pencil outlines, and Sarkis is filling in a lot of the color.’’

Mazmanian knew the results offered only a provisional explanation for why restrictive diets and antibacterial treatments seemed to help some children with autism: Altering the microbial composition might be changing the permeability of the intestine. ‘‘The larger concept is, and this is pure speculation: Is a disease like autism really a disease of the brain or maybe a disease of the gut or some other aspect of physiology?’’ Mazmanian said. For any disease in which such a link could be proved, he saw a future in drugs derived from these small molecules found inside microbes. (A company he co-founded, Symbiotix Biotherapies, is developing a complex sugar called PSA, which is associated with Bacteroides fragilis, into treatments for intestinal disease and multiple sclerosis.) In his view, the prescriptive solutions probably involve more than increasing our exposure to environmental microbes in soil, dogs or even fermented foods; he believed there were wholesale failures in the way we shared our microbes and inoculated children with these bacteria. So far, though, the only conclusion he could draw was that disorders once thought to be conditions of the brain might be symptoms of microbial disruptions, and it was the careful defining of these disruptions that promised to be helpful in the coming decades.

The list of potential treatments incubating in labs around the world is startling. Several international groups have found that psychobiotics had subtle yet perceptible effects in healthy volunteers in a battery of brain-scanning and psychological tests. Another team in Arizona recently finished an open trial on fecal transplants in children with autism. (Simultaneously, at least two offshore clinics, in Australia and England, began offering fecal microbiota treatments to treat neurological disorders, like multiple sclerosis.) Mazmanian, however, cautions that this research is still in its infancy. ‘‘We’ve reached the stage where there’s a lot of, you know, ‘The microbiome is the cure for everything,’ ’’ he said. ‘‘I have a vested interest if it does. But I’d be shocked if it did.’’

Lyte issues the same caveat. ‘‘People are obviously desperate for solutions,’’ Lyte said when I visited him in Abilene. (He has since moved to Iowa State’s College of Veterinary Medicine.) ‘‘My main fear is the hype is running ahead of the science.’’ He knew that parents emailing him for answers meant they had exhausted every option offered by modern medicine. ‘‘It’s the Wild West out there,’’ he said. ‘‘You can go online and buy any amount of probiotics for any number of conditions now, and my paper is one of those cited. I never said go out and take probiotics.’’ He added, ‘‘We really need a lot more research done before we actually have people trying therapies out.’’

If the idea of psychobiotics had now, in some ways, eclipsed him, it was nevertheless a curious kind of affirmation, even redemption: an old-school microbiologist thrust into the midst of one of the most promising aspects of neuroscience. At the moment, he had a rough map in his head and a freezer full of monkey fecal samples that might translate, somehow, into telling differences between gregarious and shy monkeys later in life. I asked him if what amounted to a personality transplant still sounded a bit far-fetched. He seemed no closer to unlocking exactly what brain functions could be traced to the same organ that produced feces. ‘‘If you transfer the microbiota from one animal to another, you can transfer the behavior,’’ Lyte said. ‘‘What we’re trying to understand are the mechanisms by which the microbiota can influence the brain and development. If you believe that, are you now out on the precipice? The answer is yes. Do I think it’s the future? I think it’s a long way away.’’

Brain Cells Break Their Own DNA to Allow Memories to Form (IFL Science)

June 22, 2015 | by Justine Alford

photo credit: Courtesy of MIT Researchers 

Given the fundamental importance of our DNA, it is logical to assume that damage to it is undesirable and spells bad news; after all, we know that cancer can be caused by mutations that arise from such injury. But a surprising new study is turning that idea on its head, with the discovery that brain cells actually break their own DNA to enable us to learn and form memories.

While that may sound counterintuitive, it turns out that the damage is necessary to allow the expression of a set of genes, called early-response genes, which regulate various processes that are critical in the creation of long-lasting memories. These lesions are rectified pronto by repair systems, but interestingly, it seems that this ability deteriorates during aging, leading to a buildup of damage that could ultimately result in the degeneration of our brain cells.

This idea is supported by earlier work conducted by the same group, headed by Li-Huei Tsai, at the Massachusetts Institute of Technology (MIT) that discovered that the brains of mice engineered to develop a model of Alzheimer’s disease possessed a significant amount of DNA breaks, even before symptoms appeared. These lesions, which affected both strands of DNA, were observed in a region critical to learning and memory: the hippocampus.

To find out more about the possible consequences of such damage, the team grew neurons in a dish and exposed them to an agent that causes these so-called double strand breaks (DSBs), and then they monitored the gene expression levels. As described in Cell, they found that while the vast majority of genes that were affected by these breaks showed decreased expression, a small subset actually displayed increased expression levels. Importantly, these genes were involved in the regulation of neuronal activity, and included the early-response genes.

Since the early-response genes are known to be rapidly expressed following neuronal activity, the team was keen to find out whether normal neuronal stimulation could also be inducing DNA breaks. The scientists therefore applied a substance to the cells that is known to strengthen the tiny gap between neurons across which information flows – the synapse – mimicking what happens when an organism is exposed to a new experience.

“Sure enough, we found that the treatment very rapidly increased the expression of those early response genes, but it also caused DNA double strand breaks,” Tsai said in a statement.

So what is the connection between these breaks and the apparent boost in early-response gene expression? After using computers to scrutinize the DNA sequences neighboring these genes, the researchers found that they were enriched with a pattern targeted by an architectural protein that, upon binding, distorts the DNA strands by introducing kinks. By preventing crucial interactions between distant DNA regions, these bends therefore act as a barrier to gene expression. The breaks, however, resolve these constraints, allowing expression to ensue.

These findings could have important implications because earlier work has demonstrated that aging is associated with a decline in the expression of genes involved in the processes of learning and memory formation. It therefore seems likely that the DNA repair system deteriorates with age, but at this stage it is unclear how these changes occur, so the researchers plan to design further studies to find out more.

Problem: Your brain (Medium)

I will be talking mainly about development for the web.

Ilya Dorman, Feb 15, 2015

Our puny brain can handle a very limited amount of logic at a time. While programmers proclaim logic as their domain, they are only sometimes, and only slightly, better at managing complexity than the rest of us mortals. The more logic our app has, the harder it is to change it or introduce new people to it.

The most common mistake programmers make is assuming they write code for a machine to read. While technically that is true, this mindset leads to the hell that is other people’s code.

I have worked in several start-up companies, some of them even considered “lean.” In each, it took me between a few weeks and a few months to fully understand their code-base, and I have about six years of experience with JavaScript. This does not seem reasonable to me at all.

If the code is not easy to read, its structure is already a monument—you can change small things, but major changes—the kind every start-up undergoes on an almost monthly basis—are as fun as a root canal. Once the code reaches a state that, for a proficient programmer, is harder to read than this article, doom and suffering are upon you.

Why does the code become unreadable? Let’s compare code to plain text: the longer a sentence is, the easier it is for our mind to forget the beginning of it, and once we reach the end, we have forgotten how it started and lost the meaning of the whole sentence. You had to read the previous sentence twice because it was too long to get in one grasp? Exactly! Same with code. Worse, actually—the logic of code can be way more complex than any sentence from a book or a blog post, and each programmer has his own logic, which can be total gibberish to another. Not to mention that we also need to remember the logic. Sometimes we come back to it the same day and sometimes after two months. Nobody remembers anything about their code after not looking at it for two months.

To make code readable to other humans we rely on four things:

1. Conventions

Conventions are good, but they are very limited: enforce them too little and the programmer becomes coupled to the code—no one will ever understand what they meant once they are gone. Enforce too much and you will have hour-long debates about every space and colon (true story.) The “habitable zone” is very narrow and easy to miss.


2. Comments

They are probably the most helpful, if done right. Unfortunately many programmers write their comments in the same spirit they write their code—very idiosyncratic. I do not belong to the school claiming good code needs no comments, but even beautifully commented code can still be extremely complicated.

3. “Other people know this programming language as well as I do, so they must understand my writings.”

Well… This is JavaScript:


4. Tests

Tests are a devil in disguise. “How do we make sure our code is good and readable? We write more code!” I know many of you might quit this post right here, but bear with me for a few more lines: regardless of their benefit, tests are another layer of logic. They are more code to be read and understood. Tests try to solve this exact problem: your code is too complicated to calculate its result in your brain? So you say “well, this is what should happen in the end.” And when it doesn’t, you go digging for the problem. Your code should be simple enough that you can read a function or a line and understand what the result of running it should be.
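The kind of code the author is describing, simple enough that reading it replaces running it, might look like this (a hypothetical example, not from the article):

```javascript
// A function simple enough that reading it *is* the test: you can compute
// the result in your head. (Hypothetical example for illustration.)
function totalPrice(items) {
  return items.reduce((sum, item) => sum + item.price, 0);
}
```

Reading that one line, you already know that a 2-dollar item plus a 3-dollar item totals 5; no test harness is needed to trust it.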

Your life as a programmer could be so much easier!

Solution: Radical Minimalism

I will break down this approach into practical points, but the main idea is: use LESS logic.

  • Cut 80% of your product’s features

Yes! Just like that. Simplicity, first of all, comes from the product. Make it easy for people to understand and use. Make it do one thing well, and only then add more (if there is still a need).

  • Use nothing but what you absolutely must

Do not include a single line of code (especially from libraries) unless you are 100% sure you will use it and that it is the simplest, most straightforward solution available. Building a simple chat app and using Angular.js because it’s nice with the two-way binding? You deserve those hours and days of debugging and debating about services vs. providers.

Side note: The JavaScript browser API is event-driven; it is made to respond when stuff (usually user input) happens. This means that events change data. Many new frameworks (Angular, Meteor) reverse this direction and make data changes trigger events. If your app is simple, you might live happily with the new mysterious layer, but if not, you get a whole new layer of complexity that you need to understand, and your life will get exponentially more miserable. Unless your app constantly manages large amounts of data, avoid those frameworks.
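The two directions described in the side note can be sketched in plain JavaScript. All names below are hypothetical, and the reactive wrapper is only a toy stand-in for what frameworks such as Angular or Meteor do internally:

```javascript
// Event-driven (the browser's native model): an event handler changes data.
const state = { messages: [] };
function onMessageReceived(text) {
  state.messages.push(text); // event -> data
}
onMessageReceived("hello");

// Framework-style reactivity: data changes trigger events. Here a Proxy
// fires a callback whenever a property is written.
function makeReactive(obj, onChange) {
  return new Proxy(obj, {
    set(target, key, value) {
      target[key] = value; // data -> event
      onChange(key, value);
      return true;
    },
  });
}

const log = [];
const model = makeReactive({ count: 0 }, (key, value) => log.push(`${key}=${value}`));
model.count = 1; // the write itself triggers the change callback
```

The first direction keeps the data flow visible at the call site; the second hides it behind the wrapper, which is exactly the extra layer the author warns about.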

  • Use simplest logic possible

Say you need to show different HTML on different occasions. You can use client-side routing with controllers and data passed to each controller that renders the HTML from a template. Or you can just use static HTML pages with normal browser navigation, and update the HTML manually. Use the second.
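To make the contrast concrete, here is the client-side-routing option in miniature (the routes and markup are hypothetical). The static-pages option needs none of this JavaScript; two .html files and ordinary links do the same job:

```javascript
// A minimal client-side route table: paths mapped to render functions.
const routes = {
  "/": () => "<h1>Home</h1>",
  "/about": () => "<h1>About</h1>",
};

// Look up the path and render its page, with a fallback for unknown paths.
function render(path) {
  const page = routes[path];
  return page ? page() : "<h1>Not found</h1>";
}
```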

  • Make short JavaScript files

Limit the length of your JS files to a single editor page, and make each file do one thing. Can’t cram all your glorious logic into small modules? Good, that means you should have less of it, so that other humans will understand your code in reasonable time.
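A sketch of what a single-purpose file might contain (the function and its job are hypothetical):

```javascript
// One file, one job: formatting a chat timestamp as HH:MM.
// Small enough to read top to bottom in seconds.
function formatTimestamp(date) {
  const hours = String(date.getHours()).padStart(2, "0");
  const minutes = String(date.getMinutes()).padStart(2, "0");
  return `${hours}:${minutes}`;
}

// In a real project the file would export exactly this one function:
// module.exports = { formatTimestamp };
```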

  • Avoid pre-compilers and task-runners like AIDS

The more layers there are between what you write and what you see, the more logic your mind needs to remember. You might think grunt or gulp help you simplify stuff, but then you have 30 tasks, and you need to remember what each does to your code, how to use them, how to update them, and how to teach them to any new coder. Not to mention compiling.

Side note #1: CSS pre-compilers are OK because they have very little logic but they help a lot in terms of readable structure, compared to plain CSS. I barely used HTML pre-compilers so you’ll have to decide for yourself.

Side note #2: Task-runners could save you time, so if you do use them, do it wisely keeping the minimalistic mindset.

  • Use JavaScript everywhere

This one is quite specific, and I am not absolutely sure about it, but having the same language in client and server can simplify the data management between them.

  • Write more human code

Give your non-trivial variables (and functions) descriptive names. Make lines shorter, but only if it does not compromise readability.
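A before-and-after sketch of the same hypothetical function, first written tersely, then with descriptive names (the property name t is kept in both versions so the logic stays identical):

```javascript
// Terse version: correct, but the reader must decode f, a, b and x.t.
function f(a, b) {
  return a.filter((x) => x.t > b);
}

// Human version: the same logic with self-describing names.
function messagesNewerThan(messages, cutoffTime) {
  return messages.filter((message) => message.t > cutoffTime);
}
```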

Treat your code like poetry and take it to the edge of the bare minimum.

Why Physicists Are Saying Consciousness Is A State Of Matter, Like a Solid, A Liquid Or A Gas (The Physics arXiv Blog)

A new way of thinking about consciousness is sweeping through science like wildfire. Now physicists are using it to formulate the problem of consciousness in concrete mathematical terms for the first time

The Physics arXiv Blog

There’s a quiet revolution underway in theoretical physics. For as long as the discipline has existed, physicists have been reluctant to discuss consciousness, considering it a topic for quacks and charlatans. Indeed, the mere mention of the ‘c’ word could ruin careers.

That’s finally beginning to change thanks to a fundamentally new way of thinking about consciousness that is spreading like wildfire through the theoretical physics community. And while the problem of consciousness is far from being solved, it is finally being formulated mathematically as a set of problems that researchers can understand, explore and discuss.

Today, Max Tegmark, a theoretical physicist at the Massachusetts Institute of Technology in Cambridge, sets out the fundamental problems that this new way of thinking raises. He shows how these problems can be formulated in terms of quantum mechanics and information theory. And he explains how thinking about consciousness in this way leads to precise questions about the nature of reality that the scientific process of experiment might help to tease apart.

Tegmark’s approach is to think of consciousness as a state of matter, like a solid, a liquid or a gas. “I conjecture that consciousness can be understood as yet another state of matter. Just as there are many types of liquids, there are many types of consciousness,” he says.

He goes on to show how the particular properties of consciousness might arise from the physical laws that govern our universe. And he explains how these properties allow physicists to reason about the conditions under which consciousness arises and how we might exploit it to better understand why the world around us appears as it does.

Interestingly, the new approach to consciousness has come from outside the physics community, principally from neuroscientists such as Giulio Tononi at the University of Wisconsin in Madison.

In 2008, Tononi proposed that a system demonstrating consciousness must have two specific traits. First, the system must be able to store and process large amounts of information. In other words consciousness is essentially a phenomenon of information.

And second, this information must be integrated in a unified whole so that it is impossible to divide into independent parts. That reflects the experience that each instance of consciousness is a unified whole that cannot be decomposed into separate components.

Both of these traits can be specified mathematically allowing physicists like Tegmark to reason about them for the first time. He begins by outlining the basic properties that a conscious system must have.

Given that it is a phenomenon of information, a conscious system must be able to store information in a memory and retrieve it efficiently.

It must also be able to process this data, like a computer, but one that is much more flexible and powerful than the silicon-based devices we are familiar with.

Tegmark borrows the term computronium to describe matter that can do this and cites other work showing that today’s computers underperform the theoretical limits of computing by some 38 orders of magnitude.

Clearly, there is more than enough headroom to allow for the performance of conscious systems.

Next, Tegmark discusses perceptronium, defined as the most general substance that feels subjectively self-aware. This substance should not only be able to store and process information but in a way that forms a unified, indivisible whole. That also requires a certain amount of independence in which the information dynamics is determined from within rather than externally.

Finally, Tegmark uses this new way of thinking about consciousness as a lens through which to study one of the fundamental problems of quantum mechanics known as the quantum factorisation problem.

This arises because quantum mechanics describes the entire universe using three mathematical entities: an object known as a Hamiltonian that describes the total energy of the system; a density matrix that describes the relationship between all the quantum states in the system; and Schrödinger's equation, which describes how these things change with time.

The problem is that when the entire universe is described in these terms, there are an infinite number of mathematical solutions that include all possible quantum mechanical outcomes and many other even more exotic possibilities.

So the problem is why we perceive the universe as the semi-classical, three dimensional world that is so familiar. When we look at a glass of iced water, we perceive the liquid and the solid ice cubes as independent things even though they are intimately linked as part of the same system. How does this happen? Out of all possible outcomes, why do we perceive this solution?

Tegmark does not have an answer. But what’s fascinating about his approach is that it is formulated using the language of quantum mechanics in a way that allows detailed scientific reasoning. And as a result it throws up all kinds of new problems that physicists will want to dissect in more detail.

Take for example, the idea that the information in a conscious system must be unified. That means the system must contain error-correcting codes that allow any subset of up to half the information to be reconstructed from the rest.

Tegmark points out that any information stored in a special network known as a Hopfield neural net automatically has this error-correcting facility. However, he calculates that a Hopfield net about the size of the human brain, with 10^11 neurons, can only store 37 bits of integrated information.
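For readers unfamiliar with Hopfield nets, here is a toy sketch of the error-correcting recall the argument relies on: a stored pattern is recovered even from a corrupted cue. This illustrates the mechanism only, not Tegmark's 37-bit calculation.

```javascript
// Minimal Hopfield network: store bipolar (+1/-1) patterns in a Hebbian
// weight matrix, then recover them from noisy cues by repeated updates.
function trainHopfield(patterns) {
  const n = patterns[0].length;
  const w = Array.from({ length: n }, () => new Array(n).fill(0));
  for (const p of patterns) {
    for (let i = 0; i < n; i++) {
      for (let j = 0; j < n; j++) {
        if (i !== j) w[i][j] += p[i] * p[j]; // outer product, zero diagonal
      }
    }
  }
  return w;
}

function recall(w, state, steps = 5) {
  let s = state.slice();
  for (let t = 0; t < steps; t++) {
    // Synchronous update: each neuron takes the sign of its weighted input.
    s = s.map((_, i) =>
      w[i].reduce((sum, wij, j) => sum + wij * s[j], 0) >= 0 ? 1 : -1
    );
  }
  return s;
}

const pattern = [1, -1, 1, 1, -1, -1, 1, -1];
const w = trainHopfield([pattern]);
const noisy = pattern.slice();
noisy[0] = -noisy[0]; // corrupt one "neuron"
console.log(recall(w, noisy)); // recovers the stored pattern
```

The net pulls the corrupted state back to the nearest stored pattern, which is exactly the "reconstruct a subset from the rest" property the integration argument needs.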

“This leaves us with an integration paradox: why does the information content of our conscious experience appear to be vastly larger than 37 bits?” asks Tegmark.

That’s a question that many scientists might end up pondering in detail. For Tegmark, this paradox suggests that his mathematical formulation of consciousness is missing a vital ingredient. “This strongly implies that the integration principle must be supplemented by at least one additional principle,” he says. Suggestions please in the comments section!

And yet the power of this approach is in the assumption that consciousness does not lie beyond our ken; that there is no “secret sauce” without which it cannot be tamed.

At the beginning of the 20th century, a group of young physicists embarked on a quest to explain a few strange but seemingly small anomalies in our understanding of the universe. In deriving the new theories of relativity and quantum mechanics, they ended up changing the way we comprehend the cosmos. These physicists, at least some of them, are now household names.

Could it be that a similar revolution is currently underway at the beginning of the 21st century?

Ref: Consciousness as a State of Matter

Direct brain interface between humans (Science Daily)

Date: November 5, 2014

Source: University of Washington

Summary: Researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

In this photo, UW students Darby Losey, left, and Jose Ceballos are positioned in two different buildings on campus as they would be during a brain-to-brain interface demonstration. The sender, left, thinks about firing a cannon at various points throughout a computer game. That signal is sent over the Web directly to the brain of the receiver, right, whose hand hits a touchpad to fire the cannon. Photo: Mary Levin, U of Wash. Credit: Image courtesy of University of Washington

Sometimes, words just complicate things. What if our brains could communicate directly with each other, bypassing the need for language?

University of Washington researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

At the time of the first experiment in August 2013, the UW team was the first to demonstrate two human brains communicating in this way. The researchers then tested their brain-to-brain interface in a more comprehensive study, published Nov. 5 in the journal PLOS ONE.

“The new study brings our brain-to-brain interfacing paradigm from an initial demonstration to something that is closer to a deliverable technology,” said co-author Andrea Stocco, a research assistant professor of psychology and a researcher at UW’s Institute for Learning & Brain Sciences. “Now we have replicated our methods and know that they can work reliably with walk-in participants.”

Collaborator Rajesh Rao, a UW associate professor of computer science and engineering, is the lead author on this work.

The research team combined two kinds of noninvasive instruments and fine-tuned software to connect two human brains in real time. The process is fairly straightforward. One participant is hooked to an electroencephalography machine that reads brain activity and sends electrical pulses via the Web to the second participant, who is wearing a swim cap with a transcranial magnetic stimulation coil placed near the part of the brain that controls hand movements.

Using this setup, one person can send a command to move the hand of the other by simply thinking about that hand movement.
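As a rough illustration of the sender-side decision logic, here is a hypothetical sketch. The function name, threshold value, and normalized band-power input are all invented for the example; the real pipeline runs on EEG hardware and delivers TMS pulses to the receiver.

```javascript
// Hypothetical sender-side logic: when power in the motor-imagery
// frequency band crosses a threshold, emit a "fire" command that would
// be sent over the network to trigger the receiver's TMS coil.
const FIRE_THRESHOLD = 0.7; // normalized band power (assumed value)

function decodeCommand(bandPower) {
  // bandPower: normalized 0..1 power in the sensorimotor band,
  // rising when the sender imagines moving their hand.
  return bandPower >= FIRE_THRESHOLD ? "fire" : "idle";
}

console.log(decodeCommand(0.9)); // "fire"
console.log(decodeCommand(0.3)); // "idle"
```

The real system's signal processing is of course far more involved; the point is only that the transmitted payload reduces to a simple discrete command.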

The UW study involved three pairs of participants. Each pair included a sender and a receiver with different roles and constraints. They sat in separate buildings on campus about a half mile apart and were unable to interact with each other in any way — except for the link between their brains.

Each sender was in front of a computer game in which he or she had to defend a city by firing a cannon and intercepting rockets launched by a pirate ship. But because the senders could not physically interact with the game, the only way they could defend the city was by thinking about moving their hand to fire the cannon.

Across campus, each receiver sat wearing headphones in a dark room — with no ability to see the computer game — with the right hand positioned over the only touchpad that could actually fire the cannon. If the brain-to-brain interface was successful, the receiver’s hand would twitch, pressing the touchpad and firing the cannon that was displayed on the sender’s computer screen across campus.

Researchers found that accuracy varied among the pairs, ranging from 25 to 83 percent. Misses mostly were due to a sender failing to accurately execute the thought to send the “fire” command. The researchers also were able to quantify the exact amount of information that was transferred between the two brains.
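The paper's own information-transfer analysis may differ, but a textbook way to turn an accuracy figure into bits per trial for a binary command channel ("fire" vs. "no fire") is the capacity of a binary symmetric channel at the observed error rate. A hedged sketch:

```javascript
// Shannon entropy of a binary variable with probability p.
function binaryEntropy(p) {
  if (p === 0 || p === 1) return 0;
  return -p * Math.log2(p) - (1 - p) * Math.log2(1 - p);
}

// Capacity of a binary symmetric channel whose error rate is 1 - accuracy.
function bitsPerTrial(accuracy) {
  return 1 - binaryEntropy(1 - accuracy);
}

console.log(bitsPerTrial(0.83).toFixed(3)); // "0.342" -- the best pair
console.log(bitsPerTrial(0.25).toFixed(3)); // the worst pair
```

Even the best-performing pair, on this estimate, transfers only about a third of a bit per trial, which puts the "split second" hand twitches in perspective.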

Another research team from the company Starlab in Barcelona, Spain, recently published results in the same journal showing direct communication between two human brains, but that study only tested one sender brain instead of different pairs of study participants and was conducted offline instead of in real time over the Web.

Now, with a new $1 million grant from the W.M. Keck Foundation, the UW research team is taking the work a step further in an attempt to decode and transmit more complex brain processes.

With the new funding, the research team will expand the types of information that can be transferred from brain to brain, including more complex visual and psychological phenomena such as concepts, thoughts and rules.

They’re also exploring how to influence brain waves that correspond with alertness or sleepiness. Eventually, for example, the brain of a sleepy airplane pilot dozing off at the controls could stimulate the copilot’s brain to become more alert.

The project could also eventually lead to “brain tutoring,” in which knowledge is transferred directly from the brain of a teacher to a student.

“Imagine someone who’s a brilliant scientist but not a brilliant teacher. Complex knowledge is hard to explain — we’re limited by language,” said co-author Chantel Prat, a faculty member at the Institute for Learning & Brain Sciences and a UW assistant professor of psychology.

Other UW co-authors are Joseph Wu of computer science and engineering; Devapratim Sarma and Tiffany Youngquist of bioengineering; and Matthew Bryan, formerly of the UW.

The research published in PLOS ONE was initially funded by the U.S. Army Research Office and the UW, with additional support from the Keck Foundation.

Journal Reference:

  1. Rajesh P. N. Rao, Andrea Stocco, Matthew Bryan, Devapratim Sarma, Tiffany M. Youngquist, Joseph Wu, Chantel S. Prat. A Direct Brain-to-Brain Interface in Humans. PLoS ONE, 2014; 9 (11): e111332 DOI: 10.1371/journal.pone.0111332

Denying problems when we don’t like the political solutions (Duke University)


Steve Hartsoe

Duke study sheds light on why conservatives, liberals disagree so vehemently

DURHAM, N.C. — There may be a scientific answer for why conservatives and liberals disagree so vehemently over the existence of issues like climate change and specific types of crime.

A new study from Duke University finds that people will evaluate scientific evidence based on whether they view its policy implications as politically desirable. If they don’t, then they tend to deny the problem even exists.

“Logically, the proposed solution to a problem, such as an increase in government regulation or an extension of the free market, should not influence one’s belief in the problem. However, we find it does,” said co-author Troy Campbell, a Ph.D. candidate at Duke’s Fuqua School of Business. “The cure can be more immediately threatening than the problem.”

The study, “Solution Aversion: On the Relation Between Ideology and Motivated Disbelief,” appears in the November issue of the Journal of Personality and Social Psychology (viewable at

The researchers conducted three experiments (with samples ranging from 120 to 188 participants) on three different issues — climate change, air pollution that harms lungs, and crime.

“The goal was to test, in a scientifically controlled manner, the question: Does the desirability of a solution affect beliefs in the existence of the associated problem? In other words, does what we call ‘solution aversion’ exist?” Campbell said.

“We found the answer is yes. And we found it occurs in response to some of the most common solutions for popularly discussed problems.”

For climate change, the researchers conducted an experiment to examine why more Republicans than Democrats seem to deny its existence, despite strong scientific evidence that supports it.

One explanation, they found, may have more to do with conservatives’ general opposition to the most popular solution — increasing government regulation — than with any difference in fear of the climate change problem itself, as some have proposed.

Participants in the experiment, including both self-identified Republicans and Democrats, read a statement asserting that global temperatures will rise 3.2 degrees in the 21st century. They were then asked to evaluate a proposed policy solution to address the warming.

When the policy solution emphasized a tax on carbon emissions or some other form of government regulation, which is generally opposed by Republican ideology, only 22 percent of Republicans said they believed the temperatures would rise at least as much as indicated by the scientific statement they read.

But when the proposed policy solution emphasized the free market, such as with innovative green technology, 55 percent of Republicans agreed with the scientific statement.

For Democrats, the same experiment recorded no difference in their belief, regardless of the proposed solution to climate change.

“Recognizing this effect is helpful because it allows researchers to predict not just what problems people will deny, but who will likely deny each problem,” said co-author Aaron Kay, an associate professor at Fuqua. “The more threatening a solution is to a person, the more likely that person is to deny the problem.”

The researchers found liberal-leaning individuals exhibited a similar aversion to solutions they viewed as politically undesirable in an experiment involving violent home break-ins. When the proposed solution called for looser versus tighter gun-control laws, those with more liberal gun-control ideologies were more likely to downplay the frequency of violent home break-ins.

“We should not just view some people or group as anti-science, anti-fact or hyper-scared of any problems,” Kay said. “Instead, we should understand that certain problems have particular solutions that threaten some people and groups more than others. When we realize this, we understand those who deny the problem more and we improve our ability to better communicate with them.”

Campbell added that solution aversion can help explain why political divides become so deep and intractable.

“We argue that the political divide over many issues is just that, it’s political,” Campbell said. “These divides are not explained by just one party being more anti-science, but the fact that in general people deny facts that threaten their ideologies, left, right or center.”

The researchers noted there are additional factors that can influence how people see the policy implications of science. Additional research using larger samples and more specific methods would provide an even clearer picture, they said.


The study was funded by The Fuqua School of Business.

CITATION: Troy Campbell, Aaron Kay, Duke University (2014). “Solution Aversion: On the Relation Between Ideology and Motivated Disbelief.” Journal of Personality and Social Psychology, 107(5), 809-824.

How learning to talk is in the genes (Science Daily)

Date: September 16, 2014

Source: University of Bristol

Summary: Researchers have found evidence that genetic factors may contribute to the development of language during infancy. Scientists discovered a significant link between genetic changes near the ROBO2 gene and the number of words spoken by children in the early stages of language development.

Researchers have found evidence that genetic factors may contribute to the development of language during infancy. Credit: © witthaya / Fotolia

Researchers have found evidence that genetic factors may contribute to the development of language during infancy.

Scientists from the Medical Research Council (MRC) Integrative Epidemiology Unit at the University of Bristol worked with colleagues around the world to discover a significant link between genetic changes near the ROBO2 gene and the number of words spoken by children in the early stages of language development.

Children typically produce their first words at about 10 to 15 months of age, and vocabulary expands as we grow: from around 50 words at 15 to 18 months, to 200 words at 18 to 30 months, to 14,000 words at six years old, and to over 50,000 words by the time we leave secondary school.

The researchers found the genetic link during the ages of 15 to 18 months when toddlers typically communicate with single words only before their linguistic skills advance to two-word combinations and more complex grammatical structures.

The results, published in Nature Communications today [16 Sept], shed further light on a specific genetic region on chromosome 3, which has been previously implicated in dyslexia and speech-related disorders.

The ROBO2 gene contains the instructions for making the ROBO2 protein. This protein directs chemicals in brain cells and other neuronal cell formations that may help infants to develop language but also to produce sounds.

The ROBO2 protein also closely interacts with other ROBO proteins that have previously been linked to problems with reading and the storage of speech sounds.

Dr Beate St Pourcain, who jointly led the research with Professor Davey Smith at the MRC Integrative Epidemiology Unit, said: “This research helps us to better understand the genetic factors which may be involved in the early language development in healthy children, particularly at a time when children speak with single words only, and strengthens the link between ROBO proteins and a variety of linguistic skills in humans.”

Dr Claire Haworth, one of the lead authors, based at the University of Warwick, commented: “In this study we found that results using DNA confirm those we get from twin studies about the importance of genetic influences for language development. This is good news as it means that current DNA-based investigations can be used to detect most of the genetic factors that contribute to these early language skills.”

The study was carried out by an international team of scientists from the EArly Genetics and Lifecourse Epidemiology Consortium (EAGLE) and involved data from over 10,000 children.

Journal Reference:
  1. Beate St Pourcain, Rolieke A.M. Cents, Andrew J.O. Whitehouse, Claire M.A. Haworth, Oliver S.P. Davis, Paul F. O’Reilly, Susan Roulstone, Yvonne Wren, Qi W. Ang, Fleur P. Velders, David M. Evans, John P. Kemp, Nicole M. Warrington, Laura Miller, Nicholas J. Timpson, Susan M. Ring, Frank C. Verhulst, Albert Hofman, Fernando Rivadeneira, Emma L. Meaburn, Thomas S. Price, Philip S. Dale, Demetris Pillas, Anneli Yliherva, Alina Rodriguez, Jean Golding, Vincent W.V. Jaddoe, Marjo-Riitta Jarvelin, Robert Plomin, Craig E. Pennell, Henning Tiemeier, George Davey Smith. Common variation near ROBO2 is associated with expressive vocabulary in infancy. Nature Communications, 2014; 5: 4831 DOI:10.1038/ncomms5831

Number-crunching could lead to unethical choices, says new study (Science Daily)

Date: September 15, 2014

Source: University of Toronto, Rotman School of Management

Summary: Calculating the pros and cons of a potential decision is a way of decision-making. But repeated engagement with numbers-focused calculations, especially those involving money, can have unintended negative consequences.

Calculating the pros and cons of a potential decision is a way of decision-making. But repeated engagement with numbers-focused calculations, especially those involving money, can have unintended negative consequences, including social and moral transgressions, says a new study co-authored by a professor at the University of Toronto's Rotman School of Management.

Based on several experiments, researchers concluded that people in a “calculative mindset” as a result of number-crunching are more likely to analyze non-numerical problems mathematically and not take into account social, moral or interpersonal factors.

“Performing calculations, whether related to money or not, seemed to encourage people to engage in unethical behaviors to better themselves,” says Chen-Bo Zhong, an associate professor of organizational behavior and human resource management at the Rotman School, who co-authored the study with Long Wang of City University of Hong Kong and J. Keith Murnighan from Northwestern University’s Kellogg School of Management.

Participants in a set of experiments displayed significantly more selfish behavior in games where they could opt to promote their self-interest over a stranger’s after exposure to a lesson on a calculative economics concept. Participants who were instead given a history lesson on the industrial revolution were less likely to behave selfishly in the subsequent games. A similar but lesser effect was found when participants were first asked to solve math problems instead of verbal problems before playing the games. Furthermore, the effect could potentially be reduced by making non-numerical values more prominent. The study showed less self-interested behavior when participants were shown pictures of families after calculations.

The results may provide further insight into why economics students have shown more self-interested behavior in previous studies examining whether business or economics education contributes to unethical corporate activity, the researchers wrote.

The study was published in Organizational Behavior and Human Decision Processes.

Journal Reference:

  1. Long Wang, Chen-Bo Zhong, J. Keith Murnighan. The social and ethical consequences of a calculative mindset. Organizational Behavior and Human Decision Processes, 2014; 125 (1): 39 DOI: 10.1016/j.obhdp.2014.05.004

Quantum theory, multiple universes, and the fate of human consciousness after death (Biocentrism, Robert Lanza)

[Blog editor's note: the Portuguese headline of this piece is not faithful to the original English title and is sensationalist. Since this blog is a press archive, I have not altered the title.]

Scientists prove human reincarnation (Duniverso)

n.d.; accessed September 14, 2014. For as long as the world has existed we have discussed and tried to discover what lies beyond death. This time quantum science explains and claims to prove that there is indeed (non-physical) life after the death of any human being. A book titled "Biocentrism: How Life and Consciousness Are the Keys to Understanding the Nature of the Universe" caused a stir on the Internet because it advanced the notion that life does not end when the body dies and can last forever. The author of this publication, the scientist Dr. Robert Lanza, voted the third most important living scientist by the NY Times, has no doubt that this is possible.

Beyond time and space

Lanza is a specialist in regenerative medicine and scientific director of the Advanced Cell Technology Company. In the past he became known for his extensive stem-cell research and for several successful experiments on cloning endangered animal species. But not long ago the scientist became involved with physics, quantum mechanics and astrophysics. This explosive mixture gave birth to the new theory of biocentrism, which he has been preaching ever since. Biocentrism teaches that life and consciousness are fundamental to the universe. It is consciousness that creates the material universe, and not the other way around. Lanza points to the structure of the universe itself and says that the laws, forces and constants of the universe appear to be fine-tuned for life, implying that intelligence existed prior to matter. He also states that space and time are not objects or things, but rather tools of our animal understanding. Lanza says that we carry space and time around with us "like turtles with shells," meaning that when the shell comes off (space and time), we still exist.

The theory suggests that the death of consciousness simply does not exist. It exists only as a thought, because people identify themselves with their bodies. They believe the body will die sooner or later and think their consciousness will disappear with it. If the body generates consciousness, then consciousness dies when the body dies. But if the body receives consciousness in the same way that a cable box receives satellite signals, then of course consciousness does not end with the death of the physical vehicle. In fact, consciousness exists outside the constraints of time and space. It is able to be anywhere: in the human body and outside it. In other words, it is non-local, in the same sense that quantum objects are non-local. Lanza also believes that multiple universes can exist simultaneously. In one universe the body may be dead while in another it continues to exist, absorbing the consciousness that migrated to that universe. This means that a dead person, travelling through the proverbial tunnel, ends up not in hell or in heaven but in a world similar to the one he or she inhabited, only this time alive. And so on, infinitely, almost like a cosmic afterlife effect.

Many worlds

It is not only mere mortals who want to live forever; some renowned scientists share Lanza's view. Physicists and astrophysicists tend to agree on the existence of parallel worlds and suggest the possibility of multiple universes. The multiverse (multi-universe) is the scientific concept of the theory they defend. They believe that no physical laws forbid the existence of parallel worlds.


The first to speak of this was the science-fiction writer H.G. Wells in 1895, with the book "The Door in the Wall". Sixty-two years later the idea was developed by Dr. Hugh Everett in his graduate thesis at Princeton University. It basically postulates that at any given moment the universe splits into countless similar instances, and in the next moment these "newborn" universes split in a similar way. In some of those worlds we may be present: reading this article in one universe and watching TV in another. In the 1980s Andrei Linde, a scientist at the Lebedev Physical Institute, developed the theory of multiple universes. Now a professor at Stanford University, Linde explains: space consists of many inflating spheres which give rise to similar spheres, and those in turn produce spheres in even greater numbers, and so on to infinity. Within the universe they are kept apart. They are not aware of each other's existence, but they represent parts of the same physical universe. The physicist Laura Mersini-Houghton of the University of North Carolina and her colleagues argue that the anomalies in the cosmic background exist because our universe is influenced by other nearby universes, and that holes and gaps are a direct result of attacks on us by neighboring universes.


Thus there is an abundance of places, or other universes, to which our soul could migrate after death, according to the theory of neo-biocentrism. But does the soul exist? Is there any scientific theory of consciousness that could accommodate such a claim? According to Dr. Stuart Hameroff, a near-death experience happens when the quantum information that inhabits the nervous system leaves the body and dissipates into the universe. Contrary to what materialists hold, Dr. Hameroff offers an alternative explanation of consciousness that may perhaps appeal to the rational scientific mind and to personal intuitions. Consciousness resides, according to Stuart and the British physicist Sir Roger Penrose, in the microtubules of brain cells, which are the primary sites of quantum processing. Upon death this information is released from the body, which means that your consciousness goes with it. They have argued that our experience of consciousness is the result of quantum-gravity effects in these microtubules, a theory they dubbed Orchestrated Objective Reduction. Consciousness, or at least proto-consciousness, is theorized by them to be a fundamental property of the universe, present even at the first moment of the universe, during the Big Bang. "In one such scheme proto-conscious experience is a basic property of physical reality, accessible to a quantum process associated with brain activity." Our souls are in fact constructed from the very fabric of the universe and may have existed since the beginning of time. Our brains are just receivers and amplifiers for the proto-consciousness that is intrinsic to the fabric of space-time. So there really is a part of your consciousness that is non-material and will live on after the death of your physical body.

Dr. Hameroff told the Science Channel documentary Through the Wormhole: "Let's say the heart stops beating, the blood stops flowing and the microtubules lose their quantum state. The quantum information within the microtubules is not destroyed, it can't be destroyed, it just distributes and dissipates to the universe at large." Robert Lanza adds here that not only does it exist in a single universe; it may perhaps exist in another universe. If the patient is resuscitated, this quantum information can return to the microtubules and the patient says, "I had a near-death experience." He adds: "If they're not revived and the patient dies, it's possible that this quantum information can exist outside the body, perhaps indefinitely, as a soul." This account of quantum consciousness explains things like near-death experiences, astral projection, out-of-body experiences and even reincarnation without the need to appeal to religious ideology. The energy of your consciousness is potentially recycled back into a different body at some point, and in the meantime it exists outside the physical body on some other level of reality, and possibly in another universe.

And you, what do you think? Do you agree with Lanza?

Warm regards!

Recommended by: Pedro Lopes Martins. Article originally published in English on the site SPIRIT SCIENCE AND METAPHYSICS.

*   *   *

Scientists Claim That Quantum Theory Proves Consciousness Moves To Another Universe At Death


A book titled “Biocentrism: How Life and Consciousness Are the Keys to Understanding the Nature of the Universe” has stirred up the Internet because it contains the notion that life does not end when the body dies and can last forever. The author of this publication, the scientist Dr. Robert Lanza, who was voted the 3rd most important scientist alive by the NY Times, has no doubts that this is possible.

Lanza is an expert in regenerative medicine and scientific director of the Advanced Cell Technology Company. He first became known for his extensive research on stem cells, and he was also famous for several successful experiments on cloning endangered animal species. More recently, however, the scientist became involved with physics, quantum mechanics, and astrophysics. This explosive mixture gave birth to the new theory of biocentrism, which the professor has been preaching ever since. Biocentrism teaches that life and consciousness are fundamental to the universe: it is consciousness that creates the material universe, not the other way around.

Lanza points to the structure of the universe itself, noting that the laws, forces, and constants of the universe appear to be fine-tuned for life, implying that intelligence existed prior to matter. He also claims that space and time are not objects or things, but rather tools of our animal understanding. Lanza says that we carry space and time around with us “like turtles with shells,” meaning that when the shell comes off (space and time), we still exist.

The theory implies that the death of consciousness simply does not exist. It exists only as a thought, because people identify themselves with their bodies. They believe that the body is going to perish sooner or later, and they think their consciousness will disappear too. If the body generates consciousness, then consciousness dies when the body dies. But if the body receives consciousness in the same way that a cable box receives satellite signals, then of course consciousness does not end at the death of the physical vehicle. In fact, consciousness exists outside the constraints of time and space. It is able to be anywhere: in the human body and outside of it. In other words, it is non-local in the same sense that quantum objects are non-local.

Lanza also believes that multiple universes can exist simultaneously. In one universe, the body can be dead, and in another it continues to exist, absorbing the consciousness that migrated into this universe. This means that a dead person, while traveling through the same tunnel, ends up not in hell or in heaven, but in a similar world he or she once inhabited, but this time alive. And so on, infinitely. It’s almost like a cosmic Russian-doll afterlife effect.

Multiple worlds

This hope-instilling but extremely controversial theory by Lanza has many unwitting supporters, not just mere mortals who want to live forever, but also some well-known scientists. These are the physicists and astrophysicists who tend to agree with the existence of parallel worlds and who suggest the possibility of multiple universes. The multiverse (multi-universe) is the scientific concept they defend: they believe that no physical laws exist which would prohibit the existence of parallel worlds.

The first to suggest the idea was the science fiction writer H.G. Wells, in his 1895 story “The Door in the Wall.” Sixty-two years later, the idea was developed by Dr. Hugh Everett in his graduate thesis at Princeton University. It basically posits that at any given moment the universe divides into countless similar instances. And the next moment, these “newborn” universes split in a similar fashion. In some of these worlds you may be present: reading this article in one universe, or watching TV in another. The triggering factor for these multiplying worlds is our actions, explained Everett. If we make a choice, one universe instantly splits into two with different versions of the outcome.

In the 1980s, Andrei Linde, a scientist from the Lebedev Institute of Physics, developed the theory of multiple universes. He is now a professor at Stanford University. Linde explained: space consists of many inflating spheres, which give rise to similar spheres, and those, in turn, produce spheres in even greater numbers, and so on to infinity. In the universe, they are spaced apart. They are not aware of each other’s existence. But they represent parts of the same physical universe.

The fact that our universe is not alone is supported by data received from the Planck space telescope. Using the data, scientists have created the most accurate map of the microwave background, the so-called cosmic relic background radiation, which has remained since the inception of our universe. They also found that the universe has a lot of dark recesses, represented by holes and extensive gaps. Theoretical physicist Laura Mersini-Houghton from the North Carolina University and her colleagues argue that the anomalies of the microwave background exist because our universe is influenced by other universes existing nearby, and that the holes and gaps are a direct result of attacks on us by neighboring universes.
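The branching picture attributed to Everett above can be caricatured with a small counting sketch. This is purely an illustration of the combinatorics (n binary choice points yielding 2^n histories), not a physical simulation, and the function names are invented for this example:

```python
# Toy sketch of Everett-style branching: every binary choice point
# splits each existing history into two, so n choices yield 2**n
# distinct histories. Pure bookkeeping, nothing physical.

def branch(histories, outcomes):
    """Split every existing history into one copy per possible outcome."""
    return [h + [o] for h in histories for o in outcomes]

def run(n_choices):
    histories = [[]]  # one initial universe with an empty history
    for i in range(n_choices):
        histories = branch(histories, [f"choice{i}:A", f"choice{i}:B"])
    return histories

worlds = run(3)
print(len(worlds))  # 8 distinct histories after 3 binary choices
```

After ten choice points the sketch already tracks 1,024 histories, which is why the article can speak of “countless similar instances” arising almost immediately.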


So, there is an abundance of places or other universes where our soul could migrate after death, according to the theory of neo-biocentrism. But does the soul exist? Is there any scientific theory of consciousness that could accommodate such a claim? According to Dr. Stuart Hameroff, a near-death experience happens when the quantum information that inhabits the nervous system leaves the body and dissipates into the universe. Contrary to materialistic accounts of consciousness, Dr. Hameroff offers an alternative explanation that can perhaps appeal to both the rational scientific mind and personal intuitions.

Consciousness resides, according to Stuart and the British physicist Sir Roger Penrose, in the microtubules of brain cells, which are the primary sites of quantum processing. Upon death, this information is released from your body, meaning that your consciousness goes with it. They have argued that our experience of consciousness is the result of quantum gravity effects in these microtubules, a theory which they dubbed orchestrated objective reduction (Orch-OR). Consciousness, or at least proto-consciousness, is theorized by them to be a fundamental property of the universe, present even at the first moment of the universe during the Big Bang. “In one such scheme proto-conscious experience is a basic property of physical reality accessible to a quantum process associated with brain activity.” Our souls are in fact constructed from the very fabric of the universe, and may have existed since the beginning of time. Our brains are just receivers and amplifiers for the proto-consciousness that is intrinsic to the fabric of space-time. So is there really a part of your consciousness that is non-material and will live on after the death of your physical body?

Dr. Hameroff told the Science Channel’s Through the Wormhole documentary: “Let’s say the heart stops beating, the blood stops flowing, the microtubules lose their quantum state. The quantum information within the microtubules is not destroyed, it can’t be destroyed, it just distributes and dissipates to the universe at large.” Robert Lanza would add here that not only does it exist in the universe, it exists perhaps in another universe. If the patient is resuscitated, revived, this quantum information can go back into the microtubules and the patient says, “I had a near-death experience.”

He adds: “If they’re not revived, and the patient dies, it’s possible that this quantum information can exist outside the body, perhaps indefinitely, as a soul.”

This account of quantum consciousness explains things like near-death experiences, astral projection, out-of-body experiences, and even reincarnation without needing to appeal to religious ideology. The energy of your consciousness potentially gets recycled back into a different body at some point, and in the meantime it exists outside of the physical body on some other level of reality, and possibly in another universe.



Physicists, alchemists, and ayahuasca shamans: A study of grammar and the body (Cultural Admixtures)


Are there any common denominators that may underlie the practices of leading physicists and scientists, Renaissance alchemists, and indigenous Amazonian ayahuasca healers? There are obviously a myriad of things that these practices do not have in common. Yet through an analysis of the body and the senses and styles of grammar and social practice, these seemingly very different modes of existence may be triangulated to reveal a curious set of logics at play. Ways in which practitioners identify their subjectivities (or ‘self’) with nonhuman entities and ‘natural’ processes are detailed in the three contexts. A logic of identification illustrates similarities, and also differences, in the practices of advanced physics, Renaissance alchemy, and ayahuasca healing.

Physics and the “I” and “You” of experimentation


A small group of physicists at a leading American university in the early 1990s are investigating magnetic temporality and atomic spins in a crystalline lattice, undertaking experiments within the field of condensed matter physics. The scientists collaborate, presenting experimental or theoretical findings on blackboards, overhead projectors, printed pages, and various other forms of visual media. Miguel, a researcher, describes to a colleague the experiments he has just conducted. He points down and then up across a visual representation of the experiment while describing an aspect of it: “We lowered the field [and] raised the field”. In response, his collaborator Ron replies using what is a common type of informal scientific language. This style of language identifies, conflates, or brings together the researcher with the object being researched. In the following exchange, the pronoun ‘he’ refers to both Miguel and the object or process under investigation. Ron asks, “Is there a possibility that he hasn’t seen anything real? I mean is there a [he points to the diagram]”. Miguel sharply interjects, “I-, i-, it is possible… I am amazed by his measurement because when I come down I’m in the domain state”. Here Miguel is referring to a physical process of temperature change, a cooling that moves ‘down’ to the ‘domain state’. Ron replies, “You quench from five to two tesla, a magnet, a superconducting magnet”. What is central here, with regard to the common denominators explored in this paper, is the way in which the scientists collaborate through certain figurative styles of language that blur the borders between physicist and physical process or state.

The collaboration between Miguel and Ron was filmed and examined by the linguistic ethnographers Elinor Ochs, Sally Jacoby, and Patrick Gonzales (1994, 1996:328). In the experiment, Ochs et al illustrate, the physicists refer to ‘themselves as the thematic agents and experiencers of [the physical] phenomena’ (Ochs et al 1996:335). By employing the pronouns ‘you’, ‘he’, and ‘I’ to refer to the physical processes and states under investigation, the physicists identify their own subjectivities, bodies, and investigations with the objects they are studying.

In the physics laboratory, members are trying to understand physical worlds that are not directly accessible by any of their perceptual abilities. To bridge this gap, it seems, they take embodied interpretive journeys across and through see-able, touchable two-dimensional artefacts that conventionally symbolize those worlds… Their sensory-motor gesturing is a means not only of representing (possible) worlds but also of imagining or vicariously experiencing them… Through verbal and gestural (re)enactments of constructed physical processes, physicist and physical entity are conjoined in simultaneous, multiple constructed worlds: the here-and-now interaction, the visual representation, and the represented physical process. The indeterminate grammatical constructions, along with gestural journeys through visual displays, constitute physicist and physical entity as coexperiencers of dynamic processes and, therefore, as coreferents of the personal pronoun. (Ochs et al 1994:163,164)

When Miguel says “I am in the domain state” he is using a type of ‘private, informal scientific discourse’ that has been observed in many other types of scientific practice (Latour & Woolgar 1987; Gilbert & Mulkay 1984). This style of erudition and scientific collaboration has evidently become established in state-of-the-art universities given the utility it provides for empirical problems and the development of scientific ideas.

What could this style of practice have in common with the healing practices of Amazonian shamans drinking the powerful psychoactive brew ayahuasca? Before moving on to an analysis of grammar and the body in types of ayahuasca use, the practice of Renaissance alchemy is introduced given the bridge or resemblance it offers between these scientific practices and certain notions of healing.

Renaissance alchemy, “As above so below”


Heinrich Khunrath: 1595 engraving Amphitheatre

Graduating from the Basel Medical Academy in 1588, the physician Heinrich Khunrath defended a thesis concerning a particular development of the relationship between alchemy and medicine. Inspired by the works of key figures in Roman and Greek medicine, key alchemists and practitioners of the hermetic arts, and key botanists, philosophers and others, Khunrath went on to produce innovative and influential texts and illustrations that informed various trajectories in medical and occult practice.

Alchemy flourished in the Renaissance period and was drawn upon by elites such as Queen Elizabeth I and the Holy Roman Emperor Rudolf II. Central to the practices of Renaissance alchemists was a belief that all metals sprang from one source deep within the earth, that this process could be reversed, and that every metal could potentially be turned into gold. The process of ‘transmutation’ or reversal of nature, it was claimed, could also lead to the elixir of life, the philosopher’s stone, or eternal youth and immortality. It was a spiritual pursuit of purification and regeneration which depended heavily on natural science experimentation.

Alchemical experiments were typically undertaken in a laboratory and alchemists were often contracted by elites for pragmatic purposes related to mining, medical services, and the production of chemicals, metals, and gemstones (Nummedal 2007). Allison Coudert describes and distills the practice of Renaissance alchemy with a basic overview of the relationship between an alchemist and the ‘natural entities’ of his practice.

All the ingredients mentioned in alchemical recipes—the minerals, metals, acids, compounds, and mixtures—were in truth only one, the alchemist himself. He was the base matter in need of purification from the fire; and the acid needed to accomplish this transformation came from his own spiritual malaise and longing for wholeness and peace. The various alchemical processes… were steps in the mysterious process of spiritual regeneration. (cited in Hanegraaff 1996:395)

The physician-alchemist Khunrath worked within a laboratory/oratory that included various alchemical apparatuses, including ‘smelting equipment for the extraction of metal from ore… glass vessels, ovens… [a] furnace or athanor… [and] a mirror’. Khunrath spoke of using the mirror as a ‘physico-magical instrument for setting a coal or lamp-fire alight by the heat of the sun’ (Forshaw 2005:205). Urszula Szulakowska argues that this use of the mirror embodies the general alchemical process and purpose of Khunrath’s practice. The functions of his practice and his alchemical illustrations and glyphs (such as his engraving Amphitheatre above) are aimed towards various outcomes of transmutation or reversal of nature. Khunrath’s engravings and illustrations, Szulakowska (2000:9) argues:

are intended to excite the imagination of the viewer so that a mystic alchemy can take place through the act of visual contemplation… Khunrath’s theatre of images, like a mirror, catoptrically reflects the celestial spheres to the human mind, awakening the empathetic faculty of the human spirit which unites, through the imagination, with the heavenly realms. Thus, the visual imagery of Khunrath’s treatises has become the alchemical quintessence, the spiritualized matter of the philosopher’s stone.

Khunrath called himself a ‘lover of both medicines’, referring to the inseparability of material and spiritual forms of medicine.  Illustrating the centrality of alchemical practice in his medical approach, he described his ‘down-to-earth Physical-Chemistry of Nature’ as:

[T]he art of chemically dissolving, purifying and rightly reuniting Physical Things by Nature’s method; the Universal (Macro-Cosmically, the Philosopher’s Stone; Micro-Cosmically, the parts of the human body…) and ALL the particulars of the inferior globe. (cited in Forshaw 2005:205).

In Renaissance alchemy, a certain kind of laboratory visionary mixing happens between the human body and temperaments and the ‘entities’ and processes of the natural world. This is condensed in the hermetic dictum “As above, so below,” whereby the signatures of nature (‘above’) may be found in the human body (‘below’). The experiments involved certain practices of perception, contemplation, and language that were undertaken in laboratory settings.

The practice of Renaissance alchemy, illustrated in recipes, glyphs, and instructional texts, includes styles of grammar in which minerals, metals, and other natural entities are animated with subjectivity and human temperaments. Lead “wants” or “desires” to transmute into gold; antimony feels a wilful “attraction” to silver (Kaiser 2010; Waite 1894). This form of grammar is entailed in the doctrine of medico-alchemical practice described by Khunrath above. Under certain circumstances and conditions, minerals, metals, and other natural entities may embody aspects of ‘Yourself’, or the subjectivity of the alchemist, and vice versa.

Renaissance alchemical language and practice bears a certain resemblance to the contemporary practices of physicists and scientists and the ways in which they identify themselves with the objects and processes of their experiments. The methods of physicists appear to differ considerably insofar as they use metaphors and trade spiritual for figurative approaches when ‘journeying through’ cognitive tasks, embodied gestures, and visual representations of empirical or natural processes. It is no coincidence that contemporary state-of-the-art scientists employ forms of alchemical language and practice in advanced types of experimentation: alchemical and hermetic thought and practice were highly influential in the emergence of modern forms of science (Moran 2006; Newman 2006; Hanegraaff 2013).

Ayahuasca shamanism and shapeshifting


Pablo Amaringo

In the Amazon jungle a radically different type of practice to the Renaissance alchemical traditions exists. Yet, as we will see, the practices of indigenous Amazonian shamans and Renaissance alchemists appear to include certain similarities — particularly in terms of the way in which ‘natural entities’ and the subjectivity of the practitioner may merge or swap positions — this is evidenced in the grammar and language of shamanic healing songs and in Amazonian cosmologies more generally.

In the late 1980s, Cambridge anthropologist Graham Townsley was undertaking PhD fieldwork with the indigenous Amazonian Yaminahua on the Yurua river. His research was focused on ways in which forms of social organisation are embedded in cosmology and the practice of everyday life. Yaminahua healing practices are embedded in broad animistic cosmological frames and at the centre of these healing practices is song. ‘What Yaminahua shamans do, above everything else, is sing’, Townsley explains, and this ritual singing is typically done while under the effects of the psychoactive concoction ayahuasca.

The psychoactive drink provides shamans with a means of drawing upon the healing assistance of benevolent spirit persons of the natural world (such as plant-persons, animal-persons, sun-persons etc.) and of banishing malevolent spirit persons that are affecting the wellbeing of a patient. The Yaminahua practice of ayahuasca shamanism resembles broader types of Amazonian shamanism. Shapeshifting, or the metamorphosis of human persons into nonhuman persons (such as jaguar-persons and anaconda-persons) is central to understandings of illness and to practices of healing in various types of Amazonian shamanism (Chaumeil 1992; Praet 2009; Riviere 1994).

The grammatical styles and sensory experiences of indigenous ayahuasca curing rituals and songs bear some similarities with the logic of identification noted in the sections on physics and alchemy above. Townsley (1993) describes a Yaminahua ritual where a shaman attempts to heal a patient who was still bleeding several days after giving birth. The healing songs that the shaman sings (called wai, a word which also means ‘path’ and ‘myth’, or abodes of the spirits) make very little reference to the illness they are aimed to heal. The shaman’s songs do not communicate meanings to the patient, but they embody complex metaphors and analogies, or what the Yaminahua call ‘twisted language’, a language only comprehensible to shamans. ‘Perceptual resemblances’ inform the logic of Yaminahua twisted language. For example, “white-collared peccaries” become “fish,” given the similarities between the gills of the fish and the designs on the peccary’s neck. The use of visual or sensory resonance in shamanic song metaphors is not arbitrary but central to the practice of Yaminahua ayahuasca healing.

Ayahuasca typically produces a powerful visionary experience. The shaman’s use of complex metaphors in ritual song helps him shape his visions and bring a level of control to the visionary content. Resembling the common denominators and logic of identification explored above, the songs allow the shaman to perceive from the various perspectives that the meanings of the metaphors (or the spirits) afford.

Everything said about shamanic songs points to the fact that as they are sung the shaman actively visualizes the images referred to by the external analogy of the song, but he does this through a carefully controlled “seeing as” the different things actually named by the internal metaphors of his song. This “seeing as” in some way creates a space in which powerful visionary experience can occur. (Townsley 1993:460)

The use of analogies and metaphors provides a particularly powerful means of navigating the visionary experience of ayahuasca. There appears to be a kind of pragmatics involved in the use of metaphor over literal meanings. For instance, a shaman states, “twisted language brings me close but not too close [to the meanings of the metaphors]–with normal words I would crash into things–with twisted ones I circle around them–I can see them clearly” (Townsley 1993:460). Through this method of “seeing as”, the shaman embodies a variety of animal and nature spirits, or yoshi in Yaminahua, including anaconda-yoshi, jaguar-yoshi and solar or sun-yoshi, in order to perform acts of healing and various other shamanic activities.

While Yaminahua shamans use metaphors to control visions and shapeshift (or “see as”), they, and Amazonians more generally, reportedly understand shapeshifting in literal terms. For example, Lenaerts describes this notion of ‘seeing like the spirits’ and the ‘physical’ or literal view that the Ashéninka hold with regard to the practice of ayahuasca-induced shapeshifting.

What is at stake here is a temporary bodily process, whereby a human being assumes the embodied point of view of another species… There is no need to appeal to any sort of metaphoric sense here. A literal interpretation of this process of disembodiment/re-embodiment is absolutely consistent with all that an Ashéninka knows and directly feels during this experience, in a quite physical sense. (2006, 13)

The practices of indigenous ayahuasca shamans are centred on an ability to shapeshift and ‘see nonhumans as they [nonhumans] see themselves’ (Viveiros de Castro 2004:468). Practitioners not only identify with nonhuman persons or ‘natural entities’ but they embody their point of view with the help of psychoactive plants and  ‘twisted language’ in song.

Some final thoughts

Through a brief exploration of techniques employed by advanced physicists, Renaissance alchemists, and Amazonian ayahuasca shamans, a logic of identification may be observed in which practitioners embody different means of transcending themselves and becoming the objects or spirits of their respective practices. While the physicists tend to embody secular principles and relate to this logic of identification in a purely figurative or metaphorical sense, Renaissance alchemists and Amazonian shamans embody epistemological stances that afford much more weight to the existential qualities and ‘persons’ or ‘spirits’ of their respective practices. A cognitive value in employing forms of language and sensory experience that momentarily take the practitioner beyond him or herself is evidenced by these three different practices. However, there is arguably more at stake here than values confined to cogito. The boundaries of bodies, subjectivities and humanness in each of these practices become porous, blurred, and are transcended while the contours of various forms of possibility are exposed, defined, and acted upon — possibilities that inform the outcomes of the practices and the definitions of the human they imply.


Chaumeil, Jean-Pierre 1992, ‘Varieties of Amazonian shamanism’. Diogenes. Vol. 158 p.101
Forshaw, P. 2008 ‘”Paradoxes, Absurdities, and Madness”: Conflicts over Alchemy, Magic and Medicine in the Works of Andreas Libavius and Heinrich Khunrath’. Early Science and Medicine. Vol. 1 p.53
Forshaw, P. 2006 ‘Alchemy in the Amphitheatre: Some considerations of the alchemical content of the engravings in Heinrich Khunrath’s Amphitheatre of Eternal Wisdom’ in Jacob Wamberg (ed.) Art and Alchemy. pp.195-221
Gilbert, G. N. & Mulkay, M. 1984 Opening Pandora’s Box: A sociological analysis of scientists’ discourse. Cambridge, Cambridge University Press
Hanegraaff, W. 2012 Esotericism and the Academy: Rejected knowledge in Western culture. Cambridge, Cambridge University Press
Hanegraaff, W. 1996 New Age Religion and Western Culture: Esotericism in the Mirror of Secular Thought. New York: SUNY Press
Latour, B. & Woolgar, S. 1987 Laboratory Life: The social construction of scientific facts. Cambridge, Harvard University Press
Lenaerts, M. 2006, ‘Substance, relationships and the omnipresence of the body: an overview of Ashéninka ethnomedicine (Western Amazonia)’ Journal of Ethnobiology and Ethnomedicine, Vol. 2, (1) 49
Moran, B. 2006 Distilling Knowledge: Alchemy, Chemistry, and the Scientific Revolution. Harvard, Harvard University Press
Newman, W. 2006 Atoms and Alchemy: Chymistry and the Experimental Origins of the Scientific Revolution. Chicago, Chicago University Press
Nummedal, T. 2007 Alchemy and Authority in the Holy Roman Empire. Chicago, Chicago University Press
Ochs, E., Gonzales, P. & Jacoby, S. 1996 ‘”When I come down I’m in the domain state”: grammar and graphic representation in the interpretive activities of physicists’ in Ochs, E., Schegloff, E. & Thompson, S. (eds) Interaction and Grammar. Cambridge, Cambridge University Press
Ochs, E. Gonzales, P., Jacoby, S 1994 ‘Interpretive Journeys: How Physicists Talk and Travel through Graphic Space’ Configurations. (1) p.151
Praet, I. 2009, ‘Shamanism and ritual in South America: an inquiry into Amerindian shape-shifting’. Journal of the Royal Anthropological Institute. Vol. 15 pp.737-754
Riviere, P. 1994, ‘WYSINWYG in Amazonia’. Journal of the Anthropological Society of Oxford. Vol. 25
Szulakowska, U. 2000 The Alchemy of Light: Geometry and Optics in Late Renaissance Alchemical Illustration. Leiden, Brill Press
Townsley, G. 1993 ‘Song Paths: The ways and means of Yaminahua shamanic knowledge’. L’Homme. Vol. 33 p.449
Viveiros de Castro, E. 2004, ‘Exchanging perspectives: The Transformation of Objects into Subjects in Amerindian Ontologies’. Common Knowledge. Vol. 10 (3) pp.463-484
Waite, A. 1894 The Hermetic and Alchemical Writings of Aureolus Philippus Theophrastrus Bombast, of Hohenheim, called Paracelcus the Great. Cornell University Library, ebook

Nudge: The gentle science of good governance (New Scientist)

25 June 2013

Magazine issue 2922

NOT long before David Cameron became UK prime minister, he famously prescribed some holiday reading for his colleagues: a book modestly entitled Nudge.

Cameron wasn’t the only world leader to find it compelling. US president Barack Obama soon appointed one of its authors, Cass Sunstein, a social scientist at the University of Chicago, to a powerful position in the White House. And thus the nudge bandwagon began rolling. It has been picking up speed ever since (see “Nudge power: Big government’s little pushes“).

So what’s the big idea? We don’t always do what’s best for ourselves, thanks to cognitive biases and errors that make us deviate from rational self-interest. The premise of Nudge is that subtly offsetting or exploiting these biases can help people to make better choices.

If you live in the US or UK, you’re likely to have been nudged towards a certain decision at some point. You probably didn’t notice. That’s deliberate: nudging is widely assumed to work best when people aren’t aware of it. But that stealth breeds suspicion: people recoil from the idea that they are being stealthily manipulated.

There are other grounds for suspicion. It sounds glib: a neat term for a slippery concept. You could argue that it is a way for governments to avoid taking decisive action. Or you might be concerned that it lets them push us towards a convenient choice, regardless of what we really want.

These don’t really hold up. Our distaste for being nudged is understandable, but is arguably just another cognitive bias, given that our behaviour is constantly being discreetly influenced by others. What’s more, interventions only qualify as nudges if they don’t create concrete incentives in any particular direction. So the choice ultimately remains a free one.

Nudging is a less blunt instrument than regulation or tax. It should supplement rather than supplant these, and nudgers must be held accountable. But broadly speaking, anyone who believes in evidence-based policy should try to overcome their distaste and welcome governance based on behavioural insights and controlled trials, rather than carrot-and-stick wishful thinking. Perhaps we just need a nudge in the right direction.

Inside the teenage brain: New studies explain risky behavior (Science Daily)

Date: August 27, 2014

Source: Florida State University

Summary: It’s common knowledge that teenage boys seem predisposed to risky behaviors. Now, a series of new studies is shedding light on specific brain mechanisms that help to explain what might be going on inside juvenile male brains.

Young man (stock image). Credit: © iko / Fotolia

It’s common knowledge that teenage boys seem predisposed to risky behaviors. Now, a series of new studies is shedding light on specific brain mechanisms that help to explain what might be going on inside juvenile male brains.

Florida State University College of Medicine neuroscientist Pradeep Bhide brought together some of the world’s foremost researchers in a quest to explain why teenagers — boys, in particular — often behave erratically.

The result is a series of 19 studies that approached the question from multiple scientific domains, including psychology, neurochemistry, brain imaging, clinical neuroscience and neurobiology. The studies are published in a special volume of Developmental Neuroscience, “Teenage Brains: Think Different?”

“Psychologists, psychiatrists, educators, neuroscientists, criminal justice professionals and parents are engaged in a daily struggle to understand and solve the enigma of teenage risky behaviors,” Bhide said. “Such behaviors impact not only the teenagers who obviously put themselves at serious and lasting risk but also families and societies in general.

“The emotional and economic burdens of such behaviors are quite huge. The research described in this book offers clues to what may cause such maladaptive behaviors and how one may be able to devise methods of countering, avoiding or modifying these behaviors.”

Among the findings published in the book that provide new insights into the inner workings of the teenage male brain:

• Unlike children or adults, teenage boys show enhanced activity in the part of the brain that controls emotions when confronted with a threat. Magnetic resonance scanner readings in one study revealed that the level of activity in the limbic brain of adolescent males reacting to threat, even when they’ve been told not to respond to it, was strikingly different from that in adult men.

• Using brain activity measurements, another team of researchers found that teenage boys were mostly immune to the threat of punishment but hypersensitive to the possibility of large gains from gambling. The results question the effectiveness of punishment as a deterrent for risky or deviant behavior in adolescent boys.

• Another study demonstrated that a molecule known to be vital in developing fear of dangerous situations is less active in adolescent male brains. These findings point towards neurochemical differences between teenage and adult brains, which may underlie the complex behaviors exhibited by teenagers.

“The new studies illustrate the neurobiological basis of some of the more unusual but well-known behaviors exhibited by our teenagers,” Bhide said. “Stress, hormonal changes, complexities of psycho-social environment and peer-pressure all contribute to the challenges of assimilation faced by teenagers.

“These studies attempt to isolate, examine and understand some of these potential causes of a teenager’s complex conundrum. The research sheds light on how we may be able to better interact with teenagers at home or outside the home, how to design educational strategies and how best to treat or modify a teenager’s maladaptive behavior.”

Bhide conceived and edited “Teenage Brains: Think Different?” His co-editors were Barry Kasofsky and B.J. Casey, both of Weill Medical College at Cornell University. The book was published by Karger Medical and Scientific Publisher of Basel, Switzerland.


The Climate Swerve (The New York Times)

Credit: Robert Frank Hunter


Americans appear to be undergoing a significant psychological shift in our relation to global warming. I call this shift a climate “swerve,” borrowing the term used recently by the Harvard humanities professor Stephen Greenblatt to describe a major historical change in consciousness that is neither predictable nor orderly.

The first thing to say about this swerve is that we are far from clear about just what it is and how it might work. But we can make some beginning observations which suggest, in Bob Dylan’s words, that “something is happening here, but you don’t know what it is.” Experience, economics and ethics are coalescing in new and important ways. Each can be examined as a continuation of my work comparing nuclear and climate threats.

The experiential part has to do with a drumbeat of climate-related disasters around the world, all actively reported by the news media: hurricanes and tornadoes, droughts and wildfires, extreme heat waves and equally extreme cold, rising sea levels and floods. Even when people have doubts about the causal relationship of global warming to these episodes, they cannot help being psychologically affected. Of great importance is the growing recognition that the danger encompasses the entire earth and its inhabitants. We are all vulnerable.

This sense of the climate threat is represented in public opinion polls and attitude studies. A recent Yale survey, for instance, concluded that “Americans’ certainty that the earth is warming has increased over the past three years,” and “those who think global warming is not happening have become substantially less sure of their position.”

Falsification and denial, while still all too extensive, have come to require more defensive psychic energy and political chicanery.

But polls don’t fully capture the complex collective process occurring.

The most important experiential change has to do with global warming and time. Responding to the climate threat — in contrast to the nuclear threat, whose immediate and grotesque destructiveness was recorded in Hiroshima and Nagasaki — has been inhibited by the difficulty of imagining catastrophic future events. But climate-related disasters and intense media images are hitting us now, and providing partial models for a devastating climate future.

At the same time, economic concerns about fossil fuels have raised the issue of value. There is a wonderfully evocative term, “stranded assets,” to characterize the oil, coal and gas reserves that are still in the ground. Trillions of dollars in assets could remain “stranded” there. If we are serious about reducing greenhouse gas emissions and sustaining the human habitat, between 60 percent and 80 percent of those assets must remain in the ground, according to the Carbon Tracker Initiative, an organization that analyzes carbon investment risk. In contrast, renewable energy sources, which only recently have achieved the status of big business, are taking on increasing value, in terms of returns for investors, long-term energy savings and relative harmlessness to surrounding communities.

Pragmatic institutions like insurance companies and the American military have been confronting the consequences of climate change for some time. But now, a number of leading financial authorities are raising questions about the viability of the holdings of giant carbon-based fuel corporations. In a world fueled by oil and coal, it is a truly stunning event when investors are warned that the market may end up devaluing those assets. We are beginning to see a bandwagon effect in which the overall viability of fossil-fuel economics is being questioned.

Can we continue to value, and thereby make use of, the very materials most deeply implicated in what could be the demise of the human habitat? It is a bit like the old Jack Benny joke, in which an armed robber offers a choice, “Your money or your life!” And Benny responds, “I’m thinking it over.” We are beginning to “think over” such choices on a larger scale.

This takes us to the swerve-related significance of ethics. Our reflections on stranded assets reveal our deepest contradictions. Oil and coal company executives focus on the maximum use of their product in order to serve the interests of shareholders, rather than the humane, universal ethics we require to protect the earth. We may well speak of those shareholder-dominated principles as “stranded ethics,” which are better left buried but at present are all too active above ground.

Such ethical contradictions are by no means entirely new in historical experience. Consider the scientists, engineers and strategists in the United States and the Soviet Union who understood their duty as creating, and possibly using, nuclear weapons that could destroy much of the earth. Their conscience could be bound up with a frequently amorphous ethic of “national security.” Over the course of my work I have come to the realization that it is very difficult to endanger or kill large numbers of people except with a claim to virtue.

The climate swerve is mostly a matter of deepening awareness. When exploring the nuclear threat I distinguished between fragmentary awareness, consisting of images that come and go but remain tangential, and formed awareness, which is more structured, part of a narrative that can be the basis for individual and collective action.

In the 1980s there was a profound worldwide shift from fragmentary awareness to formed awareness in response to the potential for a nuclear holocaust. Millions of people were affected by that “nuclear swerve.” And even if it is diminished today, the nuclear swerve could well have helped prevent the use of nuclear weapons.

With both the nuclear and climate threats, the swerve in awareness has had a crucial ethical component. People came to feel that it was deeply wrong, perhaps evil, to engage in nuclear war, and are coming to an awareness that it is deeply wrong, perhaps evil, to destroy our habitat and create a legacy of suffering for our children and grandchildren.

Social movements in general are energized by this kind of ethical passion, which enables people to experience the more active knowledge associated with formed awareness. That was the case in the movement against nuclear weapons. Emotions related to individual conscience were pooled into a shared narrative by enormous numbers of people.

In earlier movements there needed to be an overall theme, even a phrase, that could rally people of highly divergent political and intellectual backgrounds. The idea of a “nuclear freeze” mobilized millions of people with the simple and clear demand that the United States and the Soviet Union freeze the testing, production and deployment of nuclear weapons.

Could the climate swerve come to include a “climate freeze,” defined by a transnational demand for cutting back on carbon emissions in steps that could be systematically outlined?

With or without such a rallying phrase, the climate swerve provides no guarantees of more reasonable collective behavior. But with human energies that are experiential, economic and ethical it could at least provide — and may already be providing — the psychological substrate for action on behalf of our vulnerable habitat and the human future.

Uniting the Human Sciences and Neuroscience (Faperj)


Vilma Homero

The philosopher Carlos Eduardo Batista de Sousa: studies on human thought

Is man a purely biological animal or a sociocultural being? The question has been dividing specialists in the neurosciences and the human sciences, especially since recent studies have set out to identify the neural bases that enable, or are correlated with, conscious thought. “Intentionality, the content of conscious thought, is associated with our actions. And this subject relates directly to our cultural context, our era and our understanding of ourselves. What does it mean to say that neuroscience now studies an object that is typical of the human sciences?” asks the philosopher of science Carlos Eduardo Batista de Sousa, who received a research grant (Auxílio à Pesquisa, APQ 1) to study the dimensions that make up humanity in a project entitled “Intentionality and Behavior: Defining Human Nature.” As he himself argues, a plausible answer can be formulated by integrating the knowledge of the two sciences.

“I try to accommodate the studies in these two fields, the humanities and the neurosciences, looking at how the question of intentionality is tied to human neurobiology and to the sociocultural dimension,” the researcher adds. He explains that the kind of thought human beings have also arises from our evolutionary history. That is, both our neurobiology and our social interactions, our cultural context and our era must be considered in any attempt to understand human nature. Unlike other animals, human beings have a specific intentional structure: “Thinking implies thinking about something; one must have an object in mind, a representation of that object in a thought that is about something. Put plainly, this is what philosophers discovered some time ago. This intentional content emerges from neurobiology and social interaction, influencing our behavior.”

Recent findings in the neurosciences indicate that conscious thought is associated with certain regions of the brain, such as the frontal lobe, which comprises the frontal and prefrontal cortex. With technologies such as neuroimaging and electrophysiology, which let us identify these areas and map what happens during conscious thought, new kinds of study are becoming feasible, such as investigating the brain in action. “But it is still premature to say which parts of the brain are responsible for what,” the researcher concedes.

For De Sousa, studying human nature also means studying its biological and sociocultural sides, through scientific work and the critical effort of trying to unify the two strands. “Understanding both biology and culture through the problem of intentionality can unite these two apparently opposed fields, and that means recognizing that conscious, intentional thought rests on neurobiology and social interaction, giving rise to our actions.

“But our brain needs to be in favorable conditions, under the action of certain chemical messengers, such as dopamine, which is involved, for example, in decision-making and the weighing of risks. If there is some anomaly in the brain, the resulting action will be different. This means that biology must be recognized as a first condition, yet it does not determine the content, that is, how I will form my thoughts…” says De Sousa.

As De Sousa is keen to stress, no single science, be it neuroscience or sociology, can guarantee plausible explanations of human behavior on its own. “Instead of the turf war over knowledge we are living through today, we need to reconcile the human sciences and the neurosciences in a broader context by integrating their studies,” says De Sousa, who trained in philosophy and took his doctorate at the University of Konstanz, Germany. “Rather than providing answers, philosophy points out problems and possible paths. My proposal is to accommodate both kinds of explanation so as to account for the various factors and aspects that influence the content of human thought, the intentions that lead a subject to act in one way and not another.”

De Sousa’s next step is to continue this work, seeking to unify studies of human nature within a transdisciplinary field, since man is a complex animal. “It was in Germany, during my doctorate in neurophilosophy, that I began this research. There, this kind of integrative thinking was just getting started. Today the subject has advanced, allowing a greater understanding of what we are from the standpoint of the neurosciences and from that of the human sciences, which have a long tradition of study in this area. Knowing how the brain learns, organizes itself and deteriorates, we can understand why we act as we do and look at reality differently, even rethinking the process of education. In the future we may be able to propose new educational strategies that take this knowledge into account. With that, we may also establish a new, more complete vision of humanity, one that includes not only neurobiology but also the sociocultural dimension,” he concludes.

Why Anesthesia Is One of the Greatest Medical Mysteries of Our Time (IO9)



Anesthesia was a major medical breakthrough, allowing us to lose consciousness during surgery and other painful procedures. Trouble is, we’re not entirely sure how it works. But now we’re getting closer to solving its mystery — and with it, the mystery of consciousness itself.

When someone goes under, their cognition and brain activity continue, but consciousness gets shut down. For example, it has been shown that rats can ‘remember’ odor experiences while under general anesthesia. This is why anesthesiologists, like the University of Arizona’s Stuart Hameroff, are so fascinated by the whole thing.

“Anesthetics are fairly selective, erasing consciousness while sparing non-conscious brain activity,” Hameroff told io9. “So the precise mechanism of anesthetic action should point to the mechanism for consciousness.”

The Perils of Going Under

The odds of something bad happening while under anesthetic are exceedingly low. But this hasn’t always been the case.

Indeed, anesthesiology has come a long way since that historic moment back in 1846 when a physician at Massachusetts General Hospital held a flask of ether near a patient’s face until he fell unconscious.

But as late as the 1940s, anesthesia remained a dicey proposition: back then, roughly one death in every 1,500 operations was attributed to anesthesia. That number has improved dramatically since, mostly on account of improved techniques and chemicals, modern safety standards, and an influx of accredited anesthesiologists. Today, the chance of a healthy patient suffering an intraoperative death owing to anesthesia is less than 1 in 200,000. That’s a 0.0005% chance of a fatality, which is pretty good odds if you ask me (especially if you consider the alternative, which is to be awake during the procedure).

It should be pointed out, however, that “healthy patient” is the operative term (so to speak). In actuality, anesthesia-related deaths are on the rise, and the aging population has a lot to do with it. After decades of decline, the worldwide death rate during anesthesia has risen to about 1.4 deaths per 200,000. More alarming is the number of deaths within a year after general anesthesia: about one in every 20, and for people above the age of 65, one in 10. The reason, says anesthesiologist André Gottschalk, is that more older patients are being operated on, and anesthesia can be stressful for older patients with heart problems or high blood pressure.
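As a quick sanity check, those rates convert to percentages straightforwardly. A minimal sketch (the figures come from the article; only the arithmetic is added here):

```python
# Cross-checking the anesthesia mortality figures quoted above.
# All rates come from the article; only the percentage conversion is added here.

healthy_intraop = 1 / 200_000      # intraoperative death, healthy patient
print(f"{healthy_intraop:.4%}")    # -> 0.0005%

worldwide_now = 1.4 / 200_000      # worldwide rate after the recent rise
print(f"{worldwide_now:.5%}")      # -> 0.00070%

one_year_all = 1 / 20              # death within a year of general anesthesia
one_year_over65 = 1 / 10           # same figure for patients over 65
print(f"{one_year_all:.0%}, {one_year_over65:.0%}")   # -> 5%, 10%
```

The gap between 0.0005% on the table and 5% within a year is what makes the “healthy patient” caveat above so important.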


(Tyler Olson/Shutterstock)

But there are other dangers associated with anesthesia. It can induce a condition known as postoperative delirium, a state of serious confusion and memory loss. Following surgery, some patients complain about hallucinations, have trouble responding to questions, speak gibberish, and forget why they’re in the hospital. Studies have shown that roughly half of all patients age 60 and over suffer from this sort of delirium. This condition usually resolves after a day or two. But for some people, typically those over the age of 70 and who have a history of mental deficits, a high enough dose of anesthesia can result in lingering problems for months and even years afterward, including attention and memory problems.

Researchers speculate that it’s not the quality of the anesthetics but the quantity: the greater the amount, the greater the delirium. This is not an easy problem to solve; too little anesthesia can leave a patient awake, but too much can kill. It’s a challenging balance to strike because, as science writer Maggie Koerth-Baker has pointed out, “Consciousness is not something we can measure.”

Rots the Brain

Deep anesthesia has also been linked to other cognitive problems. New Scientist reports:

Patients received either propofol or one of several anesthetic gases. The morning after surgery, 16 percent of patients who had received light anesthesia displayed confusion, compared with 24 percent of the routine care group. Likewise, 15 percent of patients who received typical anesthesia had postoperative mental setbacks that lingered for at least three months—they performed poorly on word-recall tests, for example—but only 10 percent of those in the light anesthesia group had such difficulties.

To help alleviate these effects, doctors are encouraged to talk to their patients during regional anesthesia, and to make sure their patients are well hydrated and nourished before surgery to improve blood flow to the brain.

But just to be clear, the risks are slight. According to the Mayo Clinic:

Most healthy people don’t have any problems with general anesthesia. Although many people may have mild, temporary symptoms, general anesthesia itself is exceptionally safe, even for the sickest patients. The risk of long-term complications, much less death, is very small. In general, the risk of complications is more closely related to the type of procedure you’re undergoing, and your general physical health, than to the anesthesia itself.

The Neural Correlates of Consciousness

Typically, anesthesia is initiated with the injection of a drug called propofol, which gives a quick and smooth transition into unconsciousness. For longer operations, an inhaled anesthetic, like isoflurane, is added to give better control of the depth of anesthesia.

Here’s a chart showing the most common applications for anesthesia (via University of Toronto):


It should really come as no surprise that neuroscientists aren’t entirely sure how chemicals like propofol work. We won’t truly understand anesthesia until we fully understand consciousness itself — a so-called hard problem in science. But the neuroscience of anesthesia may shed light on this mystery.

Researchers need to chart the neural correlates of consciousness (NCCs) — changes in brain function that can be observed when a person transitions from being conscious to unconscious. These NCCs can be certain brain waves, physical responses, sensitivity to pain — whatever. They just need to be correlated directly to conscious awareness.

As an aside, we’ll eventually need to identify NCCs in an artificial intelligence to prove that it’s sentient. In fact, this could serve as a viable substitute for the now-outdated Turing Test.

Scientists have known for quite some time that anesthetic potency correlates with solubility in an olive-oil-like environment. The going theory is that anesthetics make it difficult for certain neurons to fire: they bind to and incapacitate several different proteins on the surface of neurons that are essential for regulating sleep, attention, learning, and memory. More than that, by interrupting the normal activity of neurons, anesthetics disrupt communication between the various regions of the brain, which together triggers unconsciousness.

Cognitive Dissonance

But neuroscientists haven’t been able to figure out which region or regions of the brain are responsible for this effect. And indeed, there may be no single switch, particularly if the “global workspace” theory of consciousness continues to hold sway. This school of thought holds that consciousness is a widely distributed phenomenon in which incoming sensory information gets processed in separate regions of the brain without our being aware of it. Subjectivity only happens when these signals are broadcast to a network of neurons dispersed throughout the brain, which then start firing in synchrony.


(New Scientist)

But the degree of synchrony is a very carefully calibrated thing — and anesthetics disrupt this finely tuned harmony.

Indeed, anesthetics may elicit unconsciousness by blocking the brain’s ability to properly integrate information. Synchrony between different areas of the cortex (the part of the brain responsible for attention, awareness, thought, and memory) gets scrambled as consciousness fades. According to researcher Andreas Engel, long-distance communication gets blocked, so the brain can’t build the global workspace: “It’s like the message is reaching the mailbox, but no one is picking it up.” Propofol in particular appears to cause abnormally strong synchrony between the primary cortex and other brain regions, and when too many neurons fire in a strongly synchronized rhythm, there’s no room for the exchange of specific messages.

Rebooting the Global Workspace

There’s also the science of coming out of unconsciousness to consider. A new study shows it’s not simply a matter of the anesthetic “wearing off.”

Researchers from UCLA say the return of conscious brain activity occurs in discrete clumps, or clusters — and that the brain does not jump between all of the clusters uniformly. In fact, some of these activity patterns serve as “hubs” on the way back to consciousness.

“Recovery from anesthesia is not simply the result of the anesthetic ‘wearing off’ but also of the brain finding its way back through a maze of possible activity states to those that allow conscious experience,” noted researcher Andrew Hudson in a statement. “Put simply, the brain reboots itself.”

Relatedly, a separate study from 2012 suggested that post-surgery confusion is the brain reverting to a more primitive evolutionary state as it goes through the “boot-up” process.

Quantum Vibrations in Microtubules?

There’s also the work of Stuart Hameroff to consider, though his approach to consciousness is still considered speculative at this point.

He pointed me to the work of the University of Pennsylvania’s Rod Eckenhoff, who has shown that anesthetics act on microtubules — extremely tiny cylindrically shaped protein polymers that are part of the cellular cytoskeleton.


Jeffrey81/Wikimedia Commons

“That suggests consciousness derives from microtubules,” Hameroff told io9.

Along with Travis Craddock, he also thinks that anesthetics bind to and affect cytoskeletal microtubules, and that anesthesia-related cognitive dysfunction is linked to microtubule instability. Craddock has found ‘quantum channels’ of aromatic amino acids in a microtubule subunit protein that may regulate large-scale quantum states and bind anesthetics.

I asked Hameroff where neuroscientists should focus their efforts as they work to understand the nature of consciousness.

“More studies like those of Anirban Bandyopadhyay at NIMS in Tsukuba, Japan (and now at MIT) showing megahertz and kilohertz vibrations in microtubules inside neurons,” he replied. “EEG may be the tip of an iceberg of deeper level, faster, smaller scale activities in microtubules. But they’re quantum, so though smaller, are non-local, and entangled through large regions of brain or more.”

Indeed, brain scans of various sorts are definitely the way to go, and not just for this particular line of inquiry. It will be through the ongoing discovery of NCCs that we may eventually get to the bottom of this thing called consciousness.



‘Free choice’ in primates altered through brain stimulation (Science Daily)

Date: May 29, 2014

Source: KU Leuven

Summary: When electrical pulses are applied to the ventral tegmental area of their brain, macaques presented with two images change their preference from one image to the other. The study is the first to confirm a causal link between activity in the ventral tegmental area and choice behavior in primates.

The study is the first to show a causal link between activity in the ventral tegmental area and choice behaviour. Credit: Image courtesy of KU Leuven

When electrical pulses are applied to the ventral tegmental area of their brain, macaques presented with two images change their preference from one image to the other. The study by researchers Wim Vanduffel and John Arsenault (KU Leuven and Massachusetts General Hospital) is the first to confirm a causal link between activity in the ventral tegmental area and choice behaviour in primates.

The ventral tegmental area is located in the midbrain and helps regulate learning and reinforcement in the brain’s reward system. It produces dopamine, a neurotransmitter that plays an important role in positive feelings, such as receiving a reward. “In this way, this small area of the brain provides learning signals,” explains Professor Vanduffel. “If a reward is larger or smaller than expected, behavior is reinforced or discouraged accordingly.”

Causal link

This effect can be artificially induced: “In one experiment, we allowed macaques to choose multiple times between two images — a star or a ball, for example. This told us which of the two visual stimuli they tended to naturally prefer. In a second experiment, we stimulated the ventral tegmental area with mild electrical currents whenever they chose the initially nonpreferred image. This quickly changed their preference. We were also able to manipulate their altered preference back to the original favorite.”

The study, which will be published online in the journal Current Biology on 16 June, is the first to confirm a causal link between activity in the ventral tegmental area and choice behaviour in primates. “In scans we found that electrically stimulating this tiny brain area activated the brain’s entire reward system, just as it does spontaneously when a reward is received. This has important implications for research into disorders relating to the brain’s reward network, such as addiction or learning disabilities.”

Could this method be used in the future to manipulate our choices? “Theoretically, yes. But the ventral tegmental area is very deep in the brain. At this point, stimulating it can only be done invasively, by surgically placing electrodes — just as is currently done for deep brain stimulation to treat Parkinson’s or depression. Once non-invasive methods — light or ultrasound, for example — can be applied with a sufficiently high level of precision, they could potentially be used for correcting defects in the reward system, such as addiction and learning disabilities.”

Journal Reference:

1. John T. Arsenault, Samy Rima, Heiko Stemmann, Wim Vanduffel. Role of the Primate Ventral Tegmental Area in Reinforcement and Motivation. Current Biology, 2014; DOI: 10.1016/j.cub.2014.04.044

The Change Within: The Obstacles We Face Are Not Just External (The Nation)

The climate crisis has such bad timing, confronting it not only requires a new economy but a new way of thinking.

Naomi Klein

April 21, 2014

(Reuters/China Daily)

This is a story about bad timing.

One of the most disturbing ways that climate change is already playing out is through what ecologists call “mismatch” or “mistiming.” This is the process whereby warming causes animals to fall out of step with a critical food source, particularly at breeding times, when a failure to find enough food can lead to rapid population losses.

The migration patterns of many songbird species, for instance, have evolved over millennia so that eggs hatch precisely when food sources such as caterpillars are at their most abundant, providing parents with ample nourishment for their hungry young. But because spring now often arrives early, the caterpillars are hatching earlier too, which means that in some areas they are less plentiful when the chicks hatch, threatening a number of health and fertility impacts. Similarly, in West Greenland, caribou are arriving at their calving grounds only to find themselves out of sync with the forage plants they have relied on for thousands of years, now growing earlier thanks to rising temperatures. That is leaving female caribou with less energy for lactation, reproduction and feeding their young, a mismatch that has been linked to sharp decreases in calf births and survival rates.

Scientists are studying cases of climate-related mistiming among dozens of species, from Arctic terns to pied flycatchers. But there is one important species they are missing—us. Homo sapiens. We too are suffering from a terrible case of climate-related mistiming, albeit in a cultural-historical, rather than a biological, sense. Our problem is that the climate crisis hatched in our laps at a moment in history when political and social conditions were uniquely hostile to a problem of this nature and magnitude—that moment being the tail end of the go-go ’80s, the blastoff point for the crusade to spread deregulated capitalism around the world. Climate change is a collective problem demanding collective action the likes of which humanity has never actually accomplished. Yet it entered mainstream consciousness in the midst of an ideological war being waged on the very idea of the collective sphere.

This deeply unfortunate mistiming has created all sorts of barriers to our ability to respond effectively to this crisis. It has meant that corporate power was ascendant at the very moment when we needed to exert unprecedented controls over corporate behavior in order to protect life on earth. It has meant that regulation was a dirty word just when we needed those powers most. It has meant that we are ruled by a class of politicians who know only how to dismantle and starve public institutions, just when they most need to be fortified and reimagined. And it has meant that we are saddled with an apparatus of “free trade” deals that tie the hands of policy-makers just when they need maximum flexibility to achieve a massive energy transition.

Confronting these various structural barriers to the next economy is the critical work of any serious climate movement. But it’s not the only task at hand. We also have to confront how the mismatch between climate change and market domination has created barriers within our very selves, making it harder to look at this most pressing of humanitarian crises with anything more than furtive, terrified glances. Because of the way our daily lives have been altered by both market and technological triumphalism, we lack many of the observational tools necessary to convince ourselves that climate change is real—let alone the confidence to believe that a different way of living is possible.

And little wonder: just when we needed to gather, our public sphere was disintegrating; just when we needed to consume less, consumerism took over virtually every aspect of our lives; just when we needed to slow down and notice, we sped up; and just when we needed longer time horizons, we were able to see only the immediate present.

This is our climate change mismatch, and it affects not just our species, but potentially every other species on the planet as well.

The good news is that, unlike reindeer and songbirds, we humans are blessed with the capacity for advanced reasoning and therefore the ability to adapt more deliberately—to change old patterns of behavior with remarkable speed. If the ideas that rule our culture are stopping us from saving ourselves, then it is within our power to change those ideas. But before that can happen, we first need to understand the nature of our personal climate mismatch.

› Climate change demands that we consume less, but being consumers is all we know. Climate change is not a problem that can be solved simply by changing what we buy—a hybrid instead of an SUV, some carbon offsets when we get on a plane. At its core, it is a crisis born of overconsumption by the comparatively wealthy, which means the world’s most manic consumers are going to have to consume less.

The problem is not “human nature,” as we are so often told. We weren’t born having to shop this much, and we have, in our recent past, been just as happy (in many cases happier) consuming far less. The problem is the inflated role that consumption has come to play in our particular era.

Late capitalism teaches us to create ourselves through our consumer choices: shopping is how we form our identities, find community and express ourselves. Thus, telling people that they can’t shop as much as they want to because the planet’s support systems are overburdened can be understood as a kind of attack, akin to telling them that they cannot truly be themselves. This is likely why, of the original “Three Rs”—reduce, reuse, recycle—only the third has ever gotten any traction, since it allows us to keep on shopping as long as we put the refuse in the right box. The other two, which require that we consume less, were pretty much dead on arrival.

› Climate change is slow, and we are fast. When you are racing through a rural landscape on a bullet train, it looks as if everything you are passing is standing still: people, tractors, cars on country roads. They aren’t, of course. They are moving, but at a speed so slow compared with the train that they appear static.

So it is with climate change. Our culture, powered by fossil fuels, is that bullet train, hurtling forward toward the next quarterly report, the next election cycle, the next bit of diversion or piece of personal validation via our smartphones and tablets. Our changing climate is like the landscape out the window: from our racy vantage point, it can appear static, but it is moving, its slow progress measured in receding ice sheets, swelling waters and incremental temperature rises. If left unchecked, climate change will most certainly speed up enough to capture our fractured attention—island nations wiped off the map, and city-drowning superstorms, tend to do that. But by then, it may be too late for our actions to make a difference, because the era of tipping points will likely have begun.

› Climate change is place-based, and we are everywhere at once. The problem is not just that we are moving too quickly. It is also that the terrain on which the changes are taking place is intensely local: an early blooming of a particular flower, an unusually thin layer of ice on a lake, the late arrival of a migratory bird. Noticing those kinds of subtle changes requires an intimate connection to a specific ecosystem. That kind of communion happens only when we know a place deeply, not just as scenery but also as sustenance, and when local knowledge is passed on with a sense of sacred trust from one generation to the next.

But that is increasingly rare in the urbanized, industrialized world. We tend to abandon our homes lightly—for a new job, a new school, a new love. And as we do so, we are severed from whatever knowledge of place we managed to accumulate at the previous stop, as well as from the knowledge amassed by our ancestors (who, at least in my case, migrated repeatedly themselves).

Even for those of us who manage to stay put, our daily existence can be disconnected from the physical places where we live. Shielded from the elements as we are in our climate-controlled homes, workplaces and cars, the changes unfolding in the natural world easily pass us by. We might have no idea that a historic drought is destroying the crops on the farms that surround our urban homes, since the supermarkets still display miniature mountains of imported produce, with more coming in by truck all day. It takes something huge—like a hurricane that passes all previous high-water marks, or a flood destroying thousands of homes—for us to notice that something is truly amiss. And even then we have trouble holding on to that knowledge for long, since we are quickly ushered along to the next crisis before these truths have a chance to sink in.

Climate change, meanwhile, is busily adding to the ranks of the rootless every day, as natural disasters, failed crops, starving livestock and climate-fueled ethnic conflicts force yet more people to leave their ancestral homes. And with every human migration, more crucial connections to specific places are lost, leaving yet fewer people to listen closely to the land.

› Climate pollutants are invisible, and we have stopped believing in what we cannot see. When BP’s Macondo well ruptured in 2010, releasing torrents of oil into the Gulf of Mexico, one of the things we heard from company CEO Tony Hayward was that “the Gulf of Mexico is a very big ocean. The amount of volume of oil and dispersant we are putting into it is tiny in relation to the total water volume.” The statement was widely ridiculed at the time, and rightly so, but Hayward was merely voicing one of our culture’s most cherished beliefs: that what we can’t see won’t hurt us and, indeed, barely exists.

So much of our economy relies on the assumption that there is always an “away” into which we can throw our waste. There’s the away where our garbage goes when it is taken from the curb, and the away where our waste goes when it is flushed down the drain. There’s the away where the minerals and metals that make up our goods are extracted, and the away where those raw materials are turned into finished products. But the lesson of the BP spill, in the words of ecological theorist Timothy Morton, is that ours is “a world in which there is no ‘away.’”

When I published No Logo a decade and a half ago, readers were shocked to learn of the abusive conditions under which their clothing and gadgets were manufactured. But we have since learned to live with it—not to condone it, exactly, but to be in a state of constant forgetfulness. Ours is an economy of ghosts, of deliberate blindness.

Air is the ultimate unseen, and the greenhouse gases that warm it are our most elusive ghosts. Philosopher David Abram points out that for most of human history, it was precisely this unseen quality that gave the air its power and commanded our respect. “Called Sila, the wind-mind of the world, by the Inuit; Nilch’i, or Holy Wind, by the Navajo; Ruach, or rushing-spirit, by the ancient Hebrews,” the atmosphere was “the most mysterious and sacred dimension of life.” But in our time, “we rarely acknowledge the atmosphere as it swirls between two persons.” Having forgotten the air, Abram writes, we have made it our sewer, “the perfect dump site for the unwanted by-products of our industries…. Even the most opaque, acrid smoke billowing out of the pipes will dissipate and disperse, always and ultimately dissolving into the invisible. It’s gone. Out of sight, out of mind.”

* * *

Another part of what makes climate change so very difficult for us to grasp is that ours is a culture of the perpetual present, one that deliberately severs itself from the past that created us as well as the future we are shaping with our actions. Climate change is about how what we did generations in the past will inescapably affect not just the present, but generations in the future. These time frames are a language that has become foreign to most of us.

This is not about passing individual judgment, nor about berating ourselves for our shallowness or rootlessness. Rather, it is about recognizing that we are products of an industrial project, one intimately, historically linked to fossil fuels.

And just as we have changed before, we can change again. After listening to the great farmer-poet Wendell Berry deliver a lecture on how we each have a duty to love our “homeplace” more than any other, I asked him if he had any advice for rootless people like me and my friends, who live in our computers and always seem to be shopping for a home. “Stop somewhere,” he replied. “And begin the thousand-year-long process of knowing that place.”

That’s good advice on lots of levels. Because in order to win this fight of our lives, we all need a place to stand.

Read more of The Nation’s special #MyClimateToo coverage:

Mark Hertsgaard: Why Today Is All About Climate
Christopher Hayes: The New Abolitionism
Dani McClain: The ‘Environmentalists’ Who Scapegoat Immigrants and Women on Climate Change
Mychal Denzel Smith: Racial and Environmental Justice Are Two Sides of the Same Coin
Katrina vanden Heuvel: Earth Day’s Founding Father
Wen Stephenson: Let This Earth Day Be The Last
Katha Pollitt: Climate Change is the Tragedy of the Global Commons
Michelle Goldberg: Fighting Despair to Fight Climate Change
George Zornick: We’re the Fossil Fuel Industry’s Cheap Date
Dan Zegart: Want to Stop Climate Change? Take the Fossil Fuel Industry to Court
Jeremy Brecher: ‘Jobs vs. the Environment’: How to Counter the Divisive Big Lie
Jon Wiener: Elizabeth Kolbert on Species Extinction and Climate Change
Dave Zirin: Brazil’s World Cup Will Kick the Environment in the Teeth
Steven Hsieh: People of Color Are Already Getting Hit the Hardest by Climate Change
John Nichols: If Rick Weiland Can Say “No” to Keystone, So Can Barack Obama
Michelle Chen: Where Have All the Green Jobs Gone?
Peter Rothberg: Why I’m Not Totally Bummed Out This Earth Day
Leslie Savan: This Is My Brain on Paper Towels

Scientists identify gene linking brain structure to intelligence (O Globo)

JC e-mail 4892, February 11, 2014

The discovery may have important implications for understanding psychiatric disorders such as schizophrenia and autism

Scientists at King’s College London have identified, for the first time, a gene that links the thickness of the brain’s grey matter to intelligence. The study was published this Tuesday in the journal Molecular Psychiatry and may help explain the biological mechanisms behind certain forms of intellectual impairment.

It was already known that grey matter plays an important role in memory, attention, thought, language and consciousness. Earlier studies had also shown that the thickness of the cerebral cortex is related to intellectual ability, but no gene had been identified.

An international team of scientists, led by King’s College, analysed DNA samples and magnetic resonance imaging scans from 1,583 healthy 14-year-old adolescents, who also took a series of tests to measure verbal and non-verbal intelligence.

“We wanted to find out how structural differences in the brain relate to differences in intellectual ability. We identified a genetic variation related to synaptic plasticity, to how neurons communicate,” explains Sylvane Desrivières, the study’s lead author, from the Institute of Psychiatry at King’s College London. “This may help us understand what happens at the neuronal level in certain forms of intellectual impairment, where the neurons’ ability to communicate is somehow compromised.”

She adds that it is important to point out that intelligence is influenced by many genetic and environmental factors: “The gene we identified explains only a small proportion of the differences in intellectual ability and is by no means ‘the intelligence gene.’”

The researchers examined 54,000 possible variants involved in brain development. On average, adolescents carrying one particular genetic variant had a thinner cortex in the left cerebral hemisphere, particularly in the frontal and temporal lobes, and performed less well on tests of intellectual ability. The genetic variation affects the expression of the NPTN gene, which encodes a protein that acts at neuronal synapses and therefore affects how brain cells communicate.

To confirm their findings, the researchers studied the NPTN gene in mouse and human brain cells. They found that the NPTN gene is differently active in the left and right hemispheres of the brain, which may make the left hemisphere more sensitive to the effects of NPTN mutations. The results suggest that some differences in intellectual ability may arise from reduced NPTN function in particular regions of the left hemisphere.

The genetic variation identified in this study accounts for an estimated 0.5% of the total variation in intelligence. Even so, the findings may have important implications for understanding the biological mechanisms underlying several psychiatric disorders, such as schizophrenia and autism, in which cognitive ability is a core feature of the condition.