Tag archive: Neurofisiologia

Becoming a centaur (Aeon)

Rounding up wild horses on the edge of the Gobi desert in Mongolia, 1964. Photo by Philip Jones Griffiths/Magnum
The horse is a prey animal, the human a predator. Our shared trust and athleticism are a neurobiological miracle

Janet Jones – 14 January 2022

Horse-and-human teams perform complex manoeuvres in competitions of all sorts. Together, we can gallop up to obstacles standing 8 feet (2.4 metres) high, leave the ground, and fly blind – neither party able to see over the top until after the leap has been initiated. Adopting a flatter trajectory with greater speed, horse and human sail over broad jumps up to 27 feet (more than 8 metres) long. We run as one at speeds of 44 miles per hour (just over 70 km/h), the fastest velocity any land mammal carrying a rider can achieve. In freestyle dressage events, we dance in place to the rhythm of music, trot sideways across the centre of an arena with huge leg-crossing steps, and canter in pirouettes with the horse’s front feet circling her hindquarters. Galloping again, the best horse-and-human teams can slide 65 feet (nearly 20 metres) to a halt while resting all their combined weight on the horse’s hind legs. Endurance races over extremely rugged terrain test horses and riders in journeys that traverse up to 500 miles (805 km) of high-risk adventure.

Charlotte Dujardin on Valegro, a world-record dressage freestyle at London Olympia, 2014: an example of high-precision brain-to-brain communication between horse and rider. Every step the horse takes is determined in conjunction with many invisible cues from his human rider, using a feedback loop between predator brain and prey brain. Note the horse’s beautiful physical condition and complete willingness to perform these extremely difficult manoeuvres.

No one disputes the athleticism fuelling these triumphs, but few people comprehend the mutual cross-species interaction that is required to accomplish them. The average horse weighs 1,200 pounds (more than 540 kg), makes instantaneous movements, and can become hysterical in a heartbeat. Even the strongest human is unable to force a horse to do anything she doesn’t want to do. Nor do good riders allow the use of force in training our magnificent animals. Instead, we hold ourselves to the higher standard of motivating horses to cooperate freely with us in achieving the goals of elite sports as well as mundane chores. Under these conditions, the horse trained with kindness, expertise and encouragement is a willing, equal participant in the action.

That action is rooted in embodied perception and the brain. In mounted teams, horses, with prey brains, and humans, with predator brains, share largely invisible signals via mutual body language. These signals are received and transmitted through peripheral nerves leading to each party’s spinal cord. Upon arrival in each brain, they are interpreted, and a learned response is generated. It, too, is transmitted through the spinal cord and nerves. This collaborative neural action forms a feedback loop, allowing communication from brain to brain in real time. Such conversations allow horse and human to achieve their immediate goals in athletic performance and everyday life. In a very real sense, each species’ mind is extended beyond its own skin into the mind of another, with physical interaction becoming a kind of neural dance.

Horses in nature display certain behaviours that tempt observers to wonder whether competitive manoeuvres truly require mutual communication with human riders. For example, the feral horse occasionally hops over a stream to reach good food or scrambles up a slope of granite to escape predators. These manoeuvres might be thought the precursors to jumping or rugged trail riding. If so, we might imagine that the performance horse’s extreme athletic feats are innate, with the rider merely a passenger steering from above. If that were the case, little requirement would exist for real-time communication between horse and human brains.

In fact, though, the feral hop is nothing like the trained leap over a competition jump, usually commenced from short distances at high speed. Today’s Grand Prix jump course comprises about 15 obstacles set at sharp angles to each other, each more than 5 feet high and more than 6 feet wide (1.5 x 1.8 metres). The horse-and-human team must complete this course in 80 or 90 seconds, a time allowance that makes for acute turns, diagonal flight paths and high-speed exits. Comparing the wilderness hop with the show jump is like associating a flintstone with a nuclear bomb. Horses and riders undergo many years of daily training to achieve this level of performance, and their brains share neural impulses throughout each experience.

These examples originate in elite levels of horse sport, but the same sort of interaction occurs in pastures, arenas and on simple trails all over the world. Any horse-and-human team can develop deep bonds of mutual trust, and learn to communicate using body language, knowledge and empathy.

Like it or not, we are the horse’s evolutionary enemy, yet they behave toward us as if inclined to become a friend

The critical component of the horse in nature, and her ability to learn how to interact so precisely with a human rider, is not her physical athleticism but her brain. The first precise magnetic resonance image of a horse’s brain appeared only in 2019, allowing veterinary neurologists far greater insight into the anatomy underlying equine mental function. As this new information is disseminated to horse trainers and riders for practical application, we see the beginnings of a revolution in brain-based horsemanship. Not only will this revolution drive competition to higher summits of success, and animal welfare to more humane levels of understanding, it will also motivate scientists to research the unique compatibility between prey and predator brains. Nowhere else in nature do we see such intense and intimate collaboration between two such disparate minds.

Three natural features of the equine brain are especially important when it comes to mind-melding with humans. First, the horse’s brain provides astounding touch detection. Receptor cells in the horse’s skin and muscles transduce – or convert – external pressure, temperature and body position to neural impulses that the horse’s brain can understand. They accomplish this with exquisite sensitivity: the average horse can detect less pressure against her skin than even a human fingertip can.

Second, horses in nature use body language as a primary medium of daily communication with each other. An alpha mare has only to flick an ear toward a subordinate to get him to move away from her food. A younger subordinate, untutored in the ear flick, receives stronger body language – two flattened ears and a bite that draws blood. The notion of animals in nature as kind, gentle creatures who never hurt each other is a myth.

Third, by nature, the equine brain is a learning machine. Untrammelled by the social and cognitive baggage that human brains carry, horses learn in a rapid, pure form that allows them to be taught the meanings of various human cues that shape equine behaviour in the moment. Taken together, the horse’s exceptional touch sensitivity, natural reliance on body language, and purity of learning form the tripod of support for brain-to-brain communication that is so critical in extreme performance.

One of the reasons for budding scientific fascination with neural horse-and-human communication is the horse’s status as a prey animal. Their brains and bodies evolved under survival pressures completely different from those that shaped our human physiology. For example, horse eyes are set on either side of their head for a panoramic view of the world, and their horizontal pupils allow clear sight along the horizon but fuzzy vision above and below. Their eyes rotate to maintain clarity along the horizon when their heads lie sideways to reach grass in odd locations. Equine brains are also hardwired to stream commands directly from the perception of environmental danger to the motor cortex, where instant evasion is carried out. All of these features evolved to allow the horse to survive predators.

Conversely, human brains evolved in part for the purpose of predation – hunting, chasing, planning… yes, even killing – with front-facing eyes, superb depth perception, and a prefrontal cortex for strategy and reason. Like it or not, we are the horse’s evolutionary enemy, yet they behave toward us as if inclined to become a friend.

The fact that horses and humans can communicate neurally without the external mediation of language or equipment is critical to our ability to initiate the cellular dance between brains. Saddles and bridles are used for comfort and safety, but bareback and bridleless competitions prove they aren’t necessary for highly trained brain-to-brain communication. Scientific efforts to communicate with predators such as dogs and apes have often been hobbled by the use of artificial media including human speech, sign language or symbolic lexigram. By contrast, horses allow us to apply a medium of communication that is completely natural to their lives in the wild and in captivity.

The horse’s prey brain is designed to notice and evade predators. How ironic, and how riveting, then, that this prey brain is the only one today that shares neural communication with a predator brain. It offers humanity a rare view into a prey animal’s world, almost as if we were wolves riding elk or coyotes mind-melding with cottontail bunnies.

Highly trained horses and riders send and receive neural signals using subtle body language. For example, a rider can apply invisible pressure with her left inner calf muscle to move the horse laterally to the right. That pressure is felt on the horse’s side, in his skin and muscle, via proprioceptive receptor cells that detect body position and movement. Then the signal is transduced from mechanical pressure to electrochemical impulse, and conducted up peripheral nerves to the horse’s spinal cord. Finally, it reaches the somatosensory cortex, the region of the brain responsible for interpreting sensory information.

Riders can sometimes guess that an invisible object exists by detecting subtle equine reactions

This interpretation is dependent on the horse’s knowledge that a particular body signal – for example, inward pressure from a rider’s left calf – is associated with a specific equine behaviour. Horse trainers spend years teaching their mounts these associations. In the present example, the horse has learned that this particular amount of pressure, at this speed and location, under these circumstances, means ‘move sideways to the right’. If the horse is properly trained, his motor cortex causes exactly that movement to occur.

By means of our human motion and position sensors, the rider’s brain now senses that the horse has changed his path rightward. Depending on the manoeuvre our rider plans to complete, she will then execute invisible cues to extend or collect the horse’s stride as he approaches a jump that is now centred in his vision, plant his right hind leg and spin in a tight fast circle, push hard off his hindquarters to chase a cow, or any number of other movements. These cues are combined to form that mutual neural dance, occurring in real time, and dependent on natural body language alone.
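
Purely as an illustration, the cue-respond-sense cycle behaves like a simple control loop. Here is a toy sketch in Python; the gains, numbers and function names are invented for the example, since real cues are continuous and far richer than this:

```python
# Toy model of the rider-horse feedback loop described above.
# All gains and values are invented for illustration.

def rider_cue(horse_path, target_path):
    """Rider's brain: compare the horse's actual path with the intended
    one and produce a graded left-calf pressure cue."""
    return 0.5 * (target_path - horse_path)   # proportional correction

def horse_response(cue):
    """Horse's brain: the trained association maps left-calf pressure
    onto a step to the right. Compliance is imperfect, so the loop iterates."""
    return 0.8 * cue

horse_path, target_path = 0.0, 1.0   # metres to the right of the current line
for step in range(6):
    cue = rider_cue(horse_path, target_path)    # rider brain -> rider body
    horse_path += horse_response(cue)           # horse senses cue, responds
    print(f"step {step}: horse_path = {horse_path:.3f}")  # rider senses result
```

Each pass around the loop stands in for one send-receive cycle through the two spinal cords; the path converges because each party keeps correcting against what it senses from the other.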

The example of a horse moving a few steps rightward off the rider’s left leg is extremely simplistic. When you imagine a horse and rider clearing a puissance wall of 7.5 feet (2.3 metres), think of the countless receptor cells transmitting bodily cues between both brains during approach, flight and exit. That is mutual brain-to-brain communication. Horse and human converse via body language to such an extreme degree that they are able to accomplish amazing acts of understanding and athleticism. Each of their minds has extended into the other’s, sending and receiving signals as if one united brain were controlling both bodies.

Franke Sloothaak on Optiebeurs Golo, a world-record puissance jump at Chaudfontaine in Belgium, 1991. This horse-and-human team displays the gentle encouragement that brain-to-brain communication requires. The horse is in perfect condition and health. The rider offers soft, light hands, and rides in perfect balance with the horse. He carries no whip, never uses his spurs, and employs the gentlest type of bit – whose full acceptance is evidenced by the horse’s foamy mouth and flexible neck. The horse is calm but attentive before and after the leap, showing complete willingness to approach the wall without a whiff of coercion. The first thing the rider does upon landing is pat his equine teammate. He strokes or pats the horse another eight times in the next 30 seconds, a splendid example of true horsemanship.

Analysis of brain-to-brain communication between horses and humans elicits several new ideas worthy of scientific notice. Because our minds interact so well using neural networks, horses and humans might learn to borrow neural signals from the party whose brain offers the highest function. For example, horses have a 340-degree range of view when holding their heads still, compared with a paltry 90-degree range in humans. Therefore, horses can see many objects that are invisible to their riders. Yet riders can sometimes guess that an invisible object exists by detecting subtle equine reactions.

Specifically, neural signals from the horse’s eyes carry the shape of an object to his brain. Those signals are transferred to the rider’s brain by a well-established route: equine receptor cells in the retina lead to equine detector cells in the visual cortex, which elicits an equine motor reaction that is then sensed by the rider’s human body. From there, the horse’s neural signals are transmitted up the rider’s spinal cord to the rider’s brain, and a perceptual communication loop is born. The rider’s brain can now respond neurally to something it is incapable of seeing, by borrowing the horse’s superior range of vision.

These brain-to-brain transfers are mutual, so the learning equine brain should also be able to borrow the rider’s vision, with its superior depth perception and focal acuity. This kind of neural interaction results in a horse-and-human team that can sense far more together than either party can detect alone. In effect, they share effort by assigning labour to the party whose skills are superior at a given task.

There is another type of skillset that requires a particularly nuanced cellular dance: sharing attention and focus. Equine vigilance allowed horses to survive 56 million years of evolution – they had to notice slight movements in tall grasses or risk becoming some predator’s dinner. Consequently, today it’s difficult to slip even a tiny change past a horse, especially a young or inexperienced animal who has not yet been taught to ignore certain sights, sounds and smells.

By contrast, humans are much better at concentration than vigilance. The predator brain does not need to notice and react instantly to every stimulus in the environment. In fact, it would be hampered by prey vigilance. While reading this essay, your brain sorts away the sound of traffic past your window, the touch of clothing against your skin, the sight of the masthead that says ‘Aeon’ at the top of this page. Ignoring these distractions allows you to focus on the content of this essay.

Horses and humans frequently share their respective attentional capacities during a performance. A puissance horse galloping toward an enormous wall cannot waste vigilance by noticing the faces of each person in the audience. Likewise, the rider cannot afford to miss a loose dog that runs into the arena outside her narrow range of vision and focus. Each party helps the other through their primary strengths.

Such sharing becomes automatic with practice. With innumerable neural contacts over time, the human brain learns to heed signals sent by the equine brain that say, in effect: ‘Hey, what’s that over there?’ Likewise, the equine brain learns to sense human neural signals that counter: ‘Let’s focus on this gigantic wall right here.’ Each party sends these messages by body language and receives them by body awareness through two spinal cords, then interprets them inside two brains, millisecond by millisecond.

The rider’s physical cues are transmitted by neural activation from the horse’s surface receptors to the horse’s brain

Finally, it is conceivable that horse and rider can learn to share features of executive function – the human brain’s ability to set goals, plan steps to achieve them, assess alternatives, make decisions and evaluate outcomes. Executive function occurs in the prefrontal cortex, an area that does not exist in the equine brain. Horses are excellent at learning, remembering and communicating – but they do not assess, decide, evaluate or judge as humans do.

Shying is a prominent equine behaviour that might be mediated by human executive function in well-trained mounts. When a horse of average size shies away from an unexpected stimulus, riders are sitting on top of 1,200 pounds of muscle that suddenly leaps sideways off all four feet and lands five yards away. It’s a frightening experience, and often results in falls that lead to injury or even death. The horse’s brain causes this reaction automatically by direct connection between his sensory and motor cortices.

Though this possibility must still be studied by rigorous science, brain-to-brain communication suggests that horses might learn to borrow small glimmers of executive function through neural interaction with the human’s prefrontal cortex. Suppose that a horse shies from an umbrella that suddenly opens. By breathing steadily, relaxing her muscles, and flexing her body in rhythm with the horse’s gait, the rider calms the animal using body language. Her physical cues are transmitted by neural activation from his surface receptors to his brain. He responds with body language in which his muscles relax, his head lowers, and his frightened eyes return to their normal size. The rider feels these changes with her body, which transmits the horse’s neural signals to the rider’s brain.

From this point, it’s only a very short step – but an important one – to the transmission and reception of neural signals between the rider’s prefrontal cortex (which evaluates the unexpected umbrella) and the horse’s brain (which instigates the leap away from that umbrella). In practice, to reduce shying, horse trainers teach their young charges to slow their reactions and seek human guidance.

Brain-to-brain communication between horses and riders is an intricate neural dance. These two species, one prey and one predator, are living temporarily in each other’s brains, sharing neural information back and forth in real time without linguistic or mechanical mediation. It is a partnership like no other. Together, a horse-and-human team experiences a richer perceptual and attentional understanding of the world than either member can achieve alone. And, ironically, this extended interspecies mind operates well not because the two brains are similar to each other, but because they are so different.

Janet Jones applies brain research to training horses and riders. She has a PhD from the University of California, Los Angeles, and for 23 years taught the neuroscience of perception, language, memory, and thought. She trained horses at a large stable early in her career, and later ran a successful horse-training business of her own. Her most recent book, Horse Brain, Human Brain (2020), is currently being translated into seven languages.

Edited by Pam Weintraub

Crows are self-aware just like us, says new study (Big Think)

Neuropsych — September 29, 2020

Crows have their own version of the human cerebral cortex.
Credit: Amarnath Tade/Unsplash

Robby Berman

Crows and the rest of the corvid family keep turning out to be smarter and smarter. New research observes them thinking about what they’ve just seen and associating it with an appropriate response. A corvid’s pallium is packed with more neurons than a great ape’s.

It’s no surprise that corvids — the “crow family” of birds that also includes ravens, jays, magpies, and nutcrackers — are smart. They use tools, recognize faces, leave gifts for people they like, and there’s even a video on Facebook showing a crow nudging a stubborn little hedgehog out of traffic. Corvids will also drop rocks into water to push floating food their way.

What is perhaps surprising is what the authors of a new study published last week in the journal Science have found: Crows are capable of thinking about their own thoughts as they work out problems. This is a level of self-awareness previously believed to signify the kind of higher intelligence that only humans and possibly a few other mammals possess. A crow knows what a crow knows, and if this brings the word sentience to your mind, you may be right.

Credit: Neoplantski/Alexey Pushkin/Shutterstock/Big Think

It’s long been assumed that higher intellectual functioning is strictly the product of a layered cerebral cortex. But bird brains are different. The authors of the study found that crows’ unlayered but neuron-dense pallium may play a similar role for these birds. Supporting this possibility, another study published last week in Science finds that the neuroanatomy of pigeons and barn owls may also support higher intelligence.

“It has been a good week for bird brains!” crow expert John Marzluff of the University of Washington tells Stat. (He was not involved in either study.)

Corvids are known to be as mentally capable as monkeys and great apes, despite having far smaller brains. Bird neurons are so much smaller that a corvid’s pallium actually contains more of them than would be found in an equivalent-sized primate cortex. This may constitute a clue regarding their expansive mental capabilities.

In any event, there appears to be a general correspondence between the number of neurons an animal has in its pallium and its intelligence, says Suzana Herculano-Houzel in her commentary on both new studies for Science. Humans, she says, sit “satisfyingly” atop this comparative chart, having even more neurons there than elephants, despite our much smaller body size. It’s estimated that crow brains have about 1.5 billion neurons.

Ozzie and Glenn not pictured. Credit: narubono/Unsplash

The kind of higher intelligence crows exhibited in the new research is similar to the way we solve problems. We catalog relevant knowledge and then explore different combinations of what we know to arrive at an action or solution.

The researchers, led by neurobiologist Andreas Nieder of the University of Tübingen in Germany, trained two carrion crows (Corvus corone), Ozzie and Glenn.

The crows were trained to watch for a flash — which didn’t always appear — and then peck at a red or blue target to register whether or not a flash of light was seen. Ozzie and Glenn were also taught to understand a changing “rule key” that specified whether red or blue signified the presence of a flash, with the other color signifying that no flash occurred.

In each round of a test, after a flash did or didn’t appear, the crows were presented with a rule key describing the current meaning of the red and blue targets, after which they pecked their response.

This sequence prevented the crows from simply rehearsing their response on auto-pilot, so to speak. In each test, they had to take the entire process from the top, seeing a flash or no flash, and then figuring out which target to peck.
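
A schematic reconstruction of one trial, based only on the description above (the colour mapping, probabilities and function names are illustrative, not the authors' code), shows why the percept has to be held in mind before the motor response can be chosen:

```python
import random

def run_trial():
    """One trial of the task as described: stimulus first, rule key second."""
    flash = random.random() < 0.5          # a flash may or may not appear
    percept = flash                        # idealized, error-free crow
    # Only after the stimulus does the rule key reveal which colour
    # means 'flash' this time, so no response can be prepared in advance.
    yes_target, no_target = random.choice([("red", "blue"), ("blue", "red")])
    correct_peck = yes_target if percept else no_target
    return flash, (yes_target, no_target), correct_peck

for _ in range(3):
    print(run_trial())
```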

As all this occurred, the researchers monitored their neuronal activity. When Ozzie or Glenn saw a flash, sensory neurons fired and then stopped as the bird worked out which target to peck. When there was no flash, no firing of the sensory neurons was observed before the crow paused to figure out the correct target.

Nieder’s interpretation of this sequence is that Ozzie or Glenn had to see or not see a flash, deliberately note that there had or hadn’t been a flash — exhibiting self-awareness of what had just been experienced — and then, in a few moments, connect that recollection to their knowledge of the current rule key before pecking the correct target.

During those few moments after the sensory neuron activity had died down, Nieder reported activity among a large population of neurons as the crows put the pieces together preparing to report what they’d seen. Among the busy areas in the crows’ brains during this phase of the sequence was, not surprisingly, the pallium.

Overall, the study may eliminate the layered cerebral cortex as a requirement for higher intelligence. As we learn more about the intelligence of crows, we can at least say with some certainty that it would be wise to avoid angering one.

Is everything in the world a little bit conscious? (MIT Technology Review)

technologyreview.com

Christof Koch – August 25, 2021

The idea that consciousness is widespread is attractive to many for intellectual and, perhaps, also emotional reasons. But can it be tested? Surprisingly, perhaps it can.

Panpsychism is the belief that consciousness is found throughout the universe—not only in people and animals, but also in trees, plants, and bacteria. Panpsychists hold that some aspect of mind is present even in elementary particles. The idea that consciousness is widespread is attractive to many for intellectual and, perhaps, also emotional reasons. But can it be empirically tested? Surprisingly, perhaps it can. That’s because one of the most popular scientific theories of consciousness, integrated information theory (IIT), shares many—though not all—features of panpsychism.

As the American philosopher Thomas Nagel has argued, something is conscious if there is “something that it is like to be” that thing in the state that it is in. A human brain in a state of wakefulness feels like something specific. 

IIT specifies a unique number, a system’s integrated information, labeled by the Greek letter φ (pronounced phi). If φ is zero, the system does not feel like anything; indeed, the system does not exist as a whole, as it is fully reducible to its constituent components. The larger φ, the more conscious a system is, and the more irreducible. Given an accurate and complete description of a system, IIT predicts both the quantity and the quality of its experience (if any). IIT predicts that because of the structure of the human brain, people have high values of φ, while animals have smaller (but positive) values and classical digital computers have almost none.

A person’s value of φ is not constant. It increases during early childhood with the development of the self and may decrease with the onset of dementia and other cognitive impairments. φ will fluctuate during sleep, growing larger during dreams and smaller in deep, dreamless states. 

IIT starts by identifying five true and essential properties of any and every conceivable conscious experience. For example, experiences are definite (exclusion). This means that an experience is not less than it is (experiencing only the sensation of the color blue but not the moving ocean that brought the color to mind), nor is it more than it is (say, experiencing the ocean while also being aware of the canopy of trees behind one’s back). In a second step, IIT derives five associated physical properties that any system—brain, computer, pine tree, sand dune—has to exhibit in order to feel like something. A “mechanism” in IIT is anything that has a causal role in a system; this could be a logical gate in a computer or a neuron in the brain. IIT says that consciousness arises only in systems of mechanisms that have a particular structure. To simplify somewhat, that structure must be maximally integrated—not accurately describable by breaking it into its constituent parts. It must also have cause-and-effect power upon itself, which is to say the current state of a given mechanism must constrain the future states of not only that particular mechanism, but the system as a whole. 

Given a precise physical description of a system, the theory provides a way to calculate the φ of that system. The technical details of how this is done are complicated, but the upshot is that one can, in principle, objectively measure the φ of a system so long as one has such a precise description of it. (We can compute the φ of computers because, having built them, we understand them precisely. Computing the φ of a human brain is still an estimate.)

Debating the nature of consciousness might at first sound like an academic exercise, but it has real and important consequences.

Systems can be evaluated at different levels—one could measure the φ of a sugar-cube-size piece of my brain, or of my brain as a whole, or of me and you together. Similarly, one could measure the φ of a silicon atom, of a particular circuit on a microchip, or of an assemblage of microchips that make up a supercomputer. Consciousness, according to the theory, exists for systems for which φ is at a maximum. It exists for all such systems, and only for such systems. 

The φ of my brain is bigger than the φ values of any of its parts, however one sets out to subdivide it. So I am conscious. But the φ of me and you together is less than my φ or your φ, so we are not “jointly” conscious. If, however, a future technology could create a dense communication hub between my brain and your brain, then such brain-bridging would create a single mind, distributed across four cortical hemispheres. 

Conversely, the φ of a supercomputer is less than the φs of any of the circuits composing it, so a supercomputer—however large and powerful—is not conscious. The theory predicts that even if some deep-learning system could pass the Turing test, it would be a so-called “zombie”—simulating consciousness, but not actually conscious. 
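
All of these claims follow from IIT's exclusion rule: among overlapping candidate systems, consciousness is ascribed only where φ is a local maximum. A toy sketch in Python, with invented φ values standing in for calculations that are in reality very hard:

```python
# Invented phi values for nested candidate systems (illustration only).
phi = {
    "piece of my brain": 5.0,
    "my whole brain": 40.0,
    "your whole brain": 38.0,
    "me and you together": 12.0,
    "one circuit of a supercomputer": 3.0,
    "the whole supercomputer": 1.0,
}

# Which wholes contain which parts (illustrative containment relations).
contains = {
    "my whole brain": ["piece of my brain"],
    "me and you together": ["my whole brain", "your whole brain"],
    "the whole supercomputer": ["one circuit of a supercomputer"],
}

def conscious(name):
    """Exclusion rule: a candidate is conscious only if its phi exceeds
    that of every overlapping candidate (its parts and its containers)."""
    rivals = contains.get(name, []) + \
             [whole for whole, parts in contains.items() if name in parts]
    return all(phi[name] > phi[r] for r in rivals)

for name in phi:
    print(f"{name}: {'conscious' if conscious(name) else 'not conscious'}")
```

On these made-up numbers, each whole brain wins (it beats both its parts and the "me and you" aggregate), while the aggregate and the supercomputer lose to their own components, matching the verdicts in the text.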

Like panpsychism, then, IIT considers consciousness an intrinsic, fundamental property of reality that is graded and most likely widespread in the tree of life, since any system with a non-zero amount of integrated information will feel like something. This does not imply that a bee feels obese or makes weekend plans. But a bee can feel a measure of happiness when returning pollen-laden in the sun to its hive. When a bee dies, it ceases to experience anything. Likewise, given the vast complexity of even a single cell, with millions of proteins interacting, it may feel a teeny-tiny bit like something. 

Debating the nature of consciousness might at first sound like an academic exercise, but it has real and important consequences. Most obviously, it matters to how we think about people in vegetative states. Such patients may groan or otherwise move unprovoked but fail to respond to commands to signal in a purposeful manner by moving their eyes or nodding. Are they conscious minds, trapped in their damaged body, able to perceive but unable to respond? Or are they without consciousness?

Evaluating such patients for the presence of consciousness is tricky. IIT proponents have developed a procedure that can test for consciousness in an unresponsive person. First they set up a network of EEG electrodes that can measure electrical activity in the brain. Then they stimulate the brain with a gentle magnetic pulse, and record the echoes of that pulse. They can then calculate a mathematical measure of the complexity of those echoes, called a perturbational complexity index (PCI).

In healthy, conscious individuals—or in people who have brain damage but are clearly conscious—the PCI is always above a particular threshold (0.31). Conversely, when healthy people are asleep, their PCI is always below that threshold. So it is reasonable to take PCI as a proxy for the presence of a conscious mind. If the PCI of someone in a persistent vegetative state is always measured to be below this threshold, we can with confidence say that this person is not covertly conscious. 
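
The index itself is, at heart, a compression measure on the brain's response. A very rough sketch of the idea in Python (the published PCI pipeline adds source modelling, statistics and a different normalization; the shuffled-data baseline here is a simplification):

```python
import numpy as np

def lz_complexity(bits):
    """Count new substrings in the style of Lempel-Ziv: richer, less
    compressible sequences accumulate more distinct 'words'."""
    s = "".join(map(str, bits))
    words, i = set(), 0
    while i < len(s):
        j = i + 1
        while s[i:j] in words and j <= len(s):
            j += 1
        words.add(s[i:j])
        i = j
    return len(words)

def pci_like(evoked, rng):
    """Binarize the evoked response and normalize its complexity against
    a shuffled copy. Illustrative only; not the clinical PCI."""
    bits = (evoked > np.median(evoked)).astype(int)
    return lz_complexity(bits) / lz_complexity(rng.permutation(bits))

rng = np.random.default_rng(0)
# A stereotyped, wave-like 'echo' compresses well, so it scores low.
echo = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.standard_normal(400)
print(pci_like(echo, rng))  # the clinical threshold (0.31) applies to the
                            # real PCI, not to this toy score
```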

This method is being investigated in a number of clinical centers across the US and Europe. Other tests seek to validate the predictions that IIT makes about the location and timing of the footprints of sensory consciousness in the brains of humans, nonhuman primates, and mice. 

Unlike panpsychism, the startling claims of IIT can be empirically tested. If they hold up, science may have found a way to cut through a knot that has puzzled philosophers for as long as philosophy has existed.

Christof Koch is the chief scientist of the MindScope program at the Allen Institute for Brain Science in Seattle.

The Mind issue

This story was part of our September 2021 issue

Nobody understands what consciousness is or how it works. Nobody understands quantum mechanics either. Could that be more than coincidence? (BBC)

What is going on in our brains? (Credit: Mehau Kulyk/Science Photo Library)

Quantum mechanics is the best theory we have for describing the world at the nuts-and-bolts level of atoms and subatomic particles. Perhaps the most renowned of its mysteries is the fact that the outcome of a quantum experiment can change depending on whether or not we choose to measure some property of the particles involved.

When this “observer effect” was first noticed by the early pioneers of quantum theory, they were deeply troubled. It seemed to undermine the basic assumption behind all science: that there is an objective world out there, irrespective of us. If the way the world behaves depends on how – or if – we look at it, what can “reality” really mean?

The most famous intrusion of the mind into quantum mechanics comes in the “double-slit experiment”

Some of those researchers felt forced to conclude that objectivity was an illusion, and that consciousness has to be allowed an active role in quantum theory. To others, that did not make sense. Surely, Albert Einstein once complained, the Moon does not exist only when we look at it!

Today some physicists suspect that, whether or not consciousness influences quantum mechanics, it might in fact arise because of it. They think that quantum theory might be needed to fully understand how the brain works.

Might it be that, just as quantum objects can apparently be in two places at once, so a quantum brain can hold onto two mutually-exclusive ideas at the same time?

These ideas are speculative, and it may turn out that quantum physics has no fundamental role either for or in the workings of the mind. But if nothing else, these possibilities show just how strangely quantum theory forces us to think.

The famous double-slit experiment (Credit: Victor de Schwanberg/Science Photo Library)

The most famous intrusion of the mind into quantum mechanics comes in the “double-slit experiment”. Imagine shining a beam of light at a screen that contains two closely-spaced parallel slits. Some of the light passes through the slits, whereupon it strikes another screen.

Light can be thought of as a kind of wave, and when waves emerge from two slits like this they can interfere with each other. If their peaks coincide, they reinforce each other, whereas if a peak and a trough coincide, they cancel out. This wave interference produces a series of alternating bright and dark stripes on the back screen, where the light waves are either reinforced or cancelled out.

The implication seems to be that each particle passes simultaneously through both slits

This interference pattern was recognised as a characteristic of wave behaviour more than 200 years ago, well before quantum theory existed.

The double-slit experiment can also be performed with quantum particles like electrons, tiny charged particles that are components of atoms. In a counter-intuitive twist, these particles can behave like waves. That means they can undergo diffraction when a stream of them passes through the two slits, producing an interference pattern.

Now suppose that the quantum particles are sent through the slits one by one, and their arrival at the screen is likewise seen one by one. This time there is apparently nothing for each particle to interfere with along its route – yet nevertheless the pattern of particle impacts that builds up over time reveals interference bands.

The implication seems to be that each particle passes simultaneously through both slits and interferes with itself. This combination of “both paths at once” is known as a superposition state.

But here is the really odd thing.

The double-slit experiment (Credit: GIPhotoStock/Science Photo Library)

If we place a detector inside or just behind one slit, we can find out whether any given particle goes through it or not. In that case, however, the interference vanishes. Simply by observing a particle’s path – even if that observation should not disturb the particle’s motion – we change the outcome.
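
The difference between the two situations can be shown in a few lines of textbook arithmetic: with no detector, the complex amplitudes of the two paths are added and then squared; with a which-path detector, the probabilities are added instead, and the cross term that produces the fringes disappears. The parameters below are illustrative:

```python
import numpy as np

lam, d, L = 500e-9, 20e-6, 1.0     # wavelength, slit spacing, screen distance
k = 2 * np.pi / lam
x = np.linspace(-0.1, 0.1, 5)      # a few positions on the screen (metres)

phase = k * d * x / L              # relative phase from the path difference
a1 = np.exp(1j * 0)                # amplitude via slit 1 (reference)
a2 = np.exp(1j * phase)            # amplitude via slit 2

no_detector = np.abs(a1 + a2) ** 2                 # add amplitudes, then square
with_detector = np.abs(a1) ** 2 + np.abs(a2) ** 2  # add probabilities

print(no_detector)    # oscillates between 0 and 4: interference fringes
print(with_detector)  # constant 2 everywhere: no interference
```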

The physicist Pascual Jordan, who worked with quantum guru Niels Bohr in Copenhagen in the 1920s, put it like this: “observations not only disturb what has to be measured, they produce it… We compel [a quantum particle] to assume a definite position.” In other words, Jordan said, “we ourselves produce the results of measurements.”

If that is so, objective reality seems to go out of the window.

And it gets even stranger.

Particles can be in two states (Credit: Victor de Schwanberg/Science Photo Library)

If nature seems to be changing its behaviour depending on whether we “look” or not, we could try to trick it into showing its hand. To do so, we could measure which path a particle took through the double slits, but only after it has passed through them. By then, it ought to have “decided” whether to take one path or both.

The sheer act of noticing, rather than any physical disturbance caused by measuring, can cause the collapse

An experiment for doing this was proposed in the 1970s by the American physicist John Wheeler, and this “delayed choice” experiment was performed in the following decade. It uses clever techniques to make measurements on the paths of quantum particles (generally, particles of light, called photons) after they should have chosen whether to take one path or a superposition of two.

It turns out that, just as Bohr confidently predicted, it makes no difference whether we delay the measurement or not. As long as we measure the photon’s path before its arrival at a detector is finally registered, we lose all interference.

It is as if nature “knows” not just if we are looking, but if we are planning to look.

Eugene Wigner (Credit: Emilio Segre Visual Archives/American Institute of Physics/Science Photo Library)

Whenever, in these experiments, we discover the path of a quantum particle, its cloud of possible routes “collapses” into a single well-defined state. What’s more, the delayed-choice experiment implies that the sheer act of noticing, rather than any physical disturbance caused by measuring, can cause the collapse. But does this mean that true collapse has only happened when the result of a measurement impinges on our consciousness?

It is hard to avoid the implication that consciousness and quantum mechanics are somehow linked

That possibility was admitted in the 1960s by the Hungarian physicist Eugene Wigner. “It follows that the quantum description of objects is influenced by impressions entering my consciousness,” he wrote. “Solipsism may be logically consistent with present quantum mechanics.”

Wheeler even entertained the thought that the presence of living beings, which are capable of “noticing”, has transformed what was previously a multitude of possible quantum pasts into one concrete history. In this sense, Wheeler said, we become participants in the evolution of the Universe since its very beginning. In his words, we live in a “participatory universe.”

To this day, physicists do not agree on the best way to interpret these quantum experiments, and to some extent what you make of them is (at the moment) up to you. But one way or another, it is hard to avoid the implication that consciousness and quantum mechanics are somehow linked.

Beginning in the 1980s, the British physicist Roger Penrose suggested that the link might work in the other direction. Whether or not consciousness can affect quantum mechanics, he said, perhaps quantum mechanics is involved in consciousness.

Physicist and mathematician Roger Penrose (Credit: Max Alexander/Science Photo Library)

What if, Penrose asked, there are molecular structures in our brains that are able to alter their state in response to a single quantum event? Could not these structures then adopt a superposition state, just like the particles in the double-slit experiment? And might those quantum superpositions then show up in the ways neurons are triggered to communicate via electrical signals?

Maybe, says Penrose, our ability to sustain seemingly incompatible mental states is no quirk of perception, but a real quantum effect.

Perhaps quantum mechanics is involved in consciousness

After all, the human brain seems able to handle cognitive processes that still far exceed the capabilities of digital computers. Perhaps we can even carry out computational tasks that are impossible on ordinary computers, which use classical digital logic.

Penrose first proposed that quantum effects feature in human cognition in his 1989 book The Emperor’s New Mind. The idea is called Orch-OR, which is short for “orchestrated objective reduction”. The phrase “objective reduction” means that, as Penrose believes, the collapse of quantum interference and superposition is a real, physical process, like the bursting of a bubble.

Orch-OR draws on Penrose’s suggestion that gravity is responsible for the fact that everyday objects, such as chairs and planets, do not display quantum effects. Penrose believes that quantum superpositions become impossible for objects much larger than atoms, because their gravitational effects would then force two incompatible versions of space-time to coexist.

Penrose developed this idea further with American physician Stuart Hameroff. In his 1994 book Shadows of the Mind, Penrose suggested that the structures involved in this quantum cognition might be protein strands called microtubules. These are found in most of our cells, including the neurons in our brains. Penrose and Hameroff argue that vibrations of microtubules can adopt a quantum superposition.

But there is no evidence that such a thing is remotely feasible.

Microtubules inside a cell (Credit: Dennis Kunkel Microscopy/Science Photo Library)

It has been suggested that the idea of quantum superpositions in microtubules is supported by experiments described in 2013, but in fact those studies made no mention of quantum effects.

Besides, most researchers think that the Orch-OR idea was ruled out by a study published in 2000. Physicist Max Tegmark calculated that quantum superpositions of the molecules involved in neural signaling could not survive for even a fraction of the time needed for such a signal to get anywhere.

Other researchers have found evidence for quantum effects in living beings

Quantum effects such as superposition are easily destroyed, because of a process called decoherence. This is caused by the interactions of a quantum object with its surrounding environment, through which the “quantumness” leaks away.

Decoherence is expected to be extremely rapid in warm and wet environments like living cells.

Nerve signals are electrical pulses, caused by the passage of electrically-charged atoms across the walls of nerve cells. If one of these atoms was in a superposition and then collided with a neuron, Tegmark showed that the superposition should decay in less than one billion billionth of a second. It takes at least ten thousand trillion times as long for a neuron to discharge a signal.
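
Taking those figures at face value ('one billion billionth of a second' as 10⁻¹⁸ s, 'ten thousand trillion' as 10¹⁶), the two numbers can be checked against each other:

```python
decoherence_time = 1e-18        # superposition lifetime, in seconds
ratio = 1e16                    # "at least ten thousand trillion times as long"
print(decoherence_time * ratio) # 0.01 s: neural signalling lives at the
                                # millisecond-and-up scale, so any superposition
                                # is long gone before a signal gets anywhere
```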

As a result, ideas about quantum effects in the brain are viewed with great skepticism.

However, Penrose is unmoved by those arguments and stands by the Orch-OR hypothesis. And despite Tegmark’s prediction of ultra-fast decoherence in cells, other researchers have found evidence for quantum effects in living beings. Some argue that quantum mechanics is harnessed by migratory birds that use magnetic navigation, and by green plants when they use sunlight to make sugars in photosynthesis.

Besides, the idea that the brain might employ quantum tricks shows no sign of going away. For there is now another, quite different argument for it.

Could phosphorus sustain a quantum state? (Credit: Phil Degginger/Science Photo Library)

In a study published in 2015, physicist Matthew Fisher of the University of California at Santa Barbara argued that the brain might contain molecules capable of sustaining more robust quantum superpositions. Specifically, he thinks that the nuclei of phosphorus atoms may have this ability.

Phosphorus atoms are everywhere in living cells. They often take the form of phosphate ions, in which one phosphorus atom joins up with four oxygen atoms.

Such ions are the basic unit of energy within cells. Much of the cell’s energy is stored in molecules called ATP, which contain a string of three phosphate groups joined to an organic molecule. When one of the phosphates is cut free, energy is released for the cell to use.

Cells have molecular machinery for assembling phosphate ions into groups and cleaving them off again. Fisher suggested a scheme in which two phosphate ions might be placed in a special kind of superposition called an “entangled state”.

Phosphorus spins could resist decoherence for a day or so, even in living cells

The phosphorus nuclei have a quantum property called spin, which makes them rather like little magnets with poles pointing in particular directions. In an entangled state, the spin of one phosphorus nucleus depends on that of the other.

Put another way, entangled states are really superposition states involving more than one quantum particle.

Fisher says that the quantum-mechanical behaviour of these nuclear spins could plausibly resist decoherence on human timescales. He agrees with Tegmark that quantum vibrations, like those postulated by Penrose and Hameroff, will be strongly affected by their surroundings “and will decohere almost immediately”. But nuclear spins do not interact very strongly with their surroundings.

All the same, quantum behaviour in the phosphorus nuclear spins would have to be “protected” from decoherence.

Quantum particles can have different spins (Credit: Richard Kail/Science Photo Library)

This might happen, Fisher says, if the phosphorus atoms are incorporated into larger objects called “Posner molecules”. These are clusters of six phosphate ions, combined with nine calcium ions. There is some evidence that they can exist in living cells, though this is currently far from conclusive.

I decided… to explore how on earth the lithium ion could have such a dramatic effect in treating mental conditions

In Posner molecules, Fisher argues, phosphorus spins could resist decoherence for a day or so, even in living cells. That means they could influence how the brain works.

The idea is that Posner molecules can be swallowed up by neurons. Once inside, the Posner molecules could trigger the firing of a signal to another neuron, by falling apart and releasing their calcium ions.

Because of entanglement in Posner molecules, two such signals might in turn become entangled: a kind of quantum superposition of a “thought”, you might say. “If quantum processing with nuclear spins is in fact present in the brain, it would be an extremely common occurrence, happening pretty much all the time,” Fisher says.

He first got this idea when he started thinking about mental illness.

A capsule of lithium carbonate (Credit: Custom Medical Stock Photo/Science Photo Library)

“My entry into the biochemistry of the brain started when I decided three or four years ago to explore how on earth the lithium ion could have such a dramatic effect in treating mental conditions,” Fisher says.

At this point, Fisher’s proposal is no more than an intriguing idea

Lithium drugs are widely used for treating bipolar disorder. They work, but nobody really knows how.

“I wasn’t looking for a quantum explanation,” Fisher says. But then he came across a paper reporting that lithium drugs had different effects on the behaviour of rats, depending on what form – or “isotope” – of lithium was used.

On the face of it, that was extremely puzzling. In chemical terms, different isotopes behave almost identically, so if the lithium worked like a conventional drug the isotopes should all have had the same effect.

Nerve cells are linked at synapses (Credit: Sebastian Kaulitzki/Science Photo Library)

But Fisher realised that the nuclei of the atoms of different lithium isotopes can have different spins. This quantum property might affect the way lithium drugs act. For example, if lithium substitutes for calcium in Posner molecules, the lithium spins might “feel” and influence those of phosphorus atoms, and so interfere with their entanglement.

We do not even know what consciousness is

If this is true, it would help to explain why lithium can treat bipolar disorder.

At this point, Fisher’s proposal is no more than an intriguing idea. But there are several ways in which its plausibility can be tested, starting with the idea that phosphorus spins in Posner molecules can keep their quantum coherence for long periods. That is what Fisher aims to do next.

All the same, he is wary of being associated with the earlier ideas about “quantum consciousness”, which he sees as highly speculative at best.

Consciousness is a profound mystery (Credit: Sciepro/Science Photo Library)

Physicists are not terribly comfortable with finding themselves inside their theories. Most hope that consciousness and the brain can be kept out of quantum theory, and perhaps vice versa. After all, we do not even know what consciousness is, let alone have a theory to describe it.

We all know what red is like, but we have no way to communicate the sensation

It does not help that there is now a New Age cottage industry devoted to notions of “quantum consciousness”, claiming that quantum mechanics offers plausible rationales for such things as telepathy and telekinesis.

As a result, physicists are often embarrassed to even mention the words “quantum” and “consciousness” in the same sentence.

But setting that aside, the idea has a long history. Ever since the “observer effect” and the mind first insinuated themselves into quantum theory in the early days, it has been devilishly hard to kick them out. A few researchers think we might never manage to do so.

In 2016, Adrian Kent of the University of Cambridge in the UK, one of the most respected “quantum philosophers”, speculated that consciousness might alter the behaviour of quantum systems in subtle but detectable ways.

We do not understand how thoughts work (Credit: Andrzej Wojcicki/Science Photo Library)

Kent is very cautious about this idea. “There is no compelling reason of principle to believe that quantum theory is the right theory in which to try to formulate a theory of consciousness, or that the problems of quantum theory must have anything to do with the problem of consciousness,” he admits.

Every line of thought on the relationship of consciousness to physics runs into deep trouble

But he says that it is hard to see how a description of consciousness based purely on pre-quantum physics can account for all the features it seems to have.

One particularly puzzling question is how our conscious minds can experience unique sensations, such as the colour red or the smell of frying bacon. With the exception of people with visual impairments, we all know what red is like, but we have no way to communicate the sensation and there is nothing in physics that tells us what it should be like.

Sensations like this are called “qualia”. We perceive them as unified properties of the outside world, but in fact they are products of our consciousness – and that is hard to explain. Indeed, in 1995 philosopher David Chalmers dubbed it “the hard problem” of consciousness.

How does our consciousness work? (Credit: Victor Habbick Visions/Science Photo Library)

“Every line of thought on the relationship of consciousness to physics runs into deep trouble,” says Kent.

This has prompted him to suggest that “we could make some progress on understanding the problem of the evolution of consciousness if we supposed that consciousness alters (albeit perhaps very slightly and subtly) quantum probabilities.”

“Quantum consciousness” is widely derided as mystical woo, but it just will not go away

In other words, the mind could genuinely affect the outcomes of measurements.

It does not, in this view, exactly determine “what is real”. But it might affect the chance that each of the possible actualities permitted by quantum mechanics is the one we do in fact observe, in a way that quantum theory itself cannot predict. Kent says that we might look for such effects experimentally.

He even bravely estimates the chances of finding them. “I would give credence of perhaps 15% that something specifically to do with consciousness causes deviations from quantum theory, with perhaps 3% credence that this will be experimentally detectable within the next 50 years,” he says.

If that happens, it would transform our ideas about both physics and the mind. That seems a chance worth exploring.

Large human brain evolved as a result of ‘sizing each other up’ (Science Daily)

Date: August 12, 2016
Source: Cardiff University
Summary: Humans have evolved a disproportionately large brain as a result of sizing each other up in large cooperative social groups, researchers have proposed.

The brains of humans enlarged over time thanks to our sizing up the competition, say scientists. Credit: © danheighton / Fotolia

Humans have evolved a disproportionately large brain as a result of sizing each other up in large cooperative social groups, researchers have proposed.

A team led by computer scientists at Cardiff University suggest that the challenge of judging a person’s relative standing and deciding whether or not to cooperate with them has promoted the rapid expansion of human brain size over the last 2 million years.

In a study published in Scientific Reports, the team, which also includes leading evolutionary psychologist Professor Robin Dunbar from the University of Oxford, specifically found that evolution favors those who prefer to help out others who are at least as successful as themselves.

Lead author of the study Professor Roger Whitaker, from Cardiff University’s School of Computer Science and Informatics, said: “Our results suggest that the evolution of cooperation, which is key to a prosperous society, is intrinsically linked to the idea of social comparison — constantly sizing each other up and making decisions as to whether we want to help them or not.

“We’ve shown that over time, evolution favors strategies to help those who are at least as successful as themselves.”

In their study, the team used computer modelling to run hundreds of thousands of simulations, or ‘donation games’, to unravel the complexities of decision-making strategies for simplified humans and to establish why certain types of behaviour among individuals begin to strengthen over time.

In each round of the donation game, two simulated players were randomly selected from the population. The first player then made a decision on whether or not they wanted to donate to the other player, based on how they judged their reputation. If the player chose to donate, they incurred a cost and the receiver was given a benefit. Each player’s reputation was then updated in light of their action, and another game was initiated.
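
The round structure just described is simple enough to sketch in a few lines of code. Below is a minimal, illustrative Python version, assuming integer reputation scores and arbitrary cost and benefit values (the paper’s actual model space, with evolving strategies, is far richer):

```python
import random

COST, BENEFIT = 1, 3   # illustrative payoff parameters, not the paper's

def play_round(payoffs, reputations):
    """One donation game between two randomly chosen players."""
    donor, recipient = random.sample(range(len(payoffs)), 2)
    # Social-comparison heuristic: help those at least as successful as you.
    if reputations[recipient] >= reputations[donor]:
        payoffs[donor] -= COST
        payoffs[recipient] += BENEFIT
        reputations[donor] += 1   # donating raises the donor's standing
    else:
        reputations[donor] -= 1   # refusing to donate lowers it

# Run many rounds over a small population.
N = 50
payoffs, reputations = [0] * N, [0] * N
for _ in range(100_000):
    play_round(payoffs, reputations)
```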

Compared with other species, including our closest relatives, chimpanzees, the brain accounts for a much larger share of body weight in human beings. Humans also have the largest cerebral cortex of all mammals relative to the size of their brains. The cortex forms the outer layer of the cerebral hemispheres and supports higher functions such as memory, communication and thinking.

The research team propose that making relative judgements through helping others has been influential for human survival, and that the complexity of constantly assessing individuals has been a sufficiently difficult task to promote the expansion of the brain over many generations of human reproduction.

Professor Robin Dunbar, who previously proposed the social brain hypothesis, said: “According to the social brain hypothesis, the disproportionately large brain size in humans exists as a consequence of humans evolving in large and complex social groups.

“Our new research reinforces this hypothesis and offers an insight into the way cooperation and reward may have been instrumental in driving brain evolution, suggesting that the challenge of assessing others could have contributed to the large brain size in humans.”

According to the team, the research could also have implications for engineering, specifically where intelligent, autonomous machines need to decide how generous they should be towards each other during one-off interactions.

“The models we use can be executed as short algorithms called heuristics, allowing devices to make quick decisions about their cooperative behaviour,” Professor Whitaker said.

“New autonomous technologies, such as distributed wireless networks or driverless cars, will need to self-manage their behaviour but at the same time cooperate with others in their environment.”


Journal Reference:

  1. Roger M. Whitaker, Gualtiero B. Colombo, Stuart M. Allen, Robin I. M. Dunbar. A Dominant Social Comparison Heuristic Unites Alternative Mechanisms for the Evolution of Indirect Reciprocity. Scientific Reports, 2016; 6: 31459 DOI: 10.1038/srep31459

Biphasic effects of ayahuasca (Plantando Consciência)

September 30, 2015

Biphasic Effects of Ayahuasca

Published today in the scientific journal PLOS ONE is a paper presenting the results of our neuroscientific study of ayahuasca. The fruit of just over four years of intense, dedicated work, the research was conducted at UNIFESP with funding from FAPESP, in cooperation with USP, UFABC, Louisiana State University (USA) and the University of Auckland (New Zealand). It also relied on the collaboration of the União do Vegetal, which supplied Hoasca for research purposes, and of 20 brave psychonauts experienced in the use of the Amazonian brew. Our volunteers agreed to take part in a process whose setting and purpose differ greatly from traditional uses, and which was quite challenging. They drank ayahuasca in a university laboratory, with no chanting or palo santo, no prayer, dance or bonfire, in the middle of the bustling São Paulo metropolis. And they had to wear a cap that continuously recorded the electrical activity of their brains onto a nearby laptop. Seated in a comfortable armchair, they gave small samples of blood every 25 minutes. Although they did not have the essential guidance of the guides, curandeiros, mestres or maestros, whose work is as important as the brew itself, and although they took ayahuasca one person at a time, they were accompanied with warmth and care by the scientific team, never left alone or unsupported, and always with the little buckets at hand… All of this in the service of bringing traditional knowledge into collaboration with scientific and technological knowledge.

Research of this kind is justified for several reasons, ranging from a deeper understanding of our physiological response to the chemical compounds present in ayahuasca, which yields crucial data on therapeutic potential and safety of use, to more sophisticated information about the relations between brain and consciousness, the so-called “hard problem”. With the results of this journey we have deepened and expanded what is known about the effects of the molecular components of the sacred brew, about how our bodies receive these molecules and what effects they help to trigger, especially in the brain. By limiting biomedical interventions to the strictly necessary and adopting an observational stance, letting and encouraging the volunteers to spend most of the time with their eyes closed in an introspective state, we were able to reveal a fascinating picture of ayahuasca’s effects on the brain. The effect unfolds in two qualitatively distinct phases, and this biphasic profile helps to explain contradictions among similar studies previously carried out by other teams. With this we open more doors to fascinating future investigations of the various states of consciousness that can be reached with the Amazonian brew.

About one hour after the ingestion of ayahuasca, alpha waves (8 to 12 cycles per second) decreased, especially in the temporo-parietal cortex, with some tendency to lateralize to the left hemisphere. The second phase occurred roughly an hour later (that is, about two hours after ingestion): while the alpha waves gradually returned to a pattern similar to the pre-ingestion baseline, gamma rhythms, of very high frequency (30 to 100 cycles per second), intensified across nearly the entire cerebral cortex, including the frontal cortex. These electrical oscillations at distinct frequencies, which occur perpetually and simultaneously throughout the brain, result from the complex interaction of the activity of billions of brain cells. They are related to all brain functions, including psychological processes and states of consciousness. For example, during deep sleep a slow frequency of 1 to 4 cycles per second, called delta, predominates in the cerebral cortex, whereas during most dreams the theta frequency (4 to 8 cycles per second) predominates. By characterizing the main changes in these frequencies of neural oscillation, we have advanced the construction of a neuroscientific map of the state of consciousness triggered by the ingestion of ayahuasca.
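
For readers unfamiliar with how such band effects are quantified: the standard quantity is spectral power within a frequency band. Here is a minimal sketch in Python, assuming a single EEG channel and using SciPy’s Welch estimator; the study’s actual pipeline, with artifact rejection and statistics across electrodes and subjects, is considerably more involved:

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, lo, hi):
    """Average spectral power of one EEG channel in the [lo, hi] Hz band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)   # 2-second windows
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

fs = 250                               # illustrative sampling rate (Hz)
eeg = np.random.randn(fs * 60)         # stand-in for 60 s of one channel
alpha = band_power(eeg, fs, 8, 12)     # alpha: 8 to 12 cycles per second
gamma = band_power(eeg, fs, 30, 100)   # gamma: 30 to 100 cycles per second
```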

There are various nuances of interpretation for these data (and many follow-up studies that can be designed under each interpretation, to test specific hypotheses). But my favourite, and the one we discuss in the paper, is that the alpha rhythm is the result of inhibitory activity in the brain, while the gamma rhythm represents neural activity crucial for consciousness. When we close our eyes and have the sensation of a dark visual field, without images, the alpha rhythm strengthens in the brain regions that receive input from the eyes. In other words, with eyes closed it is not only that information arriving from the eyes is absent: the visual areas are also inhibited by “higher centres” of the cortex, capable of modulating the activity of sensory areas. And we have the subjective experience of a dark world and an absence of vision. In the case of ayahuasca, we found a weakening of this inhibition in multisensory areas, that is, regions involved not only in vision but also in hearing, touch, taste, smell and bodily sensations of all kinds. It therefore makes sense that this reduction in alpha is related to the very common experience of more sensations and more stimuli under ayahuasca compared with the ordinary state of consciousness, including the famous closed-eye visions. The accelerated gamma, in turn, is related to what neuroscience calls integration. While different brain areas are related to distinct subjective perceptions, such as the five senses mentioned above, our conscious experience is unified. This unification of neural activity across anatomically distinct areas takes place through fast oscillations in the gamma band, which allow the brain to temporarily assemble the pieces of a complex jigsaw puzzle of neural activity. The increase in gamma may help explain why, under ayahuasca, the perception of sounds and images, for example, seems to merge and create peculiar relations that are not perceptible during ordinary consciousness, when the brain tends to organize the neural activity related to the five senses in a partially independent manner. This role of gamma in unifying or integrating information in the brain has been known for a long time, at least since the pioneering work of the Chilean scientist Francisco Varela. And it was observed in two individuals after taking ayahuasca in a study by the anthropologist Luis Eduardo Luna and collaborators a decade ago. By confirming Luna and colleagues’ data with a new and more rigorous methodology and more participants, and by detecting the combination of these effects with the reductions in alpha, we open extremely important doors to the understanding not only of non-ordinary states of consciousness but of neuroscientific theories of consciousness as a whole. One example is a recently proposed theory of psychedelic action suggesting that one of the main features of the brain under psychedelics is an intensification of gamma. For Andrew Gallimore, based in Japan, who builds on the influential integrated information theory (IIT), the most promising neuroscientific theory of consciousness, the expansion of consciousness with psychedelics is indeed possible within a neuroscientific perspective, and probably depends on the gamma rhythm. This expansion of consciousness includes the subjective perception of more content, of greater intensity, including fusions between the senses and possibly the subjective experience of intensities and qualities not perceptible during ordinary consciousness, such as colours more vivid and brilliant, and emotional states more intense, than anything experienced outside the psychedelic state. Gamma also plays a fundamental role in the theory of consciousness proposed by the mathematician Sir Roger Penrose and the anaesthesiologist Stuart Hameroff. According to their theory, oscillations in the range of 40 cycles per second would be important in allowing smaller and much faster reverberations in the microtubules, a network of fibres and filaments that runs through all the cells of our body, including those of the brain.

Besides characterizing the oscillations and cortical regions most important in the neural process underlying the modification of consciousness during ayahuasca, we collected blood periodically to quantify ayahuasca’s active principles and their metabolites. We found that during the first phase the concentrations of DMT and harmine were near their peak, while the peaks of harmaline and tetrahydroharmine occur in the second phase. With a sophisticated and novel statistical analysis, developed specifically for this study, we showed that this biphasic effect in the brain is related to the blood concentration of several components of the tea. This expands the predominant scientific view, which focuses only on the famous DMT. According to that model, the role of the vine is merely to inhibit the digestion of DMT. But “ayahuasca” is one of many names not only of the brew but of the jagube or mariri vine, catalogued in the scientific annals as Banisteriopsis caapi. This reveals that, for traditional peoples, the vine is the most important plant. And indeed there are ayahuasca preparations made only with the vine, with no other plant. In pharmacology, however, this picture was inverted, with the emphasis placed on the psychoactivity of DMT alone, which comes not from the vine but from other plants frequently added in the preparation of the brew, such as rainha in Brazil and Peru (Psychotria viridis) or chagropanga in Colombia (Diplopterys cabrerana). But our analysis of 10 molecules (DMT, NMT and DMT-NO; harmine and harmol; harmaline and harmalol; THH and THH-OH; and also the serotonergic metabolite IAA) revealed important associations between plasma levels of DMT, harmine, harmaline and tetrahydroharmine, as well as some metabolites such as DMT-NO, and the brain effects in alpha and gamma at distinct moments of the experience. We thus showed that the psychoactivity of ayahuasca cannot be fully explained by DMT concentrations alone, taking an important step toward bringing scientific knowledge back into dialogue with traditional knowledge.
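
As a simple illustration of the kind of relationship tested here (the study’s purpose-built statistics were considerably more sophisticated than this), one could rank-correlate a molecule’s plasma curve against band power sampled at the same 25-minute intervals; all numbers below are placeholders, not study data:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-session time series sampled every 25 minutes:
# plasma concentration of one molecule and gamma-band power.
harmaline_ng_ml = np.array([0.0, 5.0, 18.0, 40.0, 55.0, 52.0, 44.0])
gamma_power = np.array([1.0, 1.1, 1.4, 1.9, 2.3, 2.2, 2.0])

rho, p = spearmanr(harmaline_ng_ml, gamma_power)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```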


We also found that the concentration of harmaline (and only of harmaline) is correlated with the moment at which the volunteers vomited. That is, harmaline plays a fundamental role not only in the brain, being related to the intensification of gamma waves, but also in the peripheral effects of ayahuasca, such as vomiting. This reinforces the idea that vomiting bears important relations to the psychological experience; it may be more appropriate to call it the purge, a term that underscores the association between the physical and the psychological at this moment of the experience. These results on harmaline also lend new importance to the pioneering research of Claudio Naranjo, the Chilean therapist who, in the 1960s, was one of the first to study ayahuasca from a medical-scientific point of view. Naranjo’s proposal that harmaline was the main psychoactive component of ayahuasca was, however, almost entirely forgotten in favour of the focus on DMT from the 1980s onward. Another important factor against Naranjo’s proposal is that the concentrations of harmaline in ayahuasca are generally below the doses of harmaline that, on their own, trigger clear psychoactive effects, according to the subjective reports of the people who ingested harmaline in Naranjo’s studies. But the effect of harmaline combined with harmine and tetrahydroharmine, as occurs in ayahuasca, has never been tested. Our results therefore reinforce the idea that harmaline may also make important contributions to the psychoactive effect of ayahuasca when combined with the other beta-carbolines from the vine. Interestingly, in almost every case the purge occurred after the first phase, when DMT levels are close to the maximum they reach in the blood. Since harmaline concentrations in the blood rise more slowly than those of DMT and harmine, vomiting interferes little with the effects of the first phase and with the concentrations of these two molecules, which helps explain why even those who vomit early can have strong and profound experiences. Vomiting does, however, potentially interfere with the concentrations of tetrahydroharmine, the molecule whose concentration rises most slowly and which can remain in circulation for days, depending on each individual’s metabolic capacity.

It is also important to note that the biphasic profile was observed after the ingestion of a single cup (albeit a large dose). We know that in ritual use it is very common for participants to take more than one dose, at intervals of an hour or more. In those cases various combinations of effects may occur: for example, the second phase of a first dose (increased gamma) may coincide with the first phase of a second dose (decreased alpha). This could potentially generate brain states (and, by correlation, states of consciousness) not observed in our single-dose study. It helps to explain why many people report that the second dose is always a “box of surprises”, and not merely an intensification or prolongation of the effects of the first. Depending on each person’s metabolic profile, the size of each dose, the proportion of these molecules in the brew and the interval between doses, other states blending the two phases observed in the study may be reached. Add to this the environmental, psychological, motivational and spiritual influences, and we have a practice of consciousness exploration that does not fit a simple, singular answer about what “the effect” of ayahuasca is.
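
This overlap argument can be visualized with a toy model, assuming, purely for illustration, that each dose produces a phase-1 dip in alpha about one hour after intake and a phase-2 rise in gamma about an hour later, each modelled as a Gaussian bump in time (the study measured only a single dose):

```python
import numpy as np

t = np.linspace(0, 5, 500)   # hours since the first dose

def phase(centre, width=0.5):
    """Illustrative Gaussian time course for one phase of one dose."""
    return np.exp(-((t - centre) ** 2) / (2 * width ** 2))

def dose_effects(t0):
    # Alpha dips ~1 h after intake; gamma rises ~2 h after intake.
    return -phase(t0 + 1), phase(t0 + 2)

a1, g1 = dose_effects(0.0)   # first dose at t = 0
a2, g2 = dose_effects(1.5)   # second dose 1.5 h later

alpha, gamma = a1 + a2, g1 + g2
# Around t ~ 2.5 h the first dose's gamma peak overlaps the second dose's
# alpha trough: a combined state the single-dose study never observed.
```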

From a neuroscientific point of view, these possible combinations are very intriguing, because relations between the alpha and gamma frequencies in the parietal and frontal cortex are involved in processes of psychological and emotional reappraisal. In other words, when we engage in certain forms of introspection that result in resignifying emotional events in our lives, these brain areas communicate through electrical oscillations in these two frequency bands. And these same frequencies and brain areas are involved in creative problem-solving. Through our research, then, neuroscience begins to converge with ancestral knowledge in reaffirming ayahuasca’s potential to nurture creativity and self-knowledge, facilitating forms of therapy focused on each individual’s potential to grow and develop consciously.

To learn more, watch my talk at the World Ayahuasca Conference in Ibiza last year (available with subtitles in Portuguese and English), or the earlier talk “Ayahuasca e as ondas cerebrais” (“Ayahuasca and brain waves”), given in Brazil at the start of this project. Or, if you really want to dive deep, the full scientific paper is freely available.

Reference: Schenberg EE, Alexandre JFM, Filev R, Cravo AM, Sato JR, Muthukumaraswamy SD, et al. (2015) Acute Biphasic Effects of Ayahuasca. PLoS ONE 10(9): e0137202. DOI: 10.1371/journal.pone.0137202

 

Brain Cells Break Their Own DNA to Allow Memories to Form (IFL Science)

June 22, 2015 | by Justine Alford

photo credit: Courtesy of MIT Researchers 

Given the fundamental importance of our DNA, it is logical to assume that damage to it is undesirable and spells bad news; after all, we know that cancer can be caused by mutations that arise from such injury. But a surprising new study is turning that idea on its head, with the discovery that brain cells actually break their own DNA to enable us to learn and form memories.

While that may sound counterintuitive, it turns out that the damage is necessary to allow the expression of a set of genes, called early-response genes, which regulate various processes that are critical in the creation of long-lasting memories. These lesions are rectified pronto by repair systems, but interestingly, it seems that this ability deteriorates during aging, leading to a buildup of damage that could ultimately result in the degeneration of our brain cells.

This idea is supported by earlier work conducted by the same group, headed by Li-Huei Tsai, at the Massachusetts Institute of Technology (MIT) that discovered that the brains of mice engineered to develop a model of Alzheimer’s disease possessed a significant amount of DNA breaks, even before symptoms appeared. These lesions, which affected both strands of DNA, were observed in a region critical to learning and memory: the hippocampus.

To find out more about the possible consequences of such damage, the team grew neurons in a dish and exposed them to an agent that causes these so-called double strand breaks (DSBs), and then they monitored the gene expression levels. As described in Cell, they found that while the vast majority of genes that were affected by these breaks showed decreased expression, a small subset actually displayed increased expression levels. Importantly, these genes were involved in the regulation of neuronal activity, and included the early-response genes.

Since the early-response genes are known to be rapidly expressed following neuronal activity, the team was keen to find out whether normal neuronal stimulation could also be inducing DNA breaks. The scientists therefore applied a substance to the cells that is known to strengthen the synapse – the tiny junction between neurons across which information flows – mimicking what happens when an organism is exposed to a new experience.

“Sure enough, we found that the treatment very rapidly increased the expression of those early response genes, but it also caused DNA double strand breaks,” Tsai said in a statement.

So what is the connection between these breaks and the apparent boost in early-response gene expression? After using computers to scrutinize the DNA sequences neighboring these genes, the researchers found that they were enriched with a pattern targeted by an architectural protein that, upon binding, distorts the DNA strands by introducing kinks. By preventing crucial interactions between distant DNA regions, these bends therefore act as a barrier to gene expression. The breaks, however, resolve these constraints, allowing expression to ensue.

These findings could have important implications because earlier work has demonstrated that aging is associated with a decline in the expression of genes involved in the processes of learning and memory formation. It therefore seems likely that the DNA repair system deteriorates with age, but at this stage it is unclear how these changes occur, so the researchers plan to design further studies to find out more.

Brain researchers pinpoint gateway to human memory (Science Daily)

Date: November 26, 2014

Source: DZNE – German Center for Neurodegenerative Diseases

Summary: An international team of researchers has successfully determined the location where memories are generated, with a level of precision never achieved before. To this end the scientists used a particularly accurate type of magnetic resonance imaging technology.


Magnetic resonance imaging provides insights into the brain. Credit: DZNE/Guido Hennes

The human brain continuously collects information. However, we have only basic knowledge of how new experiences are converted into lasting memories. Now, an international team led by researchers of the University of Magdeburg and the German Center for Neurodegenerative Diseases (DZNE) has successfully determined the location where memories are generated, with a level of precision never achieved before. The team was able to pinpoint this location down to specific circuits of the human brain. To this end the scientists used a particularly accurate type of magnetic resonance imaging (MRI) technology. The researchers hope that the results and method of their study might be able to assist in acquiring a better understanding of the effects Alzheimer’s disease has on the brain.

The findings are reported in Nature Communications.

For the recall of experiences and facts, various parts of the brain have to work together. Much of this interdependence is still undetermined, however, it is known that memories are stored primarily in the cerebral cortex and that the control center that generates memory content and also retrieves it, is located in the brain’s interior. This happens in the hippocampus and in the adjacent entorhinal cortex.

“It has been known for quite some time that these areas of the brain participate in the generation of memories. This is where information is collected and processed. Our study has refined our view of this situation,” explains Professor Emrah Düzel, site speaker of the DZNE in Magdeburg and director of the Institute of Cognitive Neurology and Dementia Research at the University of Magdeburg. “We have been able to locate the generation of human memories to certain neuronal layers within the hippocampus and the entorhinal cortex. We were able to determine which neuronal layer was active. This revealed whether information was directed into the hippocampus or whether it traveled from the hippocampus into the cerebral cortex. Previously used MRI techniques were not precise enough to capture this directional information. Hence, this is the first time we have been able to show where in the brain the doorway to memory is located.”

For this study, the scientists examined the brains of persons who had volunteered to participate in a memory test. The researchers used a special type of magnetic resonance imaging technology called “7 Tesla ultra-high field MRI.” This enabled them to determine the activity of individual brain regions with unprecedented accuracy.

A precision method for research on Alzheimer’s

“This measuring technique allows us to track the flow of information inside the brain and examine the areas that are involved in the processing of memories in great detail,” comments Düzel. “As a result, we hope to gain new insights into how memory impairments arise that are typical for Alzheimer’s. Concerning dementia, is the information still intact at the gateway to memory? Do troubles arise later on, when memories are processed? We hope to answer such questions.”


Journal Reference:

  1. Anne Maass, Hartmut Schütze, Oliver Speck, Andrew Yonelinas, Claus Tempelmann, Hans-Jochen Heinze, David Berron, Arturo Cardenas-Blanco, Kay H. Brodersen, Klaas Enno Stephan, Emrah Düzel. Laminar activity in the hippocampus and entorhinal cortex related to novelty and episodic encoding. Nature Communications, 2014; 5: 5547 DOI: 10.1038/ncomms6547

Ghost illusion created in the lab (Science Daily)

Date: November 6, 2014

Source: Ecole Polytechnique Fédérale de Lausanne

Summary: Patients suffering from neurological or psychiatric conditions have often reported ‘feeling a presence’ watching over them. Now, researchers have succeeded in recreating these ghostly illusions in the lab.

This image depicts a person experiencing the ghost illusion in the lab. Credit: Alain Herzog/EPFL

Ghosts exist only in the mind, and scientists know just where to find them, an EPFL study suggests. Patients suffering from neurological or psychiatric conditions have often reported feeling a strange “presence.” Now, EPFL researchers in Switzerland have succeeded in recreating this so-called ghost illusion in the laboratory.

On June 29, 1970, mountaineer Reinhold Messner had an unusual experience. Recounting his descent from the virgin summit of Nanga Parbat with his brother, freezing, exhausted, and oxygen-starved in the vast barren landscape, he recalls, “Suddenly there was a third climber with us… a little to my right, a few steps behind me, just outside my field of vision.”

It was invisible, but there. Stories like this have been reported countless times by mountaineers, explorers, and survivors, as well as by people who have been widowed, but also by patients suffering from neurological or psychiatric disorders. They commonly describe a presence that is felt but unseen, akin to a guardian angel or a demon. Inexplicable, illusory, and persistent.

Olaf Blanke’s research team at EPFL has now unveiled this ghost. The team was able to recreate the illusion of a similar presence in the laboratory and provide a simple explanation. They showed that the “feeling of a presence” actually results from an alteration of sensorimotor brain signals, which are involved in generating self-awareness by integrating information from our movements and our body’s position in space.

In their experiment, Blanke’s team interfered with the sensorimotor input of participants in such a way that their brains no longer identified such signals as belonging to their own body, but instead interpreted them as those of someone else. The work is published in Current Biology.

Generating a “Ghost”

The researchers first analyzed the brains of 12 patients with neurological disorders — mostly epilepsy — who have experienced this kind of “apparition.” MRI analysis of the patients’ brains revealed interference with three cortical regions: the insular cortex, parietal-frontal cortex, and the temporo-parietal cortex. These three areas are involved in self-awareness, movement, and the sense of position in space (proprioception). Together, they contribute to multisensory signal processing, which is important for the perception of one’s own body.

The scientists then carried out a “dissonance” experiment in which blindfolded participants performed movements with their hand in front of their body. Behind them, a robotic device reproduced their movements, touching them on the back in real time. The result was a kind of spatial discrepancy, but because of the synchronized movement of the robot, the participant’s brain was able to adapt and correct for it.

Next, the neuroscientists introduced a temporal delay between the participant’s movement and the robot’s touch. Under these asynchronous conditions, distorting temporal and spatial perception, the researchers were able to recreate the ghost illusion.

An “Unbearable” Experience

The participants were unaware of the experiment’s purpose. After about three minutes of the delayed touching, the researchers asked them what they felt. Instinctively, several subjects reported a strong “feeling of a presence,” even counting up to four “ghosts” where none existed. “For some, the feeling was even so strong that they asked to stop the experiment,” said Giulio Rognini, who led the study.

“Our experiment induced the sensation of a foreign presence in the laboratory for the first time. It shows that it can arise under normal conditions, simply through conflicting sensory-motor signals,” explained Blanke. “The robotic system mimics the sensations of some patients with mental disorders or of healthy individuals under extreme circumstances. This confirms that it is caused by an altered perception of their own bodies in the brain.”

A Deeper Understanding of Schizophrenia

In addition to explaining a phenomenon that is common to many cultures, the aim of this research is to better understand some of the symptoms of patients suffering from schizophrenia. Such patients often suffer from hallucinations or delusions associated with the presence of an alien entity whose voice they may hear or whose actions they may feel. Many scientists attribute these perceptions to a malfunction of brain circuits that integrate sensory information in relation to our body’s movements.

“Our brain possesses several representations of our body in space,” added Giulio Rognini. “Under normal conditions, it is able to assemble a unified perception of the self from these representations. But when the system malfunctions because of disease — or, in this case, a robot — this can sometimes create a second representation of one’s own body, which is no longer perceived as ‘me’ but as someone else, a ‘presence’.”

It is unlikely that these findings will stop anyone from believing in ghosts. However, for scientists, it’s still more evidence that they only exist in our minds.

Watch the video: http://youtu.be/GnusbO8QjbE


Journal Reference:

  1. Olaf Blanke, Polona Pozeg, Masayuki Hara, Lukas Heydrich, Andrea Serino, Akio Yamamoto, Toshiro Higuchi, Roy Salomon, Margitta Seeck, Theodor Landis, Shahar Arzy, Bruno Herbelin, Hannes Bleuler, Giulio Rognini. Neurological and Robot-Controlled Induction of an Apparition. Current Biology, 2014; DOI: 10.1016/j.cub.2014.09.049

Direct brain interface between humans (Science Daily)

Date: November 5, 2014

Source: University of Washington

Summary: Researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

In this photo, UW students Darby Losey, left, and Jose Ceballos are positioned in two different buildings on campus, as they would be during a brain-to-brain interface demonstration. The sender, left, thinks about firing a cannon at various points throughout a computer game. That signal is sent over the Web directly to the brain of the receiver, right, whose hand hits a touchpad to fire the cannon. Photo: Mary Levin, University of Washington. Credit: Image courtesy of University of Washington

Sometimes, words just complicate things. What if our brains could communicate directly with each other, bypassing the need for language?

University of Washington researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

At the time of the first experiment in August 2013, the UW team was the first to demonstrate two human brains communicating in this way. The researchers then tested their brain-to-brain interface in a more comprehensive study, published Nov. 5 in the journal PLOS ONE.

“The new study brings our brain-to-brain interfacing paradigm from an initial demonstration to something that is closer to a deliverable technology,” said co-author Andrea Stocco, a research assistant professor of psychology and a researcher at UW’s Institute for Learning & Brain Sciences. “Now we have replicated our methods and know that they can work reliably with walk-in participants.”

Collaborator Rajesh Rao, a UW associate professor of computer science and engineering, is the lead author on this work.

The research team combined two kinds of noninvasive instruments and fine-tuned software to connect two human brains in real time. The process is fairly straightforward. One participant is hooked to an electroencephalography machine that reads brain activity; a computer sends that signal over the Web to the second participant, who wears a swim cap with a transcranial magnetic stimulation coil placed near the part of the brain that controls hand movements.

Using this setup, one person can send a command to move the hand of the other by simply thinking about that hand movement.
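
In outline, the sender’s side of such a loop can be sketched as follows. Everything here is a hypothetical stand-in: the acquisition function, the toy decoder and the receiver address are invented for illustration and do not reflect the UW team’s actual software.

```python
import socket
import numpy as np

THRESHOLD = 1.5   # illustrative decoder threshold

def read_eeg_window():
    """Hypothetical stand-in for pulling the next EEG chunk from hardware."""
    return np.random.randn(256)

def motor_imagery_detected(window):
    # Toy decoder: real systems classify band-power features from
    # sensorimotor electrodes rather than thresholding raw samples.
    return np.abs(window).mean() > THRESHOLD

RECEIVER = ("receiver.example.edu", 9000)   # hypothetical address

with socket.create_connection(RECEIVER) as link:
    while True:
        if motor_imagery_detected(read_eeg_window()):
            link.sendall(b"FIRE")   # the receiver's machine triggers the TMS coil
```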

The UW study involved three pairs of participants. Each pair included a sender and a receiver with different roles and constraints. They sat in separate buildings on campus about a half mile apart and were unable to interact with each other in any way — except for the link between their brains.

Each sender was in front of a computer game in which he or she had to defend a city by firing a cannon and intercepting rockets launched by a pirate ship. But because the senders could not physically interact with the game, the only way they could defend the city was by thinking about moving their hand to fire the cannon.

Across campus, each receiver sat wearing headphones in a dark room — with no ability to see the computer game — with the right hand positioned over the only touchpad that could actually fire the cannon. If the brain-to-brain interface was successful, the receiver’s hand would twitch, pressing the touchpad and firing the cannon that was displayed on the sender’s computer screen across campus.

Researchers found that accuracy varied among the pairs, ranging from 25 to 83 percent. Misses mostly were due to a sender failing to accurately execute the thought to send the “fire” command. The researchers also were able to quantify the exact amount of information that was transferred between the two brains.
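
One standard way to put a number on information transfer, though not necessarily the measure the authors used, is to treat each trial as a binary channel and compute its mutual information from the accuracy:

```python
import math

def bits_per_trial(p):
    """Mutual information of a binary symmetric channel with accuracy p."""
    if p in (0.0, 1.0):
        return 1.0
    binary_entropy = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return 1.0 - binary_entropy

for accuracy in (0.25, 0.50, 0.83):
    print(f"{accuracy:.0%} accuracy -> {bits_per_trial(accuracy):.2f} bits per trial")
```

On this measure a 50 percent channel carries nothing, while a consistently wrong channel still carries some information, because its errors are predictable.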

Another research team from the company Starlab in Barcelona, Spain, recently published results in the same journal showing direct communication between two human brains, but that study only tested one sender brain instead of different pairs of study participants and was conducted offline instead of in real time over the Web.

Now, with a new $1 million grant from the W.M. Keck Foundation, the UW research team is taking the work a step further in an attempt to decode and transmit more complex brain processes.

With the new funding, the research team will expand the types of information that can be transferred from brain to brain, including more complex visual and psychological phenomena such as concepts, thoughts and rules.

They’re also exploring how to influence brain waves that correspond with alertness or sleepiness. Eventually, for example, the brain of a sleepy airplane pilot dozing off at the controls could stimulate the copilot’s brain to become more alert.

The project could also eventually lead to “brain tutoring,” in which knowledge is transferred directly from the brain of a teacher to a student.

“Imagine someone who’s a brilliant scientist but not a brilliant teacher. Complex knowledge is hard to explain — we’re limited by language,” said co-author Chantel Prat, a faculty member at the Institute for Learning & Brain Sciences and a UW assistant professor of psychology.

Other UW co-authors are Joseph Wu of computer science and engineering; Devapratim Sarma and Tiffany Youngquist of bioengineering; and Matthew Bryan, formerly of the UW.

The research published in PLOS ONE was initially funded by the U.S. Army Research Office and the UW, with additional support from the Keck Foundation.


Journal Reference:

  1. Rajesh P. N. Rao, Andrea Stocco, Matthew Bryan, Devapratim Sarma, Tiffany M. Youngquist, Joseph Wu, Chantel S. Prat. A Direct Brain-to-Brain Interface in Humans. PLoS ONE, 2014; 9 (11): e111332 DOI: 10.1371/journal.pone.0111332

How the brain leads us to believe we have sharp vision (Science Daily)

Date: October 17, 2014

Source: Bielefeld University

Summary: We assume that we can see the world around us in sharp detail. In fact, our eyes can only process a fraction of our surroundings precisely. In a series of experiments, psychologists have been investigating how the brain fools us into believing that we see in sharp detail.

The thumbnail at the end of an outstretched arm: This is the area that the eye actually can see in sharp detail. Researchers have investigated why the rest of the world also appears to be uniformly detailed. Credit: Bielefeld University

We assume that we can see the world around us in sharp detail. In fact, our eyes can only process a fraction of our surroundings precisely. In a series of experiments, psychologists at Bielefeld University have been investigating how the brain fools us into believing that we see in sharp detail. The results have been published in the Journal of Experimental Psychology: General. The central finding is that our nervous system uses past visual experiences to predict how blurred objects would look in sharp detail.

“In our study we are dealing with the question of why we believe that we see the world uniformly detailed,” says Dr. Arvid Herwig from the Neuro-Cognitive Psychology research group of the Faculty of Psychology and Sports Science. The group is also affiliated to the Cluster of Excellence Cognitive Interaction Technology (CITEC) of Bielefeld University and is led by Professor Dr. Werner X. Schneider.

Only the fovea, the central area of the retina, can process objects precisely. We should therefore only be able to see a small area of our environment in sharp detail. This area is about the size of a thumbnail at the end of an outstretched arm. In contrast, all visual impressions which occur outside the fovea on the retina become progressively coarser. Nevertheless, we commonly have the impression that we see large parts of our environment in sharp detail.

Herwig and Schneider have been getting to the bottom of this phenomenon with a series of experiments. Their approach presumes that people learn through countless eye movements over a lifetime to connect the coarse impressions of objects outside the fovea to the detailed visual impressions after the eye has moved to the object of interest. For example, the coarse visual impression of a football (blurred image of a football) is connected to the detailed visual impression after the eye has moved. If a person sees a football out of the corner of her eye, her brain will compare this current blurred picture with memorised images of blurred objects. If the brain finds an image that fits, it will replace the coarse image with a precise image from memory. This blurred visual impression is replaced before the eye moves. The person thus thinks that she already sees the ball clearly, although this is not the case.

The psychologists have been using eye-tracking experiments to test their approach. Using the eye-tracking technique, eye movements are measured accurately with a specific camera which records 1,000 images per second. In their experiments, the scientists recorded the fast, ballistic eye movements (saccades) of the participants. Though most of the participants did not realise it, certain objects were changed during eye movement. The aim was for the participants to learn new connections between visual stimuli from inside and outside the fovea, in other words between detailed and coarse impressions. Afterwards, the participants were asked to judge visual characteristics of objects outside the area of the fovea. The results showed that the connection between a coarse and a detailed visual impression was formed after just a few minutes. The coarse visual impressions became similar to the newly learnt detailed visual impressions.
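
The proposed mechanism, pairing coarse peripheral impressions with the detailed foveal impressions that follow a saccade and then recalling the detailed version predictively, can be caricatured as an associative memory. A toy sketch, with random vectors standing in for visual features (the real experiments, of course, used actual objects and eye tracking):

```python
import numpy as np

rng = np.random.default_rng(0)

def blur(detailed):
    """Crude stand-in for peripheral vision: smear neighbouring features."""
    return np.convolve(detailed, np.ones(5) / 5, mode="same")

# "A lifetime of eye movements": store coarse -> detailed pairings.
memory = []
for _ in range(200):
    detailed = rng.random(32)   # feature vector of a foveated object
    memory.append((blur(detailed), detailed))

def predict_detailed(coarse):
    """Recall the stored detailed impression whose coarse version fits best."""
    return min(memory, key=lambda pair: np.sum((pair[0] - coarse) ** 2))[1]

new_object = rng.random(32)
prediction = predict_detailed(blur(new_object))   # "seen sharp" pre-saccade
```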

“The experiments show that our perception depends in large measure on stored visual experiences in our memory,” says Arvid Herwig. According to Herwig and Schneider, these experiences serve to predict the effect of future actions (“What would the world look like after a further eye movement”). In other words: “We do not see the actual world, but our predictions.”


Journal Reference:

  1. Arvid Herwig, Werner X. Schneider. Predicting object features across saccades: Evidence from object recognition and visual search. Journal of Experimental Psychology: General, 2014; 143 (5): 1903 DOI: 10.1037/a0036781

Your Brain on Metaphors (The Chronicle of Higher Education)

September 1, 2014

Neuroscientists test the theory that your body shapes your ideas

Chronicle Review illustration by Scott Seymour

The player kicked the ball.
The patient kicked the habit.
The villain kicked the bucket.

The verbs are the same.
The syntax is identical.
Does the brain notice, or care,
that the first is literal, the second
metaphorical, the third idiomatic?

It sounds like a question that only a linguist could love. But neuroscientists have been trying to answer it using exotic brain-scanning technologies. Their findings have varied wildly, in some cases contradicting one another. If they make progress, the payoff will be big. Their findings will enrich a theory that aims to explain how wet masses of neurons can understand anything at all. And they may drive a stake into the widespread assumption that computers will inevitably become conscious in a humanlike way.

The hypothesis driving their work is that metaphor is central to language. Metaphor used to be thought of as merely poetic ornamentation, aesthetically pretty but otherwise irrelevant. “Love is a rose, but you better not pick it,” sang Neil Young in 1977, riffing on the timeworn comparison between a sexual partner and a pollinating perennial. For centuries, metaphor was just the place where poets went to show off.

But in their 1980 book, Metaphors We Live By, the linguist George Lakoff (at the University of California at Berkeley) and the philosopher Mark Johnson (now at the University of Oregon) revolutionized linguistics by showing that metaphor is actually a fundamental constituent of language. For example, they showed that in the seemingly literal statement “He’s out of sight,” the visual field is metaphorized as a container that holds things. The visual field isn’t really a container, of course; one simply sees objects or not. But the container metaphor is so ubiquitous that it wasn’t even recognized as a metaphor until Lakoff and Johnson pointed it out.

From such examples they argued that ordinary language is saturated with metaphors. Our eyes point to where we’re going, so we tend to speak of future time as being “ahead” of us. When things increase, they tend to go up relative to us, so we tend to speak of stocks “rising” instead of getting more expensive. “Our ordinary conceptual system is fundamentally metaphorical in nature,” they wrote.

Metaphors do differ across languages, but that doesn’t affect the theory. For example, in Aymara, spoken in Bolivia and Chile, speakers refer to past experiences as being in front of them, on the theory that past events are “visible” and future ones are not. However, the difference between behind and ahead is relatively unimportant compared with the central fact that space is being used as a metaphor for time. Lakoff argues that it is impossible—not just difficult, but impossible—for humans to talk about time and many other fundamental aspects of life without using metaphors to do it.

Lakoff and Johnson’s program is as anti-Platonic as it’s possible to get. It undermines the argument that human minds can reveal transcendent truths about reality in transparent language. They argue instead that human cognition is embodied—that human concepts are shaped by the physical features of human brains and bodies. “Our physiology provides the concepts for our philosophy,” Lakoff wrote in his introduction to Benjamin Bergen’s 2012 book, Louder Than Words: The New Science of How the Mind Makes Meaning. Marianna Bolognesi, a linguist at the International Center for Intercultural Exchange, in Siena, Italy, puts it this way: “The classical view of cognition is that language is an independent system made with abstract symbols that work independently from our bodies. This view has been challenged by the embodied account of cognition which states that language is tightly connected to our experience. Our bodily experience.”

Modern brain-scanning technologies make it possible to test such claims empirically. “That would make a connection between the biology of our bodies on the one hand, and thinking and meaning on the other hand,” says Gerard Steen, a professor of linguistics at VU University Amsterdam. Neuroscientists have been stuffing volunteers into fMRI scanners and having them read sentences that are literal, metaphorical, and idiomatic.

Neuroscientists agree on what happens with literal sentences like “The player kicked the ball.” The brain reacts as if it were carrying out the described actions. This is called “simulation.” Take the sentence “Harry picked up the glass.” “If you can’t imagine picking up a glass or seeing someone picking up a glass,” Lakoff wrote in a paper with Vittorio Gallese, a professor of human physiology at the University of Parma, in Italy, “then you can’t understand that sentence.” Lakoff argues that the brain understands sentences not just by analyzing syntax and looking up neural dictionaries, but also by igniting its memories of kicking and picking up.

But what about metaphorical sentences like “The patient kicked the habit”? An addiction can’t literally be struck with a foot. Does the brain simulate the action of kicking anyway? Or does it somehow automatically substitute a more literal verb, such as “stopped”? This is where functional MRI can help, because it can watch to see if the brain’s motor cortex lights up in areas related to the leg and foot.

The evidence says it does. “When you read action-related metaphors,” says Valentina Cuccio, a philosophy postdoc at the University of Palermo, in Italy, “you have activation of the motor area of the brain.” In a 2011 paper in the Journal of Cognitive Neuroscience, Rutvik Desai, an associate professor of psychology at the University of South Carolina, and his colleagues presented fMRI evidence that brains do in fact simulate metaphorical sentences that use action verbs. When reading both literal and metaphorical sentences, their subjects’ brains activated areas associated with control of action. “The understanding of sensory-motor metaphors is not abstracted away from their sensory-motor origins,” the researchers concluded.

Textural metaphors, too, appear to be simulated. That is, the brain processes “She’s had a rough time” by simulating the sensation of touching something rough. Krish Sathian, a professor of neurology, rehabilitation medicine, and psychology at Emory University, says, “For textural metaphor, you would predict on the Lakoff and Johnson account that it would recruit activity- and texture-selective somatosensory cortex, and that indeed is exactly what we found.”

But idioms are a major sticking point. Idioms are usually thought of as dead metaphors, that is, as metaphors that are so familiar that they have become clichés. What does the brain do with “The villain kicked the bucket” (“The villain died”)? What about “The students toed the line” (“The students conformed to the rules”)? Does the brain simulate the verb phrases, or does it treat them as frozen blocks of abstract language? And if it simulates them, what actions does it imagine? If the brain understands language by simulating it, then it should do so even when sentences are not literal.

The findings so far have been contradictory. Lisa Aziz-Zadeh, of the University of Southern California, and her colleagues reported in 2006 that idioms such as “biting off more than you can chew” did not activate the motor cortex. So did Ana Raposo, then at the University of Cambridge, and her colleagues in 2009. On the other hand, Véronique Boulenger, of the Laboratoire Dynamique du Langage, in Lyon, France, reported in the same year that they did, at least for leg and arm verbs.

In 2013, Desai and his colleagues tried to settle the problem of idioms. They first hypothesized that the inconsistent results come from differences of methodology. “Imaging studies of embodiment in figurative language have not compared idioms and metaphors,” they wrote in a report. “Some have mixed idioms and metaphors together, and in some cases, ‘idiom’ is used to refer to familiar metaphors.” Lera Boroditsky, an associate professor of psychology at the University of California at San Diego, agrees. “The field is new. The methods need to stabilize,” she says. “There are many different kinds of figurative language, and they may be importantly different from one another.”

Not only that, the nitty-gritty differences of procedure may be important. “All of these studies are carried out with different kinds of linguistic stimuli with different procedures,” Cuccio says. “So, for example, sometimes you have an experiment in which the person can read the full sentence on the screen. There are other experiments in which participants read the sentence just word by word, and this makes a difference.”

To try to clear things up, Desai and his colleagues presented subjects inside fMRI machines with an assorted set of metaphors and idioms. They concluded that in a sense, everyone was right. The more idiomatic the metaphor was, the less the motor system got involved: “When metaphors are very highly conventionalized, as is the case for idioms, engagement of sensory-motor systems is minimized or very brief.”

But George Lakoff thinks the problem of idioms can’t be settled so easily. The people who do fMRI studies are fine neuroscientists but not linguists, he says. “They don’t even know what the problem is most of the time. The people doing the experiments don’t know the linguistics.”

That is to say, Lakoff explains, their papers assume that every brain processes a given idiom the same way. Not true. Take “kick the bucket.” Lakoff offers a theory of what it means using a scene from Young Frankenstein. “Mel Brooks is there and they’ve got the patient dying,” he says. “The bucket is a slop bucket at the edge of the bed, and as he dies, his foot goes out in rigor mortis and the slop bucket goes over and they all hold their nose. OK. But what’s interesting about this is that the bucket starts upright and it goes down. It winds up empty. This is a metaphor—that you’re full of life, and life is a fluid. You kick the bucket, and it goes over.”

That’s a useful explanation of a rather obscure idiom. But it turns out that when linguists ask people what they think the metaphor means, they get different answers. “You say, ‘Do you have a mental image? Where is the bucket before it’s kicked?’ ” Lakoff says. “Some people say it’s upright. Some people say upside down. Some people say you’re standing on it. Some people have nothing. You know! There isn’t a systematic connection across people for this. And if you’re averaging across subjects, you’re probably not going to get anything.”

Similarly, Lakoff says, when linguists ask people to write down the idiom “toe the line,” half of them write “tow the line.” That yields a different mental simulation. And different mental simulations will activate different areas of the motor cortex—in this case, scrunching feet up to a line versus using arms to tow something heavy. Therefore, fMRI results could show different parts of different subjects’ motor cortexes lighting up to process “toe the line.” In that case, averaging subjects together would be misleading.

Furthermore, Lakoff questions whether functional MRI can really see what’s going on with language at the neural level. “How many neurons are there in one pixel or one voxel?” he says. “About 125,000. They’re one point in the picture.” MRI lacks the necessary temporal resolution, too. “What is the time course of that fMRI? It could be between one and five seconds. What is the time course of the firing of the neurons? A thousand times faster. So basically, you don’t know what’s going on inside of that voxel.” What it comes down to is that language is a wretchedly complex thing and our tools aren’t yet up to the job.

Nonetheless, the work supports a radically new conception of how a bunch of pulsing cells can understand anything at all. In a 2012 paper, Lakoff offered an account of how metaphors arise out of the physiology of neural firing, based on the work of a student of his, Srini Narayanan, who is now a faculty member at Berkeley. As children grow up, they are repeatedly exposed to basic experiences such as temperature and affection simultaneously when, for example, they are cuddled. The neural structures that record temperature and affection are repeatedly co-activated, leading to an increasingly strong neural linkage between them.

However, since the brain is always computing temperature but not always computing affection, the relationship between those neural structures is asymmetric. When they form a linkage, Lakoff says, “the one that spikes first and most regularly is going to get strengthened in its direction, and the other one is going to get weakened.” Lakoff thinks the asymmetry gives rise to a metaphor: Affection is Warmth. Because of the neural asymmetry, it doesn’t go the other way around: Warmth is not Affection. Feeling warm during a 100-degree day, for example, does not make one feel loved. The metaphor originates from the asymmetry of the neural firing. Lakoff is now working on a book on the neural theory of metaphor.
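
The asymmetry Lakoff describes is reminiscent of spike-timing-dependent plasticity (STDP), in which the connection from the earlier-firing neuron to the later-firing one is potentiated while the reverse connection is depressed. A minimal sketch of the classic exponential STDP window, offered only as an illustration of the principle (the constants are arbitrary, and Lakoff’s own neural theory is more elaborate):

```python
import math

A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0   # arbitrary constants; TAU in ms

def stdp_delta(dt_ms):
    """Weight change when the presynaptic spike leads the postsynaptic
    spike by dt_ms (positive dt_ms means pre fires first)."""
    if dt_ms > 0:
        return A_PLUS * math.exp(-dt_ms / TAU)   # pre leads: potentiation
    return -A_MINUS * math.exp(dt_ms / TAU)      # post leads: depression

# "Temperature" neurons spiking slightly before "affection" neurons on every
# cuddle strengthen the temperature -> affection link...
w_temp_to_affection = sum(stdp_delta(+5.0) for _ in range(100))
# ...while the reverse link, seeing post-before-pre timing, weakens.
w_affection_to_temp = sum(stdp_delta(-5.0) for _ in range(100))
```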

If cognition is embodied, that raises problems for artificial intelligence. Since computers don’t have bodies, let alone sensations, what are the implications of these findings for their becoming conscious—that is, achieving strong AI? Lakoff is uncompromising: “It kills it.” Of Ray Kurzweil’s singularity thesis, he says, “I don’t believe it for a second.” Computers can run models of neural processes, he says, but absent bodily experience, those models will never actually be conscious.

On the other hand, roboticists such as Rodney Brooks, an emeritus professor at the Massachusetts Institute of Technology, have suggested that computers could be provided with bodies. For example, they could be given control of robots stuffed with sensors and actuators. Brooks pondered Lakoff’s ideas in his 2002 book, Flesh and Machines, and supposed, “For anything to develop the same sorts of conceptual understanding of the world as we do, it will have to develop the same sorts of metaphors, rooted in a body, that we humans do.”

But Lera Boroditsky wonders if giving computers humanlike bodies would only reproduce human limitations. “If you’re not bound by limitations of memory, if you’re not bound by limitations of physical presence, I think you could build a very different kind of intelligence system,” she says. “I don’t know why we have to replicate our physical limitations in other systems.”

What’s emerging from these studies isn’t just a theory of language or of metaphor. It’s a nascent theory of consciousness. Any algorithmic system faces the problem of bootstrapping itself from computing to knowing, from bit-shuffling to caring. Igniting previously stored memories of bodily experiences seems to be one way of getting there. And so may be the ability to create asymmetric neural linkages that say this is like (but not identical to) that. In an age of brain scanning as well as poetry, that’s where metaphor gets you.

Michael Chorost is the author of Rebuilt: How Becoming Part Computer Made Me More Human (Houghton Mifflin, 2005) and World Wide Mind: The Coming Integration of Humanity, Machines, and the Internet (Free Press, 2011).

Inside the teenage brain: New studies explain risky behavior (Science Daily)

Date: August 27, 2014

Source: Florida State University

Summary: It’s common knowledge that teenage boys seem predisposed to risky behaviors. Now, a series of new studies is shedding light on specific brain mechanisms that help to explain what might be going on inside juvenile male brains.


Young man (stock image). Credit: © iko / Fotolia

It’s common knowledge that teenage boys seem predisposed to risky behaviors. Now, a series of new studies is shedding light on specific brain mechanisms that help to explain what might be going on inside juvenile male brains.

Florida State University College of Medicine Neuroscientist Pradeep Bhide brought together some of the world’s foremost researchers in a quest to explain why teenagers — boys, in particular — often behave erratically.

The result is a series of 19 studies that approached the question from multiple scientific domains, including psychology, neurochemistry, brain imaging, clinical neuroscience and neurobiology. The studies are published in a special volume of Developmental Neuroscience, “Teenage Brains: Think Different?”

“Psychologists, psychiatrists, educators, neuroscientists, criminal justice professionals and parents are engaged in a daily struggle to understand and solve the enigma of teenage risky behaviors,” Bhide said. “Such behaviors impact not only the teenagers who obviously put themselves at serious and lasting risk but also families and societies in general.

“The emotional and economic burdens of such behaviors are quite huge. The research described in this book offers clues to what may cause such maladaptive behaviors and how one may be able to devise methods of countering, avoiding or modifying these behaviors.”

Examples of findings published in the book that provide new insights into the inner workings of a teenage boy’s brain:

• Unlike children or adults, teenage boys show enhanced activity in the part of the brain that controls emotions when confronted with a threat. Magnetic resonance scanner readings in one study revealed that the level of activity in the limbic brain of adolescent males reacting to threat, even when they’ve been told not to respond to it, was strikingly different from that in adult men.

• Using brain activity measurements, another team of researchers found that teenage boys were mostly immune to the threat of punishment but hypersensitive to the possibility of large gains from gambling. The results question the effectiveness of punishment as a deterrent for risky or deviant behavior in adolescent boys.

• Another study demonstrated that a molecule known to be vital in developing fear of dangerous situations is less active in adolescent male brains. These findings point towards neurochemical differences between teenage and adult brains, which may underlie the complex behaviors exhibited by teenagers.

“The new studies illustrate the neurobiological basis of some of the more unusual but well-known behaviors exhibited by our teenagers,” Bhide said. “Stress, hormonal changes, complexities of psycho-social environment and peer-pressure all contribute to the challenges of assimilation faced by teenagers.

“These studies attempt to isolate, examine and understand some of these potential causes of a teenager’s complex conundrum. The research sheds light on how we may be able to better interact with teenagers at home or outside the home, how to design educational strategies and how best to treat or modify a teenager’s maladaptive behavior.”

Bhide conceived and edited “Teenage Brains: Think Different?” His co-editors were Barry Kasofsky and B.J. Casey, both of Weill Medical College at Cornell University. The book was published by Karger Medical and Scientific Publisher of Basel, Switzerland. More information on the book can be found at: http://www.karger.com/Book/Home/261996

The table of contents to the special journal volume can be found at: http://www.karger.com/Journal/Issue/261977

Treating mental illness by changing memories of things past (Science Daily)

Date: August 12, 2014

Source: Elsevier

Summary: Author Marcel Proust makes a compelling case that our identities and decisions are shaped in profound and ongoing ways by our memories. This truth is powerfully reflected in mental illnesses, like posttraumatic stress disorder (PTSD) and addictions. In PTSD, memories of traumas intrude vividly upon consciousness, causing distress, driving people to avoid reminders of their traumas, and increasing risk for addiction and suicide. In addiction, memories of drug use influence reactions to drug-related cues and motivate compulsive drug use.

 

“Memory reconsolidation is probably among the most exciting phenomena in cognitive neuroscience today. It assumes that memories may be modified once they are retrieved which may give us the great opportunity to change seemingly robust, unwanted memories,” explains Dr. Lars Schwabe of Ruhr-University Bochum in Germany. Credit: Image courtesy of Elsevier

In the novel À la recherche du temps perdu (translated into English as Remembrance of Things Past), Marcel Proust makes a compelling case that our identities and decisions are shaped in profound and ongoing ways by our memories.

This truth is powerfully reflected in mental illnesses, like posttraumatic stress disorder (PTSD) and addictions. In PTSD, memories of traumas intrude vividly upon consciousness, causing distress, driving people to avoid reminders of their traumas, and increasing risk for addiction and suicide. In addiction, memories of drug use influence reactions to drug-related cues and motivate compulsive drug use.

What if one could change these dysfunctional memories? Although we all like to believe that our memories are reliable and permanent, it turns out that memories may indeed be plastic.

The process for modifying memories, depicted in the graphic, is called memory reconsolidation. After memories are formed and stored, subsequent retrieval may make them unstable. In other words, when a memory is activated, it also becomes open to revision and reconsolidation in a new form.
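The retrieve-modify-restore cycle can be stated compactly as a toy state machine (my illustration of the concept, not a model from the review):

```python
# Toy state machine for the reconsolidation idea described above (an
# illustration, not a model from the paper): a stored memory can only be
# rewritten while it is in the labile state that retrieval induces.
class Memory:
    def __init__(self, content):
        self.content = content
        self.state = "consolidated"

    def retrieve(self):
        self.state = "labile"        # retrieval destabilises the trace
        return self.content

    def update(self, new_content):
        if self.state != "labile":
            raise RuntimeError("memory must be retrieved before it can change")
        self.content = new_content   # e.g. a trauma cue re-paired with safety

    def reconsolidate(self):
        self.state = "consolidated"  # the modified trace is stored anew

m = Memory("cue -> danger")
m.retrieve()                         # a therapy session reactivates the memory
m.update("cue -> safe")              # intervention during the labile window
m.reconsolidate()
print(m.content)                     # the memory is re-stored in its new form
```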

“Memory reconsolidation is probably among the most exciting phenomena in cognitive neuroscience today. It assumes that memories may be modified once they are retrieved which may give us the great opportunity to change seemingly robust, unwanted memories,” explains Dr. Lars Schwabe of Ruhr-University Bochum in Germany. He and his colleagues have authored a review paper on the topic, published in the current issue of Biological Psychiatry.

The idea of memory reconsolidation was initially discovered and demonstrated in rodents.

The first evidence of reconsolidation in humans was reported in a study in 2003, and the findings have since continued to accumulate. The current report summarizes the most recent findings on memory reconsolidation in humans and poses additional questions that must be answered by future studies.

“Reconsolidation appears to be a fundamental process underlying cognitive and behavioral therapies. Identifying its roles and mechanisms is an important step forward to fully harnessing the reconsolidation process in psychotherapy,” said Dr. John Krystal, Editor of Biological Psychiatry.

The translation of the animal data to humans is a vital step for the potential application of memory reconsolidation in the context of mental disorders. Memory reconsolidation could open the door to novel treatment approaches for disorders such as PTSD or drug addiction.


Journal Reference:

  1. Lars Schwabe, Karim Nader, Jens C. Pruessner. Reconsolidation of Human Memory: Brain Mechanisms and Clinical Relevance. Biological Psychiatry, 2014; 76 (4): 274 DOI: 10.1016/j.biopsych.2014.03.008

Stefano Mancuso, pioneer in the study of plant neurobiology (La Vanguardia)

Victor-M Amela, Ima Sanchís, Lluís Amiguet

“Plants have neurons; they are intelligent beings”

29/12/2010 – 02:03

"Las plantas tienen neuronas, son seres inteligentes"

Foto: KIM MANRESA

IMA SANCHÍS

Plant brain

Thanks to our friends at Redes, Eduard Punset’s programme, tireless seekers of any scientific knowledge that widens the limits of what we know, of who we are and of what role we play in this soup of universes, we discovered Mancuso, who explains that plants, seen in time-lapse, behave as if they had a brain: they have neurons, they communicate through chemical signals, they make decisions, they are altruistic and manipulative. “Five years ago it was impossible to talk about the behaviour of plants; today we can begin to talk about their intelligence”… We may soon begin to talk about their feelings. Mancuso will appear on Redes on the 2nd. Don’t miss it.

Surprise me.

Plants are intelligent organisms, but they move and make decisions on a longer timescale than man’s.

I suspected as much.

Today we know that they have family and relatives, and that they recognise kinship. They behave completely differently depending on whether their neighbours are relatives or strangers. If they are relatives, they do not compete: through their roots, they divide the territory equitably.

Can a tree deliberately send sap to a small plant?

Yes. Plants need light to live, and many years must pass before a seedling reaches the light; in the meantime, it is nourished by trees of its own species.

Curious.

Parental care occurs only in highly evolved animals, and it is astonishing to find it in plants.

So they communicate.

Yes. In a forest, all the plants are in underground communication through their roots. And they also manufacture volatile molecules that warn distant plants about what is happening.

For example?

When a plant is attacked by a pathogen, it immediately produces volatile molecules that can travel for kilometres, warning all the others to prepare their defences.

What defences?

They produce chemical molecules that make them indigestible, and they can be very aggressive. Ten years ago in Botswana, 200,000 antelope were introduced into a large park and began feeding intensively on the acacias. Within a few weeks many had died, and after six months more than 10,000 were dead, and nobody could work out why. Today we know it was the plants.

Too much predation.

Yes, and the plants raised the concentration of tannins in their leaves to such a point that they became a poison.

Are plants also empathetic towards other beings?

It is hard to say, but one thing is certain: plants can manipulate animals. During pollination they produce nectar and other substances to attract insects. Orchids produce flowers that closely resemble the females of certain insects, which, deceived, come to them. And some claim that even human beings are manipulated by plants.

…?

All the drugs man uses (coffee, tobacco, opium, marijuana…) derive from plants. But why do plants produce substances that make humans dependent? Because that way we propagate them. Plants use man as transport. There is research on this.

Incredible.

If plants disappeared from the planet tomorrow, all life would be extinct within a month, because there would be no food and no oxygen. All the oxygen we breathe comes from them. But if we disappeared, nothing would happen. We depend on plants, but plants do not depend on us. Whoever is dependent is in the inferior position, no?

Plants are much more sensitive. When something changes in the environment, since they cannot run away, they must be able to sense any minimal change well in advance in order to adapt.

And how do they perceive?

Each root tip is able to perceive, continuously and simultaneously, at least fifteen different physical and chemical parameters (temperature, light, gravity, the presence of nutrients, oxygen).

That is the great discovery, and it is yours.

At the tip of each root there are cells similar to our neurons, and their function is the same: to communicate signals by means of electrical impulses, just like our brain. A single plant may have millions of root tips, each with its own small community of cells; and they work as a network, like the internet.

You have found the plant brain.

Yes, its calculation zone. The question is how to measure its intelligence. But of one thing we are sure: they are very intelligent; their power to solve problems and to adapt is great. Today, 99.6 per cent of everything alive on the planet is plants.

… And we know only 10 per cent of them.

And within that percentage we have all our food and medicine. What might there be in the remaining 90 per cent?… Every day, hundreds of unknown plant species go extinct. Perhaps they held the key to an important cure; we will never know. We must protect plants for our own survival.

What moves you about plants?

Some of their behaviours are very moving. All plants sleep, wake and seek the light with their leaves; they have activity similar to that of animals. I filmed some sunflowers as they grew, and you can see very clearly how they play with one another.

They play?

Yes, they establish the typical play behaviour seen in so many animals. We took one of those small plants and raised it alone. As an adult it had behavioural problems: it had trouble turning to seek the sun; it lacked the learning that comes through play. Seeing these things is moving.

Read more: http://www.lavanguardia.com/lacontra/20101229/54095622430/las-plantas-tienen-neuronas-son-seres-inteligentes.html#ixzz3A8PpebKp

Social origins of intelligence in the brain (Science Daily)

Date: July 29, 2014

Source: University of Illinois at Urbana-Champaign

Summary: By studying the injuries and aptitudes of Vietnam War veterans who suffered penetrating head wounds during the war, scientists are tackling — and beginning to answer — longstanding questions about how the brain works. The researchers found that brain regions that contribute to optimal social functioning also are vital to general intelligence and to emotional intelligence. This finding bolsters the view that general intelligence emerges from the emotional and social context of one’s life.


Brain regions that contribute to optimal social functioning also are vital to general intelligence and to emotional intelligence. Credit: © christingasner / Fotolia

By studying the injuries and aptitudes of Vietnam War veterans who suffered penetrating head wounds during the war, scientists are tackling — and beginning to answer — longstanding questions about how the brain works.

The researchers found that brain regions that contribute to optimal social functioning also are vital to general intelligence and to emotional intelligence. This finding bolsters the view that general intelligence emerges from the emotional and social context of one’s life.

The findings are reported in the journal Brain.

“We are trying to understand the nature of general intelligence and to what extent our intellectual abilities are grounded in social cognitive abilities,” said Aron Barbey, a University of Illinois professor of neuroscience, of psychology, and of speech and hearing science. Barbey (bar-BAY), an affiliate of the Beckman Institute and of the Institute for Genomic Biology at the U. of I., led the new study with an international team of collaborators.

Studies in social psychology indicate that human intellectual functions originate from the social context of everyday life, Barbey said.

“We depend at an early stage of our development on social relationships — those who love us care for us when we would otherwise be helpless,” he said.

Social interdependence continues into adulthood and remains important throughout the lifespan, Barbey said.

“Our friends and family tell us when we could make bad mistakes and sometimes rescue us when we do,” he said. “And so the idea is that the ability to establish social relationships and to navigate the social world is not secondary to a more general cognitive capacity for intellectual function, but that it may be the other way around. Intelligence may originate from the central role of relationships in human life and therefore may be tied to social and emotional capacities.”

The study involved 144 Vietnam veterans injured by shrapnel or bullets that penetrated the skull, damaging distinct brain tissues while leaving neighboring tissues intact. Using CT scans, the scientists painstakingly mapped the affected brain regions of each participant, then pooled the data to build a collective map of the brain.

The researchers used a battery of carefully designed tests to assess participants’ intellectual, emotional and social capabilities. They then looked for patterns that tied damage to specific brain regions to deficits in the participants’ ability to navigate the intellectual, emotional or social realms. Social problem solving in this analysis primarily involved conflict resolution with friends, family and peers at work.
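The article does not detail the statistics, but the pool-and-pattern step resembles what is often called voxel-based lesion-symptom mapping. A hedged sketch on synthetic data (the team's actual pipeline is not specified here):

```python
# Illustrative voxel-based lesion-symptom mapping on synthetic data. For
# each voxel, compare scores of participants whose lesions include it
# against scores of participants whose lesions spare it.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_subjects, n_voxels = 144, 500                      # 144 veterans, toy 500-voxel map

lesions = rng.random((n_subjects, n_voxels)) < 0.1   # True = voxel damaged
scores = rng.normal(100, 15, n_subjects)             # social problem-solving score
scores -= 30 * lesions[:, 40:43].any(axis=1)         # damage here hurts the score

t_map = np.zeros(n_voxels)
for v in range(n_voxels):
    hit, spared = scores[lesions[:, v]], scores[~lesions[:, v]]
    if hit.size >= 5:                                # skip rarely lesioned voxels
        t_map[v] = ttest_ind(spared, hit).statistic

# Voxels 40-42 carry the injected deficit and should dominate the top ranks.
print("most implicated voxels:", np.argsort(t_map)[-5:])
```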

As in their earlier studies of general intelligence and emotional intelligence, the researchers found that regions of the frontal cortex (at the front of the brain), the parietal cortex (further back near the top of the head) and the temporal lobes (on the sides of the head behind the ears) are all implicated in social problem solving. The regions that contributed to social functioning in the parietal and temporal lobes were located only in the brain’s left hemisphere, while both left and right frontal lobes were involved.

The brain networks found to be important to social adeptness were not identical to those that contribute to general intelligence or emotional intelligence, but there was significant overlap, Barbey said.

“The evidence suggests that there’s an integrated information-processing architecture in the brain, that social problem solving depends upon mechanisms that are engaged for general intelligence and emotional intelligence,” he said. “This is consistent with the idea that intelligence depends to a large extent on social and emotional abilities, and we should think about intelligence in an integrated fashion rather than making a clear distinction between cognition and emotion and social processing. This makes sense because our lives are fundamentally social — we direct most of our efforts to understanding others and resolving social conflict. And our study suggests that the architecture of intelligence in the brain may be fundamentally social, too.”

Journal Reference:

  1. A. K. Barbey, R. Colom, E. J. Paul, A. Chau, J. Solomon, J. H. Grafman. Lesion mapping of social problem solving. Brain, 2014; DOI: 10.1093/brain/awu207

‘Free choice’ in primates altered through brain stimulation (Science Daily)

Date: May 29, 2014

Source: KU Leuven

Summary: When electrical pulses are applied to the ventral tegmental area of their brain, macaques presented with two images change their preference from one image to the other. The study is the first to confirm a causal link between activity in the ventral tegmental area and choice behavior in primates.

The study is the first to show a causal link between activity in the ventral tegmental area and choice behaviour. Credit: Image courtesy of KU Leuven

When electrical pulses are applied to the ventral tegmental area of their brain, macaques presented with two images change their preference from one image to the other. The study by researchers Wim Vanduffel and John Arsenault (KU Leuven and Massachusetts General Hospital) is the first to confirm a causal link between activity in the ventral tegmental area and choice behaviour in primates.

The ventral tegmental area is located in the midbrain and helps regulate learning and reinforcement in the brain’s reward system. It produces dopamine, a neurotransmitter that plays an important role in positive feelings, such as receiving a reward. “In this way, this small area of the brain provides learning signals,” explains Professor Vanduffel. “If a reward is larger or smaller than expected, behavior is reinforced or discouraged accordingly.”

Causal link

This effect can be artificially induced: “In one experiment, we allowed macaques to choose multiple times between two images — a star or a ball, for example. This told us which of the two visual stimuli they tended to naturally prefer. In a second experiment, we stimulated the ventral tegmental area with mild electrical currents whenever they chose the initially nonpreferred image. This quickly changed their preference. We were also able to manipulate their altered preference back to the original favorite.”
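One simple way to picture why stimulation flips preference is as reward-driven value learning. The sketch below is a simplification of mine, not the authors' model: stimulation is treated as a reward delivered whenever the initially nonpreferred image is chosen.

```python
# Toy value-learning account of the preference-reversal experiment.
import random
random.seed(0)

value = {"star": 0.6, "ball": 0.4}     # initial preference: star
ALPHA = 0.1                            # learning rate

def choose():
    # toy policy: pick the higher-valued image 90% of the time
    if random.random() < 0.9:
        return max(value, key=value.get)
    return min(value, key=value.get)

for trial in range(200):
    pick = choose()
    reward = 1.0 if pick == "ball" else 0.0   # VTA stimulation on "ball" only
    value[pick] += ALPHA * (reward - value[pick])

print(value)   # "ball" now carries the higher value: preference reversed
```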

The study, which will be published online in the journal Current Biology on 16 June, is the first to confirm a causal link between activity in the ventral tegmental area and choice behaviour in primates. “In scans we found that electrically stimulating this tiny brain area activated the brain’s entire reward system, just as it does spontaneously when a reward is received. This has important implications for research into disorders relating to the brain’s reward network, such as addiction or learning disabilities.”

Could this method be used in the future to manipulate our choices? “Theoretically, yes. But the ventral tegmental area is very deep in the brain. At this point, stimulating it can only be done invasively, by surgically placing electrodes — just as is currently done for deep brain stimulation to treat Parkinson’s or depression. Once non-invasive methods — light or ultrasound, for example — can be applied with a sufficiently high level of precision, they could potentially be used for correcting defects in the reward system, such as addiction and learning disabilities.”

Journal Reference:

  1. John T. Arsenault, Samy Rima, Heiko Stemmann, Wim Vanduffel. Role of the Primate Ventral Tegmental Area in Reinforcement and Motivation. Current Biology, 2014; DOI: 10.1016/j.cub.2014.04.044

Stronger Brains, Weaker Bodies (New York Times)

Why does the metabolism of a sloth differ from that of a human? Brains are a big reason, say researchers who recently carried out a detailed comparison of metabolism in humans and other mammals. Credit: Felipe Dana/Associated Press

All animals do the same thing to the food they eat — they break it down to extract fuel and building blocks for growing new tissue. But the metabolism of one species may be profoundly different from another’s. A sloth will generate just enough energy to hang from a tree, for example, while some birds can convert their food into a flight from Alaska to New Zealand.

For decades, scientists have wondered how our metabolism compares to that of other species. It’s been a hard question to tackle, because metabolism is complicated — something that anyone who’s stared at a textbook diagram knows all too well. As we break down our food, we produce thousands of small molecules, some of which we flush out of our bodies and some of which we depend on for our survival.

An international team of researchers has now carried out a detailed comparison of metabolism in humans and other mammals. As they report in the journal PLOS Biology, both our brains and our muscles turn out to be unusual, metabolically speaking. And it’s possible that their odd metabolism was part of what made us uniquely human.

When scientists first began to study metabolism, they could measure it only in simple ways. They might estimate how many calories an animal burned in a day, for example. If they were feeling particularly ambitious, they might try to estimate how many calories each organ in the animal’s body burned.

Those tactics were enough to reveal some striking things about metabolism. Compared with other animals, we humans have ravenous brains. Twenty percent of the calories we take in each day are consumed by our neurons as they send signals to one another.

Ten years ago, Philipp Khaitovich of the Max Planck Institute of Evolutionary Anthropology and his colleagues began to study human metabolism in a more detailed way. They started making a catalog of the many molecules produced as we break down food.

“We wanted to get as much data as possible, just to see what happened,” said Dr. Khaitovich.

To do so, the scientists obtained brain, muscle and kidney tissues from organ donors. They then extracted metabolic compounds like glucose from the samples and measured their concentrations. All told, they measured the levels of over 10,000 different molecules.

The scientists found that each tissue had a different metabolic fingerprint, with high levels of some molecules and low levels of others.

These distinctive fingerprints came as little surprise, since each tissue has a different job to carry out. Muscles need to burn energy to generate mechanical forces, for example, while kidney cells need to pull waste out of the bloodstream.

The scientists then carried out the same experiment on chimpanzees, monkeys and mice. They found that the metabolic fingerprint for a given tissue was usually very similar in closely related species. The same tissues in more distantly related species had fingerprints with less in common.
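In outline, each fingerprint is a vector of metabolite concentrations, and cross-species similarity can be scored with something as simple as a correlation. A toy sketch with invented numbers, where five metabolites stand in for the more than 10,000 molecules actually measured:

```python
# Toy version of the fingerprint comparison (all numbers invented).
import numpy as np

metabolites = ["glucose", "lactate", "ATP", "glutamate", "creatine"]
muscle = {                       # one concentration vector per species
    "human":  np.array([5.1, 1.2, 4.8, 0.9, 3.0]),
    "chimp":  np.array([3.0, 2.5, 5.5, 1.1, 3.3]),
    "monkey": np.array([2.9, 2.6, 5.4, 1.0, 3.4]),
    "mouse":  np.array([2.5, 2.8, 5.0, 1.2, 3.5]),
}

def similarity(a, b):
    """Pearson correlation between two metabolic fingerprints."""
    return np.corrcoef(a, b)[0, 1]

for species in ("chimp", "monkey", "mouse"):
    r = similarity(muscle["human"], muscle[species])
    print(f"human muscle vs {species}: r = {r:.2f}")
# Relatedness predicts higher r for closer species; the study's surprise was
# that real human muscle deviated far more than relatedness would predict.
```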

But the scientists found two exceptions to this pattern.

The first exception turned up in the front of the brain. This region, called the prefrontal cortex, is important for figuring out how to reach long-term goals. Dr. Khaitovich’s team found that the way the human prefrontal cortex uses energy is quite distinct from other species; other tissues had comparable metabolic fingerprints across species, and even in other regions of the brain, the scientists didn’t find such a drastic difference.

This result fit in nicely with findings by other scientists that the human prefrontal cortex expanded greatly over the past six million years of our evolution. Its expansion accounts for much of the extra demand our brains make for calories.

The evolution of our enormous prefrontal cortex also had a profound effect on our species. We use it for many of the tasks that only humans can perform, such as reflecting on ourselves, thinking about what others are thinking and planning for the future.

But the prefrontal cortex was not the only part of the human body that has experienced a great deal of metabolic evolution. Dr. Khaitovich and his colleagues found that the metabolic fingerprint of muscle is even more distinct in humans.

“Muscle was really off the charts,” Dr. Khaitovich said. “We didn’t expect to see that at all.”

It was possible that the peculiar metabolism in human muscle was just the result of our modern lifestyle — not an evolutionary shift in our species. Our high-calorie diet might change the way muscle cells generated energy. It was also possible that a sedentary lifestyle made muscles weaker, creating a smaller metabolic demand.

To test that possibility, Dr. Khaitovich and his colleagues compared the strength of humans to that of our closest relatives. They found that chimpanzees and monkeys are far stronger, for their weight, than even university basketball players or professional climbers.

The scientists also tested their findings by putting monkeys on a couch-potato regime for a month to see if their muscles acquired a human metabolic fingerprint.

They barely changed.

Dr. Khaitovich suspects that the metabolic fingerprint of our muscles represents a genuine evolutionary change in our species.

Karen Isler and Carel van Schaik of the University of Zurich have argued that the gradual changes in human brains and muscles were intimately linked. To fuel a big brain, our ancestors had to sacrifice other tissues, including muscles.

Dr. Isler said that the new research fit their hypothesis nicely. “It looks quite convincing,” she said.

Daniel E. Lieberman, a professor of human evolutionary biology at Harvard, said he found Dr. Khaitovich’s study “very cool,” but didn’t think the results meant that brain growth came at the cost of strength. Instead, he suggested, our ancestors evolved muscles adapted for a new activity: long-distance walking and running.

“We have traded strength for endurance,” he said. And that endurance allowed our ancestors to gather more food, which could then fuel bigger brains.

“It may be that the human brain is bigger not in spite of brawn but rather because of brawn, albeit a very different kind,” he said.

Scientists identify gene linking brain structure to intelligence (O Globo)

JC e-mail 4892, February 11, 2014

The discovery may have important implications for understanding psychiatric disorders such as schizophrenia and autism

Scientists at King’s College London have identified, for the first time, a gene linking the thickness of the brain’s grey matter to intelligence. The study was published on Tuesday in the journal “Molecular Psychiatry” and may help explain the biological mechanisms behind certain intellectual impairments.

It was already known that grey matter plays an important role in memory, attention, thought, language and consciousness. Earlier studies had also shown that the thickness of the cerebral cortex relates to intellectual ability, but no gene had been identified.

An international team of scientists, led by King’s College, analysed DNA samples and magnetic resonance imaging scans from 1,583 healthy 14-year-old adolescents, who also took a series of tests to determine verbal and non-verbal intelligence.

“We wanted to discover how structural differences in the brain relate to differences in intellectual ability. We identified a genetic variation related to synaptic plasticity, to how neurons communicate,” explains Sylvane Desrivières, the study’s lead author, of the Institute of Psychiatry at King’s College London. “This may help us understand what happens at the neuronal level in certain forms of intellectual impairment, where the ability of neurons to communicate is somehow compromised.”

She adds that it is important to point out that intelligence is influenced by many genetic and environmental factors, and that the gene identified explains only a small proportion of the differences in intellectual ability and is by no means “the intelligence gene.”

The researchers examined 54,000 possible variants involved in brain development. On average, adolescents carrying a particular genetic variant had a thinner cortex in the left cerebral hemisphere, particularly in the frontal and temporal lobes, and performed less well on tests of intellectual ability. The genetic variation affects the expression of the NPTN gene, which encodes a protein that acts at neuronal synapses and therefore affects how brain cells communicate.
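The kind of test behind a result like this can be sketched in a few lines: regress cortical thickness on genotype dosage at one variant, then correct for the roughly 54,000 variants examined. Synthetic data; the study's actual pipeline is not described in the article.

```python
# Illustrative single-variant association test on synthetic data.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)
n = 1583                                   # adolescents in the study
dosage = rng.binomial(2, 0.3, n)           # 0, 1 or 2 minor alleles at one SNP
thickness = 2.5 - 0.02 * dosage + rng.normal(0, 0.1, n)   # mm, toy effect size

fit = linregress(dosage, thickness)
bonferroni_alpha = 0.05 / 54_000           # correct for every variant tested
print(f"beta = {fit.slope:.4f} mm per allele, p = {fit.pvalue:.2e}")
print("significant after correction:", fit.pvalue < bonferroni_alpha)
```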

To confirm their findings, the researchers studied the NPTN gene in mouse and human brain cells. They found that the gene showed different activity in the left and right hemispheres of the brain, which may make the left hemisphere more sensitive to the effects of NPTN mutations. The results suggest that some differences in intellectual ability may stem from reduced NPTN function in particular regions of the left hemisphere.

The genetic variation identified in this study accounts for only an estimated 0.5% of the total variation in intelligence. Even so, the findings may have important implications for understanding the biological mechanisms underlying several psychiatric disorders, such as schizophrenia and autism, in which cognitive ability is a central feature of the illness.

http://oglobo.globo.com/ciencia/cientistas-identificam-gene-que-relaciona-estrutura-cerebral-inteligencia-11563313#ixzz2t1amCUSy

Brain regions thought to be uniquely human share many similarities with monkeys (Science Daily)

January 28, 2014

Source: Cell Press

Summary: New research suggests a surprising degree of similarity in the organization of regions of the brain that control language and complex thought processes in humans and monkeys. The study also revealed some key differences. The findings may provide valuable insights into the evolutionary processes that established our ties to other primates but also made us distinctly human.

 (A) The right vlFC ROI. Dorsally it included the inferior frontal sulcus and, more posteriorly, it included PMv; anteriorly it was bound by the paracingulate sulcus and ventrally by the lateral orbital sulcus and the border between the dorsal insula and the opercular cortex. (B) A schematic depiction of the result of the 12 cluster parcellation solution using an iterative parcellation approach. We subdivided PMv into ventral and dorsal regions (6v and 6r, purple and black). We delineated the IFJ area (blue) and areas 44d (gray) and 44v (red) in lateral pars opercularis. More anteriorly, we delineated areas 45 (orange) in the pars triangularis and adjacent operculum and IFS (green) in the inferior frontal sulcus and dorsal pars triangularis. We found area 12/47 in the pars orbitalis (light blue) and area Op (bright yellow) in the deep frontal operculum. We also identified area 46 (yellow), and lateral and medial frontal pole regions (FPl and FPm, ruby colored and pink). Credit: Neuron, Neubert et al.

New research suggests a surprising degree of similarity in the organization of regions of the brain that control language and complex thought processes in humans and monkeys. The study, publishing online January 28 in the Cell Press journal Neuron, also revealed some key differences. The findings may provide valuable insights into the evolutionary processes that established our ties to other primates but also made us distinctly human.

The research concerns the ventrolateral frontal cortex, a region of the brain known for more than 150 years to be important for cognitive processes including language, cognitive flexibility, and decision-making. “It has been argued that to develop these abilities, humans had to evolve a completely new neural apparatus; however others have suggested precursors to these specialized brain systems might have existed in other primates,” explains lead author Dr. Franz-Xaver Neubert of the University of Oxford, in the UK.

By using non-invasive MRI techniques in 25 people and 25 macaques, Dr. Neubert and his team compared ventrolateral frontal cortex connectivity and architecture in humans and monkeys. The investigators were surprised to find many similarities in the connectivity of these regions. This suggests that some uniquely human cognitive traits may rely on an evolutionarily conserved neural apparatus that initially supported different functions. Additional research may reveal how slight changes in connectivity accompanied or facilitated the development of distinctly human abilities.

The researchers also noted some key differences between monkeys and humans. For example, ventrolateral frontal cortex circuits in the two species differ in the way that they interact with brain areas involved with hearing.

“This could explain why monkeys perform very poorly in some auditory tasks and might suggest that we humans use auditory information in a different way when making decisions and selecting actions,” says Dr. Neubert.

A region in the human ventrolateral frontal cortex — called the lateral frontal pole — does not seem to have an equivalent area in the monkey. This area is involved with strategic planning, decision-making, and multi-tasking abilities.

“This might relate to humans being particularly proficient in tasks that require strategic planning and decision making as well as ‘multi-tasking’,” says Dr. Neubert.

Interestingly, some of the ventrolateral frontal cortex regions that were similar in humans and monkeys are thought to play roles in psychiatric disorders such as attention deficit hyperactivity disorder, obsessive compulsive disorder, and substance abuse. A better understanding of the networks that are altered in these disorders might lead to therapeutic insights.

Journal Reference:

  1. Franz-Xaver Neubert et al. Comparison of human ventral frontal cortex areas for cognitive control and language with areas in monkey frontal cortex. Neuron, Jan 28, 2014

Spirituality, Religion May Protect Against Major Depression by Thickening Brain Cortex (Science Daily)

Jan. 16, 2014 — A thickening of the brain cortex associated with regular meditation or other spiritual or religious practice could be the reason those activities guard against depression — particularly in people who are predisposed to the disease, according to new research led by Lisa Miller, professor and director of Clinical Psychology and director of the Spirituality Mind Body Institute at Teachers College, Columbia University.

The study, published online by JAMA Psychiatry, involved 103 adults at either high or low risk of depression, based on family history. The subjects were asked how highly they valued religion or spirituality. Brain MRIs showed thicker cortices in subjects who placed a high importance on religion or spirituality than in those who did not. The relatively thicker cortex was found in precisely the same regions of the brain that had otherwise shown thinning in people at high risk for depression.

Although more research is necessary, the results suggest that spirituality or religion may protect against major depression by thickening the brain cortex and counteracting the cortical thinning that would normally occur with major depression. The study, published on Dec. 25, 2013, is the first published investigation on the neuro-correlates of the protective effect of spirituality and religion against depression.

“The new study links this extremely large protective benefit of spirituality or religion to previous studies which identified large expanses of cortical thinning in specific regions of the brain in adult offspring of families at high risk for major depression,” Miller said.

Previous studies by Miller and the team published in the American Journal of Psychiatry (2012) showed a 90 percent decrease in major depression in adults who said they highly valued spirituality or religiosity and whose parents suffered from the disease. While regular attendance at church was not necessary, a strong personal importance placed on spirituality or religion was most protective against major depression in people who were at high familial risk.

Journal Reference:

  1. Lisa Miller, Ravi Bansal, Priya Wickramaratne, Xuejun Hao, Craig E. Tenke, Myrna M. Weissman, Bradley S. Peterson. Neuroanatomical Correlates of Religiosity and Spirituality. JAMA Psychiatry, 2013; DOI: 10.1001/jamapsychiatry.2013.3067

The woman who shrank the human brain (O Globo)

Suzana Herculano is the first Brazilian to speak at the prestigious TED conference

She will discuss the brain of 86 billion neurons (not the 100 billion long believed) and how man came to differ from the other primates

Published: 24/05/13, 7:00 am; Updated: 24/05/13, 11:41 am

Suzana Herculano-Houzel, professor at the Institute of Biomedical Sciences of UFRJ. Photo: Guito Moreto

A neuroscientist at UFRJ, Suzana Herculano-Houzel is the first Brazilian to take part in TED (Technology, Entertainment and Design), the prestigious series of conferences that brings together leading names from the most diverse fields of knowledge to debate new ideas. Suzana will speak on June 12, under the theme “Listen to nature,” and will highlight her singular discoveries about the human brain.

What will you talk about at TED?

I am going to talk about the human brain and show that it is not a special brain, an exception to the rule. Our research has revealed that it is simply a large primate brain. What is remarkable is that we came to have an enormous brain, of a size no other primate has, not even the largest, because we invented the cooking of food and, with that, came to have an enormous number of neurons.

Was cooking fundamental to our becoming human?

Yes; we circumvented the energy limitation imposed by a raw diet. And the neat, ironic implication is that, in doing so, we freed up the brain’s time for other things (other than finding food), such as creating agriculture, civilizations, the refrigerator and electricity. Up to the point where obtaining cooked food and excess calories became so easy that we now have the opposite problem: we are eating too much. Hence the return to salads.

If we fed orangutans and gorillas cooked food, would they become as intelligent as we are?

Yes, because they would no longer be limited by the reduced number of calories they obtain from raw food. Of course, we made a cultural innovation when we invented cooking. There is a difference between giving an animal cooked food and the animal developing the culture of cooking. Even so, if they had access to cooked food at every meal, in 200,000 or 300,000 years they would have larger brains. With the diet they have today, a larger brain is not possible given their large bodies. It is one or the other.
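Her published energetics work estimates that primate neurons cost on the order of six kilocalories per billion neurons per day. Taking that figure as an assumption (it is not stated in this interview), the arithmetic behind the cooking argument is short:

```python
# Energy budget behind the cooking argument. The per-neuron cost is an
# approximate figure from Herculano-Houzel's published work, not from
# this interview.
KCAL_PER_BILLION_NEURONS_PER_DAY = 6.0

human_neurons_billions = 86                     # the count she discusses below
brain_kcal = human_neurons_billions * KCAL_PER_BILLION_NEURONS_PER_DAY
print(f"human brain: ~{brain_kcal:.0f} kcal/day")

daily_intake_kcal = 2000                        # typical cooked-diet intake
share = 100 * brain_kcal / daily_intake_kcal
print(f"~{share:.0f}% of a {daily_intake_kcal} kcal/day diet")
# Roughly 516 kcal/day, about a quarter of intake: affordable with cooking,
# but a crippling foraging burden for a large-bodied ape on a raw diet.
```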

Are we special?

We are not special at all. We are just a primate that circumvented the energy rules and managed to pack more neurons into its brain in a way no other animal has. That is why we study the other animals, and not the other way around.

Do myths about the brain persist, such as the 100 billion neurons that your studies showed to be, in fact, 86 billion?

Yes, they live on, even within neuroscience. Our work is already widely cited as a reference. Things are changing. And the best part is that this is thanks to homegrown Brazilian science, which I find wonderful. But we can see it is a process; many people still insist on the old number.

The new American manual for diagnosing mental disorders (a reference for the whole world, including the WHO) was released last week amid controversy. Specialists feel there are so many disorders that practically no room is left for normality. What is your view?

I think this discussion is very necessary, precisely so that we can recognize what the variations around normal are and which extremes are genuinely problematic and pathological. So the discussion is important, and welcome at any time. But I also think a great deal of wrong and sensationalist information is circulating, above all about attention deficit. The statistics vary greatly from country to country, sometimes because the number of doctors who recognize a child as having the disorder varies. And I think there is still an enormous problem, an enormous fear of the stereotype of mental illness. To this day there is a stubborn resistance to seeing a psychiatrist. And I think that, on the contrary, we gain a great deal by recognizing that disorders exist and that they can be treated.

Is there still a lot of stigma?

The biggest problem nowadays is that it is considered shameful to have a disorder of the brain. Notice that I am not even saying mental disorder. Needing medication for the brain is seen as terrible. Yet we have so much to gain by recognizing problems and making diagnoses. The brain is so complex, with so much that can go wrong, that the astonishing thing is that it does not go wrong in everyone, all the time. So I find it normal that a good part of the population has some problem; it does not surprise me in the least. And once the problem is recognized and the diagnosis made, there is the option of treatment. If a treatment exists, why not use it?

US President Barack Obama recently announced an unprecedented initiative to bring researchers together from the most diverse centers to study the brain exclusively. What can we expect from such a scientific effort?

Not just the brain, but the brain in action. Obama wants to go beyond what had already been done (studying the function of different areas) and understand how they connect, how they talk to one another, to get a picture of this integrated functioning, this interaction. That is one of the great gaps in our knowledge: understanding how the various parts of the brain work at the same time. We do not know how the brain works as a whole; it is one of the final frontiers of knowledge.

We don’t know how the brain works?

As a whole, no. We know what the parts do, but we do not know how the conversation between them takes place. We do not know the origin of consciousness, of the sensation of “I am here now.” Which areas are fundamental to it? That is the kind of knowledge being sought: the brain working live and in color, in real time.

So the goal is not to study diseases?

No, the great goal is to study consciousness and memory; to understand how the brain brings together emotion and logic, things that are the fruit of the coordinated action of several parts. Of course, from all this knowledge implications may emerge for Alzheimer’s and other diseases. But in truth, talking about diseases is a wrapper used in publicizing the program so that the public can take it in more easily. There is a prejudice that science is only worthwhile when it solves a disease.

Read more about this subject at http://oglobo.globo.com/ciencia/a-mulher-que-encolheu-cerebro-humano-8482825#ixzz2UFWUvdYn

Schizophrenia Symptoms Eliminated in Animal Model (Science Daily)

May 22, 2013 — Overexpression of a gene associated with schizophrenia causes classic symptoms of the disorder that are reversed when gene expression returns to normal, scientists report. 

Overexpression of a gene associated with schizophrenia causes classic symptoms of the disorder that are reversed when gene expression returns to normal, scientists report. Pictured are (left to right) Drs. Lin Mei, Dongmin Yin and Yongjun Chen, Medical College of Georgia at Georgia Regents University. (Credit: Phil Jones, Georgia Regents University Photographer)

They genetically engineered mice so they could turn up levels of neuregulin-1 to mimic the high levels found in some patients, then return levels to normal, said Dr. Lin Mei, Director of the Institute of Molecular Medicine and Genetics at the Medical College of Georgia at Georgia Regents University.

They found that when levels were elevated, the mice were hyperactive, couldn’t remember what they had just learned and couldn’t ignore distracting background or white noise. When neuregulin-1 levels returned to normal in adult mice, the schizophrenia-like symptoms went away, said Mei, corresponding author of the study in the journal Neuron.

While schizophrenia is generally considered a developmental disease that surfaces in early adulthood, Mei and his colleagues found that even when they kept neuregulin-1 levels normal until adulthood, mice still exhibited schizophrenia-like symptoms once higher levels were expressed. Without intervention, they developed symptoms at about the same age humans do.

“This shows that high levels of neuregulin-1 are a cause of schizophrenia, at least in mice, because when you turn them down, the behavior deficit disappears,” Mei said. “Our data certainly suggests that we can treat this cause by bringing down excessive levels of neuregulin-1 or blocking its pathologic effects.”

Schizophrenia is a spectrum disorder with multiple causes — most of which are unknown — that tends to run in families, and high neuregulin-1 levels have been found in only a minority of patients. To reduce neuregulin-1 levels in those individuals likely would require development of small molecules that could, for example, block the gene’s signaling pathways, Mei said. Current therapies treat symptoms and generally focus on reducing the activity of two neurotransmitters since the bottom line is excessive communication between neurons.

The good news is that neuregulin-1 is relatively easy to measure, since blood levels appear to correlate well with brain levels. To genetically alter the mice, the researchers put a copy of the neuregulin-1 gene into mouse DNA and then, so they could control its levels, placed in front of the DNA a binding protein for doxycycline, a stable analogue of the antibiotic tetracycline, which is infamous for staining the teeth of fetuses and babies.

The mice are born expressing high levels of neuregulin-1, and giving the antibiotic restores normal levels. “If you don’t feed the mice tetracycline, the neuregulin-1 levels are always high,” said Mei, noting that endogenous levels of the gene are not affected. High levels of neuregulin-1 appear to activate the kinase LIMK1, impairing release of the neurotransmitter glutamate and normal behavior. The LIMK1 connection identifies another target for intervention, Mei said.

Neuregulin-1 is essential for heart development as well as formation of myelin, the insulation around nerves. It’s among about 100 schizophrenia-associated genes identified through genome-wide association studies and has remained a consistent susceptibility gene using numerous other methods for examining the genetics of the disease. It’s also implicated in cancer.

Mei and his colleagues were the first to show neuregulin-1’s positive impact in the developed brain, reporting in Neuron in 2007 that it and its receptor ErbB4 help maintain a healthy balance of excitation and inhibition by releasing GABA, a major inhibitory neurotransmitter, at the site of inhibitory synapses, the communication paths between neurons. Years before, they showed the genes were also at excitatory synapses, where they also could quash activation. In 2009, the MCG researchers provided additional evidence of the role of neuregulin-1 in schizophrenia by selectively deleting the gene for its receptor, ErbB4, and creating another symptomatic mouse.

Schizophrenia affects about 1 percent of the population, causing hallucinations, depression and impaired thinking and social behavior. Babies born to mothers who develop a severe infection, such as influenza or pneumonia, during pregnancy have a significantly increased risk of schizophrenia.

Journal Reference:

  1. Dong-Min Yin, Yong-Jun Chen, Yi-Sheng Lu, Jonathan C. Bean, Anupama Sathyamurthy, Chengyong Shen, Xihui Liu, Thiri W. Lin, Clifford A. Smith, Wen-Cheng Xiong, Lin Mei. Reversal of Behavioral Deficits and Synaptic Dysfunction in Mice Overexpressing Neuregulin 1. Neuron, 2013; 78 (4): 644 DOI: 10.1016/j.neuron.2013.03.028

Tamed fox shows domestication’s effects on the brain (Science News)

Gene activity changes accompany doglike behavior

By Tina Hesman Saey

Web edition: May 15, 2013


Taming silver foxes (shown) alters their behavior. A new study links those behavior changes to changes in brain chemicals. Tom Reichner/Shutterstock

COLD SPRING HARBOR, N.Y. – Taming foxes changes not only the animals’ behavior but also their brain chemistry, a new study shows.

The finding could shed light on how the foxes’ genetic cousins, wolves, morphed into man’s best friend. Lenore Pipes of Cornell University presented the results May 10 at the Biology of Genomes conference.

The foxes she worked with come from a long line started in 1959 when a Russian scientist named Dmitry Belyaev attempted to recreate dog domestication, but using foxes instead of wolves. He bred silver foxes (Vulpes vulpes), which are actually a type of red fox with white-tipped black fur. Belyaev and his colleagues selected the least aggressive animals they could find at local fox farms and bred them. Each generation, the scientists picked the tamest animals to mate, creating ever friendlier foxes. Now, more than 50 years later, the foxes act like dogs, wagging their tails, jumping with excitement and leaping into the arms of caregivers for caresses.

At the same time, the scientists also bred the most aggressive foxes on the farms. The descendents of those foxes crouch, flatten their ears, growl, bare their teeth and lunge at people who approach their cages.

The foxes’ tame and aggressive behaviors are rooted in genetics, but scientists have not found DNA changes that account for the differences. Rather than search for changes in genes themselves, Pipes and her colleagues took an indirect approach, looking for differences in the activity of genes in the foxes’ brains.

The team collected two brain parts, the prefrontal cortex and amygdala, from a dozen aggressive foxes and a dozen tame ones. The prefrontal cortex, an area at the front of the brain, is involved in decision making and in controlling social behavior, among other tasks. The amygdala, a pair of almond-size regions on either side of the brain, helps process emotional information.

Pipes found that the activity of hundreds of genes in the two brain regions differed between the groups of affable and hostile foxes. For example, aggressive animals had increased activity of some genes for sensing dopamine. Pipes speculated that tame animals’ lower levels of dopamine sensors might make them less anxious.
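Pipes' statistics are not described here, but a generic two-group differential-expression test captures the shape of the analysis. A sketch on synthetic values (gene names are placeholders, with GRM3 borrowed from the text; this is not the team's actual method):

```python
# Generic differential-expression sketch: compare each gene's activity
# across 12 tame and 12 aggressive brain samples, as in the study design.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
genes = ["DRD1", "DRD2", "HTR2A", "GRM3", "ACTB"]
tame = rng.normal(100, 10, (12, len(genes)))         # 12 tame samples
aggressive = rng.normal(100, 10, (12, len(genes)))   # 12 aggressive samples
aggressive[:, :2] += 25   # simulate higher dopamine-receptor gene activity

for i, gene in enumerate(genes):
    t, p = ttest_ind(aggressive[:, i], tame[:, i])
    flag = "  <- differs" if p < 0.05 / len(genes) else ""   # Bonferroni
    print(f"{gene}: t = {t:+5.2f}, p = {p:.2e}{flag}")
```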

The team had expected to find changes in many genes involved in serotonin signaling, a process targeted by some popular antidepressants such as Prozac. Tame foxes are known to have more serotonin in their brains. But only one gene for sensing serotonin had higher activity in the friendly animals.

In a different sort of analysis, Pipes discovered that all aggressive foxes carry one form of the GRM3 glutamate receptor gene, while a majority of the friendly foxes have a different variant of the gene. In people, genetic variants of GRM3 have been linked to schizophrenia, bipolar disorder and other mood disorders. Other genes involved in transmitting glutamate signals, which help regulate mood, had increased activity in tame foxes, Pipes said.

It is not clear whether similar brain chemical changes accompanied the transformation of wolves into dogs, said Adam Freedman, an evolutionary biologist at Harvard University. Even if dogs and wolves now have differing brain chemical levels, researchers can’t turn back time to watch the process unfold; they can only guess at how domestication happened. “We have to reconstruct an unobservable series of steps,” he said. Pipes’ study is an interesting example of what might have happened to dogs’ brains during domestication, he said.