Tag archive: Cognitive sciences

Why can’t the world’s greatest minds solve the mystery of consciousness? (Guardian)

Illustration by Pete Gamlen

The long read

Philosophers and scientists have been at war for decades over the question of what makes human beings more than complex robots

by Oliver Burkeman

Wed 21 Jan 2015 06.00 GMT

One spring morning in Tucson, Arizona, in 1994, an unknown philosopher named David Chalmers got up to give a talk on consciousness, by which he meant the feeling of being inside your head, looking out – or, to use the kind of language that might give a neuroscientist an aneurysm, of having a soul. Though he didn’t realise it at the time, the young Australian academic was about to ignite a war between philosophers and scientists, by drawing attention to a central mystery of human life – perhaps the central mystery of human life – and revealing how embarrassingly far they were from solving it.

The scholars gathered at the University of Arizona – for what would later go down as a landmark conference on the subject – knew they were doing something edgy: in many quarters, consciousness was still taboo, too weird and new agey to take seriously, and some of the scientists in the audience were risking their reputations by attending. Yet the first two talks that day, before Chalmers’s, hadn’t proved thrilling. “Quite honestly, they were totally unintelligible and boring – I had no idea what anyone was talking about,” recalled Stuart Hameroff, the Arizona professor responsible for the event. “As the organiser, I’m looking around, and people are falling asleep, or getting restless.” He grew worried. “But then the third talk, right before the coffee break – that was Dave.” With his long, straggly hair and fondness for all-body denim, the 27-year-old Chalmers looked like he’d got lost en route to a Metallica concert. “He comes on stage, hair down to his butt, he’s prancing around like Mick Jagger,” Hameroff said. “But then he speaks. And that’s when everyone wakes up.”

The brain, Chalmers began by pointing out, poses all sorts of problems to keep scientists busy. How do we learn, store memories, or perceive things? How do you know to jerk your hand away from scalding water, or hear your name spoken across the room at a noisy party? But these were all “easy problems”, in the scheme of things: given enough time and money, experts would figure them out. There was only one truly hard problem of consciousness, Chalmers said. It was a puzzle so bewildering that, in the months after his talk, people started dignifying it with capital letters – the Hard Problem of Consciousness – and it’s this: why on earth should all those complicated brain processes feel like anything from the inside? Why aren’t we just brilliant robots, capable of retaining information, of responding to noises and smells and hot saucepans, but dark inside, lacking an inner life? And how does the brain manage it? How could the 1.4kg lump of moist, pinkish-beige tissue inside your skull give rise to something as mysterious as the experience of being that pinkish-beige lump, and the body to which it is attached?

What jolted Chalmers’s audience from their torpor was how he had framed the question. “At the coffee break, I went around like a playwright on opening night, eavesdropping,” Hameroff said. “And everyone was like: ‘Oh! The Hard Problem! The Hard Problem! That’s why we’re here!’” Philosophers had pondered the so-called “mind-body problem” for centuries. But Chalmers’s particular manner of reviving it “reached outside philosophy and galvanised everyone. It defined the field. It made us ask: what the hell is this that we’re dealing with here?”

Two decades later, we know an astonishing amount about the brain: you can’t follow the news for a week without encountering at least one more tale about scientists discovering the brain region associated with gambling, or laziness, or love at first sight, or regret – and that’s only the research that makes the headlines. Meanwhile, the field of artificial intelligence – which focuses on recreating the abilities of the human brain, rather than on what it feels like to be one – has advanced stupendously. But like an obnoxious relative who invites himself to stay for a week and then won’t leave, the Hard Problem remains. When I stubbed my toe on the leg of the dining table this morning, as any student of the brain could tell you, nerve fibres called “C-fibres” shot a message to my spinal cord, sending neurotransmitters to the part of my brain called the thalamus, which activated (among other things) my limbic system. Fine. But how come all that was accompanied by an agonising flash of pain? And what is pain, anyway?

Questions like these, which straddle the border between science and philosophy, make some experts openly angry. They have caused others to argue that conscious sensations, such as pain, don’t really exist, no matter what I felt as I hopped in anguish around the kitchen; or, alternatively, that plants and trees must also be conscious. The Hard Problem has prompted arguments in serious journals about what is going on in the mind of a zombie, or – to quote the title of a famous 1974 paper by the philosopher Thomas Nagel – the question “What is it like to be a bat?” Some argue that the problem marks the boundary not just of what we currently know, but of what science could ever explain. On the other hand, in recent years, a handful of neuroscientists have come to believe that it may finally be about to be solved – but only if we are willing to accept the profoundly unsettling conclusion that computers or the internet might soon become conscious, too.

Next week, the conundrum will move further into public awareness with the opening of Tom Stoppard’s new play, The Hard Problem, at the National Theatre – the first play Stoppard has written for the National since 2006, and the last that the theatre’s head, Nicholas Hytner, will direct before leaving his post in March. The 77-year-old playwright has revealed little about the play’s contents, except that it concerns the question of “what consciousness is and why it exists”, considered from the perspective of a young researcher played by Olivia Vinall. Speaking to the Daily Mail, Stoppard also clarified a potential misinterpretation of the title. “It’s not about erectile dysfunction,” he said.

Stoppard’s work has long focused on grand, existential themes, so the subject is fitting: when conversation turns to the Hard Problem, even the most stubborn rationalists lapse quickly into musings on the meaning of life. Christof Koch, the chief scientific officer at the Allen Institute for Brain Science, and a key player in the Obama administration’s multibillion-dollar initiative to map the human brain, is about as credible as neuroscientists get. But, he told me in December: “I think the earliest desire that drove me to study consciousness was that I wanted, secretly, to show myself that it couldn’t be explained scientifically. I was raised Roman Catholic, and I wanted to find a place where I could say: OK, here, God has intervened. God created souls, and put them into people.” Koch assured me that he had long ago abandoned such improbable notions. Then, not much later, and in all seriousness, he said that on the basis of his recent research he thought it wasn’t impossible that his iPhone might have feelings.


By the time Chalmers delivered his speech in Tucson, science had been vigorously attempting to ignore the problem of consciousness for a long time. The source of the animosity dates back to the 1600s, when René Descartes identified the dilemma that would tie scholars in knots for years to come. On the one hand, Descartes realised, nothing is more obvious and undeniable than the fact that you’re conscious. In theory, everything else you think you know about the world could be an elaborate illusion cooked up to deceive you – at this point, present-day writers invariably invoke The Matrix – but your consciousness itself can’t be illusory. On the other hand, this most certain and familiar of phenomena obeys none of the usual rules of science. It doesn’t seem to be physical. It can’t be observed, except from within, by the conscious person. It can’t even really be described. The mind, Descartes concluded, must be made of some special, immaterial stuff that didn’t abide by the laws of nature; it had been bequeathed to us by God.

This religious and rather hand-wavy position, known as Cartesian dualism, remained the governing assumption into the 18th century and the early days of modern brain study. But it was always bound to grow unacceptable to an increasingly secular scientific establishment that took physicalism – the position that only physical things exist – as its most basic principle. And yet, even as neuroscience gathered pace in the 20th century, no convincing alternative explanation was forthcoming. So little by little, the topic became taboo. Few people doubted that the brain and mind were very closely linked: if you question this, try stabbing your brain repeatedly with a kitchen knife, and see what happens to your consciousness. But how they were linked – or if they were somehow exactly the same thing – seemed a mystery best left to philosophers in their armchairs. As late as 1989, writing in the International Dictionary of Psychology, the British psychologist Stuart Sutherland could irascibly declare of consciousness that “it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written on it.”

It was only in 1990 that Francis Crick, the joint discoverer of the double helix, used his position of eminence to break ranks. Neuroscience was far enough along by now, he declared in a slightly tetchy paper co-written with Christof Koch, that consciousness could no longer be ignored. “It is remarkable,” they began, “that most of the work in both cognitive science and the neurosciences makes no reference to consciousness” – partly, they suspected, “because most workers in these areas cannot see any useful way of approaching the problem”. They presented their own “sketch of a theory”, arguing that certain neurons, firing at certain frequencies, might somehow be the cause of our inner awareness – though it was not clear how.

Illustration by Pete Gamlen

“People thought I was crazy to be getting involved,” Koch recalled. “A senior colleague took me out to lunch and said, yes, he had the utmost respect for Francis, but Francis was a Nobel laureate and a half-god and he could do whatever he wanted, whereas I didn’t have tenure yet, so I should be incredibly careful. Stick to more mainstream science! These fringey things – why not leave them until retirement, when you’re coming close to death, and you can worry about the soul and stuff like that?”

It was around this time that David Chalmers started talking about zombies.


As a child, Chalmers was short-sighted in one eye, and he vividly recalls the day he was first fitted with glasses to rectify the problem. “Suddenly I had proper binocular vision,” he said. “And the world just popped out. It was three-dimensional to me in a way it hadn’t been.” He thought about that moment frequently as he grew older. Of course, you could tell a simple mechanical story about what was going on in the lens of his glasses, his eyeball, his retina, and his brain. “But how does that explain the way the world just pops out like that?” To a physicalist, the glasses-eyeball-retina story is the only story. But to a thinker of Chalmers’s persuasion, it was clear that it wasn’t enough: it told you what the machinery of the eye was doing, but it didn’t begin to explain that sudden, breathtaking experience of depth and clarity. Chalmers’s “zombie” thought experiment is his attempt to show why the mechanical account is not enough – why the mystery of conscious awareness goes deeper than a purely material science can explain.

“Look, I’m not a zombie, and I pray that you’re not a zombie,” Chalmers said, one Sunday before Christmas, “but the point is that evolution could have produced zombies instead of conscious creatures – and it didn’t!” We were drinking espressos in his faculty apartment at New York University, where he recently took up a full-time post at what is widely considered the leading philosophy department in the Anglophone world; boxes of his belongings, shipped over from Australia, lay unpacked around his living-room. Chalmers, now 48, recently cut his hair in a concession to academic respectability, and he wears less denim, but his ideas remain as heavy-metal as ever. The zombie scenario goes as follows: imagine that you have a doppelgänger. This person physically resembles you in every respect, and behaves identically to you; he or she holds conversations, eats and sleeps, looks happy or anxious precisely as you do. The sole difference is that the doppelgänger has no consciousness; this – as opposed to a groaning, blood-spattered walking corpse from a movie – is what philosophers mean by a “zombie”.

Such non-conscious humanoids don’t exist, of course. (Or perhaps it would be better to say that I know I’m not one, anyhow; I could never know for certain that you aren’t.) But the point is that, in principle, it feels as if they could. Evolution might have produced creatures that were atom-for-atom the same as humans, capable of everything humans can do, except with no spark of awareness inside. As Chalmers explained: “I’m talking to you now, and I can see how you’re behaving; I could do a brain scan, and find out exactly what’s going on in your brain – yet it seems it could be consistent with all that evidence that you have no consciousness at all.” If you were approached by me and my doppelgänger, not knowing which was which, not even the most powerful brain scanner in existence could tell us apart. And the fact that one can even imagine this scenario is sufficient to show that consciousness can’t just be made of ordinary physical atoms. So consciousness must, somehow, be something extra – an additional ingredient in nature.

It would be understating things a bit to say that this argument wasn’t universally well-received when Chalmers began to advance it, most prominently in his 1996 book The Conscious Mind. The withering tone of the philosopher Massimo Pigliucci sums up the thousands of words that have been written attacking the zombie notion: “Let’s relegate zombies to B-movies and try to be a little more serious about our philosophy, shall we?” Yes, it may be true that most of us, in our daily lives, think of consciousness as something over and above our physical being – as if your mind were “a chauffeur inside your own body”, to quote the spiritual author Alan Watts. But to accept this as a scientific principle would mean rewriting the laws of physics. Everything we know about the universe tells us that reality consists only of physical things: atoms and their component particles, busily colliding and combining. Above all, critics point out, if this non-physical mental stuff did exist, how could it cause physical things to happen – as when the feeling of pain causes me to jerk my fingers away from the saucepan’s edge?

Nonetheless, just occasionally, science has dropped tantalising hints that this spooky extra ingredient might be real. In the 1970s, at what was then the National Hospital for Nervous Diseases in London, the neuropsychologist Lawrence Weiskrantz encountered a patient, known as “DB”, with a blind spot in his left visual field, caused by brain damage. Weiskrantz showed him patterns of striped lines, positioned so that they fell on his area of blindness, then asked him to say whether the stripes were vertical or horizontal. Naturally, DB protested that he could see no stripes at all. But Weiskrantz insisted that he guess the answers anyway – and DB got them right almost 90% of the time. Apparently, his brain was perceiving the stripes without his mind being conscious of them. One interpretation is that DB was a semi-zombie, with a brain like any other brain, but partially lacking the magical add-on of consciousness.

Chalmers knows how wildly improbable his ideas can seem, and takes this in his stride: at philosophy conferences, he is fond of clambering on stage to sing The Zombie Blues, a lament about the miseries of having no consciousness. (“I act like you act / I do what you do / But I don’t know / What it’s like to be you.”) “The conceit is: wouldn’t it be a drag to be a zombie? Consciousness is what makes life worth living, and I don’t even have that: I’ve got the zombie blues.” The song has improved since its debut more than a decade ago, when he used to try to hold a tune. “Now I’ve realised it sounds better if you just shout,” he said.


Illustration by Pete Gamlen

The consciousness debates have provoked more mudslinging and fury than most in modern philosophy, perhaps because of how baffling the problem is: opposing combatants tend not merely to disagree, but to find each other’s positions manifestly preposterous. An admittedly extreme example concerns the Canadian-born philosopher Ted Honderich, whose book On Consciousness was described, in an article by his fellow philosopher Colin McGinn in 2007, as “banal and pointless”, “excruciating”, “absurd”, running “the full gamut from the mediocre to the ludicrous to the merely bad”. McGinn added, in a footnote: “The review that appears here is not as I originally wrote it. The editors asked me to ‘soften the tone’ of the original [and] I have done so.” (The attack may have been partly motivated by a passage in Honderich’s autobiography, in which he mentions “my small colleague Colin McGinn”; at the time, Honderich told this newspaper he’d enraged McGinn by referring to a girlfriend of his as “not as plain as the old one”.)

McGinn, to be fair, has made a career from such hatchet jobs. But strong feelings, only slightly more politely expressed, are commonplace. Not everybody agrees there is a Hard Problem to begin with – making the whole debate kickstarted by Chalmers an exercise in pointlessness.

Daniel Dennett, the high-profile atheist and professor at Tufts University outside Boston, argues that consciousness, as we think of it, is an illusion: there just isn’t anything in addition to the spongy stuff of the brain, and that spongy stuff doesn’t actually give rise to something called consciousness. Common sense may tell us there’s a subjective world of inner experience – but then common sense told us that the sun orbits the Earth, and that the world was flat. Consciousness, according to Dennett’s theory, is like a conjuring trick: the normal functioning of the brain just makes it look as if there is something non-physical going on. To look for a real, substantive thing called consciousness, Dennett argues, is as silly as insisting that characters in novels, such as Sherlock Holmes or Harry Potter, must be made up of a peculiar substance named “fictoplasm”; the idea is absurd and unnecessary, since the characters do not exist to begin with.

This is the point at which the debate tends to collapse into incredulous laughter and head-shaking: neither camp can quite believe what the other is saying. To Dennett’s opponents, he is simply denying the existence of something everyone knows for certain: their inner experience of sights, smells, emotions and the rest. (Chalmers has speculated, largely in jest, that Dennett himself might be a zombie.) It’s like asserting that cancer doesn’t exist, then claiming you’ve cured cancer; more than one critic of Dennett’s most famous book, Consciousness Explained, has joked that its title ought to be Consciousness Explained Away. Dennett’s reply is characteristically breezy: explaining things away, he insists, is exactly what scientists do. When physicists first concluded that the only difference between gold and silver was the number of subatomic particles in their atoms, he writes, people could have felt cheated, complaining that their special “goldness” and “silveriness” had been explained away. But everybody now accepts that goldness and silveriness are really just differences in atoms. However hard it feels to accept, we should concede that consciousness is just the physical brain, doing what brains do.

“The history of science is full of cases where people thought a phenomenon was utterly unique, that there couldn’t be any possible mechanism for it, that we might never solve it, that there was nothing in the universe like it,” said Patricia Churchland of the University of California, a self-described “neurophilosopher” and one of Chalmers’s most forthright critics. Churchland’s opinion of the Hard Problem, which she expresses in caustic vocal italics, is that it is nonsense, kept alive by philosophers who fear that science might be about to eliminate one of the puzzles that has kept them gainfully employed for years. Look at the precedents: in the 17th century, scholars were convinced that light couldn’t possibly be physical – that it had to be something occult, beyond the usual laws of nature. Or take life itself: early scientists were convinced that there had to be some magical spirit – the élan vital – that distinguished living beings from mere machines. But there wasn’t, of course. Light is electromagnetic radiation; life is just the label we give to certain kinds of objects that can grow and reproduce. Eventually, neuroscience will show that consciousness is just brain states. Churchland said: “The history of science really gives you perspective on how easy it is to talk ourselves into this sort of thinking – that if my big, wonderful brain can’t envisage the solution, then it must be a really, really hard problem!”

Solutions have regularly been floated: the literature is awash in references to “global workspace theory”, “ego tunnels”, “microtubules”, and speculation that quantum theory may provide a way forward. But the intractability of the arguments has caused some thinkers, such as Colin McGinn, to raise an intriguing if ultimately defeatist possibility: what if we’re just constitutionally incapable of ever solving the Hard Problem? After all, our brains evolved to help us solve down-to-earth problems of survival and reproduction; there is no particular reason to assume they should be capable of cracking every big philosophical puzzle we happen to throw at them. This stance has become known as “mysterianism” – after the 1960s Michigan rock’n’roll band ? and the Mysterians, who themselves borrowed the name from a work of Japanese sci-fi – but the essence of it is that there’s actually no mystery to why consciousness hasn’t been explained: it’s that humans aren’t up to the job. If we struggle to understand what it could possibly mean for the mind to be physical, maybe that’s because we are, to quote the American philosopher Josh Weisberg, in the position of “squirrels trying to understand quantum mechanics”. In other words: “It’s just not going to happen.”


Or maybe it is: in the last few years, several scientists and philosophers, Chalmers and Koch among them, have begun to look seriously again at a viewpoint so bizarre that it has been neglected for more than a century, except among followers of eastern spiritual traditions, or in the kookier corners of the new age. This is “panpsychism”, the dizzying notion that everything in the universe might be conscious, or at least potentially conscious, or conscious when put into certain configurations. Koch concedes that this sounds ridiculous: when he mentions panpsychism, he has written, “I often encounter blank stares of incomprehension.” But when it comes to grappling with the Hard Problem, crazy-sounding theories are an occupational hazard. Besides, panpsychism might help unravel an enigma that has attached to the study of consciousness from the start: if humans have it, and apes have it, and dogs and pigs probably have it, and maybe birds, too – well, where does it stop?

Illustration by Pete Gamlen

Growing up as the child of German-born Catholics, Koch had a dachshund named Purzel. According to the church, because he was a dog, that meant he didn’t have a soul. But he whined when anxious and yelped when injured – “he certainly gave every appearance of having a rich inner life”. These days we don’t much speak of souls, but it is widely assumed that many non-human brains are conscious – that a dog really does feel pain when he is hurt. The problem is that there seems to be no logical reason to draw the line at dogs, or sparrows or mice or insects, or, for that matter, trees or rocks. Since we don’t know how the brains of mammals create consciousness, we have no grounds for assuming it’s only the brains of mammals that do so – or even that consciousness requires a brain at all. Which is how Koch and Chalmers have both found themselves arguing, in the pages of the New York Review of Books, that an ordinary household thermostat or a photodiode, of the kind you might find in your smoke detector, might in principle be conscious.

The argument unfolds as follows: physicists have no problem accepting that certain fundamental aspects of reality – such as space, mass, or electrical charge – just do exist. They can’t be explained as being the result of anything else. Explanations have to stop somewhere. The panpsychist hunch is that consciousness could be like that, too – and that if it is, there is no particular reason to assume that it only occurs in certain kinds of matter.

Koch’s specific twist on this idea, developed with the neuroscientist and psychiatrist Giulio Tononi, is narrower and more precise than traditional panpsychism. It is the argument that anything at all could be conscious, providing that the information it contains is sufficiently interconnected and organised. The human brain certainly fits the bill; so do the brains of cats and dogs, though their consciousness probably doesn’t resemble ours. But in principle the same might apply to the internet, or a smartphone, or a thermostat. (The ethical implications are unsettling: might we owe the same care to conscious machines that we bestow on animals? Koch, for his part, tries to avoid stepping on insects as he walks.)

Unlike the vast majority of musings on the Hard Problem, moreover, Tononi and Koch’s “integrated information theory” has actually been tested. A team of researchers led by Tononi has designed a device that stimulates the brain with electrical voltage, to measure how interconnected and organised – how “integrated” – its neural circuits are. Sure enough, when people fall into a deep sleep, or receive an injection of anaesthetic, as they slip into unconsciousness, the device demonstrates that their brain integration declines, too. Among patients suffering “locked-in syndrome” – who are as conscious as the rest of us – levels of brain integration remain high; among patients in a coma – who aren’t – they don’t. Gather enough of this kind of evidence, Koch argues, and in theory you could take any device, measure the complexity of the information contained in it, then deduce whether or not it was conscious.
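
The logic of that measurement can be made concrete with a small sketch. The idea, roughly, is that a system whose response to a pulse of stimulation is rich and hard to compress counts as highly “integrated”, while a stereotyped, repetitive response does not. The Python below is a purely illustrative toy, assuming invented data and a simple Lempel-Ziv phrase count as a stand-in for the real measure; it is not Tononi’s published perturbational complexity index, nor any code used by his team.

    # Python: a toy "integration" score for a binary brain-response grid.
    # Illustrative sketch only -- this is NOT Tononi's published perturbational
    # complexity index, and the example matrices below are invented.
    import math

    def lempel_ziv_phrases(bits: str) -> int:
        """Count the distinct phrases produced by a simple Lempel-Ziv-style parse."""
        seen, phrase, count = set(), "", 0
        for b in bits:
            phrase += b
            if phrase not in seen:           # a phrase we have not met before
                seen.add(phrase)
                count += 1
                phrase = ""
        return count + (1 if phrase else 0)  # count any unfinished tail phrase

    def complexity_score(response_matrix) -> float:
        """Flatten a channels-by-time 0/1 matrix and normalise its phrase count."""
        bits = "".join(str(b) for row in response_matrix for b in row)
        if not bits:
            return 0.0
        # Rough normalisation so recordings of different lengths are comparable.
        return lempel_ziv_phrases(bits) / (len(bits) / math.log2(len(bits)))

    # A stereotyped ("deep sleep or anaesthesia-like") response vs a richer one.
    flat_response = [[0] * 8 for _ in range(4)]
    rich_response = [[0, 1, 1, 0, 1, 0, 0, 1],
                     [1, 0, 0, 1, 0, 1, 1, 0],
                     [0, 1, 0, 0, 1, 1, 0, 1],
                     [1, 0, 1, 1, 0, 0, 1, 0]]
    print(complexity_score(flat_response))   # lower score: compressible, stereotyped
    print(complexity_score(rich_response))   # higher score: differentiated response

Running it, the all-zeros “response” scores low and the varied one scores noticeably higher, which is the qualitative pattern the real measurements report between anaesthetised and wakeful brains.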

But even if one were willing to accept the perplexing claim that a smartphone could be conscious, could you ever know that it was true? Surely only the smartphone itself could ever know that? Koch shrugged. “It’s like black holes,” he said. “I’ve never been in a black hole. Personally, I have no experience of black holes. But the theory [that predicts black holes] seems always to be true, so I tend to accept it.”

Illustration by Pete Gamlen

It would be satisfying for multiple reasons if a theory like this were eventually to vanquish the Hard Problem. On the one hand, it wouldn’t require a belief in spooky mind-substances that reside inside brains; the laws of physics would escape largely unscathed. On the other hand, we wouldn’t need to accept the strange and soulless claim that consciousness doesn’t exist, when it’s so obvious that it does. On the contrary, panpsychism says, it’s everywhere. The universe is throbbing with it.

Last June, several of the most prominent combatants in the consciousness debates – including Chalmers, Churchland and Dennett – boarded a tall-masted yacht for a trip among the ice floes of Greenland. This conference-at-sea was funded by a Russian internet entrepreneur, Dmitry Volkov, the founder of the Moscow Centre for Consciousness Studies. About 30 academics and graduate students, plus crew, spent a week gliding through dark waters, past looming snow-topped mountains and glaciers, in a bracing chill conducive to focused thought, giving the problem of consciousness another shot. In the mornings, they visited islands to go hiking, or examine the ruins of ancient stone huts; in the afternoons, they held conference sessions on the boat. For Chalmers, the setting only sharpened the urgency of the mystery: how could you feel the Arctic wind on your face, take in the visual sweep of vivid greys and whites and greens, and still claim conscious experience was unreal, or that it was simply the result of ordinary physical stuff, behaving ordinarily?

The question was rhetorical. Dennett and Churchland were not converted; indeed, Chalmers has no particular confidence that a consensus will emerge in the next century. “Maybe there’ll be some amazing new development that leaves us all, now, looking like pre-Darwinians arguing about biology,” he said. “But it wouldn’t surprise me in the least if in 100 years, neuroscience is incredibly sophisticated, if we have a complete map of the brain – and yet some people are still saying, ‘Yes, but how does any of that give you consciousness?’ while others are saying ‘No, no, no – that just is the consciousness!’” The Greenland cruise concluded in collegial spirits, and mutual incomprehension.

It would be poetic – albeit deeply frustrating – were it ultimately to prove that the one thing the human mind is incapable of comprehending is itself. An answer must be out there somewhere. And finding it matters: indeed, one could argue that nothing else could ever matter more – since anything at all that matters, in life, only does so as a consequence of its impact on conscious brains. Yet there’s no reason to assume that our brains will be adequate vessels for the voyage towards that answer. Nor that, were we to stumble on a solution to the Hard Problem, on some distant shore where neuroscience meets philosophy, we would even recognise that we’d found it.

  • This article was amended on 21 January 2015. The conference-at-sea was funded by the Russian internet entrepreneur Dmitry Volkov, not Dmitry Itskov as was originally stated. This has been corrected.

The new science of death: ‘There’s something happening in the brain that makes no sense’ (Guardian)

A blurred figure in a tunnel moving towards a light. Photograph: Gaia Moments/Alamy

The long read

New research into the dying brain suggests the line between life and death may be less distinct than previously thought

by Alex Blasdel

Tue 2 Apr 2024 05.00 BST

Patient One was 24 years old and pregnant with her third child when she was taken off life support. It was 2014. A couple of years earlier, she had been diagnosed with a disorder that caused an irregular heartbeat, and during her two previous pregnancies she had suffered seizures and faintings. Four weeks into her third pregnancy, she collapsed on the floor of her home. Her mother, who was with her, called 911. By the time an ambulance arrived, Patient One had been unconscious for more than 10 minutes. Paramedics found that her heart had stopped.

After being driven to a hospital where she couldn’t be treated, Patient One was taken to the emergency department at the University of Michigan. There, medical staff had to shock her chest three times with a defibrillator before they could restart her heart. She was placed on an external ventilator and pacemaker, and transferred to the neurointensive care unit, where doctors monitored her brain activity. She was unresponsive to external stimuli, and had a massive swelling in her brain. After she lay in a deep coma for three days, her family decided it was best to take her off life support. It was at that point – after her oxygen was turned off and nurses pulled the breathing tube from her throat – that Patient One became one of the most intriguing scientific subjects in recent history.

For several years, Jimo Borjigin, a professor of neurology at the University of Michigan, had been troubled by the question of what happens to us when we die. She had read about the near-death experiences of certain cardiac-arrest survivors who had undergone extraordinary psychic journeys before being resuscitated. Sometimes, these people reported travelling outside of their bodies towards overwhelming sources of light where they were greeted by dead relatives. Others spoke of coming to a new understanding of their lives, or encountering beings of profound goodness. Borjigin didn’t believe the content of those stories was true – she didn’t think the souls of dying people actually travelled to an afterworld – but she suspected something very real was happening in those patients’ brains. In her own laboratory, she had discovered that rats undergo a dramatic storm of many neurotransmitters, including serotonin and dopamine, after their hearts stop and their brains lose oxygen. She wondered if humans’ near-death experiences might spring from a similar phenomenon, and if it was occurring even in people who couldn’t be revived.

Dying seemed like such an important area of research – we all do it, after all – that Borjigin assumed other scientists had already developed a thorough understanding of what happens to the brain in the process of death. But when she looked at the scientific literature, she found little enlightenment. “To die is such an essential part of life,” she told me recently. “But we knew almost nothing about the dying brain.” So she decided to go back and figure out what had happened inside the brains of people who died at the University of Michigan neurointensive care unit. Among them was Patient One.

At the time Borjigin began her research into Patient One, the scientific understanding of death had reached an impasse. Since the 1960s, advances in resuscitation had helped to revive thousands of people who might otherwise have died. About 10% or 20% of those people brought with them stories of near-death experiences in which they felt their souls or selves departing from their bodies. A handful of those patients even claimed to witness, from above, doctors’ attempts to resuscitate them. According to several international surveys and studies, one in 10 people claims to have had a near-death experience involving cardiac arrest, or a similar experience in circumstances where they may have come close to death. That’s roughly 800 million souls worldwide who may have dipped a toe in the afterlife.

As remarkable as these near-death experiences sounded, they were consistent enough that some scientists began to believe there was truth to them: maybe people really did have minds or souls that existed separately from their living bodies. In the 1970s, a small network of cardiologists, psychiatrists, medical sociologists and social psychologists in North America and Europe began investigating whether near-death experiences proved that dying is not the end of being, and that consciousness can exist independently of the brain. The field of near-death studies was born.

Over the next 30 years, researchers collected thousands of case reports of people who had had near-death experiences. Meanwhile, new technologies and techniques were helping doctors revive more and more people who, in earlier periods of history, would have almost certainly been permanently deceased. “We are now at the point where we have both the tools and the means to scientifically answer the age-old question: What happens when we die?” wrote Sam Parnia, an accomplished resuscitation specialist and one of the world’s leading experts on near-death experiences, in 2006. Parnia himself was devising an international study to test whether patients could have conscious awareness even after they were found clinically dead.

But by 2015, experiments such as Parnia’s had yielded ambiguous results, and the field of near-death studies was not much closer to understanding death than it had been when it was founded four decades earlier. That’s when Borjigin, together with several colleagues, took the first close look at the record of electrical activity in the brain of Patient One after she was taken off life support. What they discovered – in results reported for the first time last year – was almost entirely unexpected, and has the potential to rewrite our understanding of death.

“I believe what we found is only the tip of a vast iceberg,” Borjigin told me. “What’s still beneath the surface is a full account of how dying actually takes place. Because there’s something happening in there, in the brain, that makes no sense.”


For all that science has learned about the workings of life, death remains among the most intractable of mysteries. “At times I have been tempted to believe that the creator has eternally intended this department of nature to remain baffling, to prompt our curiosities and hopes and suspicions all in equal measure,” the philosopher William James wrote in 1909.

The first time that the question Borjigin began asking in 2015 was posed – about what happens to the brain during death – was a quarter of a millennium earlier. Around 1740, a French military physician reviewed the case of a famous apothecary who, after a “malign fever” and several blood-lettings, fell unconscious and thought he had travelled to the Kingdom of the Blessed. The physician speculated that the apothecary’s experience had been caused by a surge of blood to the brain. But between that early report and the mid-20th century, scientific interest in near-death experiences remained sporadic.

In 1892, the Swiss climber and geologist Albert Heim collected the first systematic accounts of near-death experiences from 30 fellow climbers who had suffered near-fatal falls. In many cases, the climbers underwent a sudden review of their entire past, heard beautiful music, and “fell in a superbly blue heaven containing roseate cloudlets”, Heim wrote. “Then consciousness was painlessly extinguished, usually at the moment of impact.” There were a few more attempts to do research in the early 20th century, but little progress was made in understanding near-death experiences scientifically. Then, in 1975, an American medical student named Raymond Moody published a book called Life After Life.

Sunbeams behind clouds in a vivid sunset sky reflecting in ocean water. Photograph: Getty Images/Blend Images

In his book, Moody distilled the reports of 150 people who had had intense, life-altering experiences in the moments surrounding a cardiac arrest. Although the reports varied, he found that they often shared one or more common features or themes. The narrative arc of the most detailed of those reports – departing the body and travelling through a long tunnel, having an out-of-body experience, encountering spirits and a being of light, one’s whole life flashing before one’s eyes, and returning to the body from some outer limit – became so canonical that the art critic Robert Hughes could refer to it years later as “the familiar kitsch of near-death experience”. Moody’s book became an international bestseller.

In 1976, the New York Times reported on the burgeoning scientific interest in “life after death” and the “emerging field of thanatology”. The following year, Moody and several fellow thanatologists founded an organisation that became the International Association for Near-Death Studies. In 1981, they printed the inaugural issue of Vital Signs, a magazine for the general reader that was largely devoted to stories of near-death experiences. The following year they began producing the field’s first peer-reviewed journal, which became the Journal of Near-Death Studies. The field was growing, and taking on the trappings of scientific respectability. Reviewing its rise in 1988, the British Journal of Psychiatry captured the field’s animating spirit: “A grand hope has been expressed that, through NDE research, new insights can be gained into the ageless mystery of human mortality and its ultimate significance, and that, for the first time, empirical perspectives on the nature of death may be achieved.”

But near-death studies was already splitting into several schools of belief, whose tensions continue to this day. One influential camp was made up of spiritualists, some of them evangelical Christians, who were convinced that near-death experiences were genuine sojourns in the land of the dead and divine. As researchers, the spiritualists’ aim was to collect as many reports of near-death experience as possible, and to proselytise society about the reality of life after death. Moody was their most important spokesman; he eventually claimed to have had multiple past lives and built a “psychomanteum” in rural Alabama where people could attempt to summon the spirits of the dead by gazing into a dimly lit mirror.

The second, and largest, faction of near-death researchers were the parapsychologists, those interested in phenomena that seemed to undermine the scientific orthodoxy that the mind could not exist independently of the brain. These researchers, who were by and large trained scientists following well established research methods, tended to believe that near-death experiences offered evidence that consciousness could persist after the death of the individual. Many of them were physicians and psychiatrists who had been deeply affected after hearing the near-death stories of patients they had treated in the ICU. Their aim was to find ways to test their theories of consciousness empirically, and to turn near-death studies into a legitimate scientific endeavour.

Finally, there emerged the smallest contingent of near-death researchers, who could be labelled the physicalists. These were scientists, many of whom studied the brain, who were committed to a strictly biological account of near-death experiences. Like dreams, the physicalists argued, near-death experiences might reveal psychological truths, but they did so through hallucinatory fictions that emerged from the workings of the body and the brain. (Indeed, many of the states reported by near-death experiencers can apparently be achieved by taking a hero’s dose of ketamine.) Their basic premise was: no functioning brain means no consciousness, and certainly no life after death. Their task, which Borjigin took up in 2015, was to discover what was happening during near-death experiences on a fundamentally physical level.

Slowly, the spiritualists left the field of research for the loftier domains of Christian talk radio, and the parapsychologists and physicalists started bringing near-death studies closer to the scientific mainstream. Between 1975, when Moody published Life After Life, and 1984, only 17 articles in the PubMed database of scientific publications mentioned near-death experiences. In the following decade, there were 62. In the most recent 10-year span, there were 221. Those articles have appeared everywhere from the Canadian Urological Association Journal to the esteemed pages of The Lancet.

Today, there is a widespread sense throughout the community of near-death researchers that we are on the verge of great discoveries. Charlotte Martial, a neuroscientist at the University of Liège in Belgium who has done some of the best physicalist work on near-death experiences, hopes we will soon develop a new understanding of the relationship between the internal experience of consciousness and its outward manifestations, for example in coma patients. “We really are in a crucial moment where we have to disentangle consciousness from responsiveness, and maybe question every state that we consider unconscious,” she told me. Parnia, the resuscitation specialist, who studies the physical processes of dying but is also sympathetic to a parapsychological theory of consciousness, has a radically different take on what we are poised to find out. “I think in 50 or 100 years’ time we will have discovered the entity that is consciousness,” he told me. “It will be taken for granted that it wasn’t produced by the brain, and it doesn’t die when you die.”


If the field of near-death studies is at the threshold of new discoveries about consciousness and death, it is in large part because of a revolution in our ability to resuscitate people who have suffered cardiac arrest. Lance Becker has been a leader in resuscitation science for more than 30 years. When he was a young doctor attempting to revive people through CPR in the mid-1980s, senior physicians would often step in to declare his patients dead. “At a certain point, they would just say, ‘OK, that’s enough. Let’s stop. This is unsuccessful. Time of death: 1.37pm,’” he recalled recently. “And that would be the last thing. And one of the things running through my head as a young doctor was, ‘Well, what really happened at 1.37?’”

In a medical setting, “clinical death” is said to occur at the moment the heart stops pumping blood, and the pulse stops. This is widely known as cardiac arrest. (It is different from a heart attack, in which there is a blockage in a heart that’s still pumping.) Loss of oxygen to the brain and other organs generally follows within seconds or minutes, although the complete cessation of activity in the heart and brain – which is often called “flatlining” or, in the case of the latter, “brain death” – may not occur for many minutes or even hours.

For almost all people at all times in history, cardiac arrest was basically the end of the line. That began to change in 1960, when the combination of mouth-to-mouth ventilation, chest compressions and external defibrillation known as cardiopulmonary resuscitation, or CPR, was formalised. Shortly thereafter, a massive campaign was launched to educate clinicians and the public on CPR’s basic techniques, and soon people were being revived in previously unthinkable, if still modest, numbers.

As more and more people were resuscitated, scientists learned that, even in its acute final stages, death is not a point, but a process. After cardiac arrest, blood and oxygen stop circulating through the body, cells begin to break down, and normal electrical activity in the brain gets disrupted. But the organs don’t fail irreversibly right away, and the brain doesn’t necessarily cease functioning altogether. There is often still the possibility of a return to life. In some cases, cell death can be stopped or significantly slowed, the heart can be restarted, and brain function can be restored. In other words, the process of death can be reversed.

It is no longer unheard of for people to be revived even six hours after being declared clinically dead. In 2011, Japanese doctors reported the case of a young woman who was found in a forest one morning after an overdose stopped her heart the previous night; using advanced technology to circulate blood and oxygen through her body, the doctors were able to revive her more than six hours later, and she was able to walk out of the hospital after three weeks of care. In 2019, a British woman named Audrey Schoeman who was caught in a snowstorm spent six hours in cardiac arrest before doctors brought her back to life with no evident brain damage.

“I don’t think there’s ever been a more exciting time for the field,” Becker told me. “We’re discovering new drugs, we’re discovering new devices, and we’re discovering new things about the brain.”


The brain – that’s the tricky part. In January 2021, as the Covid-19 pandemic was surging toward what would become its deadliest week on record, Netflix released a documentary series called Surviving Death. In the first episode, some of near-death studies’ most prominent parapsychologists presented the core of their arguments for why they believe near-death experiences show that consciousness exists independently of the brain. “When the heart stops, within 20 seconds or so, you get flatlining, which means no brain activity,” Bruce Greyson, an emeritus professor of psychiatry at the University of Virginia and one of the founding members of the International Association for Near-Death Studies, says in the documentary. “And yet,” he goes on to claim, “people have near-death experiences when they’ve been (quote) ‘flatlined’ for longer than that.”

That is a key tenet of the parapsychologists’ arguments: if there is consciousness without brain activity, then consciousness must dwell somewhere beyond the brain. Some of the parapsychologists speculate that it is a “non-local” force that pervades the universe, like electromagnetism. This force is received by the brain, but is not generated by it, the way a television receives a broadcast.

In order for this argument to hold, something else has to be true: near-death experiences have to happen during death, after the brain shuts down. To prove this, parapsychologists point to a number of rare but astounding cases known as “veridical” near-death experiences, in which patients seem to report details from the operating room that they might have known only if they had conscious awareness during the time that they were clinically dead. Dozens of such reports exist. One of the most famous is about a woman who apparently travelled so far outside her body that she was able to spot a shoe on a window ledge in another part of the hospital where she went into cardiac arrest; the shoe was later reportedly found by a nurse.

An antique illustration of an ‘out of body experience’. Photograph: Chronicle/Alamy

At the very least, Parnia and his colleagues have written, such phenomena are “inexplicable through current neuroscientific models”. Unfortunately for the parapsychologists, however, none of the reports of post-death awareness holds up to strict scientific scrutiny. “There are many claims of this kind, but in my long decades of research into out-of-body and near-death experiences I never met any convincing evidence that this is true,” Sue Blackmore, a well-known researcher into parapsychology who had her own near-death experience as a young woman in 1970, has written.

The case of the shoe, Blackmore pointed out, relied solely on the report of the nurse who claimed to have found it. That’s far from the standard of proof the scientific community would require to accept a result as radical as that consciousness can travel beyond the body and exist after death. In other cases, there’s not enough evidence to prove that the experiences reported by cardiac arrest survivors happened when their brains were shut down, as opposed to in the period before or after they supposedly “flatlined”. “So far, there is no sufficiently rigorous, convincing empirical evidence that people can observe their surroundings during a near-death experience,” Charlotte Martial, the University of Liège neuroscientist, told me.

The parapsychologists tend to push back by arguing that even if each of the cases of veridical near-death experiences leaves room for scientific doubt, surely the accumulation of dozens of these reports must count for something. But that argument can be turned on its head: if there are so many genuine instances of consciousness surviving death, then why should it have so far proven impossible to catch one empirically?


Perhaps the story to be written about near-death experiences is not that they prove consciousness is radically different from what we thought it was. Instead, it is that the process of dying is far stranger than scientists ever suspected. The spiritualists and parapsychologists are right to insist that something deeply weird is happening to people when they die, but they are wrong to assume it is happening in the next life rather than this one. At least, that is the implication of what Jimo Borjigin found when she investigated the case of Patient One.

In the moments after Patient One was taken off oxygen, there was a surge of activity in her dying brain. Areas that had been nearly silent while she was on life support suddenly thrummed with high-frequency electrical signals called gamma waves. In particular, the parts of the brain that scientists consider a “hot zone” for consciousness became dramatically alive. In one section, the signals remained detectable for more than six minutes. In another, they were 11 to 12 times higher than they had been before Patient One’s ventilator was removed.

“As she died, Patient One’s brain was functioning in a kind of hyperdrive,” Borjigin told me. For about two minutes after her oxygen was cut off, there was an intense synchronisation of her brain waves, a state associated with many cognitive functions, including heightened attention and memory. The synchronisation dampened for about 18 seconds, then intensified again for more than four minutes. It faded for a minute, then came back for a third time.

In those same periods of dying, different parts of Patient One’s brain were suddenly in close communication with each other. The most intense connections started immediately after her oxygen stopped, and lasted for nearly four minutes. There was another burst of connectivity more than five minutes and 20 seconds after she was taken off life support. In particular, areas of her brain associated with processing conscious experience – areas that are active when we move through the waking world, and when we have vivid dreams – were communicating with those involved in memory formation. So were parts of the brain associated with empathy. Even as she slipped irrevocably deeper into death, something that looked astonishingly like life was taking place over several minutes in Patient One’s brain.
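
The kind of signal analysis behind these observations is, at bottom, conventional: band-pass the EEG recording around the gamma range (roughly 30 to 100 Hz) and compare the power in that band across time windows. The sketch below illustrates the principle on synthetic data; the sampling rate, band edges and signals are assumptions chosen purely for demonstration, and this is not Borjigin’s actual analysis pipeline.

    # Python: compare gamma-band (roughly 30-100 Hz) power in two EEG-like signals.
    # A minimal illustration on synthetic data -- not Borjigin's analysis pipeline.
    import numpy as np
    from scipy.signal import butter, filtfilt

    FS = 500                       # assumed sampling rate, in Hz
    t = np.arange(0, 10, 1 / FS)   # ten seconds of signal

    def gamma_power(signal, fs, low=30.0, high=100.0):
        """Band-pass the signal in the gamma range and return its mean power."""
        b, a = butter(4, [low, high], btype="bandpass", fs=fs)
        return float(np.mean(filtfilt(b, a, signal) ** 2))

    rng = np.random.default_rng(0)
    # "Before": mostly slow (8 Hz) activity plus a little noise.
    before = np.sin(2 * np.pi * 8 * t) + 0.1 * rng.standard_normal(t.size)
    # "After": the same slow activity with a strong 40 Hz (gamma) component added.
    after = before + 1.5 * np.sin(2 * np.pi * 40 * t)

    ratio = gamma_power(after, FS) / gamma_power(before, FS)
    print(f"gamma power increased roughly {ratio:.0f}-fold in this synthetic example")

The real analyses are far more involved (many channels, artefact rejection, connectivity measures between regions), but the before-versus-after comparison of band-limited power is the basic move behind statements like “11 to 12 times higher than before the ventilator was removed”.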

The shadows of anonymous people are seen on a wall. Photograph: Richard Baker/Corbis/Getty Images

Those glimmers and flashes of something like life contradict the expectations of almost everyone working in the field of resuscitation science and near-death studies. The predominant belief – expressed by Greyson, the psychiatrist and co-founder of the International Association for Near-Death Studies, in the Netflix series Surviving Death – was that as soon as oxygen stops going to the brain, neurological activity falls precipitously. Although a few earlier instances of brain waves had been reported in dying human brains, nothing as detailed and complex as what occurred in Patient One had ever been detected.

Given the levels of activity and connectivity in particular regions of her dying brain, Borjigin believes it’s likely that Patient One had a profound near-death experience with many of its major features: out-of-body sensations, visions of light, feelings of joy or serenity, and moral re-evaluations of one’s life. Of course, Patient One did not recover, so no one can prove that the extraordinary happenings in her dying brain had experiential counterparts. Greyson and one of the other grandees of near-death studies, a Dutch cardiologist named Pim van Lommel, have asserted that Patient One’s brain activity can shed no light on near-death experiences because her heart hadn’t fully flatlined, but that is a self-defeating argument: there is no rigorous empirical evidence that near-death experiences occur in people whose hearts have completely stopped.

At the very least, Patient One’s brain activity – and the activity in the dying brain of another patient Borjigin studied, a 77-year-old woman known as Patient Three – seems to close the door on the argument that the brain always and nearly immediately ceases to function in a coherent manner in the moments after clinical death. “The brain, contrary to everybody’s belief, is actually super active during cardiac arrest,” Borjigin said. Death may be far more alive than we ever thought possible.


Borjigin believes that understanding the dying brain is one of the “holy grails” of neuroscience. “The brain is so resilient, the heart is so resilient, that it takes years of abuse to kill them,” she pointed out. “Why then, without oxygen, can a perfectly healthy person die within 30 minutes, irreversibly?” Although most people would take that result for granted, Borjigin thinks that, on a physical level, it actually makes little sense.

Borjigin hopes that understanding the neurophysiology of death can help us to reverse it. She already has brain activity data from dozens of deceased patients that she is waiting to analyse. But because of the paranormal stigma associated with near-death studies, she says, few research agencies want to grant her funding. “Consciousness is almost a dirty word amongst funders,” she added. “Hardcore scientists think research into it should belong to maybe theology, philosophy, but not in hardcore science. Other people ask, ‘What’s the use? The patients are gonna die anyway, so why study that process? There’s nothing you can do about it.’”

Evidence is already emerging that even total brain death may someday be reversible. In 2019, scientists at Yale University harvested the brains of pigs that had been decapitated in a commercial slaughterhouse four hours earlier. Then they perfused the brains for six hours with a special cocktail of drugs and synthetic blood. Astoundingly, some of the cells in the brains began to show metabolic activity again, and some of the synapses even began firing. The pigs’ brain scans didn’t show the widespread electrical activity that we typically associate with sentience or consciousness. But the fact that there was any activity at all suggests the frontiers of life may one day extend much, much farther into the realms of death than most scientists currently imagine.

Other serious avenues of research into near-death experiences are ongoing. Martial and her colleagues at the University of Liège are working on many issues relating to near-death experiences. One is whether people with a history of trauma, or with more creative minds, tend to have such experiences at higher rates than the general population. Another is the evolutionary biology of near-death experiences. Why, evolutionarily speaking, should we have such experiences at all? Martial and her colleagues speculate that it may be a form of the phenomenon known as thanatosis, in which creatures throughout the animal kingdom feign death to escape mortal dangers. Other researchers have proposed that the surge of electrical activity in the moments after cardiac arrest is just the final seizure of a dying brain, or have hypothesised that it’s a last-ditch attempt by the brain to restart itself, like jump-starting the engine on a car.

Meanwhile, in parts of the culture where enthusiasm is reserved not for scientific discovery in this world, but for absolution or benediction in the next, the spiritualists, along with sundry other kooks and grifters, are busily peddling their tales of the afterlife. Forget the proverbial tunnel of light: in America in particular, a pipeline of money has been discovered from death’s door, through Christian media, to the New York Times bestseller list and thence to the fawning, gullible armchairs of the nation’s daytime talk shows. First stop, paradise; next stop, Dr Oz.

But there is something that binds many of these people – the physicalists, the parapsychologists, the spiritualists – together. It is the hope that by transcending the current limits of science and of our bodies, we will achieve not a deeper understanding of death, but a longer and more profound experience of life. That, perhaps, is the real attraction of the near-death experience: it shows us what is possible not in the next world, but in this one.


The Terrible Costs of a Phone-Based Childhood (The Atlantic)

theatlantic.com

The environment in which kids grow up today is hostile to human development.

By Jonathan Haidt

Photographs by Maggie Shannon

MARCH 13, 2024


Two teens sit on a bed looking at their phones


Something went suddenly and horribly wrong for adolescents in the early 2010s. By now you’ve likely seen the statistics: Rates of depression and anxiety in the United States—fairly stable in the 2000s—rose by more than 50 percent in many studies from 2010 to 2019. The suicide rate rose 48 percent for adolescents ages 10 to 19. For girls ages 10 to 14, it rose 131 percent.

The problem was not limited to the U.S.: Similar patterns emerged around the same time in Canada, the U.K., Australia, New Zealand, the Nordic countries, and beyond. By a variety of measures and in a variety of countries, the members of Generation Z (born in and after 1996) are suffering from anxiety, depression, self-harm, and related disorders at levels higher than any other generation for which we have data.

The decline in mental health is just one of many signs that something went awry. Loneliness and friendlessness among American teens began to surge around 2012. Academic achievement went down, too. According to “The Nation’s Report Card,” scores in reading and math began to decline for U.S. students after 2012, reversing decades of slow but generally steady increase. PISA, the major international measure of educational trends, shows that declines in math, reading, and science happened globally, also beginning in the early 2010s.

As the oldest members of Gen Z reach their late 20s, their troubles are carrying over into adulthood. Young adults are dating less, having less sex, and showing less interest in ever having children than prior generations. They are more likely to live with their parents. They were less likely to get jobs as teens, and managers say they are harder to work with. Many of these trends began with earlier generations, but most of them accelerated with Gen Z.

Surveys show that members of Gen Z are shyer and more risk averse than previous generations, too, and risk aversion may make them less ambitious. In an interview last May, OpenAI co-founder Sam Altman and Stripe co-founder Patrick Collison noted that, for the first time since the 1970s, none of Silicon Valley’s preeminent entrepreneurs are under 30. “Something has really gone wrong,” Altman said. In a famously young industry, he was baffled by the sudden absence of great founders in their 20s.

Generations are not monolithic, of course. Many young people are flourishing. Taken as a whole, however, Gen Z is in poor mental health and is lagging behind previous generations on many important metrics. And if a generation is doing poorly––if it is more anxious and depressed and is starting families, careers, and important companies at a substantially lower rate than previous generations––then the sociological and economic consequences will be profound for the entire society.

graph showing rates of self-harm in children
Number of emergency-department visits for nonfatal self-harm per 100,000 children (source: Centers for Disease Control and Prevention)

What happened in the early 2010s that altered adolescent development and worsened mental health? Theories abound, but the fact that similar trends are found in many countries worldwide means that events and trends that are specific to the United States cannot be the main story.

I think the answer can be stated simply, although the underlying psychology is complex: Those were the years when adolescents in rich countries traded in their flip phones for smartphones and moved much more of their social lives online—particularly onto social-media platforms designed for virality and addiction. Once young people began carrying the entire internet in their pockets, available to them day and night, it altered their daily experiences and developmental pathways across the board. Friendship, dating, sexuality, exercise, sleep, academics, politics, family dynamics, identity—all were affected. Life changed rapidly for younger children, too, as they began to get access to their parents’ smartphones and, later, got their own iPads, laptops, and even smartphones during elementary school.


As a social psychologist who has long studied social and moral development, I have been involved in debates about the effects of digital technology for years. Typically, the scientific questions have been framed somewhat narrowly, to make them easier to address with data. For example, do adolescents who consume more social media have higher levels of depression? Does using a smartphone just before bedtime interfere with sleep? The answer to these questions is usually found to be yes, although the size of the relationship is often statistically small, which has led some researchers to conclude that these new technologies are not responsible for the gigantic increases in mental illness that began in the early 2010s.

But before we can evaluate the evidence on any one potential avenue of harm, we need to step back and ask a broader question: What is childhood––including adolescence––and how did it change when smartphones moved to the center of it? If we take a more holistic view of what childhood is and what young children, tweens, and teens need to do to mature into competent adults, the picture becomes much clearer. Smartphone-based life, it turns out, alters or interferes with a great number of developmental processes.

The intrusion of smartphones and social media is not the only change that has deformed childhood. There’s an important backstory, beginning as long ago as the 1980s, when we started systematically depriving children and adolescents of freedom, unsupervised play, responsibility, and opportunities for risk taking, all of which promote competence, maturity, and mental health. But the change in childhood accelerated in the early 2010s, when an already independence-deprived generation was lured into a new virtual universe that seemed safe to parents but in fact is more dangerous, in many respects, than the physical world.

My claim is that the new phone-based childhood that took shape roughly 12 years ago is making young people sick and blocking their progress to flourishing in adulthood. We need a dramatic cultural correction, and we need it now.

1. The Decline of Play and Independence

Human brains are extraordinarily large compared with those of other primates, and human childhoods are extraordinarily long, too, to give those large brains time to wire up within a particular culture. A child’s brain is already 90 percent of its adult size by about age 6. The next 10 or 15 years are about learning norms and mastering skills—physical, analytical, creative, and social. As children and adolescents seek out experiences and practice a wide variety of behaviors, the synapses and neurons that are used frequently are retained while those that are used less often disappear. Neurons that fire together wire together, as brain researchers say.

Brain development is sometimes said to be “experience-expectant,” because specific parts of the brain show increased plasticity during periods of life when an animal’s brain can “expect” to have certain kinds of experiences. You can see this with baby geese, who will imprint on whatever mother-sized object moves in their vicinity just after they hatch. You can see it with human children, who are able to learn languages quickly and take on the local accent, but only through early puberty; after that, it’s hard to learn a language and sound like a native speaker. There is also some evidence of a sensitive period for cultural learning more generally. Japanese children who spent a few years in California in the 1970s came to feel “American” in their identity and ways of interacting only if they attended American schools for a few years between ages 9 and 15. If they left before age 9, there was no lasting impact. If they didn’t arrive until they were 15, it was too late; they didn’t come to feel American.

Human childhood is an extended cultural apprenticeship with different tasks at different ages all the way through puberty. Once we see it this way, we can identify factors that promote or impede the right kinds of learning at each age. For children of all ages, one of the most powerful drivers of learning is the strong motivation to play. Play is the work of childhood, and all young mammals have the same job: to wire up their brains by playing vigorously and often, practicing the moves and skills they’ll need as adults. Kittens will play-pounce on anything that looks like a mouse tail. Human children will play games such as tag and sharks and minnows, which let them practice both their predator skills and their escaping-from-predator skills. Adolescents will play sports with greater intensity, and will incorporate playfulness into their social interactions—flirting, teasing, and developing inside jokes that bond friends together. Hundreds of studies on young rats, monkeys, and humans show that young mammals want to play, need to play, and end up socially, cognitively, and emotionally impaired when they are deprived of play.

One crucial aspect of play is physical risk taking. Children and adolescents must take risks and fail—often—in environments in which failure is not very costly. This is how they extend their abilities, overcome their fears, learn to estimate risk, and learn to cooperate in order to take on larger challenges later. The ever-present possibility of getting hurt while running around, exploring, play-fighting, or getting into a real conflict with another group adds an element of thrill, and thrilling play appears to be the most effective kind for overcoming childhood anxieties and building social, emotional, and physical competence. The desire for risk and thrill increases in the teen years, when failure might carry more serious consequences. Children of all ages need to choose the risk they are ready for at a given moment. Young people who are deprived of opportunities for risk taking and independent exploration will, on average, develop into more anxious and risk-averse adults.

Human childhood and adolescence evolved outdoors, in a physical world full of dangers and opportunities. Its central activities––play, exploration, and intense socializing––were largely unsupervised by adults, allowing children to make their own choices, resolve their own conflicts, and take care of one another. Shared adventures and shared adversity bound young people together into strong friendship clusters within which they mastered the social dynamics of small groups, which prepared them to master bigger challenges and larger groups later on.

And then we changed childhood.

The changes started slowly in the late 1970s and ’80s, before the arrival of the internet, as many parents in the U.S. grew fearful that their children would be harmed or abducted if left unsupervised. Such crimes have always been extremely rare, but they loomed larger in parents’ minds thanks in part to rising levels of street crime combined with the arrival of cable TV, which enabled round-the-clock coverage of missing-children cases. A general decline in social capital––the degree to which people knew and trusted their neighbors and institutions––exacerbated parental fears. Meanwhile, rising competition for college admissions encouraged more intensive forms of parenting. In the 1990s, American parents began pulling their children indoors or insisting that afternoons be spent in adult-run enrichment activities. Free play, independent exploration, and teen-hangout time declined.

In recent decades, seeing unchaperoned children outdoors has become so novel that when one is spotted in the wild, some adults feel it is their duty to call the police. In 2015, the Pew Research Center found that parents, on average, believed that children should be at least 10 years old to play unsupervised in front of their house, and that kids should be 14 before being allowed to go unsupervised to a public park. Most of these same parents had enjoyed joyous and unsupervised outdoor play by the age of 7 or 8.

But overprotection is only part of the story. The transition away from a more independent childhood was facilitated by steady improvements in digital technology, which made it easier and more inviting for young people to spend a lot more time at home, indoors, and alone in their rooms. Eventually, tech companies got access to children 24/7. They developed exciting virtual activities, engineered for “engagement,” that are nothing like the real-world experiences young brains evolved to expect.

Triptych: teens on their phones at the mall, park, and bedroom

2. The Virtual World Arrives in Two Waves

The internet, which now dominates the lives of young people, arrived in two waves of linked technologies. The first one did little harm to Millennials. The second one swallowed Gen Z whole.

The first wave came ashore in the 1990s with the arrival of dial-up internet access, which made personal computers good for something beyond word processing and basic games. By 2003, 55 percent of American households had a computer with (slow) internet access. Rates of adolescent depression, loneliness, and other measures of poor mental health did not rise in this first wave. If anything, they went down a bit. Millennial teens (born 1981 through 1995), who were the first to go through puberty with access to the internet, were psychologically healthier and happier, on average, than their older siblings or parents in Generation X (born 1965 through 1980).

The second wave began to rise in the 2000s, though its full force didn’t hit until the early 2010s. It began rather innocently with the introduction of social-media platforms that helped people connect with their friends. Posting and sharing content became much easier with sites such as Friendster (launched in 2002), Myspace (2003), and Facebook (2004).

Teens embraced social media soon after it came out, but the time they could spend on these sites was limited in those early years because the sites could only be accessed from a computer, often the family computer in the living room. Young people couldn’t access social media (and the rest of the internet) from the school bus, during class time, or while hanging out with friends outdoors. Many teens in the early-to-mid-2000s had cellphones, but these were basic phones (many of them flip phones) that had no internet access. Typing on them was difficult––they had only number keys. Basic phones were tools that helped Millennials meet up with one another in person or talk with each other one-on-one. I have seen no evidence to suggest that basic cellphones harmed the mental health of Millennials.

It was not until the introduction of the iPhone (2007), the App Store (2008), and high-speed internet (which reached 50 percent of American homes in 2007)—and the corresponding pivot to mobile made by many providers of social media, video games, and porn—that it became possible for adolescents to spend nearly every waking moment online. The extraordinary synergy among these innovations was what powered the second technological wave. In 2011, only 23 percent of teens had a smartphone. By 2015, that number had risen to 73 percent, and a quarter of teens said they were online “almost constantly.” Their younger siblings in elementary school didn’t usually have their own smartphones, but after its release in 2010, the iPad quickly became a staple of young children’s daily lives. It was in this brief period, from 2010 to 2015, that childhood in America (and many other countries) was rewired into a form that was more sedentary, solitary, virtual, and incompatible with healthy human development.

3. Techno-optimism and the Birth of the Phone-Based Childhood

The phone-based childhood created by that second wave—including not just smartphones themselves, but all manner of internet-connected devices, such as tablets, laptops, video-game consoles, and smartwatches—arrived near the end of a period of enormous optimism about digital technology. The internet came into our lives in the mid-1990s, soon after the fall of the Soviet Union. By the end of that decade, it was widely thought that the web would be an ally of democracy and a slayer of tyrants. When people are connected to each other, and to all the information in the world, how could any dictator keep them down?

In the 2000s, Silicon Valley and its world-changing inventions were a source of pride and excitement in America. Smart and ambitious young people around the world wanted to move to the West Coast to be part of the digital revolution. Tech-company founders such as Steve Jobs and Sergey Brin were lauded as gods, or at least as modern Prometheans, bringing humans godlike powers. The Arab Spring bloomed in 2011 with the help of decentralized social platforms, including Twitter and Facebook. When pundits and entrepreneurs talked about the power of social media to transform society, it didn’t sound like a dark prophecy.

You have to put yourself back in this heady time to understand why adults acquiesced so readily to the rapid transformation of childhood. Many parents had concerns, even then, about what their children were doing online, especially because of the internet’s ability to put children in contact with strangers. But there was also a lot of excitement about the upsides of this new digital world. If computers and the internet were the vanguards of progress, and if young people––widely referred to as “digital natives”––were going to live their lives entwined with these technologies, then why not give them a head start? I remember how exciting it was to see my 2-year-old son master the touch-and-swipe interface of my first iPhone in 2008. I thought I could see his neurons being woven together faster as a result of the stimulation it brought to his brain, compared to the passivity of watching television or the slowness of building a block tower. I thought I could see his future job prospects improving.

Touchscreen devices were also a godsend for harried parents. Many of us discovered that we could have peace at a restaurant, on a long car trip, or at home while making dinner or replying to emails if we just gave our children what they most wanted: our smartphones and tablets. We saw that everyone else was doing it and figured it must be okay.

It was the same for older children, desperate to join their friends on social-media platforms, where the minimum age to open an account was set by law to 13, even though no research had been done to establish the safety of these products for minors. Because the platforms did nothing (and still do nothing) to verify the stated age of new-account applicants, any 10-year-old could open multiple accounts without parental permission or knowledge, and many did. Facebook and later Instagram became places where many sixth and seventh graders were hanging out and socializing. If parents did find out about these accounts, it was too late. Nobody wanted their child to be isolated and alone, so parents rarely forced their children to shut down their accounts.

We had no idea what we were doing.

4. The High Cost of a Phone-Based Childhood

In Walden, his 1854 reflection on simple living, Henry David Thoreau wrote, “The cost of a thing is the amount of … life which is required to be exchanged for it, immediately or in the long run.” It’s an elegant formulation of what economists would later call the opportunity cost of any choice—all of the things you can no longer do with your money and time once you’ve committed them to something else. So it’s important that we grasp just how much of a young person’s day is now taken up by their devices.

The numbers are hard to believe. The most recent Gallup data show that American teens spend about five hours a day just on social-media platforms (including watching videos on TikTok and YouTube). Add in all the other phone- and screen-based activities, and the number rises to somewhere between seven and nine hours a day, on average. The numbers are even higher in single-parent and low-income families, and among Black, Hispanic, and Native American families.

These very high numbers do not include time spent in front of screens for school or homework, nor do they include all the time adolescents spend paying only partial attention to events in the real world while thinking about what they’re missing on social media or waiting for their phones to ping. Pew reports that in 2022, one-third of teens said they were on one of the major social-media sites “almost constantly,” and nearly half said the same of the internet in general. For these heavy users, nearly every waking hour is an hour absorbed, in full or in part, by their devices.

overhead image of teens hands with phones

In Thoreau’s terms, how much of life is exchanged for all this screen time? Arguably, most of it. Everything else in an adolescent’s day must get squeezed down or eliminated entirely to make room for the vast amount of content that is consumed, and for the hundreds of “friends,” “followers,” and other network connections that must be serviced with texts, posts, comments, likes, snaps, and direct messages. I recently surveyed my students at NYU, and most of them reported that the very first thing they do when they open their eyes in the morning is check their texts, direct messages, and social-media feeds. It’s also the last thing they do before they close their eyes at night. And it’s a lot of what they do in between.

The amount of time that adolescents spend sleeping declined in the early 2010s, and many studies tie sleep loss directly to the use of devices around bedtime, particularly when they’re used to scroll through social media. Exercise declined, too, which is unfortunate because exercise, like sleep, improves both mental and physical health. Book reading has been declining for decades, pushed aside by digital alternatives, but the decline, like so much else, sped up in the early 2010s. With passive entertainment always available, adolescent minds likely wander less than they used to; contemplation and imagination might be placed on the list of things winnowed down or crowded out.

But perhaps the most devastating cost of the new phone-based childhood was the collapse of time spent interacting with other people face-to-face. A study of how Americans spend their time found that, before 2010, young people (ages 15 to 24) reported spending far more time with their friends (about two hours a day, on average, not counting time together at school) than did older people (who spent just 30 to 60 minutes with friends). Time with friends began decreasing for young people in the 2000s, but the drop accelerated in the 2010s, while it barely changed for older people. By 2019, young people’s time with friends had dropped to just 67 minutes a day. It turns out that Gen Z had been socially distancing for many years and had mostly completed the project by the time COVID-19 struck.

You might question the importance of this decline. After all, isn’t much of this online time spent interacting with friends through texting, social media, and multiplayer video games? Isn’t that just as good?

Some of it surely is, and virtual interactions offer unique benefits too, especially for young people who are geographically or socially isolated. But in general, the virtual world lacks many of the features that make human interactions in the real world nutritious, as we might say, for physical, social, and emotional development. In particular, real-world relationships and social interactions are characterized by four features—typical for hundreds of thousands of years—that online interactions either distort or erase.

First, real-world interactions are embodied, meaning that we use our hands and facial expressions to communicate, and we learn to respond to the body language of others. Virtual interactions, in contrast, mostly rely on language alone. No matter how many emojis are offered as compensation, the elimination of communication channels for which we have eons of evolutionary programming is likely to produce adults who are less comfortable and less skilled at interacting in person.

Second, real-world interactions are synchronous; they happen at the same time. As a result, we learn subtle cues about timing and conversational turn taking. Synchronous interactions make us feel closer to the other person because that’s what getting “in sync” does. Texts, posts, and many other virtual interactions lack synchrony. There is less real laughter, more room for misinterpretation, and more stress after a comment that gets no immediate response.

Third, real-world interactions primarily involve one-to-one communication, or sometimes one-to-several. But many virtual communications are broadcast to a potentially huge audience. Online, each person can engage in dozens of asynchronous interactions in parallel, which interferes with the depth achieved in all of them. The sender’s motivations are different, too: With a large audience, one’s reputation is always on the line; an error or poor performance can damage social standing with large numbers of peers. These communications thus tend to be more performative and anxiety-inducing than one-to-one conversations.

Finally, real-world interactions usually take place within communities that have a high bar for entry and exit, so people are strongly motivated to invest in relationships and repair rifts when they happen. But in many virtual networks, people can easily block others or quit when they are displeased. Relationships within such networks are usually more disposable.

These unsatisfying and anxiety-producing features of life online should be recognizable to most adults. Online interactions can bring out antisocial behavior that people would never display in their offline communities. But if life online takes a toll on adults, just imagine what it does to adolescents in the early years of puberty, when their “experience expectant” brains are rewiring based on feedback from their social interactions.

Kids going through puberty online are likely to experience far more social comparison, self-consciousness, public shaming, and chronic anxiety than adolescents in previous generations, which could potentially set developing brains into a habitual state of defensiveness. The brain contains systems that are specialized for approach (when opportunities beckon) and withdrawal (when threats appear or seem likely). People can be in what we might call “discover mode” or “defend mode” at any moment, but generally not both. The two systems together form a mechanism for quickly adapting to changing conditions, like a thermostat that can activate either a heating system or a cooling system as the temperature fluctuates. Some people’s internal thermostats are generally set to discover mode, and they flip into defend mode only when clear threats arise. These people tend to see the world as full of opportunities. They are happier and less anxious. Other people’s internal thermostats are generally set to defend mode, and they flip into discover mode only when they feel unusually safe. They tend to see the world as full of threats and are more prone to anxiety and depressive disorders.

graph showing rates of disabilities in US college freshman
Percentage of U.S. college freshmen reporting various kinds of disabilities and disorders (source: Higher Education Research Institute)

A simple way to understand the differences between Gen Z and previous generations is that people born in and after 1996 have internal thermostats that were shifted toward defend mode. This is why life on college campuses changed so suddenly when Gen Z arrived, beginning around 2014. Students began requesting “safe spaces” and trigger warnings. They were highly sensitive to “microaggressions” and sometimes claimed that words were “violence.” These trends mystified those of us in older generations at the time, but in hindsight, it all makes sense. Gen Z students found words, ideas, and ambiguous social encounters more threatening than had previous generations of students because we had fundamentally altered their psychological development.

5. So Many Harms

The debate around adolescents’ use of smartphones and social media typically revolves around mental health, and understandably so. But the harms that have resulted from transforming childhood so suddenly and heedlessly go far beyond mental health. I’ve touched on some of them—social awkwardness, reduced self-confidence, and a more sedentary childhood. Here are three additional harms.

Fragmented Attention, Disrupted Learning

Staying on task while sitting at a computer is hard enough for an adult with a fully developed prefrontal cortex. It is far more difficult for adolescents sitting in front of their laptops trying to do homework. They are probably less intrinsically motivated to stay on task. They’re certainly less able, given their undeveloped prefrontal cortex, and hence it’s easy for any company with an app to lure them away with an offer of social validation or entertainment. Their phones are pinging constantly—one study found that the typical adolescent now gets 237 notifications a day, roughly 15 every waking hour. Sustained attention is essential for doing almost anything big, creative, or valuable, yet young people find their attention chopped up into little bits by notifications offering the possibility of high-pleasure, low-effort digital experiences.

It even happens in the classroom. Studies confirm that when students have access to their phones during class time, they use them, especially for texting and checking social media, and their grades and learning suffer. This might explain why benchmark test scores began to decline in the U.S. and around the world in the early 2010s—well before the pandemic hit.

Addiction and Social Withdrawal

The neural basis of behavioral addiction to social media or video games is not exactly the same as chemical addiction to cocaine or opioids. Nonetheless, they all involve abnormally heavy and sustained activation of dopamine neurons and reward pathways. Over time, the brain adapts to these high levels of dopamine; when the child is not engaged in digital activity, their brain doesn’t have enough dopamine, and the child experiences withdrawal symptoms. These generally include anxiety, insomnia, and intense irritability. Kids with these kinds of behavioral addictions often become surly and aggressive, and withdraw from their families into their bedrooms and devices.

Social-media and gaming platforms were designed to hook users. How successful are they? How many kids suffer from digital addictions?

The main addiction risks for boys seem to be video games and porn. “Internet gaming disorder,” which was added to the main diagnostic manual of psychiatry in 2013 as a condition for further study, describes “significant impairment or distress” in several aspects of life, along with many hallmarks of addiction, including an inability to reduce usage despite attempts to do so. Estimates for the prevalence of IGD range from 7 to 15 percent among adolescent boys and young men. As for porn, a nationally representative survey of American adults published in 2019 found that 7 percent of American men agreed or strongly agreed with the statement “I am addicted to pornography”—and the rates were higher for the youngest men.

Girls have much lower rates of addiction to video games and porn, but they use social media more intensely than boys do. A study of teens in 29 nations found that between 5 and 15 percent of adolescents engage in what is called “problematic social media use,” which includes symptoms such as preoccupation, withdrawal symptoms, neglect of other areas of life, and lying to parents and friends about time spent on social media. That study did not break down results by gender, but many others have found that rates of “problematic use” are higher for girls.

I don’t want to overstate the risks: Most teens do not become addicted to their phones and video games. But across multiple studies and across genders, rates of problematic use come out in the ballpark of 5 to 15 percent. Is there any other consumer product that parents would let their children use relatively freely if they knew that something like one in 10 kids would end up with a pattern of habitual and compulsive use that disrupted various domains of life and looked a lot like an addiction?

The Decay of Wisdom and the Loss of Meaning

During that crucial sensitive period for cultural learning, from roughly ages 9 through 15, we should be especially thoughtful about who is socializing our children for adulthood. Instead, that’s when most kids get their first smartphone and sign themselves up (with or without parental permission) to consume rivers of content from random strangers. Much of that content is produced by other adolescents, in blocks of a few minutes or a few seconds.

This rerouting of enculturating content has created a generation that is largely cut off from older generations and, to some extent, from the accumulated wisdom of humankind, including knowledge about how to live a flourishing life. Adolescents spend less time steeped in their local or national culture. They are coming of age in a confusing, placeless, ahistorical maelstrom of 30-second stories curated by algorithms designed to mesmerize them. Without solid knowledge of the past and the filtering of good ideas from bad––a process that plays out over many generations––young people will be more prone to believe whatever terrible ideas become popular around them, which might explain why videos showing young people reacting positively to Osama bin Laden’s thoughts about America were trending on TikTok last fall.

All this is made worse by the fact that so much of digital public life is an unending supply of micro dramas about somebody somewhere in our country of 340 million people who did something that can fuel an outrage cycle, only to be pushed aside by the next. It doesn’t add up to anything and leaves behind only a distorted sense of human nature and affairs.

When our public life becomes fragmented, ephemeral, and incomprehensible, it is a recipe for anomie, or normlessness. The great French sociologist Émile Durkheim showed long ago that a society that fails to bind its people together with some shared sense of sacredness and common respect for rules and norms is not a society of great individual freedom; it is, rather, a place where disoriented individuals have difficulty setting goals and exerting themselves to achieve them. Durkheim argued that anomie was a major driver of suicide rates in European countries. Modern scholars continue to draw on his work to understand suicide rates today.

graph showing rates of young people who struggle with mental health
Percentage of U.S. high-school seniors who agreed with the statement “Life often seems meaningless.” (Source: Monitoring the Future)

Durkheim’s observations are crucial for understanding what happened in the early 2010s. A long-running survey of American teens found that, from 1990 to 2010, high-school seniors became slightly less likely to agree with statements such as “Life often seems meaningless.” But as soon as they adopted a phone-based life and many began to live in the whirlpool of social media, where no stability can be found, every measure of despair increased. From 2010 to 2019, the number who agreed that their lives seemed “meaningless” increased by about 70 percent, to more than one in five.

6. Young People Don’t Like Their Phone-Based Lives

How can I be confident that the epidemic of adolescent mental illness was kicked off by the arrival of the phone-based childhood? Skeptics point to other events as possible culprits, including the 2008 global financial crisis, global warming, the 2012 Sandy Hook school shooting and the subsequent active-shooter drills, rising academic pressures, and the opioid epidemic. But while these events might have been contributing factors in some countries, none can explain both the timing and international scope of the disaster.

An additional source of evidence comes from Gen Z itself. With all the talk of regulating social media, raising age limits, and getting phones out of schools, you might expect to find many members of Gen Z writing and speaking out in opposition. I’ve looked for such arguments and found hardly any. In contrast, many young adults tell stories of devastation.

Freya India, a 24-year-old British essayist who writes about girls, explains how social-media sites carry girls off to unhealthy places: “It seems like your child is simply watching some makeup tutorials, following some mental health influencers, or experimenting with their identity. But let me tell you: they are on a conveyor belt to someplace bad. Whatever insecurity or vulnerability they are struggling with, they will be pushed further and further into it.” She continues:

Gen Z were the guinea pigs in this uncontrolled global social experiment. We were the first to have our vulnerabilities and insecurities fed into a machine that magnified and refracted them back at us, all the time, before we had any sense of who we were. We didn’t just grow up with algorithms. They raised us. They rearranged our faces. Shaped our identities. Convinced us we were sick.

Rikki Schlott, a 23-year-old American journalist and co-author of The Canceling of the American Mind, writes,

The day-to-day life of a typical teen or tween today would be unrecognizable to someone who came of age before the smartphone arrived. Zoomers are spending an average of 9 hours daily in this screen-time doom loop—desperate to forget the gaping holes they’re bleeding out of, even if just for … 9 hours a day. Uncomfortable silence could be time to ponder why they’re so miserable in the first place. Drowning it out with algorithmic white noise is far easier.

A 27-year-old man who spent his adolescent years addicted (his word) to video games and pornography sent me this reflection on what that did to him:

I missed out on a lot of stuff in life—a lot of socialization. I feel the effects now: meeting new people, talking to people. I feel that my interactions are not as smooth and fluid as I want. My knowledge of the world (geography, politics, etc.) is lacking. I didn’t spend time having conversations or learning about sports. I often feel like a hollow operating system.

Or consider what Facebook found in a research project involving focus groups of young people, revealed in 2021 by the whistleblower Frances Haugen: “Teens blame Instagram for increases in the rates of anxiety and depression among teens,” an internal document said. “This reaction was unprompted and consistent across all groups.”

How can it be that an entire generation is hooked on consumer products that so few praise and so many ultimately regret using? Because smartphones and especially social media have put members of Gen Z and their parents into a series of collective-action traps. Once you understand the dynamics of these traps, the escape routes become clear.

diptych: teens on phone on couch and on a swing

7. Collective-Action Problems

Social-media companies such as Meta, TikTok, and Snap are often compared to tobacco companies, but that’s not really fair to the tobacco industry. It’s true that companies in both industries marketed harmful products to children and tweaked their products for maximum customer retention (that is, addiction), but there’s a big difference: Teens could and did choose, in large numbers, not to smoke. Even at the peak of teen cigarette use, in 1997, nearly two-thirds of high-school students did not smoke.

Social media, in contrast, applies a lot more pressure on nonusers, at a much younger age and in a more insidious way. Once a few students in any middle school lie about their age and open accounts at age 11 or 12, they start posting photos and comments about themselves and other students. Drama ensues. The pressure on everyone else to join becomes intense. Even a girl who knows, consciously, that Instagram can foster beauty obsession, anxiety, and eating disorders might sooner take those risks than accept the seeming certainty of being out of the loop, clueless, and excluded. And indeed, if she resists while most of her classmates do not, she might, in fact, be marginalized, which puts her at risk for anxiety and depression, though via a different pathway than the one taken by those who use social media heavily. In this way, social media accomplishes a remarkable feat: It even harms adolescents who do not use it.

A recent study led by the University of Chicago economist Leonardo Bursztyn captured the dynamics of the social-media trap precisely. The researchers recruited more than 1,000 college students and asked them how much they’d need to be paid to deactivate their accounts on either Instagram or TikTok for four weeks. That’s a standard economist’s question to try to compute the net value of a product to society. On average, students said they’d need to be paid roughly $50 ($59 for TikTok, $47 for Instagram) to deactivate whichever platform they were asked about. Then the experimenters told the students that they were going to try to get most of the others in their school to deactivate that same platform, offering to pay them to do so as well, and asked, Now how much would you have to be paid to deactivate, if most others did so? The answer, on average, was less than zero. In each case, most students were willing to pay to have that happen.

Social media is all about network effects. Most students are only on it because everyone else is too. Most of them would prefer that nobody be on these platforms. Later in the study, students were asked directly, “Would you prefer to live in a world without Instagram [or TikTok]?” A majority of students said yes––58 percent for each app.

This is the textbook definition of what social scientists call a collective-action problem. It’s what happens when a group would be better off if everyone in the group took a particular action, but each actor is deterred from acting, because unless the others do the same, the personal cost outweighs the benefit. Fishermen considering limiting their catch to avoid wiping out the local fish population are caught in this same kind of trap. If no one else does it too, they just lose profit.

Cigarettes trapped individual smokers with a biological addiction. Social media has trapped an entire generation in a collective-action problem. Early app developers deliberately and knowingly exploited the psychological weaknesses and insecurities of young people to pressure them to consume a product that, upon reflection, many wish they could use less, or not at all.

8. Four Norms to Break Four Traps

Young people and their parents are stuck in at least four collective-action traps. Each is hard to escape for an individual family, but escape becomes much easier if families, schools, and communities coordinate and act together. Here are four norms that would roll back the phone-based childhood. I believe that any community that adopts all four will see substantial improvements in youth mental health within two years.

No smartphones before high school 

The trap here is that each child thinks they need a smartphone because “everyone else” has one, and many parents give in because they don’t want their child to feel excluded. But if no one else had a smartphone—or even if, say, only half of the child’s sixth-grade class had one—parents would feel more comfortable providing a basic flip phone (or no phone at all). Delaying round-the-clock internet access until ninth grade (around age 14) as a national or community norm would help to protect adolescents during the very vulnerable first few years of puberty. According to a 2022 British study, these are the years when social-media use is most correlated with poor mental health. Family policies about tablets, laptops, and video-game consoles should be aligned with smartphone restrictions to prevent overuse of other screen activities.

No social media before 16

The trap here, as with smartphones, is that each adolescent feels a strong need to open accounts on TikTok, Instagram, Snapchat, and other platforms primarily because that’s where most of their peers are posting and gossiping. But if the majority of adolescents were not on these accounts until they were 16, families and adolescents could more easily resist the pressure to sign up. The delay would not mean that kids younger than 16 could never watch videos on TikTok or YouTube—only that they could not open accounts, give away their data, post their own content, and let algorithms get to know them and their preferences.

Phone-free schools

Most schools claim that they ban phones, but this usually just means that students aren’t supposed to take their phone out of their pocket during class. Research shows that most students do use their phones during class time. They also use them during lunchtime, free periods, and breaks between classes––times when students could and should be interacting with their classmates face-to-face. The only way to get students’ minds off their phones during the school day is to require all students to put their phones (and other devices that can send or receive texts) into a phone locker or locked pouch at the start of the day. Schools that have gone phone-free always seem to report that it has improved the culture, making students more attentive in class and more interactive with one another. Published studies back them up.

More independence, free play, and responsibility in the real world

Many parents are afraid to give their children the level of independence and responsibility they themselves enjoyed when they were young, even though rates of homicide, drunk driving, and other physical threats to children are way down in recent decades. Part of the fear comes from the fact that parents look at each other to determine what is normal and therefore safe, and they see few examples of families acting as if a 9-year-old can be trusted to walk to a store without a chaperone. But if many parents started sending their children out to play or run errands, then the norms of what is safe and accepted would change quickly. So would ideas about what constitutes “good parenting.” And if more parents trusted their children with more responsibility––for example, by asking their kids to do more to help out, or to care for others––then the pervasive sense of uselessness now found in surveys of high-school students might begin to dissipate.

It would be a mistake to overlook this fourth norm. If parents don’t replace screen time with real-world experiences involving friends and independent activity, then banning devices will feel like deprivation, not the opening up of a world of opportunities.

The main reason the phone-based childhood is so harmful is that it pushes aside everything else. Smartphones are experience blockers. Our ultimate goal should not be to remove screens entirely, nor should it be to return childhood to exactly the way it was in 1960. Rather, it should be to create a version of childhood and adolescence that keeps young people anchored in the real world while flourishing in the digital age.

9. What Are We Waiting For?

An essential function of government is to solve collective-action problems. Congress could solve or help solve the ones I’ve highlighted—for instance, by raising the age of “internet adulthood” to 16 and requiring tech companies to keep underage children off their sites.

In recent decades, however, Congress has not been good at addressing public concerns when the solutions would displease a powerful and deep-pocketed industry. Governors and state legislators have been much more effective, and their successes might let us evaluate how well various reforms work. But the bottom line is that to change norms, we’re going to need to do most of the work ourselves, in neighborhood groups, schools, and other communities.

There are now hundreds of organizations––most of them started by mothers who saw what smartphones had done to their children––that are working to roll back the phone-based childhood or promote a more independent, real-world childhood. (I have assembled a list of many of them.) One that I co-founded, at LetGrow.org, suggests a variety of simple programs for parents or schools, such as play club (schools keep the playground open at least one day a week before or after school, and kids sign up for phone-free, mixed-age, unstructured play as a regular weekly activity) and the Let Grow Experience (a series of homework assignments in which students––with their parents’ consent––choose something to do on their own that they’ve never done before, such as walk the dog, climb a tree, walk to a store, or cook dinner).

Even without the help of organizations, parents could break their families out of collective-action traps if they coordinated with the parents of their children’s friends. Together they could create common smartphone rules and organize unsupervised play sessions or encourage hangouts at a home, park, or shopping mall.

teen on her phone in her room

Parents are fed up with what childhood has become. Many are tired of having daily arguments about technologies that were designed to grab hold of their children’s attention and not let go. But the phone-based childhood is not inevitable.

The four norms I have proposed cost almost nothing to implement, they cause no clear harm to anyone, and while they could be supported by new legislation, they can be instilled even without it. We can begin implementing all of them right away, this year, especially in communities with good cooperation between schools and parents. A single memo from a principal asking parents to delay smartphones and social media, in support of the school’s effort to improve mental health by going phone free, would catalyze collective action and reset the community’s norms.

We didn’t know what we were doing in the early 2010s. Now we do. It’s time to end the phone-based childhood.


This article is adapted from Jonathan Haidt’s forthcoming book, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness.

‘Everybody has a breaking point’: how the climate crisis affects our brains (Guardian)

Researchers measuring the effect of Hurricane Sandy on children in utero at the time reported: ‘Our findings are extremely alarming.’ Illustration: Ngadi Smart/The Guardian

Are growing rates of anxiety, depression, ADHD, PTSD, Alzheimer’s and motor neurone disease related to rising temperatures and other extreme environmental changes?

Original article

Clayton Page Aldern

Wed 27 Mar 2024 05.00 GMT

In late October 2012, a hurricane that had peaked at category 3 strength howled into New York City with a force that would etch its name into the annals of history. Superstorm Sandy transformed the city, inflicting more than $60bn in damage, killing dozens, and forcing 6,500 patients to be evacuated from hospitals and nursing homes. Yet in the case of one cognitive neuroscientist, the storm presented, darkly, an opportunity.

Yoko Nomura had found herself at the centre of a natural experiment. Prior to the hurricane’s unexpected visit, Nomura – who teaches in the psychology department at Queens College, CUNY, as well as in the psychiatry department of the Icahn School of Medicine at Mount Sinai – had meticulously assembled a research cohort of hundreds of expectant New York mothers. Her investigation, the Stress in Pregnancy study, had aimed since 2009 to explore the potential imprint of prenatal stress on the unborn. Drawing on the evolving field of epigenetics, Nomura had sought to understand the ways in which environmental stressors could spur changes in gene expression, the likes of which were already known to influence the risk of specific childhood neurobehavioural outcomes such as autism, schizophrenia and attention deficit hyperactivity disorder (ADHD).

The storm, however, lent her research a new, urgent question. A subset of Nomura’s cohort of expectant women had been pregnant during Sandy. She wanted to know if the prenatal stress of living through a hurricane – of experiencing something so uniquely catastrophic – acted differentially on the children these mothers were carrying, relative to those children who were born before or conceived after the storm.

More than a decade later, she has her answer. The conclusions reveal a startling disparity: children who were in utero during Sandy bear an inordinately high risk of psychiatric conditions today. For example, girls who were exposed to Sandy prenatally experienced a 20-fold increase in anxiety and a 30-fold increase in depression later in life compared with girls who were not exposed. Boys had 60-fold and 20-fold increased risks of ADHD and conduct disorder, respectively. Children expressed symptoms of the conditions as early as preschool.

Flooding in Lindenhurst, New York, in October 2012, after Hurricane Sandy struck. Photograph: Bruce Bennett/Getty Images

“Our findings are extremely alarming,” the researchers wrote in a 2022 study summarising their initial results. It is not the type of sentence one usually finds in the otherwise measured discussion sections of academic papers.

Yet Nomura and her colleagues’ research also offers a representative page in a new story of the climate crisis: a story that says a changing climate doesn’t just shape the environment in which we live. Rather, the climate crisis spurs visceral and tangible transformations in our very brains. As the world undergoes dramatic environmental shifts, so too does our neurological landscape. Fossil-fuel-induced changes – from rising temperatures to extreme weather to heightened levels of atmospheric carbon dioxide – are altering our brain health, influencing everything from memory and executive function to language, the formation of identity, and even the structure of the brain. The weight of nature is heavy, and it presses inward.

Evidence comes from a variety of fields. Psychologists and behavioural economists have illustrated the ways in which temperature spikes drive surges in everything from domestic violence to online hate speech. Cognitive neuroscientists have charted the routes by which extreme heat and surging CO2 levels impair decision-making, diminish problem-solving abilities, and short-circuit our capacity to learn. Vectors of brain disease, such as ticks and mosquitoes, are seeing their habitable ranges expand as the world warms. And as researchers like Nomura have shown, you don’t need to go to war to suffer from post-traumatic stress disorder: the violence of a hurricane or wildfire is enough. It appears that, due to epigenetic inheritance, you don’t even need to have been born yet.

When it comes to the health effects of the climate crisis, says Burcin Ikiz, a neuroscientist at the mental-health philanthropy organisation the Baszucki Group, “we know what happens in the cardiovascular system; we know what happens in the respiratory system; we know what happens in the immune system. But there’s almost nothing on neurology and brain health.” Ikiz, like Nomura, is one of a growing cadre of neuroscientists seeking to connect the dots between environmental and neurological wellness.

As a cohesive effort, the field – which we might call climatological neuroepidemiology – is in its infancy. But many of the effects catalogued by such researchers feel intuitive.

Residents evacuate Evia, Greece, in 2021, after wildfires hit the island. Photograph: Bloomberg/Getty Images

Perhaps you’ve noticed that when the weather gets a bit muggier, your thinking does the same. That’s no coincidence; it’s a nearly universal phenomenon. During a summer 2016 heatwave in Boston, Harvard epidemiologists showed that college students living in dorms without air conditioning performed standard cognitive tests more slowly than those living with it. In January of this year, Chinese economists noted that students who took mathematics tests on days above 32C looked as if they had lost the equivalent of a quarter of a year of education, relative to test days in the range 22–24C. Researchers estimate that the disparate effects of hot school days – disproportionately felt in poorer school districts without access to air conditioning and home to higher concentrations of non-white students – account for something on the order of 5% of the racial achievement gap in the US.

Cognitive performance is the tip of the melting iceberg. You may have also noticed, for example, your own feelings of aggression on hotter days. You and everyone else – and animals, too. Black widow spiders tend more quickly toward sibling cannibalism in the heat. Rhesus monkeys start more fights with one another. Baseball pitchers are more likely to intentionally hit batters with their pitches as temperatures rise. US Postal Service workers experience roughly 5% more incidents of harassment and discrimination on days above 32C, relative to temperate days.

Neuroscientists point to a variety of routes through which extreme heat can act on behaviour. In 2015, for example, Korean researchers found that heat stress triggers inflammation in the hippocampus of mice, a brain region essential for memory storage. Extreme heat also diminishes neuronal communication in zebrafish, a model organism regularly studied by scientists interested in brain function. In human beings, functional connections between brain areas appear more randomised at higher temperatures. In other words, heat limits the degree to which brain activity appears coordinated. On the aggression front, Finnish researchers noted in 2017 that high temperatures appear to suppress serotonin function, more so among people who had committed violent crimes. For these people, blood levels of a serotonin transporter protein, highly correlated with outside temperatures, could account for nearly 40% of the fluctuations in the country’s rate of violent crime.

Prolonged exposure to heat can activate a multitude of biochemical pathways associated with Alzheimer’s and Parkinson’s. Illustration: Ngadi Smart/The Guardian

“We’re not thinking about any of this,” says Ikiz. “We’re not getting our healthcare systems ready. We’re not doing anything in terms of prevention or protections.”

Ikiz is particularly concerned with the neurodegenerative effects of the climate crisis. In part, that’s because prolonged exposure to heat in its own right – including an increase of a single degree centigrade – can activate a multitude of biochemical pathways associated with neurodegenerative diseases such as Alzheimer’s and Parkinson’s. Air pollution does the same thing. (In rats, such effects are seen after exposure to extreme heat for a mere 15 minutes a day for one week.) Thus, with continued burning of fossil fuels, whether through direct or indirect effects, comes more dementia. Researchers have already illustrated the manners in which dementia-related hospitalisations rise with temperature. Warmer weather worsens the symptoms of neurodegeneration as well.

Prior to her move to philanthropy, Ikiz’s neuroscience research largely focused on the mechanisms underlying the neurodegenerative disease amyotrophic lateral sclerosis (ALS, also known as Lou Gehrig’s disease or motor neurone disease). Today, she points to research suggesting that blue-green algae, blooming with ever-increasing frequency under a changing global climate, releases a potent neurotoxin that offers one of the most compelling causal explanations for the incidence of non-genetic ALS. Epidemiologists have, for example, identified clusters of ALS cases downwind of freshwater lakes prone to blue-green algae blooms.

A supermarket in Long Beach is stripped of water bottles in preparation for Hurricane Sandy. Photograph: Mike Stobe/Getty Images

It’s this flavour of research that worries her the most. Children constitute one of the populations most vulnerable to these risk factors, since such exposures appear to compound cumulatively over one’s life, and neurodegenerative diseases tend to manifest in the later years. “It doesn’t happen acutely,” says Ikiz. “Years pass, and then people get these diseases. That’s actually what really scares me about this whole thing. We are seeing air pollution exposure from wildfires. We’re seeing extreme heat. We’re seeing neurotoxin exposure. We’re in an experiment ourselves, with the brain chronically exposed to multiple toxins.”

Other scientists who have taken note of these chronic exposures resort to similarly dramatic language as that of Nomura and Ikiz. “Hallmarks of Alzheimer disease are evolving relentlessly in metropolitan Mexico City infants, children and young adults,” is part of the title of a recent paper spearheaded by Dr Lilian Calderón-Garcidueñas, a toxicologist who directs the University of Montana’s environmental neuroprevention laboratory. The researchers investigated the contributions of urban air pollution and ozone to biomarkers of neurodegeneration and found physical hallmarks of Alzheimer’s in 202 of the 203 brains they examined, from residents aged 11 months to 40 years old. “Alzheimer’s disease starting in the brainstem of young children and affecting 99.5% of young urbanites is a serious health crisis,” Calderón-Garcidueñas and her colleagues wrote. Indeed.

Flooding in Stonehaven, Aberdeenshire, in 2020. Photograph: Martin Anderson/PA

Such neurodevelopmental challenges – the effects of environmental degradation on the developing and infant brain – loom particularly large, given the climate prognosis. Rat pups exposed in utero to 40C heat miss brain developmental milestones. Heat exposure during neurodevelopment in zebrafish magnifies the toxic effects of lead exposure. In people, early pregnancy exposure to extreme heat is associated with a higher risk of children developing neuropsychiatric conditions such as schizophrenia and anorexia. It is also probable that the ALS-causing neurotoxin can travel in the air.

Of course, these exposures only matter if you make it to an age at which neural rot has a chance to manifest. Neurodegenerative disease mostly makes itself known in middle-aged and elderly people. But, on the other hand, the brain-eating amoeba likely to spread as a result of the climate crisis – infection with which is 97% fatal and typically kills within a week – mostly infects children who swim in lakes. As children do.

A coordinated effort to fully understand and appreciate the neurological costs of the climate crisis does not yet exist. Ikiz is seeking to rectify this. In spring 2024, she will convene the first meeting of a team of neurologists, neuroscientists and planetary scientists, under the banner of the International Neuro Climate Working Group.

Smog hits Mexico City. Photograph: E_Rojas/Getty Images/iStockphoto

The goal of the working group (which, full disclosure, I have been invited to join) is to wrap a collective head around the problem and to recommend treatment practices and policies accordingly, before society finds itself in the midst of overlapping epidemics. The number of people living with Alzheimer’s is expected to triple by 2050, says Ikiz – and that’s without taking the climate crisis into account. “That scares me,” she says. “Because in 2050, we’ll be like: ‘Ah, this is awful. Let’s try to do something.’ But it will be too late for a lot of people.

“I think that’s why it’s really important right now, as evidence is building, as we’re understanding more, to be speaking and raising awareness on these issues,” she says. “Because we don’t want to come to that point of irreversible damage.”

For neuroscientists considering the climate problem, avoiding that point of no return implies investing in resilience research today. But this is not a story of climate anxiety and mental fortitude. “I’m not talking about psychological resilience,” says Nomura. “I’m talking about biological resilience.”

A research agenda for climatological neuroepidemiology would probably bridge multiple fields and scales of analysis. It would merge insights from neurology, neurochemistry, environmental science, cognitive neuroscience and behavioural economics – from molecular dynamics to the individual brain to whole ecosystems. Nomura, for example, wants to understand how external environmental pressures influence brain health and cognitive development; who is most vulnerable to these pressures and when; and which preventive strategies might bolster neurological resilience against climate-induced stressors. Others want to price these stressors, so policymakers can readily integrate them into climate-action cost-benefit analyses.

Storm devastation in Seaside Heights, New Jersey. Photograph: Mike Groll/AP

For Nomura, it all comes back to stress. Under the right conditions, prenatal exposure to stress can be protective, she says. “It’s like an inoculation, right? You’re artificially exposed to something in utero and you become better at handling it – as long as it is not overwhelmingly toxic.” Stress in pregnancy, in moderation, can perhaps help immunise the foetus against the most deleterious effects of stress later in life. “But everybody has a breaking point,” she says.

Identifying these breaking points is a core challenge of Nomura’s work. And it’s a particularly thorny challenge, in that as a matter of both research ethics and atmospheric physics, she and her colleagues can’t just gin up a hurricane and selectively expose expecting mothers to it. “Human research in this field is limited in a way. We cannot run the gold standard of randomised clinical trials,” she says. “We cannot do it. So we have to take advantage of this horrible natural disaster.”

Recently, Nomura and her colleagues have begun to turn their attention to the developmental effects of heat. They will apply methods similar to those they used to study the effects of Hurricane Sandy – establishing natural cohorts and charting the developmental trajectories in which they’re interested.

The work necessarily proceeds slowly, in part because human research is further complicated by the fact that it takes people longer than animals to develop. Rats zoom through infancy and are sexually mature by about six weeks, whereas for humans it takes more than a decade. “That’s a reason this longitudinal study is really important – and a reason why we cannot just get started on the question right now,” says Nomura. “You cannot buy 10 years’ time. You cannot buy 12 years’ time.” You must wait. And so she waits, and she measures, as the waves continue to crash.

Clayton Page Aldern’s book The Weight of Nature, on the effects of climate change on brain health, is published by Allen Lane on 4 April.

Consciousness theory slammed as ‘pseudoscience’ — sparking uproar (Nature)

nature.com

Researchers publicly call out theory that they say is not well supported by science, but that gets undue attention.

Mariana Lenharo

20 September 2023


Some research has focused on how neurons (shown here in a false-colour scanning electron micrograph) are involved in consciousness. Credit: Ted Kinsman/Science Photo Library

A letter, signed by 124 scholars and posted online last week [1], has caused an uproar in the consciousness research community. It claims that a prominent theory describing what makes someone or something conscious — called the integrated information theory (IIT) — should be labelled “pseudoscience”. Since its publication on 15 September in the preprint repository PsyArXiv, the letter has some researchers arguing over the label and others worried it will increase polarization in a field that has grappled with issues of credibility in the past.

“I think it’s inflammatory to describe IIT as pseudoscience,” says neuroscientist Anil Seth, director of the Centre for Consciousness Science at the University of Sussex near Brighton, UK, adding that he disagrees with the label. “IIT is a theory, of course, and therefore may be empirically wrong,” says neuroscientist Christof Koch, a meritorious investigator at the Allen Institute for Brain Science in Seattle, Washington, and a proponent of the theory. But he says that it makes its assumptions — for example, that consciousness has a physical basis and can be mathematically measured — very clear.

There are dozens of theories that seek to understand consciousness — everything that a human or non-human experiences, including what they feel, see and hear — as well as its underlying neural foundations. IIT has often been described as one of the central theories, alongside others, such as global neuronal workspace theory (GNW), higher-order thought theory and recurrent processing theory. It proposes that consciousness emerges from the way information is processed within a ‘system’ (for instance, networks of neurons or computer circuits), and that systems that are more interconnected, or integrated, have higher levels of consciousness.

A growing discomfort

Hakwan Lau, a neuroscientist at Riken Center for Brain Science in Wako, Japan, and one of the authors of the letter, says that some researchers in the consciousness field are uncomfortable with what they perceive as a discrepancy between IIT’s scientific merit and the considerable attention it receives from the popular media because of how it is promoted by advocates. “Has IIT become a leading theory because of academic acceptance first, or is it because of the popular noise that kind of forced the academics to give it acknowledgement?” Lau asks.

Negative feelings towards the theory intensified after it captured headlines in June. Media outlets, including Nature, reported the results of an ‘adversarial’ study that pitted IIT and GNW against one another. The experiments, which included brain scans, didn’t prove or completely disprove either theory, but some researchers found it problematic that IIT was highlighted as a leading theory of consciousness, prompting Lau and his co-authors to draft their letter.

But why label IIT as pseudoscience? Although the letter doesn’t clearly define pseudoscience, Lau notes that a “commonsensical definition” is that pseudoscience refers to “something that is not very scientifically supported, that masquerades as if it is already very scientifically established”. In this sense, he thinks that IIT fits the bill.

Is it testable?

Additionally, Lau says, some of his co-authors think that it’s not possible to empirically test IIT’s core assumptions, which they argue contributes to the theory’s status as pseudoscience.

Seth, who is not a proponent of IIT, although he has worked on related ideas in the past, disagrees. “The core claims are harder to test than other theories because it’s a more ambitious theory,” he says. But there are some predictions stemming from the theory, about neural activity associated with consciousness, for instance, that can be tested, he adds. A 2022 review found 101 empirical studies involving IIT [2].

Liad Mudrik, a neuroscientist at Tel Aviv University, in Israel, who co-led the adversarial study of IIT versus GNW, also defends IIT’s testability at the neural level. “Not only did we test it, we managed to falsify one of its predictions,” she says. “I think many people in the field don’t like IIT, and this is completely fine. Yet it is not clear to me what is the basis for claiming that it is not one of the leading theories.”

The same criticism about a lack of meaningful empirical tests could be made about other theories of consciousness, says Erik Hoel, a neuroscientist and writer who lives on Cape Cod, in Massachusetts, and who is a former student of Giulio Tononi, a neuroscientist at the University of Wisconsin-Madison who is a proponent of IIT. “Everyone who works in the field has to acknowledge that we don’t have perfect brain scans,” he says. “And yet, somehow, IIT is singled out in the letter as this being a problem that’s unique to it.”

Damaging effect

Lau says he doesn’t expect a consensus on the topic. “But I think if it is known that, let’s say, a significant minority of us are willing to [sign our names] that we think it is pseudoscience, knowing that some people may disagree, that’s still a good message.” He hopes that the letter reaches young researchers, policymakers, journal editors and funders. “All of them right now are very easily swayed by the media narrative.”

Mudrik, who emphasizes that she deeply respects the people who signed the letter, some of whom are close collaborators and friends, says that she worries about the effect it will have on the way the consciousness field is perceived. “Consciousness research has been struggling with scepticism from its inception, trying to establish itself as a legitimate scientific field,” she says. “In my opinion, the way to fight such scepticism is by conducting excellent and rigorous research”, rather than by publicly calling out certain people and ideas.

Hoel fears that the letter might discourage the development of other ambitious theories. “The most important thing for me is that we don’t make our hypotheses small and banal in order to avoid being tarred with the pseudoscience label.”