Monthly archive: March 2024

The Terrible Costs of a Phone-Based Childhood (The Atlantic)

theatlantic.com

The environment in which kids grow up today is hostile to human development.

By Jonathan Haidt

Photographs by Maggie Shannon

MARCH 13, 2024




Something went suddenly and horribly wrong for adolescents in the early 2010s. By now you’ve likely seen the statistics: Rates of depression and anxiety in the United States—fairly stable in the 2000s—rose by more than 50 percent in many studies from 2010 to 2019. The suicide rate rose 48 percent for adolescents ages 10 to 19. For girls ages 10 to 14, it rose 131 percent.

The problem was not limited to the U.S.: Similar patterns emerged around the same time in Canada, the U.K., Australia, New Zealand, the Nordic countries, and beyond. By a variety of measures and in a variety of countries, the members of Generation Z (born in and after 1996) are suffering from anxiety, depression, self-harm, and related disorders at levels higher than any other generation for which we have data.

The decline in mental health is just one of many signs that something went awry. Loneliness and friendlessness among American teens began to surge around 2012. Academic achievement went down, too. According to “The Nation’s Report Card,” scores in reading and math began to decline for U.S. students after 2012, reversing decades of slow but generally steady increase. PISA, the major international measure of educational trends, shows that declines in math, reading, and science happened globally, also beginning in the early 2010s.

As the oldest members of Gen Z reach their late 20s, their troubles are carrying over into adulthood. Young adults are dating less, having less sex, and showing less interest in ever having children than prior generations. They are more likely to live with their parents. They were less likely to get jobs as teens, and managers say they are harder to work with. Many of these trends began with earlier generations, but most of them accelerated with Gen Z.

Surveys show that members of Gen Z are shyer and more risk averse than previous generations, too, and risk aversion may make them less ambitious. In an interview last May, OpenAI co-founder Sam Altman and Stripe co-founder Patrick Collison noted that, for the first time since the 1970s, none of Silicon Valley’s preeminent entrepreneurs are under 30. “Something has really gone wrong,” Altman said. In a famously young industry, he was baffled by the sudden absence of great founders in their 20s.

Generations are not monolithic, of course. Many young people are flourishing. Taken as a whole, however, Gen Z is in poor mental health and is lagging behind previous generations on many important metrics. And if a generation is doing poorly––if it is more anxious and depressed and is starting families, careers, and important companies at a substantially lower rate than previous generations––then the sociological and economic consequences will be profound for the entire society.

Number of emergency-department visits for nonfatal self-harm per 100,000 children (source: Centers for Disease Control and Prevention)

What happened in the early 2010s that altered adolescent development and worsened mental health? Theories abound, but the fact that similar trends are found in many countries worldwide means that events and trends that are specific to the United States cannot be the main story.

I think the answer can be stated simply, although the underlying psychology is complex: Those were the years when adolescents in rich countries traded in their flip phones for smartphones and moved much more of their social lives online—particularly onto social-media platforms designed for virality and addiction. Once young people began carrying the entire internet in their pockets, available to them day and night, it altered their daily experiences and developmental pathways across the board. Friendship, dating, sexuality, exercise, sleep, academics, politics, family dynamics, identity—all were affected. Life changed rapidly for younger children, too, as they began to get access to their parents’ smartphones and, later, got their own iPads, laptops, and even smartphones during elementary school.


As a social psychologist who has long studied social and moral development, I have been involved in debates about the effects of digital technology for years. Typically, the scientific questions have been framed somewhat narrowly, to make them easier to address with data. For example, do adolescents who consume more social media have higher levels of depression? Does using a smartphone just before bedtime interfere with sleep? The answer to these questions is usually found to be yes, although the size of the relationship is often statistically small, which has led some researchers to conclude that these new technologies are not responsible for the gigantic increases in mental illness that began in the early 2010s.

But before we can evaluate the evidence on any one potential avenue of harm, we need to step back and ask a broader question: What is childhood––including adolescence––and how did it change when smartphones moved to the center of it? If we take a more holistic view of what childhood is and what young children, tweens, and teens need to do to mature into competent adults, the picture becomes much clearer. Smartphone-based life, it turns out, alters or interferes with a great number of developmental processes.

The intrusion of smartphones and social media is not the only change that has deformed childhood. There’s an important backstory, beginning as long ago as the 1980s, when we started systematically depriving children and adolescents of freedom, unsupervised play, responsibility, and opportunities for risk taking, all of which promote competence, maturity, and mental health. But the change in childhood accelerated in the early 2010s, when an already independence-deprived generation was lured into a new virtual universe that seemed safe to parents but in fact is more dangerous, in many respects, than the physical world.

My claim is that the new phone-based childhood that took shape roughly 12 years ago is making young people sick and blocking their progress to flourishing in adulthood. We need a dramatic cultural correction, and we need it now.

1. The Decline of Play and Independence

Human brains are extraordinarily large compared with those of other primates, and human childhoods are extraordinarily long, too, to give those large brains time to wire up within a particular culture. A child’s brain is already 90 percent of its adult size by about age 6. The next 10 or 15 years are about learning norms and mastering skills—physical, analytical, creative, and social. As children and adolescents seek out experiences and practice a wide variety of behaviors, the synapses and neurons that are used frequently are retained while those that are used less often disappear. Neurons that fire together wire together, as brain researchers say.

Brain development is sometimes said to be “experience-expectant,” because specific parts of the brain show increased plasticity during periods of life when an animal’s brain can “expect” to have certain kinds of experiences. You can see this with baby geese, who will imprint on whatever mother-sized object moves in their vicinity just after they hatch. You can see it with human children, who are able to learn languages quickly and take on the local accent, but only through early puberty; after that, it’s hard to learn a language and sound like a native speaker. There is also some evidence of a sensitive period for cultural learning more generally. Japanese children who spent a few years in California in the 1970s came to feel “American” in their identity and ways of interacting only if they attended American schools for a few years between ages 9 and 15. If they left before age 9, there was no lasting impact. If they didn’t arrive until they were 15, it was too late; they didn’t come to feel American.

Human childhood is an extended cultural apprenticeship with different tasks at different ages all the way through puberty. Once we see it this way, we can identify factors that promote or impede the right kinds of learning at each age. For children of all ages, one of the most powerful drivers of learning is the strong motivation to play. Play is the work of childhood, and all young mammals have the same job: to wire up their brains by playing vigorously and often, practicing the moves and skills they’ll need as adults. Kittens will play-pounce on anything that looks like a mouse tail. Human children will play games such as tag and sharks and minnows, which let them practice both their predator skills and their escaping-from-predator skills. Adolescents will play sports with greater intensity, and will incorporate playfulness into their social interactions—flirting, teasing, and developing inside jokes that bond friends together. Hundreds of studies on young rats, monkeys, and humans show that young mammals want to play, need to play, and end up socially, cognitively, and emotionally impaired when they are deprived of play.

One crucial aspect of play is physical risk taking. Children and adolescents must take risks and fail—often—in environments in which failure is not very costly. This is how they extend their abilities, overcome their fears, learn to estimate risk, and learn to cooperate in order to take on larger challenges later. The ever-present possibility of getting hurt while running around, exploring, play-fighting, or getting into a real conflict with another group adds an element of thrill, and thrilling play appears to be the most effective kind for overcoming childhood anxieties and building social, emotional, and physical competence. The desire for risk and thrill increases in the teen years, when failure might carry more serious consequences. Children of all ages need to choose the risk they are ready for at a given moment. Young people who are deprived of opportunities for risk taking and independent exploration will, on average, develop into more anxious and risk-averse adults.

Human childhood and adolescence evolved outdoors, in a physical world full of dangers and opportunities. Its central activities––play, exploration, and intense socializing––were largely unsupervised by adults, allowing children to make their own choices, resolve their own conflicts, and take care of one another. Shared adventures and shared adversity bound young people together into strong friendship clusters within which they mastered the social dynamics of small groups, which prepared them to master bigger challenges and larger groups later on.

And then we changed childhood.

The changes started slowly in the late 1970s and ’80s, before the arrival of the internet, as many parents in the U.S. grew fearful that their children would be harmed or abducted if left unsupervised. Such crimes have always been extremely rare, but they loomed larger in parents’ minds thanks in part to rising levels of street crime combined with the arrival of cable TV, which enabled round-the-clock coverage of missing-children cases. A general decline in social capital––the degree to which people knew and trusted their neighbors and institutions––exacerbated parental fears. Meanwhile, rising competition for college admissions encouraged more intensive forms of parenting. In the 1990s, American parents began pulling their children indoors or insisting that afternoons be spent in adult-run enrichment activities. Free play, independent exploration, and teen-hangout time declined.

In recent decades, seeing unchaperoned children outdoors has become so novel that when one is spotted in the wild, some adults feel it is their duty to call the police. In 2015, the Pew Research Center found that parents, on average, believed that children should be at least 10 years old to play unsupervised in front of their house, and that kids should be 14 before being allowed to go unsupervised to a public park. Most of these same parents had enjoyed joyous and unsupervised outdoor play by the age of 7 or 8.

But overprotection is only part of the story. The transition away from a more independent childhood was facilitated by steady improvements in digital technology, which made it easier and more inviting for young people to spend a lot more time at home, indoors, and alone in their rooms. Eventually, tech companies got access to children 24/7. They developed exciting virtual activities, engineered for “engagement,” that are nothing like the real-world experiences young brains evolved to expect.


2. The Virtual World Arrives in Two Waves

The internet, which now dominates the lives of young people, arrived in two waves of linked technologies. The first one did little harm to Millennials. The second one swallowed Gen Z whole.

The first wave came ashore in the 1990s with the arrival of dial-up internet access, which made personal computers good for something beyond word processing and basic games. By 2003, 55 percent of American households had a computer with (slow) internet access. Rates of adolescent depression, loneliness, and other measures of poor mental health did not rise in this first wave. If anything, they went down a bit. Millennial teens (born 1981 through 1995), who were the first to go through puberty with access to the internet, were psychologically healthier and happier, on average, than their older siblings or parents in Generation X (born 1965 through 1980).

The second wave began to rise in the 2000s, though its full force didn’t hit until the early 2010s. It began rather innocently with the introduction of social-media platforms that helped people connect with their friends. Posting and sharing content became much easier with sites such as Friendster (launched in 2003), Myspace (2003), and Facebook (2004).

Teens embraced social media soon after it came out, but the time they could spend on these sites was limited in those early years because the sites could only be accessed from a computer, often the family computer in the living room. Young people couldn’t access social media (and the rest of the internet) from the school bus, during class time, or while hanging out with friends outdoors. Many teens in the early-to-mid-2000s had cellphones, but these were basic phones (many of them flip phones) that had no internet access. Typing on them was difficult––they had only number keys. Basic phones were tools that helped Millennials meet up with one another in person or talk with each other one-on-one. I have seen no evidence to suggest that basic cellphones harmed the mental health of Millennials.

It was not until the introduction of the iPhone (2007), the App Store (2008), and high-speed internet (which reached 50 percent of American homes in 2007)—and the corresponding pivot to mobile made by many providers of social media, video games, and porn—that it became possible for adolescents to spend nearly every waking moment online. The extraordinary synergy among these innovations was what powered the second technological wave. In 2011, only 23 percent of teens had a smartphone. By 2015, that number had risen to 73 percent, and a quarter of teens said they were online “almost constantly.” Their younger siblings in elementary school didn’t usually have their own smartphones, but after its release in 2010, the iPad quickly became a staple of young children’s daily lives. It was in this brief period, from 2010 to 2015, that childhood in America (and many other countries) was rewired into a form that was more sedentary, solitary, virtual, and incompatible with healthy human development.

3. Techno-optimism and the Birth of the Phone-Based Childhood

The phone-based childhood created by that second wave—including not just smartphones themselves, but all manner of internet-connected devices, such as tablets, laptops, video-game consoles, and smartwatches—arrived near the end of a period of enormous optimism about digital technology. The internet came into our lives in the mid-1990s, soon after the fall of the Soviet Union. By the end of that decade, it was widely thought that the web would be an ally of democracy and a slayer of tyrants. When people are connected to each other, and to all the information in the world, how could any dictator keep them down?

In the 2000s, Silicon Valley and its world-changing inventions were a source of pride and excitement in America. Smart and ambitious young people around the world wanted to move to the West Coast to be part of the digital revolution. Tech-company founders such as Steve Jobs and Sergey Brin were lauded as gods, or at least as modern Prometheans, bringing humans godlike powers. The Arab Spring bloomed in 2011 with the help of decentralized social platforms, including Twitter and Facebook. When pundits and entrepreneurs talked about the power of social media to transform society, it didn’t sound like a dark prophecy.

You have to put yourself back in this heady time to understand why adults acquiesced so readily to the rapid transformation of childhood. Many parents had concerns, even then, about what their children were doing online, especially because of the internet’s ability to put children in contact with strangers. But there was also a lot of excitement about the upsides of this new digital world. If computers and the internet were the vanguards of progress, and if young people––widely referred to as “digital natives”––were going to live their lives entwined with these technologies, then why not give them a head start? I remember how exciting it was to see my 2-year-old son master the touch-and-swipe interface of my first iPhone in 2008. I thought I could see his neurons being woven together faster as a result of the stimulation it brought to his brain, compared to the passivity of watching television or the slowness of building a block tower. I thought I could see his future job prospects improving.

Touchscreen devices were also a godsend for harried parents. Many of us discovered that we could have peace at a restaurant, on a long car trip, or at home while making dinner or replying to emails if we just gave our children what they most wanted: our smartphones and tablets. We saw that everyone else was doing it and figured it must be okay.

It was the same for older children, desperate to join their friends on social-media platforms, where the minimum age to open an account was set by law to 13, even though no research had been done to establish the safety of these products for minors. Because the platforms did nothing (and still do nothing) to verify the stated age of new-account applicants, any 10-year-old could open multiple accounts without parental permission or knowledge, and many did. Facebook and later Instagram became places where many sixth and seventh graders were hanging out and socializing. If parents did find out about these accounts, it was too late. Nobody wanted their child to be isolated and alone, so parents rarely forced their children to shut down their accounts.

We had no idea what we were doing.

4. The High Cost of a Phone-Based Childhood

In Walden, his 1854 reflection on simple living, Henry David Thoreau wrote, “The cost of a thing is the amount of … life which is required to be exchanged for it, immediately or in the long run.” It’s an elegant formulation of what economists would later call the opportunity cost of any choice—all of the things you can no longer do with your money and time once you’ve committed them to something else. So it’s important that we grasp just how much of a young person’s day is now taken up by their devices.

The numbers are hard to believe. The most recent Gallup data show that American teens spend about five hours a day just on social-media platforms (including watching videos on TikTok and YouTube). Add in all the other phone- and screen-based activities, and the number rises to somewhere between seven and nine hours a day, on average. The numbers are even higher in single-parent and low-income families, and among Black, Hispanic, and Native American families.

These very high numbers do not include time spent in front of screens for school or homework, nor do they include all the time adolescents spend paying only partial attention to events in the real world while thinking about what they’re missing on social media or waiting for their phones to ping. Pew reports that in 2022, one-third of teens said they were on one of the major social-media sites “almost constantly,” and nearly half said the same of the internet in general. For these heavy users, nearly every waking hour is an hour absorbed, in full or in part, by their devices.


In Thoreau’s terms, how much of life is exchanged for all this screen time? Arguably, most of it. Everything else in an adolescent’s day must get squeezed down or eliminated entirely to make room for the vast amount of content that is consumed, and for the hundreds of “friends,” “followers,” and other network connections that must be serviced with texts, posts, comments, likes, snaps, and direct messages. I recently surveyed my students at NYU, and most of them reported that the very first thing they do when they open their eyes in the morning is check their texts, direct messages, and social-media feeds. It’s also the last thing they do before they close their eyes at night. And it’s a lot of what they do in between.

The amount of time that adolescents spend sleeping declined in the early 2010s, and many studies tie sleep loss directly to the use of devices around bedtime, particularly when they’re used to scroll through social media. Exercise declined, too, which is unfortunate because exercise, like sleep, improves both mental and physical health. Book reading has been declining for decades, pushed aside by digital alternatives, but the decline, like so much else, sped up in the early 2010s. With passive entertainment always available, adolescent minds likely wander less than they used to; contemplation and imagination might be placed on the list of things winnowed down or crowded out.

But perhaps the most devastating cost of the new phone-based childhood was the collapse of time spent interacting with other people face-to-face. A study of how Americans spend their time found that, before 2010, young people (ages 15 to 24) reported spending far more time with their friends (about two hours a day, on average, not counting time together at school) than did older people (who spent just 30 to 60 minutes with friends). Time with friends began decreasing for young people in the 2000s, but the drop accelerated in the 2010s, while it barely changed for older people. By 2019, young people’s time with friends had dropped to just 67 minutes a day. It turns out that Gen Z had been socially distancing for many years and had mostly completed the project by the time COVID-19 struck.

You might question the importance of this decline. After all, isn’t much of this online time spent interacting with friends through texting, social media, and multiplayer video games? Isn’t that just as good?

Some of it surely is, and virtual interactions offer unique benefits too, especially for young people who are geographically or socially isolated. But in general, the virtual world lacks many of the features that make human interactions in the real world nutritious, as we might say, for physical, social, and emotional development. In particular, real-world relationships and social interactions are characterized by four features—typical for hundreds of thousands of years—that online interactions either distort or erase.

First, real-world interactions are embodied, meaning that we use our hands and facial expressions to communicate, and we learn to respond to the body language of others. Virtual interactions, in contrast, mostly rely on language alone. No matter how many emojis are offered as compensation, the elimination of communication channels for which we have eons of evolutionary programming is likely to produce adults who are less comfortable and less skilled at interacting in person.

Second, real-world interactions are synchronous; they happen at the same time. As a result, we learn subtle cues about timing and conversational turn taking. Synchronous interactions make us feel closer to the other person because that’s what getting “in sync” does. Texts, posts, and many other virtual interactions lack synchrony. There is less real laughter, more room for misinterpretation, and more stress after a comment that gets no immediate response.

Third, real-world interactions primarily involve one-to-one communication, or sometimes one-to-several. But many virtual communications are broadcast to a potentially huge audience. Online, each person can engage in dozens of asynchronous interactions in parallel, which interferes with the depth achieved in all of them. The sender’s motivations are different, too: With a large audience, one’s reputation is always on the line; an error or poor performance can damage social standing with large numbers of peers. These communications thus tend to be more performative and anxiety-inducing than one-to-one conversations.

Finally, real-world interactions usually take place within communities that have a high bar for entry and exit, so people are strongly motivated to invest in relationships and repair rifts when they happen. But in many virtual networks, people can easily block others or quit when they are displeased. Relationships within such networks are usually more disposable.

These unsatisfying and anxiety-producing features of life online should be recognizable to most adults. Online interactions can bring out antisocial behavior that people would never display in their offline communities. But if life online takes a toll on adults, just imagine what it does to adolescents in the early years of puberty, when their “experience expectant” brains are rewiring based on feedback from their social interactions.

Kids going through puberty online are likely to experience far more social comparison, self-consciousness, public shaming, and chronic anxiety than adolescents in previous generations, which could potentially set developing brains into a habitual state of defensiveness. The brain contains systems that are specialized for approach (when opportunities beckon) and withdrawal (when threats appear or seem likely). People can be in what we might call “discover mode” or “defend mode” at any moment, but generally not both. The two systems together form a mechanism for quickly adapting to changing conditions, like a thermostat that can activate either a heating system or a cooling system as the temperature fluctuates. Some people’s internal thermostats are generally set to discover mode, and they flip into defend mode only when clear threats arise. These people tend to see the world as full of opportunities. They are happier and less anxious. Other people’s internal thermostats are generally set to defend mode, and they flip into discover mode only when they feel unusually safe. They tend to see the world as full of threats and are more prone to anxiety and depressive disorders.

Percentage of U.S. college freshmen reporting various kinds of disabilities and disorders (source: Higher Education Research Institute)

A simple way to understand the differences between Gen Z and previous generations is that people born in and after 1996 have internal thermostats that were shifted toward defend mode. This is why life on college campuses changed so suddenly when Gen Z arrived, beginning around 2014. Students began requesting “safe spaces” and trigger warnings. They were highly sensitive to “microaggressions” and sometimes claimed that words were “violence.” These trends mystified those of us in older generations at the time, but in hindsight, it all makes sense. Gen Z students found words, ideas, and ambiguous social encounters more threatening than had previous generations of students because we had fundamentally altered their psychological development.

5. So Many Harms

The debate around adolescents’ use of smartphones and social media typically revolves around mental health, and understandably so. But the harms that have resulted from transforming childhood so suddenly and heedlessly go far beyond mental health. I’ve touched on some of them—social awkwardness, reduced self-confidence, and a more sedentary childhood. Here are three additional harms.

Fragmented Attention, Disrupted Learning

Staying on task while sitting at a computer is hard enough for an adult with a fully developed prefrontal cortex. It is far more difficult for adolescents in front of their laptops trying to do homework. They are probably less intrinsically motivated to stay on task. They’re certainly less able, given their undeveloped prefrontal cortex, and hence it’s easy for any company with an app to lure them away with an offer of social validation or entertainment. Their phones are pinging constantly—one study found that the typical adolescent now gets 237 notifications a day, roughly 15 every waking hour. Sustained attention is essential for doing almost anything big, creative, or valuable, yet young people find their attention chopped up into little bits by notifications offering the possibility of high-pleasure, low-effort digital experiences.

It even happens in the classroom. Studies confirm that when students have access to their phones during class time, they use them, especially for texting and checking social media, and their grades and learning suffer. This might explain why benchmark test scores began to decline in the U.S. and around the world in the early 2010s—well before the pandemic hit.

Addiction and Social Withdrawal

The neural basis of behavioral addiction to social media or video games is not exactly the same as chemical addiction to cocaine or opioids. Nonetheless, they all involve abnormally heavy and sustained activation of dopamine neurons and reward pathways. Over time, the brain adapts to these high levels of dopamine; when the child is not engaged in digital activity, their brain doesn’t have enough dopamine, and the child experiences withdrawal symptoms. These generally include anxiety, insomnia, and intense irritability. Kids with these kinds of behavioral addictions often become surly and aggressive, and withdraw from their families into their bedrooms and devices.

Social-media and gaming platforms were designed to hook users. How successful are they? How many kids suffer from digital addictions?

The main addiction risks for boys seem to be video games and porn. “Internet gaming disorder,” which was added to the main diagnosis manual of psychiatry in 2013 as a condition for further study, describes “significant impairment or distress” in several aspects of life, along with many hallmarks of addiction, including an inability to reduce usage despite attempts to do so. Estimates for the prevalence of IGD range from 7 to 15 percent among adolescent boys and young men. As for porn, a nationally representative survey of American adults published in 2019 found that 7 percent of American men agreed or strongly agreed with the statement “I am addicted to pornography”—and the rates were higher for the youngest men.

Girls have much lower rates of addiction to video games and porn, but they use social media more intensely than boys do. A study of teens in 29 nations found that between 5 and 15 percent of adolescents engage in what is called “problematic social media use,” which includes symptoms such as preoccupation, withdrawal symptoms, neglect of other areas of life, and lying to parents and friends about time spent on social media. That study did not break down results by gender, but many others have found that rates of “problematic use” are higher for girls.

I don’t want to overstate the risks: Most teens do not become addicted to their phones and video games. But across multiple studies and across genders, rates of problematic use come out in the ballpark of 5 to 15 percent. Is there any other consumer product that parents would let their children use relatively freely if they knew that something like one in 10 kids would end up with a pattern of habitual and compulsive use that disrupted various domains of life and looked a lot like an addiction?

The Decay of Wisdom and the Loss of Meaning

During that crucial sensitive period for cultural learning, from roughly ages 9 through 15, we should be especially thoughtful about who is socializing our children for adulthood. Instead, that’s when most kids get their first smartphone and sign themselves up (with or without parental permission) to consume rivers of content from random strangers. Much of that content is produced by other adolescents, in blocks of a few minutes or a few seconds.

This rerouting of enculturating content has created a generation that is largely cut off from older generations and, to some extent, from the accumulated wisdom of humankind, including knowledge about how to live a flourishing life. Adolescents spend less time steeped in their local or national culture. They are coming of age in a confusing, placeless, ahistorical maelstrom of 30-second stories curated by algorithms designed to mesmerize them. Without solid knowledge of the past and the filtering of good ideas from bad––a process that plays out over many generations––young people will be more prone to believe whatever terrible ideas become popular around them, which might explain why videos showing young people reacting positively to Osama bin Laden’s thoughts about America were trending on TikTok last fall.

All this is made worse by the fact that so much of digital public life is an unending supply of micro dramas about somebody somewhere in our country of 340 million people who did something that can fuel an outrage cycle, only to be pushed aside by the next. It doesn’t add up to anything and leaves behind only a distorted sense of human nature and affairs.

When our public life becomes fragmented, ephemeral, and incomprehensible, it is a recipe for anomie, or normlessness. The great French sociologist Émile Durkheim showed long ago that a society that fails to bind its people together with some shared sense of sacredness and common respect for rules and norms is not a society of great individual freedom; it is, rather, a place where disoriented individuals have difficulty setting goals and exerting themselves to achieve them. Durkheim argued that anomie was a major driver of suicide rates in European countries. Modern scholars continue to draw on his work to understand suicide rates today.

Percentage of U.S. high-school seniors who agreed with the statement “Life often seems meaningless.” (Source: Monitoring the Future)

Durkheim’s observations are crucial for understanding what happened in the early 2010s. A long-running survey of American teens found that, from 1990 to 2010, high-school seniors became slightly less likely to agree with statements such as “Life often feels meaningless.” But as soon as they adopted a phone-based life and many began to live in the whirlpool of social media, where no stability can be found, every measure of despair increased. From 2010 to 2019, the number who agreed that their lives felt “meaningless” increased by about 70 percent, to more than one in five.

6. Young People Don’t Like Their Phone-Based Lives

How can I be confident that the epidemic of adolescent mental illness was kicked off by the arrival of the phone-based childhood? Skeptics point to other events as possible culprits, including the 2008 global financial crisis, global warming, the 2012 Sandy Hook school shooting and the subsequent active-shooter drills, rising academic pressures, and the opioid epidemic. But while these events might have been contributing factors in some countries, none can explain both the timing and international scope of the disaster.

An additional source of evidence comes from Gen Z itself. With all the talk of regulating social media, raising age limits, and getting phones out of schools, you might expect to find many members of Gen Z writing and speaking out in opposition. I’ve looked for such arguments and found hardly any. In contrast, many young adults tell stories of devastation.

Freya India, a 24-year-old British essayist who writes about girls, explains how social-media sites carry girls off to unhealthy places: “It seems like your child is simply watching some makeup tutorials, following some mental health influencers, or experimenting with their identity. But let me tell you: they are on a conveyor belt to someplace bad. Whatever insecurity or vulnerability they are struggling with, they will be pushed further and further into it.” She continues:

Gen Z were the guinea pigs in this uncontrolled global social experiment. We were the first to have our vulnerabilities and insecurities fed into a machine that magnified and refracted them back at us, all the time, before we had any sense of who we were. We didn’t just grow up with algorithms. They raised us. They rearranged our faces. Shaped our identities. Convinced us we were sick.

Rikki Schlott, a 23-year-old American journalist and co-author of The Canceling of the American Mind, writes,

The day-to-day life of a typical teen or tween today would be unrecognizable to someone who came of age before the smartphone arrived. Zoomers are spending an average of 9 hours daily in this screen-time doom loop—desperate to forget the gaping holes they’re bleeding out of, even if just for … 9 hours a day. Uncomfortable silence could be time to ponder why they’re so miserable in the first place. Drowning it out with algorithmic white noise is far easier.

A 27-year-old man who spent his adolescent years addicted (his word) to video games and pornography sent me this reflection on what that did to him:

I missed out on a lot of stuff in life—a lot of socialization. I feel the effects now: meeting new people, talking to people. I feel that my interactions are not as smooth and fluid as I want. My knowledge of the world (geography, politics, etc.) is lacking. I didn’t spend time having conversations or learning about sports. I often feel like a hollow operating system.

Or consider what Facebook found in a research project involving focus groups of young people, revealed in 2021 by the whistleblower Frances Haugen: “Teens blame Instagram for increases in the rates of anxiety and depression among teens,” an internal document said. “This reaction was unprompted and consistent across all groups.”

How can it be that an entire generation is hooked on consumer products that so few praise and so many ultimately regret using? Because smartphones and especially social media have put members of Gen Z and their parents into a series of collective-action traps. Once you understand the dynamics of these traps, the escape routes become clear.


7. Collective-Action Problems

Social-media companies such as Meta, TikTok, and Snap are often compared to tobacco companies, but that’s not really fair to the tobacco industry. It’s true that companies in both industries marketed harmful products to children and tweaked their products for maximum customer retention (that is, addiction), but there’s a big difference: Teens could and did choose, in large numbers, not to smoke. Even at the peak of teen cigarette use, in 1997, nearly two-thirds of high-school students did not smoke.

Social media, in contrast, applies a lot more pressure on nonusers, at a much younger age and in a more insidious way. Once a few students in any middle school lie about their age and open accounts at age 11 or 12, they start posting photos and comments about themselves and other students. Drama ensues. The pressure on everyone else to join becomes intense. Even a girl who knows, consciously, that Instagram can foster beauty obsession, anxiety, and eating disorders might sooner take those risks than accept the seeming certainty of being out of the loop, clueless, and excluded. And indeed, if she resists while most of her classmates do not, she might, in fact, be marginalized, which puts her at risk for anxiety and depression, though via a different pathway than the one taken by those who use social media heavily. In this way, social media accomplishes a remarkable feat: It even harms adolescents who do not use it.

A recent study led by the University of Chicago economist Leonardo Bursztyn captured the dynamics of the social-media trap precisely. The researchers recruited more than 1,000 college students and asked them how much they’d need to be paid to deactivate their accounts on either Instagram or TikTok for four weeks. That’s a standard economist’s question to try to compute the net value of a product to society. On average, students said they’d need to be paid roughly $50 ($59 for TikTok, $47 for Instagram) to deactivate whichever platform they were asked about. Then the experimenters told the students that they were going to try to get most of the others in their school to deactivate that same platform, offering to pay them to do so as well, and asked, Now how much would you have to be paid to deactivate, if most others did so? The answer, on average, was less than zero. In each case, most students were willing to pay to have that happen.

Social media is all about network effects. Most students are only on it because everyone else is too. Most of them would prefer that nobody be on these platforms. Later in the study, students were asked directly, “Would you prefer to live in a world without Instagram [or TikTok]?” A majority of students said yes––58 percent for each app.

This is the textbook definition of what social scientists call a collective-action problem. It’s what happens when a group would be better off if everyone in the group took a particular action, but each actor is deterred from acting, because unless the others do the same, the personal cost outweighs the benefit. Fishermen considering limiting their catch to avoid wiping out the local fish population are caught in this same kind of trap. If no one else does it too, they just lose profit.
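To see the structure of this trap in miniature, here is an illustrative sketch in Python. The payoff numbers are invented for illustration; only the logic mirrors the argument above: the value of being on a platform rises with how many peers are on it, nonusers pay a growing exclusion cost, and so the all-on outcome sustains itself even though everyone would prefer the all-off one.

def payoff(uses_platform: bool, peer_share: float) -> float:
    """Hypothetical payoff to one student, given the share of peers on the platform."""
    if not uses_platform:
        # Nonusers pay a social-exclusion cost that grows with peer adoption.
        return -30.0 * peer_share
    # Users get network value that grows with peer adoption, minus a fixed personal cost.
    return 60.0 * peer_share - 70.0

for peer_share in (1.0, 0.0):
    print(f"peers on platform: {peer_share:.0%}  "
          f"use it: {payoff(True, peer_share):+.0f}  "
          f"abstain: {payoff(False, peer_share):+.0f}")

# With everyone else on (peer_share = 1.0), using the platform (-10) beats abstaining
# alone (-30), so each student stays -- yet universal deactivation (0 each) would leave
# everyone better off than universal use (-10 each). No one gains by quitting unless
# most others quit too, which is why students in the study would pay to have everyone's
# accounts deactivated but demanded payment to deactivate alone.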

Cigarettes trapped individual smokers with a biological addiction. Social media has trapped an entire generation in a collective-action problem. Early app developers deliberately and knowingly exploited the psychological weaknesses and insecurities of young people to pressure them to consume a product that, upon reflection, many wish they could use less, or not at all.

8. Four Norms to Break Four Traps

Young people and their parents are stuck in at least four collective-action traps. Each is hard to escape for an individual family, but escape becomes much easier if families, schools, and communities coordinate and act together. Here are four norms that would roll back the phone-based childhood. I believe that any community that adopts all four will see substantial improvements in youth mental health within two years.

No smartphones before high school 

The trap here is that each child thinks they need a smartphone because “everyone else” has one, and many parents give in because they don’t want their child to feel excluded. But if no one else had a smartphone—or even if, say, only half of the child’s sixth-grade class had one—parents would feel more comfortable providing a basic flip phone (or no phone at all). Delaying round-the-clock internet access until ninth grade (around age 14) as a national or community norm would help to protect adolescents during the very vulnerable first few years of puberty. According to a 2022 British study, these are the years when social-media use is most correlated with poor mental health. Family policies about tablets, laptops, and video-game consoles should be aligned with smartphone restrictions to prevent overuse of other screen activities.

No social media before 16

The trap here, as with smartphones, is that each adolescent feels a strong need to open accounts on TikTok, Instagram, Snapchat, and other platforms primarily because that’s where most of their peers are posting and gossiping. But if the majority of adolescents were not on these accounts until they were 16, families and adolescents could more easily resist the pressure to sign up. The delay would not mean that kids younger than 16 could never watch videos on TikTok or YouTube—only that they could not open accounts, give away their data, post their own content, and let algorithms get to know them and their preferences.

Phone-free schools

Most schools claim that they ban phones, but this usually just means that students aren’t supposed to take their phone out of their pocket during class. Research shows that most students do use their phones during class time. They also use them during lunchtime, free periods, and breaks between classes––times when students could and should be interacting with their classmates face-to-face. The only way to get students’ minds off their phones during the school day is to require all students to put their phones (and other devices that can send or receive texts) into a phone locker or locked pouch at the start of the day. Schools that have gone phone-free always seem to report that it has improved the culture, making students more attentive in class and more interactive with one another. Published studies back them up.

More independence, free play, and responsibility in the real world

Many parents are afraid to give their children the level of independence and responsibility they themselves enjoyed when they were young, even though rates of homicide, drunk driving, and other physical threats to children are way down in recent decades. Part of the fear comes from the fact that parents look at each other to determine what is normal and therefore safe, and they see few examples of families acting as if a 9-year-old can be trusted to walk to a store without a chaperone. But if many parents started sending their children out to play or run errands, then the norms of what is safe and accepted would change quickly. So would ideas about what constitutes “good parenting.” And if more parents trusted their children with more responsibility––for example, by asking their kids to do more to help out, or to care for others––then the pervasive sense of uselessness now found in surveys of high-school students might begin to dissipate.

It would be a mistake to overlook this fourth norm. If parents don’t replace screen time with real-world experiences involving friends and independent activity, then banning devices will feel like deprivation, not the opening up of a world of opportunities.

The main reason the phone-based childhood is so harmful is that it pushes aside everything else. Smartphones are experience blockers. Our ultimate goal should not be to remove screens entirely, nor should it be to return childhood to exactly the way it was in 1960. Rather, it should be to create a version of childhood and adolescence that keeps young people anchored in the real world while flourishing in the digital age.

9. What Are We Waiting For?

An essential function of government is to solve collective-action problems. Congress could solve or help solve the ones I’ve highlighted—for instance, by raising the age of “internet adulthood” to 16 and requiring tech companies to keep underage children off their sites.

In recent decades, however, Congress has not been good at addressing public concerns when the solutions would displease a powerful and deep-pocketed industry. Governors and state legislators have been much more effective, and their successes might let us evaluate how well various reforms work. But the bottom line is that to change norms, we’re going to need to do most of the work ourselves, in neighborhood groups, schools, and other communities.

There are now hundreds of organizations––most of them started by mothers who saw what smartphones had done to their children––that are working to roll back the phone-based childhood or promote a more independent, real-world childhood. (I have assembled a list of many of them.) One that I co-founded, at LetGrow.org, suggests a variety of simple programs for parents or schools, such as play club (schools keep the playground open at least one day a week before or after school, and kids sign up for phone-free, mixed-age, unstructured play as a regular weekly activity) and the Let Grow Experience (a series of homework assignments in which students––with their parents’ consent––choose something to do on their own that they’ve never done before, such as walk the dog, climb a tree, walk to a store, or cook dinner).

Even without the help of organizations, parents could break their families out of collective-action traps if they coordinated with the parents of their children’s friends. Together they could create common smartphone rules and organize unsupervised play sessions or encourage hangouts at a home, park, or shopping mall.


Parents are fed up with what childhood has become. Many are tired of having daily arguments about technologies that were designed to grab hold of their children’s attention and not let go. But the phone-based childhood is not inevitable.

The four norms I have proposed cost almost nothing to implement, they cause no clear harm to anyone, and while they could be supported by new legislation, they can be instilled even without it. We can begin implementing all of them right away, this year, especially in communities with good cooperation between schools and parents. A single memo from a principal asking parents to delay smartphones and social media, in support of the school’s effort to improve mental health by going phone free, would catalyze collective action and reset the community’s norms.

We didn’t know what we were doing in the early 2010s. Now we do. It’s time to end the phone-based childhood.


This article is adapted from Jonathan Haidt’s forthcoming book, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness.

‘Everybody has a breaking point’: how the climate crisis affects our brains (Guardian)

Researchers measuring the effect of Hurricane Sandy on children in utero at the time reported: ‘Our findings are extremely alarming.’ Illustration: Ngadi Smart/The Guardian

Are growing rates of anxiety, depression, ADHD, PTSD, Alzheimer’s and motor neurone disease related to rising temperatures and other extreme environmental changes?

Original article

Clayton Page Aldern

Wed 27 Mar 2024 05.00 GMT

In late October 2012, a category 3 hurricane howled into New York City with a force that would etch its name into the annals of history. Superstorm Sandy transformed the city, inflicting more than $60bn in damage, killing dozens, and forcing 6,500 patients to be evacuated from hospitals and nursing homes. Yet in the case of one cognitive neuroscientist, the storm presented, darkly, an opportunity.

Yoko Nomura had found herself at the centre of a natural experiment. Prior to the hurricane’s unexpected visit, Nomura – who teaches in the psychology department at Queens College, CUNY, as well as in the psychiatry department of the Icahn School of Medicine at Mount Sinai – had meticulously assembled a research cohort of hundreds of expectant New York mothers. Her investigation, the Stress in Pregnancy study, had aimed since 2009 to explore the potential imprint of prenatal stress on the unborn. Drawing on the evolving field of epigenetics, Nomura had sought to understand the ways in which environmental stressors could spur changes in gene expression, the likes of which were already known to influence the risk of specific childhood neurobehavioural outcomes such as autism, schizophrenia and attention deficit hyperactivity disorder (ADHD).

The storm, however, lent her research a new, urgent question. A subset of Nomura’s cohort of expectant women had been pregnant during Sandy. She wanted to know if the prenatal stress of living through a hurricane – of experiencing something so uniquely catastrophic – acted differentially on the children these mothers were carrying, relative to those children who were born before or conceived after the storm.

More than a decade later, she has her answer. The conclusions reveal a startling disparity: children who were in utero during Sandy bear an inordinately high risk of psychiatric conditions today. For example, girls who were exposed to Sandy prenatally experienced a 20-fold increase in anxiety and a 30-fold increase in depression later in life compared with girls who were not exposed. Boys had 60-fold and 20-fold increased risks of ADHD and conduct disorder, respectively. Children expressed symptoms of the conditions as early as preschool.

Flooding in Lindenhurst, New York, in October 2012, after Hurricane Sandy struck. Photograph: Bruce Bennett/Getty Images

“Our findings are extremely alarming,” the researchers wrote in a 2022 study summarising their initial results. It is not the type of sentence one usually finds in the otherwise measured discussion sections of academic papers.

Yet Nomura and her colleagues’ research also offers a representative page in a new story of the climate crisis: a story that says a changing climate doesn’t just shape the environment in which we live. Rather, the climate crisis spurs visceral and tangible transformations in our very brains. As the world undergoes dramatic environmental shifts, so too does our neurological landscape. Fossil-fuel-induced changes – from rising temperatures to extreme weather to heightened levels of atmospheric carbon dioxide – are altering our brain health, influencing everything from memory and executive function to language, the formation of identity, and even the structure of the brain. The weight of nature is heavy, and it presses inward.

Evidence comes from a variety of fields. Psychologists and behavioural economists have illustrated the ways in which temperature spikes drive surges in everything from domestic violence to online hate speech. Cognitive neuroscientists have charted the routes by which extreme heat and surging CO2 levels impair decision-making, diminish problem-solving abilities, and short-circuit our capacity to learn. Vectors of brain disease, such as ticks and mosquitoes, are seeing their habitable ranges expand as the world warms. And as researchers like Nomura have shown, you don’t need to go to war to suffer from post-traumatic stress disorder: the violence of a hurricane or wildfire is enough. It appears that, due to epigenetic inheritance, you don’t even need to have been born yet.

When it comes to the health effects of the climate crisis, says Burcin Ikiz, a neuroscientist at the mental-health philanthropy organisation the Baszucki Group, “we know what happens in the cardiovascular system; we know what happens in the respiratory system; we know what happens in the immune system. But there’s almost nothing on neurology and brain health.” Ikiz, like Nomura, is one of a growing cadre of neuroscientists seeking to connect the dots between environmental and neurological wellness.

As a cohesive effort, the field – which we might call climatological neuroepidemiology – is in its infancy. But many of the effects catalogued by such researchers feel intuitive.

Residents evacuate Evia, Greece, in 2021, after wildfires hit the island. Photograph: Bloomberg/Getty Images

Perhaps you’ve noticed that when the weather gets a bit muggier, your thinking does the same. That’s no coincidence; it’s a nearly universal phenomenon. During a summer 2016 heatwave in Boston, Harvard epidemiologists showed that college students living in dorms without air conditioning performed standard cognitive tests more slowly than those living with it. In January of this year, Chinese economists noted that students who took mathematics tests on days above 32C looked as if they had lost the equivalent of a quarter of a year of education, relative to test days in the range 22–24C. Researchers estimate that the disparate effects of hot school days – disproportionately felt in poorer school districts without access to air conditioning and home to higher concentrations of non-white students – account for something on the order of 5% of the racial achievement gap in the US.

Cognitive performance is the tip of the melting iceberg. You may have also noticed, for example, your own feelings of aggression on hotter days. You and everyone else – and animals, too. Black widow spiders tend more quickly toward sibling cannibalism in the heat. Rhesus monkeys start more fights with one another. Baseball pitchers are more likely to intentionally hit batters with their pitches as temperatures rise. US Postal Service workers experience roughly 5% more incidents of harassment and discrimination on days above 32C, relative to temperate days.

Neuroscientists point to a variety of routes through which extreme heat can act on behaviour. In 2015, for example, Korean researchers found that heat stress triggers inflammation in the hippocampus of mice, a brain region essential for memory storage. Extreme heat also diminishes neuronal communication in zebrafish, a model organism regularly studied by scientists interested in brain function. In human beings, functional connections between brain areas appear more randomised at higher temperatures. In other words, heat limits the degree to which brain activity appears coordinated. On the aggression front, Finnish researchers noted in 2017 that high temperatures appear to suppress serotonin function, more so among people who had committed violent crimes. For these people, blood levels of a serotonin transporter protein, highly correlated with outside temperatures, could account for nearly 40% of the fluctuations in the country’s rate of violent crime.

Illustration of a person sweating in an extreme heat scenario
Prolonged exposure to heat can activate a multitude of biochemical pathways associated with Alzheimer’s and Parkinson’s. Illustration: Ngadi Smart/The Guardian

“We’re not thinking about any of this,” says Ikiz. “We’re not getting our healthcare systems ready. We’re not doing anything in terms of prevention or protections.”

Ikiz is particularly concerned with the neurodegenerative effects of the climate crisis. In part, that’s because prolonged exposure to heat in its own right – including an increase of a single degree centigrade – can activate a multitude of biochemical pathways associated with neurodegenerative diseases such as Alzheimer’s and Parkinson’s. Air pollution does the same thing. (In rats, such effects are seen after exposure to extreme heat for a mere 15 minutes a day for one week.) Thus, with continued burning of fossil fuels, whether through direct or indirect effects, comes more dementia. Researchers have already illustrated the manners in which dementia-related hospitalisations rise with temperature. Warmer weather worsens the symptoms of neurodegeneration as well.

Prior to her move to philanthropy, Ikiz’s neuroscience research largely focused on the mechanisms underlying the neurodegenerative disease amyotrophic lateral sclerosis (ALS, also known as Lou Gehrig’s disease or motor neurone disease). Today, she points to research suggesting that blue-green algae, blooming with ever-increasing frequency under a changing global climate, releases a potent neurotoxin that offers one of the most compelling causal explanations for the incidence of non-genetic ALS. Epidemiologists have, for example, identified clusters of ALS cases downwind of freshwater lakes prone to blue-green algae blooms.

A woman pushing a shopping trolley grabs the last water bottles from a long empty shelf in a supermarket.
A supermarket in Long Beach is stripped of water bottles in preparation for Hurricane Sandy. Photograph: Mike Stobe/Getty Images

It’s this flavour of research that worries her the most. Children constitute one of the populations most vulnerable to these risk factors, since such exposures appear to compound cumulatively over one’s life, and neurodegenerative diseases tend to manifest in the later years. “It doesn’t happen acutely,” says Ikiz. “Years pass, and then people get these diseases. That’s actually what really scares me about this whole thing. We are seeing air pollution exposure from wildfires. We’re seeing extreme heat. We’re seeing neurotoxin exposure. We’re in an experiment ourselves, with the brain chronically exposed to multiple toxins.”

Other scientists who have taken note of these chronic exposures resort to similarly dramatic language as that of Nomura and Ikiz. “Hallmarks of Alzheimer disease are evolving relentlessly in metropolitan Mexico City infants, children and young adults,” is part of the title of a recent paper spearheaded by Dr Lilian Calderón-Garcidueñas, a toxicologist who directs the University of Montana’s environmental neuroprevention laboratory. The researchers investigated the contributions of urban air pollution and ozone to biomarkers of neurodegeneration and found physical hallmarks of Alzheimer’s in 202 of the 203 brains they examined, from residents aged 11 months to 40 years old. “Alzheimer’s disease starting in the brainstem of young children and affecting 99.5% of young urbanites is a serious health crisis,” Calderón-Garcidueñas and her colleagues wrote. Indeed.

A flooded Scottish street, with cars standing in water, their wheels just breaking the surface. A row of houses in the background with one shop called The Pet Shop.
Flooding in Stonehaven, Aberdeenshire, in 2020. Photograph: Martin Anderson/PA

Such neurodevelopmental challenges – the effects of environmental degradation on the developing and infant brain – are particularly large, given the climate prognosis. Rat pups exposed in utero to 40C heat miss brain developmental milestones. Heat exposure during neurodevelopment in zebrafish magnifies the toxic effects of lead exposure. In people, early pregnancy exposure to extreme heat is associated with a higher risk of children developing neuropsychiatric conditions such as schizophrenia and anorexia. It is also probable that the ALS-causing neurotoxin can travel in the air.

Of course, these exposures only matter if you make it to an age at which neural rot has a chance to manifest. Neurodegenerative disease mostly makes itself known in middle-aged and elderly people. The brain-eating amoeba likely to spread as a result of the climate crisis, on the other hand – infection with which is 97% fatal and kills within a week – mostly infects children who swim in lakes. As children do.

A coordinated effort to fully understand and appreciate the neurological costs of the climate crisis does not yet exist. Ikiz is seeking to rectify this. In spring 2024, she will convene the first meeting of a team of neurologists, neuroscientists and planetary scientists, under the banner of the International Neuro Climate Working Group.

Mexico City landscape engulfed in smog.
Smog hits Mexico City. Photograph: E_Rojas/Getty Images/iStockphoto

The goal of the working group (which, full disclosure, I have been invited to join) is to wrap a collective head around the problem and seek to recommend treatment practices and policy recommendations accordingly, before society finds itself in the midst of overlapping epidemics. The number of people living with Alzheimer’s is expected to triple by 2050, says Ikiz – and that’s without taking the climate crisis into account. “That scares me,” she says. “Because in 2050, we’ll be like: ‘Ah, this is awful. Let’s try to do something.’ But it will be too late for a lot of people.

“I think that’s why it’s really important right now, as evidence is building, as we’re understanding more, to be speaking and raising awareness on these issues,” she says. “Because we don’t want to come to that point of irreversible damage.”

For neuroscientists considering the climate problem, avoiding that point of no return implies investing in resilience research today. But this is not a story of climate anxiety and mental fortitude. “I’m not talking about psychological resilience,” says Nomura. “I’m talking about biological resilience.”

A research agenda for climatological neuroepidemiology would probably bridge multiple fields and scales of analysis. It would merge insights from neurology, neurochemistry, environmental science, cognitive neuroscience and behavioural economics – from molecular dynamics to the individual brain to whole ecosystems. Nomura, for example, wants to understand how external environmental pressures influence brain health and cognitive development; who is most vulnerable to these pressures and when; and which preventive strategies might bolster neurological resilience against climate-induced stressors. Others want to price these stressors, so policymakers can readily integrate them into climate-action cost-benefit analyses.

Wrecked houses along a beach.
Storm devastation in Seaside Heights, New Jersey. Photograph: Mike Groll/AP

For Nomura, it all comes back to stress. Under the right conditions, prenatal exposure to stress can be protective, she says. “It’s like an inoculation, right? You’re artificially exposed to something in utero and you become better at handling it – as long as it is not overwhelmingly toxic.” Stress in pregnancy, in moderation, can perhaps help immunise the foetus against the most deleterious effects of stress later in life. “But everybody has a breaking point,” she says.

Identifying these breaking points is a core challenge of Nomura’s work. And it’s a particularly thorny challenge, in that as a matter of both research ethics and atmospheric physics, she and her colleagues can’t just gin up a hurricane and selectively expose expectant mothers to it. “Human research in this field is limited in a way. We cannot run the gold standard of randomised clinical trials,” she says. “We cannot do it. So we have to take advantage of this horrible natural disaster.”

Recently, Nomura and her colleagues have begun to turn their attention to the developmental effects of heat. They will apply similar methods to those they applied to understanding the effects of Hurricane Sandy – establishing natural cohorts and charting the developmental trajectories in which they’re interested.

The work necessarily proceeds slowly, in part because human research is further complicated by the fact that it takes people longer than animals to develop. Rats zoom through infancy and are sexually mature by about six weeks, whereas for humans it takes more than a decade. “That’s a reason this longitudinal study is really important – and a reason why we cannot just get started on the question right now,” says Nomura. “You cannot buy 10 years’ time. You cannot buy 12 years’ time.” You must wait. And so she waits, and she measures, as the waves continue to crash.

Clayton Page Aldern’s book The Weight of Nature, on the effects of climate change on brain health, is published by Allen Lane on 4 April.

Ditching ‘Anthropocene’: why ecologists say the term still matters (Nature)

An aerial view of a section of the Niger River in Bamako clogged with plastic waste and other polluting materials.
Plastic waste is clogging the Niger River in Bamako, Mali. Once it settles into sediment, the plastic will become part of the geological record of human impacts on the planet. Credit: Michele Cattani/AFP via Getty

Original article

Beyond stratigraphic definitions, the name has broader significance for understanding humans’ place on Earth.

David Adam

14 March 2024

After 15 years of discussion, geologists last week decided that the Anthropocene — generally understood to be the age of irreversible human impacts on the planet — will not become an official epoch in Earth’s geological timeline.

The rejected proposal would have codified the end of the current Holocene epoch, which has been in place since the end of the last ice age 11,700 years ago. It suggested that the Anthropocene started in 1952, when plutonium from hydrogen-bomb tests showed up in the sediment of Crawford Lake near Toronto, Canada.

The vote has drawn controversy over procedural details, and debate about its legitimacy continues. But whether or not it’s formally approved as a stratigraphic term, the idea of the Anthropocene is now firmly rooted in research. So, how are scientists using the term, and what does it mean to them and their fields?

‘It’s a term that belongs to everyone’

As head of the Leverhulme Centre for Anthropocene Biodiversity at the University of York, UK, Chris Thomas has perhaps more riding on the term than most. “When the news of this — what sounds like a slightly dodgy vote — happened, I sort of wondered, is it the end of us? But I think not,” he says.

For Thomas, the word Anthropocene neatly summarizes the sense that humans are part of Earth’s system and integral to its processes — what he calls indivisible connectedness. “That helps move us away from the notion that somehow humanity is apart from the rest of nature and natural systems,” he says. “It’s undoable — the change is everywhere.”

The concept of an era of human-driven change also provides convenient common ground for him to collaborate with researchers from other disciplines. “This is something that people in the arts and humanities and the social sciences have picked up as well,” he says. “It is a means of enabling communication about the extent to which we are living in a truly unprecedented and human-altered world.”

Seen through that lens, the fact that the Anthropocene has been formally rejected because scientists can’t agree on when it began seems immaterial. “Many people in the humanities who are using the phrase find the concept of the articulation of a particular year, based on a deposit in a particular lake, a ridiculous way of framing the concept of a human-altered planet.”

Jacquelyn Gill, a palaeoecologist at the University of Maine in Orono, agrees. “It’s a term that belongs to everyone. To people working in philosophy and literary criticism, in the arts, in the humanities, the sciences,” she says. “I think it’s far more meaningful in the way that it is currently being used, than in any attempts that stratigraphers could have made to restrict or define it in some narrow sense.”

She adds: “It serves humanity best as a loose concept that we can use to define something that we all widely understand, which is that we live in an era where humans are the dominant force on ecological and geological processes.”

Capturing human influences

The idea of the Anthropocene is especially helpful to make clear that humans have been shaping the planet for thousands of years, and that not all of those changes have been bad, Gill says. “We could do a better job of thinking about human–environment relationships in ways that are not inherently negative all the time,” she says. “People are not a monolith, and neither are our attitudes or relationships to nature.”

Some 80% of biodiversity is currently stewarded on Indigenous lands, Gill points out. “Which should tell you something, right? That it’s not the presence of people that’s the problem,” she says. “The solution to those problems is changing the way that many dominant cultures relate to the natural world.”

The concept of the Anthropocene is owned by many fields, Gill says. “This reiterates the importance of understanding that the role of people on our planet requires many different ways of knowing and many different disciplines.”

In a world in which the threat of climate change dominates environmental debates, the term Anthropocene can help to broaden the discussion, says Yadvinder Malhi, a biodiversity researcher at the University of Oxford, UK.

“I use it all the time. For me, it captures the time where human influence has a global planetary effect, and it’s multidimensional. It’s much more than just climate change,” he says. “It’s what we’re doing. The oceans, the resources we are extracting, habitats changing.”

He adds: “I need that term when I’m trying to capture this idea of humans affecting the planet in multiple ways because of the size of our activity.”

The looseness of the term is popular, but would a formal definition help in any way? Malhi thinks it would. “There’s no other term available that captures the global multidimensional impacts on the planet,” he says. “But there is a problem in not having a formal definition if people are using it in different terms, in different ways.”

Although the word ‘Anthropocene’ makes some researchers think of processes that began 10,000 years ago, others consider it to mean those of the past century. “I think a formal adoption, like a definition, would actually help to clarify that.”

doi: https://doi.org/10.1038/d41586-024-00786-2

The Anthropocene is dead. Long live the Anthropocene (Science)

Panel rejects a proposed geologic time division reflecting human influence, but the concept is here to stay

Original article

5 MAR 2024, 4:00 PM ET

BY PAUL VOOSEN

A mushroom cloud rises in the night sky
A 1953 nuclear weapons test in Nevada was among the human activities that could have marked the Anthropocene. NNSA/NEVADA FIELD OFFICE/SCIENCE SOURCE

For now, we’re still in the Holocene.

Science has confirmed that a panel of two dozen geologists has voted down a proposal to end the Holocene—our current span of geologic time, which began 11,700 years ago at the end of the last ice age—and inaugurate a new epoch, the Anthropocene. Starting in the 1950s, it would have marked a time when humanity’s influence on the planet became overwhelming. The vote, first reported by The New York Times, is a stunning—though not unexpected—rebuke for the proposal, which has been working its way through a formal approval process for more than a decade.

“The decision is definitive,” says Philip Gibbard, a geologist at the University of Cambridge who is on the panel and serves as secretary-general of the International Commission on Stratigraphy (ICS), the body that governs the geologic timescale. “There are no outstanding issues to be resolved. Case closed.”

The leaders of the Anthropocene Working Group (AWG), which developed the proposal for consideration by ICS’s Subcommission on Quaternary Stratigraphy, are not yet ready to admit defeat. They note that the online tally, in which 12 out of 18 subcommission members voted against the proposal, was leaked to the press without approval of the panel’s chair. “There remain several issues that need to be resolved about the validity of the vote and the circumstances surrounding it,” says Colin Waters, a geologist at the University of Leicester who chaired AWG.

Few opponents of the Anthropocene proposal doubted the enormous impact that human influence, including climate change, is having on the planet. But some felt the proposed marker of the epoch—some 10 centimeters of mud from Canada’s Crawford Lake that captures the global surge in fossil fuel burning, fertilizer use, and atomic bomb fallout that began in the 1950s—isn’t definitive enough.

Others questioned whether it’s even possible to affix one date to the start of humanity’s broad planetary influence: Why not the rise of agriculture? Why not the vast changes that followed European encroachment on the New World? “The Anthropocene epoch was never deep enough to understand human transformation of this planet,” says Erle Ellis, a geographer at the University of Maryland, Baltimore County who resigned last year in protest from AWG.

Opponents also felt AWG made too many announcements to the press over the years while being slow to submit a proposal to the subcommission. “The Anthropocene epoch was pushed through the media from the beginning—a publicity drive,” says Stanley Finney, a stratigrapher at California State University Long Beach and head of the International Union of Geological Sciences, which would have had final approval of the proposal.

Finney also complains that from the start, AWG was determined to secure an “epoch” categorization, and ignored or countered proposals for a less formal Anthropocene designation. If they had only made their formal proposal sooner, they could have avoided much lost time, Finney adds. “It would have been rejected 10 years earlier if they had not avoided presenting it to the stratigraphic community for careful consideration.”

The Anthropocene backers will now have to wait for a decade before their proposal can be considered again. ICS has long instituted this mandatory cooling-off period, given how furious debates can turn, for example, over the boundary between the Pliocene and Pleistocene, and whether the Quaternary—our current geologic period, a category above epochs—should exist at all.

Even if it is not formally recognized by geologists, the Anthropocene is here to stay. It is used in art exhibits, journal titles, and endless books. And Gibbard, Ellis, and others have advanced the view that it can remain an informal geologic term, calling it the “Anthropocene event.” Like the Great Oxygenation Event, in which cyanobacteria flushed the atmosphere with oxygen billions of years ago, the Anthropocene marks a huge transition, but one without an exact date. “Let us work together to ensure the creation of a far deeper and more inclusive Anthropocene event,” Ellis says.

Waters and his colleagues will continue to press the case that the Anthropocene is worthy of recognition in the geologic timescale, even if that advocacy has to continue in an informal capacity, he says. Although small in size, Anthropocene strata such as the 10 centimeters of lake mud are distinct and can be traced using more than 100 durable geochemical signals. And there is no going back to where the planet was 100 years ago, he says. “The Earth system changes that mark the Anthropocene are collectively irreversible.”


doi: 10.1126/science.z3wcw7b

Are We in the ‘Anthropocene,’ the Human Age? Nope, Scientists Say. (New York Times)

A panel of experts voted down a proposal to officially declare the start of a new interval of geologic time, one defined by humanity’s changes to the planet.

Four people standing on the deck of a ship face a large, white mushroom cloud in the distance.
In weighing their decision, scientists considered the effect on the world of nuclear activity. A 1946 test blast over Bikini atoll. Credit: Jack Rice/Associated Press

Original article

By Raymond Zhong

March 5, 2024

The Triassic was the dawn of the dinosaurs. The Paleogene saw the rise of mammals. The Pleistocene included the last ice ages.

Is it time to mark humankind’s transformation of the planet with its own chapter in Earth history, the “Anthropocene,” or the human age?

Not yet, scientists have decided, after a debate that has spanned nearly 15 years. Or the blink of an eye, depending on how you look at it.

A committee of roughly two dozen scholars has, by a large majority, voted down a proposal to declare the start of the Anthropocene, a newly created epoch of geologic time, according to an internal announcement of the voting results seen by The New York Times.

By geologists’ current timeline of Earth’s 4.6-billion-year history, our world right now is in the Holocene, which began 11,700 years ago with the most recent retreat of the great glaciers. Amending the chronology to say we had moved on to the Anthropocene would represent an acknowledgment that recent, human-induced changes to geological conditions had been profound enough to bring the Holocene to a close.

The declaration would shape terminology in textbooks, research articles and museums worldwide. It would guide scientists in their understanding of our still-unfolding present for generations, perhaps even millenniums, to come.

In the end, though, the members of the committee that voted on the Anthropocene over the past month were not only weighing how consequential this period had been for the planet. They also had to consider when, precisely, it began.

By the definition that an earlier panel of experts spent nearly a decade and a half debating and crafting, the Anthropocene started in the mid-20th century, when nuclear bomb tests scattered radioactive fallout across our world. To several members of the scientific committee that considered the panel’s proposal in recent weeks, this definition was too limited, too awkwardly recent, to be a fitting signpost of Homo sapiens’s reshaping of planet Earth.

“It constrains, it confines, it narrows down the whole importance of the Anthropocene,” said Jan A. Piotrowski, a committee member and geologist at Aarhus University in Denmark. “What was going on during the onset of agriculture? How about the Industrial Revolution? How about the colonizing of the Americas, of Australia?”

“Human impact goes much deeper into geological time,” said another committee member, Mike Walker, an earth scientist and professor emeritus at the University of Wales Trinity Saint David. “If we ignore that, we are ignoring the true impact, the real impact, that humans have on our planet.”

Hours after the voting results were circulated within the committee early Tuesday, some members said they were surprised at the margin of votes against the Anthropocene proposal compared with those in favor: 12 to four, with two abstentions. (Another three committee members neither voted nor formally abstained.)

Even so, it was unclear on Tuesday whether the results stood as a conclusive rejection or whether they might still be challenged or appealed. In an email to The Times, the committee’s chair, Jan A. Zalasiewicz, said there were “some procedural issues to consider” but declined to discuss them further. Dr. Zalasiewicz, a geologist at the University of Leicester, has expressed support for canonizing the Anthropocene.

This question of how to situate our time in the narrative arc of Earth history has thrust the rarefied world of geological timekeepers into an unfamiliar limelight.

The grandly named chapters of our planet’s history are governed by a body of scientists, the International Union of Geological Sciences. The organization uses rigorous criteria to decide when each chapter started and which characteristics defined it. The aim is to uphold common global standards for expressing the planet’s history.

A man stands next to a machine with tubing and lines of plastic that end up in a shallow pool of water.
Polyethylene being extruded and fed into a cooling bath during plastics manufacture, circa 1950. Credit: Hulton Archive, via Getty Images

Geoscientists don’t deny our era stands out within that long history. Radionuclides from nuclear tests. Plastics and industrial ash. Concrete and metal pollutants. Rapid greenhouse warming. Sharply increased species extinctions. These and other products of modern civilization are leaving unmistakable remnants in the mineral record, particularly since the mid-20th century.

Still, to qualify for its own entry on the geologic time scale, the Anthropocene would have to be defined in a very particular way, one that would meet the needs of geologists and not necessarily those of the anthropologists, artists and others who are already using the term.

That’s why several experts who have voiced skepticism about enshrining the Anthropocene emphasized that the vote against it shouldn’t be read as a referendum among scientists on the broad state of the Earth. “This was a narrow, technical matter for geologists, for the most part,” said one of those skeptics, Erle C. Ellis, an environmental scientist at the University of Maryland, Baltimore County. “This has nothing to do with the evidence that people are changing the planet,” Dr. Ellis said. “The evidence just keeps growing.”

Francine M.G. McCarthy, a micropaleontologist at Brock University in St. Catharines, Ontario, is the opposite of a skeptic: She helped lead some of the research to support ratifying the new epoch.

“We are in the Anthropocene, irrespective of a line on the time scale,” Dr. McCarthy said. “And behaving accordingly is our only path forward.”

The Anthropocene proposal got its start in 2009, when a working group was convened to investigate whether recent planetary changes merited a place on the geologic timeline. After years of deliberation, the group, which came to include Dr. McCarthy, Dr. Ellis and some three dozen others, decided that they did. The group also decided that the best start date for the new period was around 1950.

The group then had to choose a physical site that would most clearly show a definitive break between the Holocene and the Anthropocene. They settled on Crawford Lake, in Ontario, where the deep waters have preserved detailed records of geochemical change within the sediments at the bottom.

Last fall, the working group submitted its Anthropocene proposal to the first of three governing committees under the International Union of Geological Sciences. Sixty percent of each committee has to approve the proposal for it to advance to the next.

The members of the first one, the Subcommission on Quaternary Stratigraphy, submitted their votes starting in early February. (Stratigraphy is the branch of geology concerned with rock layers and how they relate in time. The Quaternary is the ongoing geologic period that began 2.6 million years ago.)

Under the rules of stratigraphy, each interval of Earth time needs a clear, objective starting point, one that applies worldwide. The Anthropocene working group proposed the mid-20th century because it bracketed the postwar explosion of economic growth, globalization, urbanization and energy use. But several members of the subcommission said humankind’s upending of Earth was a far more sprawling story, one that might not even have a single start date across every part of the planet.

Two cooling towers, a square building and a larger building behind it with smokestacks and industrial staircases on the outside.
The world’s first full-scale atomic power station in Britain in 1956. Credit: Hulton Archive, via Getty Images

This is why Dr. Walker, Dr. Piotrowski and others prefer to describe the Anthropocene as an “event,” not an “epoch.” In the language of geology, events are a looser term. They don’t appear on the official timeline, and no committees need to approve their start dates.

Yet many of the planet’s most significant happenings are called events, including mass extinctions, rapid expansions of biodiversity and the filling of Earth’s skies with oxygen 2.1 to 2.4 billion years ago.

Even if the subcommission’s vote is upheld and the Anthropocene proposal is rebuffed, the new epoch could still be added to the timeline at some later point. It would, however, have to go through the whole process of discussion and voting all over again.

Time will march on. Evidence of our civilization’s effects on Earth will continue accumulating in the rocks. The task of interpreting what it all means, and how it fits into the grand sweep of history, might fall to the future inheritors of our world.

“Our impact is here to stay and to be recognizable in the future in the geological record — there is absolutely no question about this,” Dr. Piotrowski said. “It will be up to the people that will be coming after us to decide how to rank it.”

Raymond Zhong reports on climate and environmental issues for The Times.

Latest News on Climate Change and the Environment

Protecting groundwater. After years of decline in the nation’s groundwater, a series of developments indicate that U.S. state and federal officials may begin tightening protections for the dwindling resource. In Nevada, Idaho and Montana, court decisions have strengthened states’ ability to restrict overpumping. California is considering penalizing officials for draining aquifers. And the White House has asked scientists to advise how the federal government can help.

Weather-related disasters. An estimated 2.5 million people were forced from their homes in the United States by weather-related disasters in 2023, according to new data from the Census Bureau. The numbers paint a more complete picture than ever before of the lives of people affected by such events as climate change supercharges extreme weather.

Amazon rainforest. Up to half of the Amazon rainforest could transform into grasslands or weakened ecosystems in the coming decades, a new study found, as climate change, deforestation and severe droughts damage huge areas beyond their ability to recover. Those stresses in the most vulnerable parts of the rainforest could eventually drive the entire forest ecosystem past a tipping point that would trigger a forest-wide collapse, researchers said.

A significant threshold. Over the past 12 months, the average temperature worldwide was more than 1.5 degrees Celsius, or 2.7 degrees Fahrenheit, higher than it was at the dawn of the industrial age. That number carries special significance, as nations agreed under the 2015 Paris Agreement to try to keep the difference between average temperatures today and in preindustrial times to 1.5 degrees Celsius, or at least below 2 degrees Celsius.

New highs. The exceptional warmth that first enveloped the planet last summer is continuing strong into 2024: Last month clocked in as the hottest January ever measured, and the hottest January on record for the oceans, too. Sea surface temperatures were just slightly lower than in August 2023, the oceans’ warmest month on the books.

Anthropocene controversy: humanity still does not know which geological epoch it is living in (El País)

elpais.com

A committee of experts has voted down the proposal to declare a new geological epoch, but the panel’s own chair alleges irregularities in the vote

Manuel Ansede

Madrid –

Extraction of a sediment core from the bottom of Crawford Lake, on the outskirts of Toronto, Canada. TIM PATTERSON / CARLETON UNIVERSITY

The idea of the Anthropocene – that since 1950 humanity has been living in a new geological epoch characterized by human pollution – has become so popular in recent years that even the Real Academia Española added the term to its dictionary in 2021. This time the academicians moved too fast. The concept remains up in the air, amid a heated dispute among specialists. Members of the expert committee that must make the decision within the International Union of Geological Sciences (IUGS) – the Subcommission on Quaternary Stratigraphy – leaked to The New York Times on Tuesday that they have voted, by a majority, against recognizing the existence of the Anthropocene. However, the chair of the Subcommission, the geologist Jan Zalasiewicz, tells EL PAÍS that the preliminary result of the vote was announced without his authorization and that there are still “some pending issues with the votes that need to be resolved”. Humanity still does not know which geological epoch it is living in.

The Dutch chemist Paul Crutzen, who won the Nobel Prize in Chemistry for shedding light on the hole in the ozone layer, proposed in 2000 that the planet had entered a new epoch, brought about by the brutal impact of human beings. An international team of specialists, the Anthropocene Working Group, has been analyzing the scientific evidence since 2009 and last year presented a proposal to officially proclaim this new geological epoch, marked by radioactivity from atomic bombs and by pollutants from the burning of coal and oil. Tiny Crawford Lake, on the outskirts of Toronto, Canada, was the site chosen to exemplify the start of the Anthropocene, thanks to the sediments on its bottom, undisturbed for centuries.

The majority of the members of the IUGS Subcommission on Quaternary Stratigraphy voted against the proposal, according to the American newspaper. The British geologist Colin Waters, leader of the Anthropocene Working Group, tells EL PAÍS that he learned of it through the press. “We still have not received official confirmation directly from the secretary of the Subcommission on Quaternary Stratigraphy. It seems The New York Times gets the results before we do, which is very disappointing,” Waters laments.

The geologist acknowledges that the decision, if confirmed, would be the end of his current proposal, but he is not giving up. “We have many eminent researchers who wish to continue as a group, informally, defending the evidence that the Anthropocene should be formalized as an epoch,” he says. In his view, today’s geological strata – contaminated by radioactive isotopes, microplastics, ash and pesticides – have changed irreversibly with respect to those of the Holocene, the geological epoch that began more than 10,000 years ago, after the last glaciation. “Given the existing evidence, which keeps growing, I would not be surprised by a future call to reconsider our proposal,” says Waters, of the University of Leicester.

The head of the Anthropocene Working Group maintains that there are “some procedural questions” that cast doubt on the validity of the vote. The Italian geologist Silvia Peppoloni, head of the IUGS Commission on Geoethics, confirms that her team has prepared a report on this fight between the Subcommission on Quaternary Stratigraphy and the Anthropocene Working Group. The document is on the desk of the IUGS president, the Briton John Ludden.

The Canadian geologist Francine McCarthy was convinced that Crawford Lake would win over the skeptics. From the outside it looks small, barely 250 meters long, but it is nearly 25 meters deep. Its surface waters do not mix with those at its bed, so the bottom sediment can be read like a lasagna, in which each layer accumulates material deposited from the atmosphere. That underwater calendar of Crawford Lake reveals the so-called Great Acceleration, the moment around 1950 when humanity began to leave an ever more obvious footprint, with the detonation of atomic bombs, the massive burning of oil and coal, and the extinction of species.

“Ignoring the enormous impact of humans on our planet since the middle of the 20th century has potentially harmful consequences, by minimizing the importance of scientific data in confronting the evident change in the Earth system, as Paul Crutzen pointed out almost 25 years ago,” McCarthy warns.

In a vote, scientists deny that we are living in the Anthropocene, the geological epoch of humans (Folha de S.Paulo)

www1.folha.uol.com.br

The panel rejected the idea that recent changes are profound enough to end the Holocene

Raymond Zhong

March 5, 2024


The Triassic was the dawn of the dinosaurs. The Paleogene saw the rise of mammals. The Pleistocene included the last ice ages.

Is it time to mark humanity’s transformation of the planet with its own chapter in Earth’s history, the “Anthropocene”, or the human epoch?

Not yet, scientists have decided, after a debate that lasted nearly 15 years. Or the blink of an eye, depending on how you look at it.

A committee of roughly two dozen scholars voted, by a large majority, against a proposal to declare the start of the Anthropocene, a newly created epoch of geologic time, according to an internal announcement of the voting results seen by The New York Times.

By geologists’ current timeline of Earth’s 4.6-billion-year history, our world is now in the Holocene, which began 11,700 years ago with the most recent retreat of the great glaciers.

Amending the chronology to say that we have moved on to the Anthropocene would represent an acknowledgment that recent human-induced changes to geological conditions have been profound enough to bring the Holocene to a close.

The declaration would shape terminology in textbooks, research articles and museums around the world. It would guide scientists in their understanding of our still-unfolding present for generations, perhaps even millennia, to come.

In the end, though, the committee members who voted on the Anthropocene in recent weeks were not only weighing how consequential this period has been for the planet. They also had to consider when, precisely, it began.

By the definition that an earlier panel of experts spent nearly a decade and a half debating and crafting, the Anthropocene began in the mid-20th century, when nuclear bomb tests scattered radioactive material across our world.

To several members of the scientific committee that evaluated the panel’s proposal in recent weeks, that definition was too limited and too recent to serve as a fitting marker of Homo sapiens’s reshaping of planet Earth.

“It constrains, it confines, it narrows down the whole importance of the Anthropocene,” said Jan A. Piotrowski, a committee member and geologist at Aarhus University in Denmark. “What was going on during the onset of agriculture? What about the Industrial Revolution? What about the colonization of the Americas, of Australia?”

“Human impact goes much deeper into geological time,” said another committee member, Mike Walker, an earth scientist and professor emeritus at the University of Wales Trinity Saint David. “If we ignore that, we are ignoring the true impact that humans have on our planet.”

Hours after the voting results were circulated within the committee on Tuesday morning (the 5th), some members said they were surprised by the margin of votes against the Anthropocene proposal compared with those in favor: 12 to 4, with 2 abstentions.

Even so, on Tuesday morning it was unclear whether the results stood as a conclusive rejection or whether they might still be challenged or appealed. In an email to The Times, the committee’s chair, Jan A. Zalasiewicz, said there were “some procedural issues to consider”, but declined to discuss them further.

Zalasiewicz, a geologist at the University of Leicester, has expressed support for canonizing the Anthropocene.

This question of how to situate our time in the narrative of Earth’s history has placed the world of geological timekeepers in an unfamiliar light.

The grandly named chapters of our planet’s history are governed by a group of scientists, the International Union of Geological Sciences. The organization uses rigorous criteria to decide when each chapter began and which characteristics defined it. The aim is to maintain common global standards for expressing the planet’s history.

Geoscientists do not deny that our era stands out within that long history. Radionuclides from nuclear tests. Plastics and industrial ash. Concrete and metal pollutants. Rapid global warming. Sharply increased species extinctions. These and other products of modern civilization are leaving unmistakable traces in the mineral record, especially since the mid-20th century.

Still, to qualify for an entry on the geologic time scale, the Anthropocene would have to be defined in a very specific way, one that meets the needs of geologists and not necessarily those of the anthropologists, artists and others who are already using the term.

That is why several experts who have voiced skepticism about enshrining the Anthropocene emphasized that the vote against it should not be interpreted as a referendum among scientists on the broad state of the Earth.

“This is a specific, technical matter for geologists, for the most part,” said one of those skeptics, Erle C. Ellis, an environmental scientist at the University of Maryland. “This has nothing to do with the evidence that people are changing the planet,” Ellis said. “The evidence just keeps growing.”

Francine M.G. McCarthy, a micropaleontologist at Brock University in St. Catharines, Ontario (Canada), holds the opposite view: she helped lead some of the research to support ratifying the new epoch.

“We are in the Anthropocene, irrespective of a line on the time scale,” McCarthy said. “And behaving accordingly is our only path forward.”

The Anthropocene proposal got its start in 2009, when a working group was convened to investigate whether recent planetary changes merited a place on the geological timeline.

After years of deliberation, the group, which came to include McCarthy, Ellis and some three dozen others, decided that they did. The group also decided that the best start date for the new period was around 1950.

The group then had to choose a physical site that would most clearly show a definitive break between the Holocene and the Anthropocene. They chose Crawford Lake, in Ontario, Canada, where the deep waters have preserved detailed records of geochemical change in the sediments at the bottom.

Last fall, the working group submitted its Anthropocene proposal to the first of three governing committees of the International Union of Geological Sciences; 60% of each committee must approve the proposal for it to advance to the next.

The members of the first committee, the Subcommission on Quaternary Stratigraphy, submitted their votes starting in early February. (Stratigraphy is the branch of geology devoted to the study of rock layers and how they relate in time. The Quaternary is the ongoing geologic period that began 2.6 million years ago.)

Under the rules of stratigraphy, each interval of Earth time needs a clear, objective starting point that applies worldwide. The Anthropocene working group proposed the mid-20th century because it encompassed the postwar explosion of economic growth, globalization, urbanization and energy use.

But several members of the subcommission said that humanity’s transformation of the Earth was a far more sprawling story, one that might not even have a single start date across every part of the planet.

That is why Walker, Piotrowski and others prefer to describe the Anthropocene as an “event”, not an “epoch”. In the language of geology, event is a looser term. Events do not appear on the official timeline, and no committee needs to approve their start dates.

Yet many of the planet’s most significant happenings are called events, including mass extinctions, rapid expansions of biodiversity and the filling of Earth’s skies with oxygen 2.1 billion to 2.4 billion years ago.

Even if the subcommission’s vote is upheld and the Anthropocene proposal is rejected, the new epoch could still be added to the timeline at some later point. It would, however, have to go through the entire process of discussion and voting all over again.

The Quiet Threat To Science Posed By ‘Indigenous Knowledge’ (Forbes)

James Broughel

Feb 29, 2024,07:06am EST

Portrait of Quechua man in traditinal hat.
The White House is working on incorporating “indigenous knowledge” into federal regulatory policy. GETTY

“Indigenous knowledge” is in the spotlight thanks to President Biden, who issued an executive order within days of taking office, aimed at ushering in a new era of tribal self-determination. It was a preview of things to come. His administration went on to host an annual White House summit on tribal nations, and convened an interagency working group that spent a year developing government-wide guidance on indigenous knowledge.

Released in late 2022, the 46-page guidance document defines indigenous knowledge as “a body of observations, oral and written knowledge, innovations, practices, and beliefs developed by Tribes and Indigenous Peoples through experience with the environment.” According to the guidance, indigenous knowledge “is applied to phenomena across biological, physical, social, cultural, and spiritual systems.”

Now the Biden Administration wants federal agencies to fold these sorts of beliefs into their decision making. As a result, agencies like the EPA, FDA, and CDC are incorporating indigenous knowledge into their scientific integrity practices.

In some cases, tribal knowledge can certainly provide empirical data to decisionmakers. For example, if an agency is concerned about pollution in a certain area, tribal leaders might be able to provide insights about abnormally high rates of illness experienced within their community. That said, categorizing knowledge that includes folklore and traditions under the banner of enhancing “scientific integrity” poses a number of serious problems, to put it mildly.

Very often, indigenous knowledge deals in subjective understandings related to culture, stories, and values—not facts or empirically-derived cause-and-effect relationships. In such cases, the knowledge can still be useful, but it is not “science” per se, which is usually thought of as the study of observable phenomena in the physical and natural world.

Treating science and indigenous knowledge as equivalent risks blending oral traditions and spirituality with verifiable data and evidence. Scientists are aware of the danger, which explains why the authors of a recent article in Science Magazine wisely noted “we do not argue that Indigenous Knowledge should usurp the role of, or be called, science.” Instead, they argue, indigenous knowledge can complement scientific information.

Indeed, this knowledge should be collected alongside other input from stakeholders with an interest in the outcomes of federal policy. It shouldn’t be confused with science itself, however. Yet by baking indigenous insights into scientific integrity policies without clearly explaining how the knowledge is to be collected, verified, and used, federal agencies will make it easier to smuggle untested claims into the evidentiary records for rulemakings.

Another issue is that indigenous knowledge varies dramatically across the more than 500 federally-recognized tribes. There are likely to be instances where one group’s teachings may offer time-tested wisdom, while another’s proves unreliable when held up against observable facts. Indigenous knowledge can also point in opposite directions. Last year, the Biden administration invoked indigenous knowledge when it canceled seven oil and gas leases in Alaska, but indigenous groups are known to often support energy development as well.

Even the Biden team admits indigenous knowledge is “often unique and specific” to a single tribe or people. But the Biden team doesn’t offer a way to distinguish between competing or contradictory accounts.

While no one disputes the historical mistreatment of Native Americans, this is unrelated to the question of whether knowledge is accurate. Moreover, other forms of localized knowledge also deserve attention. In rural towns and municipalities, for example, long-time residents often develop their own bodies of knowledge concerning everything from flood patterns to forest fire risks. To be clear, this local knowledge is also not “science” in most cases. But, like indigenous knowledge, it can be critically important.

That agency scientific integrity initiatives would single out knowledge based on social categories like race and ethnicity is unscientific. The danger is that indigenous knowledge policies will enable subjective understandings to become baked into rulemakings alongside the latest in peer-reviewed research.

If federal agencies aim to incorporate subjective belief systems into rulemaking, they should take care to do so responsibly without allowing unverified claims to be smuggled into purportedly impartial regulatory analyses. In most instances, indigenous knowledge will fall outside the scope of what can rightfully be considered part of ensuring scientific integrity.

The path forward lies in incorporating indigenous insights into policy decisions at the stage where they rightfully belong: as part of holding meetings and gathering feedback from stakeholders. Very likely, indigenous and other forms of local knowledge will often turn out to be more important than science. But confusing politics and science risks undermining both.

[Note from RT: there are many problems in the line of reasoning presented in this piece. Perhaps the most important is that the author’s perception of what “Indigenous knowledge” is rests on processes of decontextualization, fragmentation, and reconstruction of Indigenous ideas in instrumental ways, within larger social and cultural frames that bear no relation to the contexts in which those ideas originally circulate. Indigenous knowledge would not be so crucial today if it were compatible with non-Indigenous, modern/Western modes of thinking and social organization. In most cases, the complaint that Indigenous knowledge is difficult to accommodate comes from quarters with great confidence that business as usual will solve the current environmental situation.]