[Weather forecast and forecast of deaths. Watch the reaction of municipal authorities.]
If confirmed, the cold wave will be the strongest of the century, with widespread frost and sub-zero temperatures, which could even cause deaths. July 25, 2021
The latest update of the weather models continues to indicate sub-zero temperatures in the three states of southern Brazil and in parts of the state of São Paulo and southern Minas Gerais. The powerful polar air mass could be the strongest of the century, causing losses in agriculture and even the deaths of people in vulnerable situations.
THE COLD FRONT – SOUTH
The cold front preceding the polar air mass will enter Brazil through the state of Rio Grande do Sul on Monday, the 26th, bringing rain and a sharp drop in temperature. On Tuesday, the 27th, the rain reaches Santa Catarina and Paraná, sending temperatures plummeting. In the highlands and plateau areas of the three states, minimum temperatures may already reach zero degrees.
On Wednesday, Thursday, Friday, and Saturday (the 28th, 29th, 30th, and 31st), practically every region of southern Brazil except the coast will see sub-zero temperatures, with the possibility of black frost, which can kill vegetation and cause serious losses in agriculture.
The weather models maintain a high chance of snow in the highlands of Rio Grande do Sul and Santa Catarina, and even on the southern plateau of Paraná, between Wednesday night (the 28th) and the early hours of Thursday (the 29th), reaching cities such as Canela/RS, Caxias do Sul/RS, São Joaquim/SC, Urupema/SC, Caçador/SC, and Cruz Machado/PR. See the map below:
THE COLD FRONT – SÃO PAULO
On Wednesday, the 28th, it is the state of São Paulo's turn to see the return of rain, which will not fall in every region but will keep skies overcast, with icy winds and maximum temperatures between 17°C and 18°C, while minimums in Greater São Paulo stay between 5°C and 10°C.
On Thursday, the 29th, the state of São Paulo will wake up to intense cold. Temperatures between 1°C and 7°C will be recorded across Greater São Paulo, the Paraíba Valley, the Ribeira Valley, and the regions of Sorocaba, Bauru, Presidente Prudente, and Campinas, as shown in the map below:
FRIDAY – THE 'PEAK' OF THE COLD
Friday, July 30, 2021, may go down in meteorological history. If confirmed, it will be the coldest day of the century, with widespread frost across the state of São Paulo and sub-zero temperatures in several regions, which could cause the deaths of homeless people and/or people in vulnerable situations.
In practically every region of the state of São Paulo, current models indicate sub-zero temperatures, as shown in the map below. (NOTE: Forecasts may change over the coming days; this is the current indication, published on Sunday, the 25th.)
A remarkable new study by a director at one of the largest accounting firms in the world has found that a famous, decades-old warning from MIT about the risk of industrial civilization collapsing appears to be accurate based on new empirical data.
As the world looks forward to a rebound in economic growth following the devastation wrought by the pandemic, the research raises urgent questions about the risks of attempting to simply return to the pre-pandemic ‘normal.’
In 1972, a team of MIT scientists got together to study the risks of civilizational collapse. Their system dynamics model published by the Club of Rome identified impending ‘limits to growth’ (LtG) that meant industrial civilization was on track to collapse sometime within the 21st century, due to overexploitation of planetary resources.
The controversial MIT analysis generated heated debate, and was widely derided at the time by pundits who misrepresented its findings and methods. But the analysis has now received stunning vindication from a study written by a senior director at professional services giant KPMG, one of the ‘Big Four’ accounting firms as measured by global revenue.
Limits to growth
The study was published in the Yale Journal of Industrial Ecology in November 2020 and is available on the KPMG website. It concludes that the current business-as-usual trajectory of global civilization is heading toward the terminal decline of economic growth within the coming decade—and at worst, could trigger societal collapse by around 2040.
The study represents the first time a top analyst working within a mainstream global corporate entity has taken the ‘limits to growth’ model seriously. Its author, Gaya Herrington, is Sustainability and Dynamic System Analysis Lead at KPMG in the United States. However, she decided to undertake the research as a personal project to understand how well the MIT model stood the test of time.
The study itself is not affiliated with or conducted on behalf of KPMG, and does not necessarily reflect the views of KPMG. Herrington performed the research as an extension of her Masters thesis at Harvard University in her capacity as an advisor to the Club of Rome. However, she is quoted explaining her project on the KPMG website as follows:
“Given the unappealing prospect of collapse, I was curious to see which scenarios were aligning most closely with empirical data today. After all, the book that featured this world model was a bestseller in the 70s, and by now we’d have several decades of empirical data which would make a comparison meaningful. But to my surprise I could not find recent attempts for this. So I decided to do it myself.”
Titled ‘Update to limits to growth: Comparing the World3 model with empirical data’, the study attempts to assess how MIT’s ‘World3’ model stacks up against new empirical data. Previous studies that attempted to do this found that the model’s worst-case scenarios accurately reflected real-world developments. However, the last study of this nature was completed in 2014.
The risk of collapse
Herrington’s new analysis examines data across 10 key variables, namely population, fertility rates, mortality rates, industrial output, food production, services, non-renewable resources, persistent pollution, human welfare, and ecological footprint. She found that the latest data most closely aligns with two particular scenarios, ‘BAU2’ (business-as-usual) and ‘CT’ (comprehensive technology).
“BAU2 and CT scenarios show a halt in growth within a decade or so from now,” the study concludes. “Both scenarios thus indicate that continuing business as usual, that is, pursuing continuous growth, is not possible. Even when paired with unprecedented technological development and adoption, business as usual as modelled by LtG would inevitably lead to declines in industrial capital, agricultural output, and welfare levels within this century.”
Study author Gaya Herrington told Motherboard that in the MIT World3 models, collapse “does not mean that humanity will cease to exist,” but rather that “economic and industrial growth will stop, and then decline, which will hurt food production and standards of living… In terms of timing, the BAU2 scenario shows a steep decline to set in around 2040.”
The ‘Business-as-Usual’ scenario (Source: Herrington, 2021)
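The overshoot-and-collapse dynamic at the heart of scenarios like BAU2 can be illustrated with a toy stock-and-flow simulation. This is emphatically not the real World3 model, which couples hundreds of equations across population, capital, agriculture, and pollution; every parameter below is invented purely to show the shape of the dynamic, in which output grows while resources are abundant and then declines once depletion undercuts it.

```python
# Toy stock-and-flow sketch of overshoot and collapse. NOT World3:
# all coefficients here are illustrative assumptions, not model values.

def simulate(years=200, resources=1000.0, capital=1.0):
    history = []
    for t in range(years):
        # Extraction efficiency degrades as the resource base is drawn down.
        efficiency = resources / 1000.0
        output = capital * efficiency      # industrial output this year
        resources = max(resources - output, 0.0)  # non-renewable depletion
        # Reinvestment minus depreciation: capital grows while output is
        # high, then shrinks once resource scarcity cuts output.
        capital = max(capital + 0.05 * output - 0.03 * capital, 0.0)
        history.append((t, output, resources))
    return history

run = simulate()
peak_year = max(run, key=lambda r: r[1])[0]
print("output peaks in year", peak_year, "then declines")
```

The point of the sketch is structural, not predictive: because the growth term depends on a stock that growth itself depletes, the trajectory peaks and turns down without any external shock, which is the qualitative pattern the BAU2 scenario exhibits.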
The end of growth?
In the comprehensive technology (CT) scenario, economic decline still sets in around this date with a range of possible negative consequences, but this does not lead to societal collapse.
The ‘Comprehensive Technology’ scenario (Source: Herrington, 2021)
Unfortunately, the scenario that fits the latest empirical data least well happens to be the most optimistic pathway, known as 'SW' (stabilized world), in which civilization follows a sustainable path and experiences the smallest declines in economic growth—based on a combination of technological innovation and widespread investment in public health and education.
The ‘Stabilized World’ Scenario (Source: Herrington, 2021)
Although both the business-as-usual and comprehensive technology scenarios point to the coming end of economic growth in around 10 years, only the BAU2 scenario “shows a clear collapse pattern, whereas CT suggests the possibility of future declines being relatively soft landings, at least for humanity in general.”
Both scenarios, Herrington concludes in her study, currently "seem to align quite closely" with observed data, indicating that the future is open.
A window of opportunity
While focusing on the pursuit of continued economic growth for its own sake will be futile, the study finds that technological progress and increased investments in public services could not only avoid the risk of collapse but also lead to a new, stable, and prosperous civilization operating safely within planetary boundaries. But we really have only the next decade to change course.
“At this point therefore, the data most aligns with the CT and BAU2 scenarios which indicate a slowdown and eventual halt in growth within the next decade or so, but World3 leaves open whether the subsequent decline will constitute a collapse,” the study concludes. Although the ‘stabilized world’ scenario “tracks least closely, a deliberate trajectory change brought about by society turning toward another goal than growth is still possible. The LtG work implies that this window of opportunity is closing fast.”
In a presentation at the World Economic Forum in 2020 delivered in her capacity as a KPMG director, Herrington argued for ‘agrowth’—an agnostic approach to growth which focuses on other economic goals and priorities.
“Changing our societal priorities hardly needs to be a capitulation to grim necessity,” she said. “Human activity can be regenerative and our productive capacities can be transformed. In fact, we are seeing examples of that happening right now. Expanding those efforts now creates a world full of opportunity that is also sustainable.”
She noted how the development and deployment of vaccines at unprecedented rates in response to the COVID-19 pandemic demonstrates that we are capable of responding rapidly and constructively to global challenges if we choose to act. We need exactly such a determined approach to the environmental crisis.
“The necessary changes will not be easy and pose transition challenges but a sustainable and inclusive future is still possible,” said Herrington.
The best available data suggests that what we decide over the next 10 years will determine the long-term fate of human civilization. Although the odds are on a knife-edge, Herrington pointed to a "rapid rise" in environmental, social, and governance (ESG) priorities as a basis for optimism, signalling the change in thinking taking place in both governments and businesses. She told me that perhaps the most important implication of her research is that it's not too late to create a truly sustainable civilization that works for all.
Experts say it could spur conflict with a neighboring country.
This week, the Chinese government announced that it plans to drastically increase its use of technology that artificially changes the weather.
Cloud seeding technology, or systems that blast silver iodide particles into the sky to prompt condensation and cloud formation, has been around for decades, and China makes frequent use of it. But now, CNN reports that China wants to increase the total size of its weather modification test area to 5.5 million square miles by 2025, a huge increase covering an area larger than the entire country of India. A program on that scale could affect the environment dramatically and even potentially spur conflict with nearby countries.
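The scale claim is easy to sanity-check: India's land area is roughly 3.29 million square kilometers, or about 1.27 million square miles, so the planned test area would be several times larger.

```python
# Quick arithmetic check on the scale comparison.
# Figures: 5.5 million sq mi planned (per CNN); India's land area is
# about 1.27 million sq mi (~3.29 million km^2, a standard reference value).
planned_sq_mi = 5.5e6
india_sq_mi = 1.27e6
ratio = planned_sq_mi / india_sq_mi
print(round(ratio, 1))  # roughly 4.3 times India's area
```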
Fog Of War
Most notably, China and India share a hotly disputed border that they've violently clashed over as recently as this year, CNN has previously reported. India's agriculture relies on a monsoon season that's already grown unpredictable due to climate change, prompting experts in the country to worry that China may use its ability to control rain and snowfall as a weapon.
“Lack of proper coordination of weather modification activity (could) lead to charges of ‘rain stealing’ between neighboring regions,” National Taiwan University researchers conclude in a 2017 paper published in Geoforum.
In the past, China has used its weather modification tech to seed clouds well in advance of major events like the 2008 Olympics and political meetings so the events themselves happen under clear skies, CNN reports.
But this planned expansion of the system means that other countries may be subject to its meteorological whims — seeding international conflict in addition to clouds.
Facing a hotter future, dwindling water sources and an exploding population, scientists in one Middle East country are making it rain.
United Arab Emirates meteorological officials released a video this week of cars driving through a downpour in Ras al Khaimah in the northern part of the country. The storm was the result of one of the UAE’s newest efforts to increase rainfall in a desert nation that gets about four inches a year on average.
Washington, D.C., in contrast, has averaged nearly 45 inches of rain annually for the past decade.
Scientists created rainstorms by launching drones, which then zapped clouds with electricity, the Independent reports. Jolting droplets in the clouds can cause them to clump together, researchers found. The larger raindrops that result then fall to the ground, instead of evaporating midair — which is often the fate of smaller droplets in the UAE, where temperatures are hot and the clouds are high.
“What we are trying to do is to make the droplets inside the clouds big enough so that when they fall out of the cloud, they survive down to the surface,” meteorologist and researcher Keri Nicoll told CNN in May as her team prepared to start testing the drones near Dubai.
Nicoll is part of a team of scientists with the University of Reading in England whose research led to this week’s man-made rainstorms. In 2017, the university’s scientists received $1.5 million for use over three years from the UAE Research Program for Rain Enhancement Science, which has invested in at least nine different research projects over the past five years.
To test their research, Nicoll and her team built four drones with wingspans of about 6½ feet. The drones, which are launched from a catapult, can fly for about 40 minutes, CNN reported. During flight, the drone’s sensors measure temperature, humidity and electrical charge within a cloud, which lets the researchers know when and where they need to zap.
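The sensing-and-decision loop described above can be sketched in a few lines. To be clear, this is a hypothetical illustration of that kind of logic, not the Reading team's actual system: the field names and threshold values below are invented assumptions, chosen only to show how in-cloud readings might gate a charge release.

```python
# Hypothetical sketch of sensor-driven seeding logic. The thresholds and
# structure are assumptions for illustration, not the real UAE/Reading system.
from dataclasses import dataclass

@dataclass
class CloudReading:
    temperature_c: float   # in-cloud air temperature
    humidity_pct: float    # relative humidity inside the cloud
    charge_density: float  # measured electrical charge (arbitrary units)

def should_release_charge(r: CloudReading) -> bool:
    # Only zap moist, weakly charged cloud regions, where an added
    # charge could plausibly help droplets clump into larger drops.
    return r.humidity_pct >= 85.0 and abs(r.charge_density) < 0.5

readings = [
    CloudReading(temperature_c=12.0, humidity_pct=92.0, charge_density=0.1),
    CloudReading(temperature_c=18.0, humidity_pct=60.0, charge_density=0.2),
]
decisions = [should_release_charge(r) for r in readings]
print(decisions)  # first reading qualifies; the second is too dry
```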
Water is a big issue in the UAE. The country uses about 4 billion cubic meters of it each year but has access to about 4 percent of that in renewable water resources, according to the CIA. The number of people living in the UAE has skyrocketed in recent years, doubling to 8.3 million between 2005 and 2010, which helps explain why demand for water spiked by a third around that time, according to the government’s 2015 “State of Environment” report. The population kept surging over the next decade and is now 9.9 million.
“The water table is sinking drastically in [the] UAE,” University of Reading professor and meteorologist Maarten Ambaum told BBC News, “and the purpose of this [project] is to try to help with rainfall.”
It usually rains just a few days out of the year in the UAE. During the summer, there’s almost no rainfall. Temperatures there recently topped 125 degrees.
In recent years, the UAE’s massive push into desalination technology — which transforms seawater into freshwater by removing the salt — has helped close the gap between the demand for water and supply. Most of the UAE’s drinkable water, and 42 percent of all water used in the country, comes from its roughly 70 desalination plants, according to the UAE government.
Still, part of the government’s “water security strategy” is to lower demand by 21 percent in the next 15 years.
Ideas to get more water for the UAE have not lacked imagination. In 2016, The Washington Post reported government officials were considering building a mountain to create rainfall. As moist air reaches a mountain, it is forced upward, cooling as it rises. The air can then condense and turn into liquid, which falls as rain.
Estimates for another mountain-building project in the Netherlands came in as high as $230 billion.
Other ideas for getting more water to the UAE have included building a pipeline from Pakistan and floating icebergs down from the Arctic.
The Facebook engineer was itching to know why his date hadn’t responded to his messages. Perhaps there was a simple explanation—maybe she was sick or on vacation.
So at 10 p.m. one night in the company’s Menlo Park headquarters, he brought up her Facebook profile on the company’s internal systems and began looking at her personal data. Her politics, her lifestyle, her interests—even her real-time location.
The engineer would be fired for his behavior, along with 51 other employees who had abused their access to company data, a privilege that was then available to everyone who worked at Facebook, regardless of their job function or seniority. The vast majority of the 51 were just like him: men looking up information about the women they were interested in.
In September 2015, after Alex Stamos, the new chief security officer, brought the issue to Mark Zuckerberg’s attention, the CEO ordered a system overhaul to restrict employee access to user data. It was a rare victory for Stamos, one in which he convinced Zuckerberg that Facebook’s design was to blame, rather than individual behavior.
So begins An Ugly Truth, a new book about Facebook written by veteran New York Times reporters Sheera Frenkel and Cecilia Kang. With Frenkel’s expertise in cybersecurity, Kang’s expertise in technology and regulatory policy, and their deep well of sources, the duo provide a compelling account of Facebook’s years spanning the 2016 and 2020 elections.
Stamos would no longer be so lucky. The issues that derived from Facebook’s business model would only escalate in the years that followed but as Stamos unearthed more egregious problems, including Russian interference in US elections, he was pushed out for making Zuckerberg and Sheryl Sandberg face inconvenient truths. Once he left, the leadership continued to refuse to address a whole host of profoundly disturbing problems, including the Cambridge Analytica scandal, the genocide in Myanmar, and rampant covid misinformation.
Frenkel and Kang argue that Facebook’s problems today are not the product of a company that lost its way. Instead they are part of its very design, built atop Zuckerberg’s narrow worldview, the careless privacy culture he cultivated, and the staggering ambitions he chased with Sandberg.
When the company was still small, perhaps such a lack of foresight and imagination could be excused. But since then, Zuckerberg’s and Sandberg’s decisions have shown that growth and revenue trump everything else.
In a chapter titled “Company Over Country,” for example, the authors chronicle how the leadership tried to bury the extent of Russian election interference on the platform from the US intelligence community, Congress, and the American public. They censored the Facebook security team’s multiple attempts to publish details of what they had found, and cherry-picked the data to downplay the severity and partisan nature of the problem. When Stamos proposed a redesign of the company’s organization to prevent a repeat of the issue, other leaders dismissed the idea as “alarmist” and focused their resources on getting control of the public narrative and keeping regulators at bay.
In 2014, a similar pattern began to play out in Facebook’s response to the escalating violence in Myanmar, detailed in the chapter “Think Before You Share.” A year prior, Myanmar-based activists had already begun to warn the company about the concerning levels of hate speech and misinformation on the platform being directed at the country’s Rohingya Muslim minority. But driven by Zuckerberg’s desire to expand globally, Facebook didn’t take the warnings seriously.
When riots erupted in the country, the company's response further revealed its priorities. It remained silent in the face of two deaths and fourteen injuries but jumped in the moment the Burmese government cut off Facebook access for the country. Leadership then continued to delay investments and platform changes that could have prevented the violence from getting worse, because they risked reducing user engagement. By 2017, ethnic tensions had devolved into a full-blown genocide, which the UN later found had been "substantively contributed to" by Facebook, resulting in the killing of more than 24,000 Rohingya Muslims.
This is what Frenkel and Kang call Facebook's "ugly truth": its "irreconcilable dichotomy" of wanting to connect people to advance society while also enriching its bottom line. Chapter after chapter makes abundantly clear that it isn't possible to satisfy both—and Facebook has time and again chosen the latter at the expense of the former.
The book is as much a feat of storytelling as it is reporting. Whether you have followed Facebook’s scandals closely as I have, or only heard bits and pieces at a distance, Frenkel and Kang weave it together in a way that leaves something for everyone. The detailed anecdotes take readers behind the scenes into Zuckerberg’s conference room known as “Aquarium,” where key decisions shaped the course of the company. The pacing of each chapter guarantees fresh revelations with every turn of the page.
While I recognized each of the events that the authors referenced, the degree to which the company sought to protect itself at the cost of others was still worse than I had previously known. Meanwhile, my partner, who read the book side by side with me and falls squarely into the second category of reader, repeatedly looked up, stunned by what he had learned.
The authors keep their own analysis light, preferring to let the facts speak for themselves. In this spirit, they demur at the end of their account from making any hard conclusions about what to do with Facebook, or where this leaves us. “Even if the company undergoes a radical transformation in the coming year,” they write, “that change is unlikely to come from within.” But between the lines, the message is loud and clear: Facebook will never fix itself.
Life under covid has messed with our brains. Luckily, they were designed to bounce back.
Dana Smith – July 16, 2021
Orgies are back. Or at least that’s what advertisers want you to believe. One commercial for chewing gum—whose sales tanked during 2020 because who cares what your breath smells like when you’re wearing a mask—depicts the end of the pandemic as a raucous free-for-all with people embracing in the streets and making out in parks.
The reality is a little different. Americans are slowly coming out of the pandemic, but as they reemerge, there’s still a lot of trauma to process. It’s not just our families, our communities, and our jobs that have changed; our brains have changed too. We’re not the same people we were 18 months ago.
During the winter of 2020, more than 40% of Americans reported symptoms of anxiety or depression, double the rate of the previous year. That number dropped to 30% in June 2021 as vaccinations rose and covid-19 cases fell, but that still leaves nearly one in three Americans struggling with their mental health. In addition to diagnosable symptoms, plenty of people reported experiencing pandemic brain fog, including forgetfulness, difficulty concentrating, and general fuzziness.
Now the question is, can our brains change back? And how can we help them do that?
How stress affects the brain
Every experience changes your brain, either helping you to gain new synapses—the connections between brain cells—or causing you to lose them. This is known as neuroplasticity, and it’s how our brains develop through childhood and adolescence. Neuroplasticity is how we continue to learn and create memories in adulthood, too, although our brains become less flexible as we get older. The process is vital for learning, memory, and general healthy brain function.
But many experiences also cause the brain to lose cells and connections that you wanted or needed to keep. For instance, stress—something almost everyone experienced during the pandemic—can not only destroy existing synapses but also inhibit the growth of new ones.
One way stress does this is by triggering the release of hormones called glucocorticoids, most notably cortisol. In small doses, glucocorticoids help the brain and body respond to a stressor (think: fight or flight) by changing heart rate, respiration, inflammation, and more to increase one’s odds of survival. Once the stressor is gone, the hormone levels recede. With chronic stress, however, the stressor never goes away, and the brain remains flooded with the chemicals. In the long term, elevated levels of glucocorticoids can cause changes that may lead to depression, anxiety, forgetfulness, and inattention.
Scientists haven’t been able to directly study these types of physical brain changes during the pandemic, but they can make inferences from the many mental health surveys conducted over the last 18 months and what they know about stress and the brain from years of previous research.
For example, one study showed that people who experienced financial stressors, like a job loss or economic insecurity, during the pandemic were more likely to develop depression. One of the brain areas hardest hit by chronic stress is the hippocampus, which is important for both memory and mood. These financial stressors would have flooded the hippocampus with glucocorticoids for months, damaging cells, destroying synapses, and ultimately shrinking the region. A smaller hippocampus is one of the hallmarks of depression.
Chronic stress can also alter the prefrontal cortex, the brain’s executive control center, and the amygdala, the fear and anxiety hub. Too many glucocorticoids for too long can impair the connections both within the prefrontal cortex and between it and the amygdala. As a result, the prefrontal cortex loses its ability to control the amygdala, leaving the fear and anxiety center to run unchecked. This pattern of brain activity (too much action in the amygdala and not enough communication with the prefrontal cortex) is common in people who have post-traumatic stress disorder (PTSD), another condition that spiked during the pandemic, particularly among frontline health-care workers.
The social isolation brought on by the pandemic was also likely detrimental to the brain’s structure and function. Loneliness has been linked to reduced volume in the hippocampus and amygdala, as well as decreased connectivity in the prefrontal cortex. Perhaps unsurprisingly, people who lived alone during the pandemic experienced higher rates of depression and anxiety.
Finally, damage to these brain areas affects people not only emotionally but cognitively as well. Many psychologists have attributed pandemic brain fog to chronic stress’s impact on the prefrontal cortex, where it can impair concentration and working memory.
So that’s the bad news. The pandemic hit our brains hard. These negative changes ultimately come down to a stress-induced decrease in neuroplasticity—a loss of cells and synapses instead of the growth of new ones. But don’t despair; there’s some good news. For many people, the brain can spontaneously recover its plasticity once the stress goes away. If life begins to return to normal, so might our brains.
“In a lot of cases, the changes that occur with chronic stress actually abate over time,” says James Herman, a professor of psychiatry and behavioral neuroscience at the University of Cincinnati. “At the level of the brain, you can see a reversal of a lot of these negative effects.”
“If you create for yourself a more enriched environment where you have more possible inputs and interactions and stimuli, then [your brain] will respond to that.”
Rebecca Price, associate professor of psychiatry and psychology at the University of Pittsburgh
In other words, as your routine returns to its pre-pandemic state, your brain should too. The stress hormones will recede as vaccinations continue and the anxiety about dying from a new virus (or killing someone else) subsides. And as you venture out into the world again, all the little things that used to make you happy or challenged you in a good way will do so again, helping your brain to repair the lost connections that those behaviors had once built. For example, just as social isolation is bad for the brain, social interaction is especially good for it. People with larger social networks have more volume and connections in the prefrontal cortex, amygdala, and other brain regions.
Even if you don’t feel like socializing again just yet, maybe push yourself a little anyway. Don’t do anything that feels unsafe, but there is an aspect of “fake it till you make it” in treating some mental illness. In clinical speak, it’s called behavioral activation, which emphasizes getting out and doing things even if you don’t want to. At first, you might not experience the same feelings of joy or fun you used to get from going to a bar or a backyard barbecue, but if you stick with it, these activities will often start to feel easier and can help lift feelings of depression.
Rebecca Price, an associate professor of psychiatry and psychology at the University of Pittsburgh, says behavioral activation might work by enriching your environment, which scientists know leads to the growth of new brain cells, at least in animal models. “Your brain is going to react to the environment that you present to it, so if you are in a deprived, not-enriched environment because you’ve been stuck at home alone, that will probably cause some decreases in the pathways that are available,” she says. “If you create for yourself a more enriched environment where you have more possible inputs and interactions and stimuli, then [your brain] will respond to that.” So get off your couch and go check out a museum, a botanical garden, or an outdoor concert. Your brain will thank you.
Exercise can help too. Chronic stress depletes levels of an important chemical called brain-derived neurotrophic factor (BDNF), which helps promote neuroplasticity. Without BDNF, the brain is less able to repair or replace the cells and connections that are lost to chronic stress. Exercise increases levels of BDNF, especially in the hippocampus and prefrontal cortex, which at least partially explains why exercise can boost both cognition and mood.
Not only does BDNF help new synapses grow, but it may help produce new neurons in the hippocampus, too. For decades, scientists thought that neurogenesis in humans stopped after adolescence, but recent research has shown signs of neuron growth well into old age (though the issue is still hotly contested). Regardless of whether it works through neurogenesis or not, exercise has been shown time and again to improve people’s mood, attention, and cognition; some therapists even prescribe it to treat depression and anxiety. Time to get out there and start sweating.
Turn to treatment
There’s a lot of variation in how people’s brains recover from stress and trauma, and not everyone will bounce back from the pandemic so easily.
“Some people just seem to be more vulnerable to getting into a chronic state where they get stuck in something like depression or anxiety,” says Price. In these situations, therapy or medication might be required.
Some scientists now think that psychotherapy for depression and anxiety works at least in part by changing brain activity, and that getting the brain to fire in new patterns is a first step to getting it to wire in new patterns. A review paper that assessed psychotherapy for different anxiety disorders found that the treatment was most effective in people who displayed more activity in the prefrontal cortex after several weeks of therapy than they did beforehand—particularly when the area was exerting control over the brain’s fear center.
Other researchers are trying to change people’s brain activity using video games. Adam Gazzaley, a professor of neurology at the University of California, San Francisco, developed the first brain-training game to receive FDA approval for its ability to treat ADHD in kids. The game has also been shown to improve attention span in adults. What’s more, EEG studies revealed greater functional connectivity involving the prefrontal cortex, suggesting a boost in neuroplasticity in the region.
Now Gazzaley wants to use the game to treat people with pandemic brain fog. “We think in terms of covid recovery there’s an incredible opportunity here,” he says. “I believe that attention as a system can help across the breadth of [mental health] conditions and symptoms that people are suffering, especially due to covid.”
While the effects of brain-training games on mental health and neuroplasticity are still up for debate, there’s abundant evidence for the benefits of psychoactive medications. In 1996, psychiatrist Yvette Sheline, now a professor at the University of Pennsylvania, was the first to show that people with depression had significantly smaller hippocampi than non-depressed people, and that the size of that brain region was related to how long and how severely they had been depressed. Seven years later, she found that if people with depression took antidepressants, they had less volume loss in the region.
That discovery shifted many researchers’ perspectives on how traditional antidepressants, particularly selective serotonin reuptake inhibitors (SSRIs), help people with depression and anxiety. As their name suggests, SSRIs target the neurochemical serotonin, increasing its levels in synapses. Serotonin is involved in several basic bodily functions, including digestion and sleep. It also helps to regulate mood, and scientists long assumed that was how the drugs worked as antidepressants. However, recent research suggests that SSRIs may also have a neuroplastic effect by boosting BDNF, especially in the hippocampus, which could help restore healthy brain function in the area. One of the newest antidepressants approved in the US, ketamine, also appears to increase BDNF levels and promote synapse growth in the brain, providing additional support for the neuroplasticity theory.
The next frontier in pharmaceutical research for mental illness involves experimental psychedelics like MDMA and psilocybin, the active ingredient in hallucinogenic mushrooms. Some researchers think that these drugs also enhance plasticity in the brain and, when paired with psychotherapy, can be a powerful treatment.
Not all the changes to our brains from the past year are negative. Neuroscientist David Eagleman, author of the book Livewired: The Inside Story of the Ever-Changing Brain, says that some of those changes may actually have been beneficial. By forcing us out of our ruts and changing our routines, the pandemic may have caused our brains to stretch and grow in new ways.
“These past 14 months have been full of tons of stress, anxiety, depression—they’ve been really hard on everybody,” Eagleman says. “The tiny silver lining is from the point of view of brain plasticity, because we have challenged our brains to do new things and find new ways of doing things. If we hadn’t experienced 2020, we’d still have an old internal model of the world, and we wouldn’t have pushed our brains to make the changes they’ve already made. From a neuroscience point of view, this is the most important thing you can do—constantly challenge your brain, build new pathways, find new ways of seeing the world.”
How to help your brain help itself
While everyone’s brain is different, try these activities to give your brain the best chance of recovering from the pandemic.
Try working out. Exercise increases levels of a protein called BDNF that helps promote neuroplasticity and may even contribute to the growth of new neurons.
Talk to a therapist. Therapy can help you view yourself from a different perspective, and changing your thought patterns can change your brain patterns.
Enrich your environment. Get out of your pandemic rut and stimulate your brain with a trip to the museum, a botanical garden, or an outdoor concert.
Take some drugs—but make sure they’re prescribed! Both classic antidepressant drugs, such as SSRIs, and more experimental ones like ketamine and psychedelics are thought to work in part by boosting neuroplasticity.
Strengthen your prefrontal cortex by exercising your self-control. If you don’t have access to an (FDA-approved) attention-boosting video game, meditation can have a similar benefit.
Floods swept Germany, fires ravaged the American West and another heat wave loomed, driving home the reality that the world’s richest nations remain unprepared for the intensifying consequences of climate change.
July 17, 2021
Some of Europe’s richest countries lay in disarray this weekend, as raging rivers burst through their banks in Germany and Belgium, submerging towns, slamming parked cars against trees and leaving Europeans shellshocked at the intensity of the destruction.
Only days before in the Northwestern United States, a region famed for its cool, foggy weather, hundreds had died of heat. In Canada, wildfire had burned a village off the map. Moscow reeled from record temperatures. And this weekend the northern Rocky Mountains were bracing for yet another heat wave, as wildfires spread across 12 states in the American West.
The extreme weather disasters across Europe and North America have driven home two essential facts of science and history: The world as a whole is prepared neither to slow down climate change nor to live with it. The week’s events have now ravaged some of the world’s wealthiest nations, whose affluence has been enabled by more than a century of burning coal, oil and gas — activities that pumped the greenhouse gases into the atmosphere that are warming the world.
“I say this as a German: The idea that you could possibly die from weather is completely alien,” said Friederike Otto, a physicist at Oxford University who studies the links between extreme weather and climate change. “There’s not even a realization that adaptation is something we have to do right now. We have to save people’s lives.”
The floods in Europe have killed at least 165 people, most of them in Germany, Europe’s most powerful economy. Across Germany, Belgium, and the Netherlands, hundreds have been reported as missing, which suggests the death toll could rise. Questions are now being raised about whether the authorities adequately warned the public about risks.
The bigger question is whether the mounting disasters in the developed world will have a bearing on what the world’s most influential countries and companies will do to reduce their own emissions of planet-warming gases. They come a few months ahead of United Nations-led climate negotiations in Glasgow in November, effectively a moment of reckoning for whether the nations of the world will be able to agree on ways to rein in emissions enough to avert the worst effects of climate change.
Disasters magnified by global warming have left a long trail of death and loss across much of the developing world, after all, wiping out crops in Bangladesh, leveling villages in Honduras, and threatening the very existence of small island nations. Typhoon Haiyan devastated the Philippines in the run-up to climate talks in 2013, which prompted developing-country representatives to press for funding to deal with loss and damage they face over time for climate induced disasters that they weren’t responsible for. That was rejected by richer countries, including the United States and Europe.
“Extreme weather events in developing countries often cause great death and destruction — but these are seen as our responsibility, not something made worse by more than a hundred years of greenhouse gases emitted by industrialized countries,” said Ulka Kelkar, climate director at the India office of the World Resources Institute. These intensifying disasters now striking richer countries, she said, show that developing countries seeking the world’s help to fight climate change “have not been crying wolf.”
Indeed, even since the 2015 Paris Agreement was negotiated with the goal of averting the worst effects of climate change, global emissions have kept increasing. China is the world’s biggest emitter today. Emissions have been steadily declining in both the United States and Europe, but not at the pace required to limit global temperature rise.
A reminder of the shared costs came from Mohamed Nasheed, the former president of the Maldives, an island nation at acute risk from sea level rise.
“While not all are affected equally, this tragic event is a reminder that, in the climate emergency, no one is safe, whether they live on a small island nation like mine or a developed Western European state,” Mr. Nasheed said in a statement on behalf of a group of countries that call themselves the Climate Vulnerable Forum.
The ferocity of these disasters is as notable as their timing, coming ahead of the global talks in Glasgow to try to reach agreement on fighting climate change. The world has a poor track record on cooperation so far, and, this month, new diplomatic tensions emerged.
Among major economies, the European Commission last week introduced the most ambitious road map for change. It proposed laws to ban the sale of gas and diesel cars by 2035, require most industries to pay for the emissions they produce, and, most significantly, impose a tax on imports from countries with less stringent climate policies.
But those proposals are widely expected to meet vigorous objections both from within Europe and from other countries whose businesses could be threatened by the proposed carbon border tax, potentially further complicating the prospects for global cooperation in Glasgow.
The events of this summer come after decades of neglect of science. Climate models have warned of the ruinous impact of rising temperatures. An exhaustive scientific assessment in 2018 warned that a failure to keep the average global temperature from rising past 1.5 degrees Celsius, compared to the start of the industrial age, could usher in catastrophic results, from the inundation of coastal cities to crop failures in various parts of the world.
The report offered world leaders a practical, albeit narrow path out of chaos. It required the world as a whole to halve emissions by 2030. Since then, however, global emissions have continued rising, so much so that global average temperature has increased by more than 1 degree Celsius (about 2 degrees Fahrenheit) since 1880, narrowing the path to keep the increase below the 1.5 degree Celsius threshold.
As the average temperature has risen, it has heightened the frequency and intensity of extreme weather events in general. In recent years, scientific advances have pinpointed the degree to which climate change is responsible for specific events.
And even though it will take extensive scientific analysis to link climate change to last week’s cataclysmic floods in Europe, a warmer atmosphere holds more moisture and is already causing heavier rainfall in many storms around the world. There is little doubt that extreme weather events will continue to be more frequent and more intense as a consequence of global warming. A paper published Friday projected a significant increase in slow-moving but intense rainstorms across Europe by the end of this century because of climate change.
“We’ve got to adapt to the change we’ve already baked into the system and also avoid further change by reducing our emissions, by reducing our influence on the climate,” said Richard Betts, a climate scientist at the Met Office in Britain and a professor at the University of Exeter.
That message clearly hasn’t sunk in among policymakers, and perhaps the public as well, particularly in the developed world, which has maintained a sense of invulnerability.
The result is a lack of preparation, even in countries with resources. In the United States, flooding has killed more than 1,000 people since 2010 alone, according to federal data. In the Southwest, heat deaths have spiked in recent years.
Sometimes that is because governments have scrambled to respond to disasters they haven’t experienced before, like the heat wave in Western Canada last month, according to Jean Slick, head of the disaster and emergency management program at Royal Roads University in British Columbia. “You can have a plan, but you don’t know that it will work,” Ms. Slick said.
Other times, it’s because there aren’t political incentives to spend money on adaptation.
“By the time they build new flood infrastructure in their community, they’re probably not going to be in office anymore,” said Samantha Montano, a professor of emergency management at the Massachusetts Maritime Academy. “But they are going to have to justify millions, billions of dollars being spent.”
By Carolyn Gramling July 9, 2021 at 6:00 am
Massive projects need much more planning and follow-through to succeed – and other tree protections need to happen too
Trees are symbols of hope, life and transformation. They’re also increasingly touted as a straightforward, relatively inexpensive, ready-for-prime-time solution to climate change.
When it comes to removing human-caused emissions of the greenhouse gas carbon dioxide from Earth’s atmosphere, trees are a big help. Through photosynthesis, trees pull the gas out of the air to help grow their leaves, branches and roots. Forest soils can also sequester vast reservoirs of carbon.
Earth holds, by one estimate, as many as 3 trillion trees. Enthusiasm is growing among governments, businesses and individuals for ambitious projects to plant billions, even a trillion more. Such massive tree-planting projects, advocates say, could do two important things: help offset current emissions and also draw out CO2 emissions that have lingered in the atmosphere for decades or longer.
Even in the politically divided United States, large-scale tree-planting projects have broad bipartisan support, according to a spring 2020 poll by the Pew Research Center. And over the last decade, a diverse garden of tree-centric proposals — from planting new seedlings to promoting natural regrowth of degraded forests to blending trees with crops and pasturelands — has sprouted across the international political landscape.
Trees “are having a bit of a moment right now,” says Joe Fargione, an ecologist with The Nature Conservancy who is based in Minneapolis. It helps that everybody likes trees. “There’s no anti-tree lobby. [Trees] have lots of benefits for people. Not only do they store carbon, they help provide clean air, prevent soil erosion, shade and shelter homes to reduce energy costs and give people a sense of well-being.”
Conservationists are understandably eager to harness this enthusiasm to combat climate change. “We’re tapping into the zeitgeist,” says Justin Adams, executive director of the Tropical Forest Alliance at the World Economic Forum, an international nongovernmental organization based in Geneva. In January 2020, the World Economic Forum launched the One Trillion Trees Initiative, a global movement to grow, restore and conserve trees around the planet. One trillion is also the target for other organizations that coordinate global forestation projects, such as Plant-for-the-Planet’s Trillion Tree Campaign and Trillion Trees, a partnership of the World Wildlife Fund, the Wildlife Conservation Society and other conservation groups.
A carbon-containing system
Forests store carbon aboveground and below. That carbon returns to the atmosphere through microbial activity in the soil, or when trees die or are cut down.
SOURCE: MINNESOTA BOARD OF WATER AND SOIL RESOURCES 2019; images: T. Tibbitts
Yet, as global eagerness for adding more trees grows, some scientists are urging caution. Before moving forward, they say, such massive tree projects must address a range of scientific, political, social and economic concerns. Poorly designed projects that don’t address these issues could do more harm than good, the researchers say, wasting money as well as political and public goodwill. The concerns are myriad: There’s too much focus on numbers of seedlings planted, and too little time spent on how to keep the trees alive in the long term, or in working with local communities. And there’s not enough emphasis on how different types of forests sequester very different amounts of carbon. There’s too much talk about trees, and not enough about other carbon-storing ecosystems.
“There’s a real feeling that … forests and trees are just the idea we can use to get political support” for many, perhaps more complicated, types of landscape restoration initiatives, says Joseph Veldman, an ecologist at Texas A&M University in College Station. But that can lead to all kinds of problems, he adds. “For me, the devil is in the details.”
The root of the problem
The pace of climate change is accelerating into the realm of emergency, scientists say. Over the last 200 years, human-caused emissions of greenhouse gases, including CO2 and methane, have raised the average temperature of the planet by about 1 degree Celsius (SN: 12/22/18 & 1/5/19, p. 18).
The world’s oceans and land-based ecosystems, such as forests, absorb about half of the carbon emissions from fossil fuel burning and other industrial activities. The rest goes into the atmosphere. So “the majority of the solution to climate change will need to come from reducing our emissions,” Fargione says. To meet climate targets set by the 2015 Paris Agreement, much deeper and more painful cuts in emissions than nations have pledged so far will be needed in the next 10 years.
But increasingly, scientists warn that reducing emissions alone won’t be enough to bring Earth’s thermostat back down. “We really do need an all-hands-on-deck approach,” Fargione says. Specifically, researchers are investigating ways to actively remove that carbon, known as negative emissions technologies. Many of these approaches, such as removing CO2 directly from the air and converting it into fuel, are still being developed.
But trees are a ready kind of negative emissions “technology,” and many researchers see them as the first line of defense. In its January 2020 report, “CarbonShot,” the World Resources Institute, a global nonprofit research organization, suggested that large and immediate investments in reforestation within the United States will be key for the country to have any hope of reaching carbon neutrality — in which ongoing carbon emissions are balanced by carbon withdrawals — by 2050. The report called for the U.S. government to invest $4 billion a year through 2030 to support tree restoration projects across the United States. Those efforts would be a bridge to a future of, hopefully, more technologies that can pull large amounts of carbon out of the atmosphere.
The numbers game
Earth’s forests absorb, on average, 16 billion metric tons of CO2 annually, researchers reported in the March Nature Climate Change. But human activity can turn forests into sources of carbon: Thanks to land clearing, wildfires and the burning of wood products, forests also emit an estimated 8.1 billion tons of the gas back to the atmosphere.
That leaves a net amount of 7.6 billion tons of CO2 absorbed by forests per year — roughly a fifth of the 36 billion tons of CO2 emitted by humans in 2019. Deforestation and forest degradation are rapidly shifting the balance. Forests in Southeast Asia now emit more carbon than they absorb due to clearing for plantations and uncontrolled fires. The Amazon’s forests may flip from carbon sponge to carbon source by 2050, researchers say (SN Online: 1/10/20). The priority for slowing climate change, many agree, should be saving the trees we have.
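For readers who want to check that “roughly a fifth” figure, the arithmetic is quick. A minimal sketch using the article’s (rounded) numbers — note that the reported gross uptake of 16 billion tons is itself rounded, which is why it doesn’t subtract exactly to the reported net of 7.6:

```python
# Figures from the Nature Climate Change study cited above (Gt CO2 per year, rounded)
gross_uptake = 16.0     # CO2 absorbed by the world's forests annually
re_emitted = 8.1        # returned via land clearing, wildfires and wood products
net_sink = 7.6          # reported net sink (the gross figure above is rounded)

human_emissions_2019 = 36.0
fraction_offset = net_sink / human_emissions_2019
print(f"Forests offset about {fraction_offset:.0%} of 2019 emissions")  # about 21%
```

A fifth would be 20 percent; 7.6 of 36 works out to about 21 percent, hence “roughly a fifth.”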
Forests in flux
While global forests were a net carbon sink of about 7.6 gigatons of carbon dioxide per year from 2001 to 2019, forests in areas such as Southeast Asia and parts of the Amazon began releasing more carbon than they store.
Net annual average contribution of carbon dioxide from Earth’s forests, 2001–2019
Just how many more trees might be mustered for the fight is unclear, however. In 2019, Thomas Crowther, an ecologist at ETH Zurich, and his team estimated in Science that around the globe, there are 900 million hectares of land — an area about the size of the United States — available for planting new forests and reviving old ones (SN: 8/17/19, p. 5). That land could hold over a trillion more trees, the team claimed, which could trap about 206 billion tons of carbon over a century.
That study, led by Jean-Francois Bastin, then a postdoc in Crowther’s lab, was sweeping, ambitious and hopeful. Its findings spread like wildfire through media, conservationist and political circles. “We were in New York during Climate Week, and everybody’s talking about this paper,” Adams recalls. “It had just popped into people’s consciousness, this unbelievable technology solution called the tree.”
To channel that enthusiasm, the One Trillion Trees Initiative incorporated the study’s findings into its mission statement, and countless other tree-planting efforts have cited the report.
But critics say the study is deeply flawed, and that its accounting — of potential trees, of potential carbon uptake — is not only sloppy, but dangerous. In 2019, Science published five separate responses outlining numerous concerns. For example, the study’s criteria for “available” land for tree planting were too broad, and the carbon accounting was inaccurate because it assumes that new tree canopy cover equals new carbon storage. Savannas and natural grasslands may have relatively few trees, critics noted, but these regions already hold plenty of carbon in their soils. When that carbon is accounted for, the carbon uptake benefit from planting trees drops to perhaps a fifth of the original estimate.
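The critics’ soil-carbon point is easy to see with a toy per-hectare calculation. The numbers below are hypothetical, chosen only to illustrate the accounting, not values from the Science responses:

```python
# Hypothetical per-hectare illustration of the soil-carbon critique.
# Both figures are assumptions for arithmetic only.
canopy_carbon_gained = 100.0   # t C/ha stored in new tree biomass
soil_carbon_lost = 80.0        # t C/ha released when grassland soil is disturbed

naive_benefit = canopy_carbon_gained                      # counting canopy alone
net_benefit = canopy_carbon_gained - soil_carbon_lost     # what the atmosphere sees

print(net_benefit / naive_benefit)  # 0.2 -- a fifth of the naive estimate
```

Counting only the new canopy overstates the benefit; subtracting the carbon the grassland soil already held — and loses when disturbed — is what shrinks the estimate.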
There’s also the question of how forests themselves can affect the climate. Adding trees to snow-covered regions, for example, could increase the absorption of solar radiation, possibly leading to warming.
“Their numbers are just so far from anything reasonable,” Veldman says. And focusing on the number of trees planted also sets up another problem, he adds — an incentive structure that is prone to corruption. “Once you set up the incentive system, behaviors change to basically play that game.”
Adams acknowledges these concerns. But the One Trillion Trees Initiative isn’t really focused on “the specifics of the math,” he says, whether it’s the number of trees or the exact amount of carbon sequestered. The goal is to create a powerful climate movement to “motivate a community behind a big goal and a big vision,” he says. “It could give us a fighting chance to get restoration right.”
Other nonprofit conservation groups, like the World Resources Institute and The Nature Conservancy, are trying to walk a similar line in their advocacy. But some scientists are skeptical that governments and policy makers tasked with implementing massive forest restoration programs will take note of such nuances.
“I study how government bureaucracy works,” says Forrest Fleischman, who researches forest and environmental policy at the University of Minnesota in St. Paul. Policy makers, he says, are “going to see ‘forest restoration,’ and that means planting rows of trees. That’s what they know how to do.”
How much carbon a forest can draw from the atmosphere depends on how you define “forest.” There’s reforestation — restoring trees to regions where they used to be — and afforestation — planting new trees where they haven’t historically been. Reforestation can mean new planting, including crop trees; allowing forests to regrow naturally on lands previously cleared for agriculture or other purposes; or blending tree cover with croplands or grazing areas.
In the past, the carbon uptake potential of letting forests regrow naturally was underestimated by 32 percent, on average — and by as much as 53 percent in tropical forests, according to a 2020 study in Nature. Now, scientists are calling for more attention to this forestation strategy.
If it’s just a matter of what’s best for the climate, natural forest regrowth offers the biggest bang for the buck, says Simon Lewis, a forest ecologist at University College London. Single-tree commercial crop plantations, on the other hand, may meet the technical definition of a “forest” — a certain concentration of trees in a given area — but factor in land clearing to plant the crop and frequent harvesting of the trees, and such plantations can actually release more carbon than they sequester.
Comparing the carbon accounting between different restoration projects becomes particularly important in the framework of international climate targets and challenges. For example, the 2011 Bonn Challenge is a global project aimed at restoring 350 million hectares by 2030. As of 2020, 61 nations had pledged to restore a total of 210 million hectares of their lands. The potential carbon impact of the stated pledges, however, varies widely depending on the specific restoration plans.
Levels of protection
The Bonn Challenge aims to globally reforest 350 million hectares of land. Allowing all to regrow naturally would sequester 42 gigatons of carbon by 2100. Pledges of 43 tropical and subtropical nations that joined by 2019 — a mix of plantations and natural regrowth — would sequester 16 gigatons of carbon. If some of the land is later converted to biofuel plantations, sequestration is 3 gigatons. With only plantations, carbon storage is 1 gigaton.
Amount of carbon sequestered by 2100 in four Bonn Challenge scenarios
SOURCE: S.L. LEWIS ET AL/NATURE 2019; graphs: T. Tibbitts
In a 2019 study in Nature, Lewis and his colleagues estimated that if all 350 million hectares were allowed to regrow natural forest, those lands would sequester about 42 billion metric tons (gigatons in chart above) of carbon by 2100. Conversely, if the land were to be filled with single-tree commercial crop plantations, carbon storage drops to about 1 billion metric tons. And right now, plantations make up a majority of the restoration plans submitted under the Bonn Challenge.
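The gap between those scenarios is easier to grasp as per-hectare averages. A back-of-envelope sketch, treating each scenario as if it covered the full 350 million hectares (as in the chart above):

```python
# Bonn Challenge scenarios from Lewis et al. (Nature, 2019): carbon sequestered
# by 2100 if all pledged land followed each strategy.
AREA_HA = 350e6  # hectares targeted by the Bonn Challenge

scenarios_gt_c = {                    # gigatons of carbon by 2100
    "all natural regrowth": 42.0,
    "2019 pledge mix": 16.0,
    "partial biofuel conversion": 3.0,
    "all plantations": 1.0,
}

for name, gigatons in scenarios_gt_c.items():
    per_ha = gigatons * 1e9 / AREA_HA   # tons of carbon per hectare
    print(f"{name}: {per_ha:.0f} t C/ha")
```

Natural regrowth averages about 120 tons of carbon per hectare over the century, versus roughly 3 for pure plantations — a fortyfold difference from the same land area.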
Striking the right balance between offering incentives to landowners to participate while also placing certain restrictions remains a tricky and long-standing challenge, not just for combating the climate emergency but also for trying to preserve biodiversity (SN: 8/1/20, p. 18). Since 1974, Chile, for example, has been encouraging private landowners to plant trees through subsidies. But landowners are allowed to use these subsidies to replace native forestlands with profitable plantations. As a result, Chile’s new plantings not only didn’t increase carbon storage, they also accelerated biodiversity losses, researchers reported in the September 2020 Nature Sustainability.
The reality is that plantations are a necessary part of initiatives like the Bonn Challenge, because they make landscape restoration economically viable for many nations, Lewis says. “Plantations can play a part, and so can agroforestry as well as areas of more natural forest,” he says. “It’s important to remember that landscapes provide a whole host of services and products to people who live there.”
But he and others advocate for increasing the proportion of forestation that is naturally regenerated. “I’d like to see more attention on that,” says Robin Chazdon, a forest ecologist affiliated with the University of the Sunshine Coast in Australia as well as with the World Resources Institute. Naturally regenerated forests could be allowed to grow in buffer regions between farms, creating connecting green corridors that could also help preserve biodiversity, she says. And “it’s certainly a lot less expensive to let nature do the work,” Chazdon says.
Indeed, massive tree-planting projects may also be stymied by pipeline and workforce issues. Take seeds: In the United States, nurseries produce about 1.3 billion seedlings per year, Fargione and colleagues calculated in a study reported February 4 in Frontiers in Forests and Global Change. To support a massive tree-planting initiative, U.S. nurseries would need to at least double that number.
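The nursery math matters because not every seedling survives. A hypothetical sketch — only the 1.3 billion figure comes from the Fargione study; the planting target and survival rate are assumptions for illustration:

```python
# Back-of-envelope nursery capacity estimate.
current_capacity = 1.3e9          # U.S. seedlings/year (Fargione et al. 2021)
target_surviving_trees = 2.0e9    # hypothetical annual goal of established trees
survival_rate = 0.8               # hypothetical fraction surviving to maturity

# To end up with the target number of trees, nurseries must grow extra
# seedlings to cover expected losses after planting.
seedlings_needed = target_surviving_trees / survival_rate   # 2.5 billion/year
scale_up = seedlings_needed / current_capacity              # ~1.9x current output
print(f"Need {seedlings_needed / 1e9:.1f}B seedlings/yr, {scale_up:.1f}x capacity")
```

Under these assumptions the required output is nearly double today’s capacity — and lower survival rates, like those seen in some national campaigns, push the multiple higher still.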
A tree-planting report card
From China to Turkey, countries around the world have launched enthusiastic national tree-planting efforts. And many of them have become cautionary tales.
China kicked off a campaign in 1978 to push back the encroaching Gobi Desert, which has become the fastest-growing desert on Earth due to a combination of mass deforestation and overgrazing, exacerbated by high winds that drive erosion. China’s Three-North Shelter Forest Program, nicknamed the Great Green Wall, aims to plant a band of trees stretching 4,500 kilometers across the northern part of the country. The campaign has involved millions of seeds dropped from airplanes and millions more seedlings planted by hand. But a 2011 analysis suggested that up to 85 percent of the plantings had failed because the nonnative species chosen couldn’t survive in the arid environments they were plopped into.
More recently, Turkey launched its own reforestation effort. On November 11, 2019, National Forestation Day, volunteers across the country planted 11 million trees at more than 2,000 sites. In Turkey’s Çorum province, 303,150 saplings were planted in a single hour, setting a new world record.
Within three months, however, up to 90 percent of the new saplings inspected by Turkey’s agriculture and forestry trade union were dead, according to the union’s president, Şükrü Durmuş, speaking to the Guardian (Turkey’s minister of agriculture and forestry denied that this was true). The saplings, Durmuş said, died due to a combination of insufficient water and because they were planted at the wrong time of year, and not by experts.
Some smaller-scale efforts also appear to be failing, though less spectacularly. Tree planting has been ongoing for decades in the Kangra district of Himachal Pradesh in northern India, says Eric Coleman, a political scientist at Florida State University in Tallahassee, who’s been studying the outcomes. The aim is to increase the density of the local forests and provide additional forest benefits for communities nearby, such as wood for fuel and fodder for grazing animals. How much money was spent isn’t known, Coleman says, because there aren’t records of how much was paid for seeds. “But I imagine it was in the millions and millions of dollars.”
Coleman and his colleagues analyzed satellite images and interviewed members of the local communities. They found that the tree planting had very little impact one way or the other. Forest density didn’t change much, and the surveys suggested that few households were gaining benefits from the planted forests, such as gathering wood for fuel, grazing animals or collecting fodder.
But massive tree-planting efforts don’t have to fail. “It’s easy to point to examples of large-scale reforestation efforts that weren’t using the right tree stock, or adequately trained workforces, or didn’t have enough investment in … postplanting treatments and care,” Fargione says. “We … need to learn from those efforts.”
Speak for the trees
Forester Lalisa Duguma of World Agroforestry in Nairobi, Kenya, and colleagues explored some of the reasons for the very high failure rates of these projects in a working paper in 2020. “Every year there are billions of dollars invested [in tree planting], but forest cover is not increasing,” Duguma says. “Where are those resources going?”
In 2019, Duguma raised this question at the World Congress on Agroforestry in Montpellier, France. He asked the audience of scientists and conservationists: “How many of you have ever planted a tree seedling?” To those who raised their hands, he asked, “Have they grown?”
Some respondents acknowledged that they weren’t sure. “Very good! That’s what I wanted,” he told them. “We invest a lot in tree plantings, but we are not sure what happens after that.”
It comes down to a deceptively simple but “really fundamental” point, Duguma says. “The narrative has to change — from tree planting to tree growing.”
The good news is that this point has begun to percolate through the conservationist world, he says. To have any hope of success, restoration projects need to consider the best times of year to plant seeds, which seeds to plant and where, who will care for the seedlings as they grow into trees, how that growth will be monitored, and how to balance the economic and environmental needs of people in developing countries where the trees might be planted.
“That is where we need to capture the voice of the people,” Duguma says. “From the beginning.”
Even as the enthusiasm for tree planting takes root in the policy world, there’s a growing awareness among researchers and conservationists that local community engagement must be built into these plans; it’s indispensable to their success.
“It will be almost impossible to meet these targets we all care so much about unless small farmers and communities benefit more from trees,” as David Kaimowitz of the United Nations’ Food and Agriculture Organization wrote March 19 in a blog post for the London-based nonprofit International Institute for Environment and Development.
For one thing, farmers and villagers managing the land need incentives to care for the plantings and that includes having clear rights to the trees’ benefits, such as food or thatching or grazing. “People who have insecure land tenure don’t plant trees,” Fleischman says.
The old cliché — think globally, act locally — may offer the best path forward for conservationists and researchers trying to balance so many different needs and still address climate change.
“There are a host of sociologically and biologically informed approaches to conservation and restoration that … have virtually nothing to do with tree planting,” Veldman says. “An effective global restoration agenda needs to encompass the diversity of Earth’s ecosystems and the people who use them.”
Three major discoveries made in recent days force us to rethink the origins of humanity
Three discoveries in recent days have just changed what we knew about the origin of the human race and of our own species, Homo sapiens. Perhaps, some specialists say, we will need to abandon that concept when referring to ourselves, because the new findings suggest we are a Frankenstein creature assembled with parts of other human species with which, not so long ago, we shared the planet, sex and children.
The discoveries of the past week indicate that some 200,000 years ago there were up to eight different human species or groups. All belonged to the genus Homo, which includes us. The newcomers display an intriguing mix of primitive traits (huge brow ridges, a flattened head) and modern ones. China's "Dragon Man" had a cranial capacity as large as that of present-day humans, or even larger. The Nesher Ramla Homo, found in Israel, may have given rise to the Neanderthals and the Denisovans, who occupied Europe and Asia respectively, and with whom our species had repeated sexual encounters that produced mixed children, accepted into their respective tribes as one of their own.
We now know that, because of those crossings, everyone outside Africa carries about 3% Neanderthal DNA, and that the inhabitants of Tibet carry genes passed on by the Denisovans that allow them to live at high altitudes. Something far more unsettling emerged from the genetic analysis of present-day populations of New Guinea: the Denisovans, a sister branch of the Neanderthals, may have survived until just 15,000 years ago, a very short distance in evolutionary terms.
The third major discovery of recent days is almost detective work. Analysis of DNA preserved in the soil of Denisova Cave, in Siberia, turned up genetic material from the cave's native humans, the Denisovans, as well as from Neanderthals and from sapiens, in periods so close together that they may even have overlapped. Three years ago, the remains of the first known hybrid between human species were found there: a girl whose mother was a Neanderthal and whose father was a Denisovan.
The paleoanthropologist Florent Détroit introduced to science another of these new human species: Homo luzonensis, which lived on an island in the Philippines 67,000 years ago and displays a strange mix of traits that could be the result of its long evolution in isolation over more than a million years. It is somewhat like what happened to its contemporary Homo floresiensis, or "Flores man," a human about a meter and a half tall who lived on an Indonesian island. It had a brain the size of a chimpanzee's, but by the intelligence test paleoanthropologists most often apply, it was as advanced as sapiens, since its stone tools were just as evolved.
To these two island dwellers we can add Homo erectus, the first traveling Homo, which left Africa about two million years ago. It conquered Asia and lived there until at least 100,000 years ago. The eighth passenger in this story would be Homo daliensis, a fossil found in China with a mix of erectus and sapiens traits, although it may end up being folded into the new Homo longi lineage.
"It doesn't surprise me that several human species were alive at the same time," says Détroit. "If we consider the last geological period, which began 2.5 million years ago, there have always been different genera and species of hominids sharing the planet. The great exception is the present; there had never been just one human species on Earth," he acknowledges. Why are we, sapiens, the only survivors?
For Juan Luis Arsuaga, a paleoanthropologist at the Atapuerca archaeological site in northern Spain, the answer is that "we are a hypersocial species, the only one capable of building bonds beyond kinship, unlike all other mammals." "We share consensual fictions such as homeland, religion, language, soccer teams; and we go so far as to sacrifice a great deal for them," he notes. Not even the human species closest to us, the Neanderthals, who created ornaments, symbols and art, behaved this way. Arsuaga sums it up: "The Neanderthals had no flag." For reasons still unknown, that species went extinct about 40,000 years ago.
Sapiens were not "strictly superior" to their congeners, says Antonio Rosas, a paleoanthropologist at the Spanish National Research Council. "We now know that we are the result of hybridizations with other species, and the set of traits we carry happened to be the perfect one for that moment," he explains. One possible additional advantage is that sapiens groups were more numerous than Neanderthal ones, which meant less inbreeding and healthier populations.
Détroit believes part of the explanation lies in the very essence of our species, sapiens, "wise" in Latin. "We have an enormous brain that we must feed, so we need many resources and, therefore, a lot of territory," he notes. "Homo sapiens underwent an enormous demographic expansion, and it is quite possible that the competition for territory was very hard on the other species," he adds.
María Martinón-Torres, director of the National Research Center on Human Evolution, based in Burgos, believes the secret is "hyperadaptability." "Ours is an invasive species, not necessarily ill-intentioned, but we are something like the Attila's horse of evolution," she says. "Wherever we pass, with our way of life, biological diversity declines, including human diversity. We are one of the most impactful ecological forces on the planet, and that history, ours, began to take shape in the Pleistocene [the period that began 2.5 million years ago and ended about 10,000 years ago, by which time sapiens was the only human species left on the planet]," she adds.
The discoveries of recent days once again expose a growing problem: scientists keep naming more and more human species. Does it make sense to do so? For the Israeli paleoanthropologist Israel Hershkovitz, discoverer of the Nesher Ramla Homo, it does not. "There are too many species," he says. "The classic definition holds that two different species cannot have fertile offspring. DNA tells us that sapiens, Neanderthals and Denisovans did, so they should be considered the same species," he points out.
"If we are sapiens, then those species that are our ancestors through interbreeding are too," adds João Zilhão, professor at the Catalan Institution for Research and Advanced Studies at the University of Barcelona.
The question divides specialists. "Hybridization is very common among living species, especially in the plant world," recalls José María Bermúdez de Castro, co-director of the research at Atapuerca. "The concept of species can be nuanced, but I don't think we can abandon it, because it is very useful for making ourselves understood," he stresses.
Many nuances come into play here. The evident difference between sapiens and Neanderthals is not the same thing as the species identity of Homo luzonensis, of which we know only a few bones and teeth, or of the Denisovans, for whom most of our information comes from DNA extracted from tiny fossils.
"Curiously, despite the frequent interbreeding, both sapiens and Neanderthals remained perfectly recognizable and distinguishable species until the end," Martinón-Torres points out. "The traits of late Neanderthals are more marked than those of earlier ones, rather than having been erased as a consequence of interbreeding. There were biological, and perhaps also cultural, exchanges, but neither species stopped being itself: distinctive, recognizable in its biology, its appearance, its specific adaptations, its ecological niche throughout its evolutionary history. I believe this is the best example that hybridization does not necessarily collide with the concept of species," she concludes. Her colleague Hershkovitz warns that the debate will continue: "We are excavating three other caves in Israel where we have found human fossils that will give us a new perspective on human evolution."
"Diving into the life of plants will allow us to take the steps to decenter ourselves from our vicious anthropocentrism. Getting out of it is an imperative of our time, but to do so we must rediscover that strange proximity and that infinite distance with respect to the alterity of the plant world. To step outside the statute of classification in order to breathe the air that comes from them and returns to them. To reestablish and reinvent the relation," writes Pedro Pablo Achondo Moya, theologian and poet, professor at the Universidad Federico Santa María (Valparaíso, Chile), in an article published by Endémico, 24-06-2021. The Portuguese translation is by Wagner Fernandes de Azevedo.
Here is the article.
What are we before a plant? What is a plant before the human, and what is it capable of transmitting, teaching, communicating to us? Plants have been here since long before us, yet it seems that only in the last few decades (and, in a few cases, centuries) have philosophical thought, anthropological reflection, and ecology in its relation to the human begun to take them seriously. Today we speak of networks and bonds, of entanglements and meshes. The non-human, and the alterity of plants in particular, is gaining ever more prominence when it comes to understanding the world and interpreting the relationships we establish with everything around us. After all: shouldn't we begin to dialogue with them and forge new alliances and bonds?
In this brief reflection I would like to share how this other, this non-human alterity, can today be one of the gears for rethinking life in its totality and its diverse networks. It is worth returning to this and giving it the space it deserves in these times of multidimensional crisis, in which, as a people, we are contemplating a constituent process. I will do so by alluding to a few authors who position themselves from this place, that of plants and of alterity.
Plants, according to the Canadian philosopher Michael Marder, who was recently interviewed in Endémico, have been strangely marginalized in thought. Philosophy, like other strands of Western thinking, kept them at a conceptual margin, unlike animals and other co-inhabitants of the territory. Plants were barely considered when it came to interpreting the world or, at least, knowing it better. This, however, has been changing profoundly in recent decades. At the beginning of his work "I and Thou," the Jewish philosopher Martin Buber, considered one of the precursors of the philosophy of alterity (which would later reach one of its high points with Emmanuel Levinas), tells us: "I contemplate a tree. I can register it as an image [...] I can perceive it as movement [...] I can classify it as a species and observe it..." He goes on to show that, in any of these cases and others, the tree remains an object, his object: of analysis, of contemplation, of classification. However (and here begins the turn proper to the philosophy of alterity), it is possible "that I, in contemplating the tree, by my own grace and will, find myself drawn into relation with it, and then the tree is no longer an It" (2013, p. 14).
This is the founding act of the relation. The tree, the plant, the bush: this other only now begins to appear. It ceases to be a humanized object, an It domesticated by the human gaze, human thinking, human speech. This is no trivial or simple matter, for our very categories of thought have been domesticated by beliefs, languages, ideas and concepts. We were formed from within certain ontologies. They inhabit us, and from them we know and understand the world. It is not far-fetched to say that the climate collapse, the aberrations in matters of eco-social justice and the structural immobility that keeps us from changing politically and ethically all have something to do with this. We see as we were taught to see, and it seems impossible to keep learning to look in other ways, from others and from the other.
And Buber continues: "the tree is not an impression, not a play of my imagination, not a value that depends on my mood; rather, it exists over against me and has to do with me, as I with it, only in a different way" (2013, p. 15). This almost intuitive reflection has extraordinary force. That other, there before me, sees me as an other. One cannot help thinking of the Amerindian Perspectivism of the anthropologist Eduardo Viveiros de Castro: not everything looks the way humans look.
Alterity, thinking about alterity and from alterity, places us in a position of both strangeness and familiarity. There is something that unites us, that binds us to that other, that plant, that forest. But at the same time there is a strangeness, an infinite distance (Levinas would say), an absolute difference. It was Stefano Mancuso, a renowned plant neurobiologist, who was struck by the fact that human fictions about possible extraterrestrial beings are always more or less humanoid: with arms, some organs for seeing, a kind of mouth, or at least something resembling a head. His astonishment lay in that "human" projection onto beings from other worlds when, if anything truly "extraterrestrial" exists among us, it is plants. An alterity in form, appearance, functioning, cycles, processes, adaptability, longevity, genetics. They are the aliens that surround and enchant us. There they are in their variety and diversity, in their mystery and silence. There they are with their other intelligence, their plant communication, their extraordinary metamorphosis (as Goethe noticed in his observations of plants: "forward or backward, the plant is always a leaf" (2015, p. 117)) and their tricks and seductions for conquering and surviving.
In "The Life of Plants," the Italian philosopher Emanuele Coccia dives remarkably into the life of this plant alterity. For him, to question plants is to know the world, because they are its builders. They generate their own world: "everything they touch they turn into life; out of matter, air, sunlight, they make what for all other living beings will be a space to inhabit, a world" (2017, p. 22). So much so that he goes as far as to say that "from a certain point of view, plants never left the sea: they brought it where it did not exist. They transformed the universe into an immense atmospheric sea and transmitted their marine habits to all beings. Photosynthesis is nothing but the cosmic process of fluidification of the universe" (2017, p. 46).
Diving into the life of plants will allow us to take the steps to decenter ourselves from our vicious anthropocentrism. Getting out of it is an imperative of our time, but to do so we must rediscover that strange proximity and that infinite distance with respect to the alterity of the plant world. To step outside the statute of classification in order to breathe the air that comes from them and returns to them. To reestablish and reinvent the relation. To understand, with the help of science but also of other knowledges and approaches, what Marder calls sub-organic processes and supra-organic assemblages (2016, p. 65): beings that live underground, speaking and communicating, while above they sway their countless limbs and infinity of leaves, generating a plant superorganism. Here Marder himself suggests an interpretation: we too inhabit the micro and the macro; we too generate and are generated in sub- and supra-interrelations. We are individualities, unicities and, at the same time, collectives, swarms, masses, peoples and tribes. We are, and we configure ourselves within, these networks of ours, in which the non-human (plants, in this case) is a fundamental part. We should never forget this.
Recognizing ourselves in this network of alterities will allow for a better human-plant pollination, a fluidity in co-responsibility and mutual fecundation. If it is true "that I come to be in the Thou; turning to you, I say Thou," as Buber states (2013, p. 17), then I can also turn to the Thou of the tree. The plant and its alterity make me who I am; by entering into relation with it and allowing it to appear, really, in its plant being, I let it be who it is. And there, the plant reveals itself to us.
Better to quote the poet Rilke: "If you want to achieve the existence of a tree, / invest it with inner space, that space / which has its being in you. Surround it with constraints. / It has no limits, and only really becomes a tree / when it orders itself within your renunciation." In the renunciation of the I that projects, the I that domesticates, the I that transgresses, that alterity simply appears: it is a tree. Commenting on this text, Gaston Bachelard deepens our reflection by saying: "the tree needs you to give it your superabundant images, nourished by your intimate space, by 'that space which has its being in you.' Then the tree and its dreamer, together, order themselves; they grow. In the world of dreams, the tree is never considered finished" (2000, p. 176). More than complicating matters, he in fact explains the relation that is established, that back-and-forth game between the human I and the Thou of the tree, between the plant I and the human Thou. Between It and Thou. One in the other, trying to "finish," to complete, to comprehend that insurmountable alterity that escapes us. An alterity that in some way inhabits us. The Thou of the plant is never entirely unknown, for it has part of its being in me. Well then: might it not be that the plant, at the same time, holds in its being a part of the human? In this way the relation becomes possible.
Changing the course of anthropocentrism, and of the rupture or denial of the relation, is a duty if we are, as a people of peoples, as a human-plant nation or, better still, as a territory in co-construction and dispute, to arrive at something like an Eco-social Constitution, one that would then allow and open processes for generating new alliances and territorial proposals. The Ecological Constitution we hope for (and the interesting work of the Convention in citizen dialogue) has a double task: to change anthropocentric language and to ensure that the alterity of plants (and of all that is other-than-human) is recognized, reformulated and manifested as a transformative power and a matrix of knowledge, like a seed opening up.
Bachelard, Gaston. (2000). La poética del espacio. Buenos Aires: FCE.
Buber, Martin. (2013). Yo y Tú. Y otros ensayos. Buenos Aires: Prometeo Libros.
Coccia, Emanuele. (2017). La vida de las plantas. Una metafísica de la mixtura. Buenos Aires: Miño y Dávila Editorial.
Goethe, J.W. von. (2015). La metamorfosis de las plantas. Barcelona: Editorial Pau de Damasc.
Mancuso, Stefano. (2017). El futuro es vegetal. Barcelona: Galaxia Gutenberg.
Marder, Michael. (2016). Grafts. Writings on plants. Minneapolis: Univocal.
A study coordinated by the agency Purpose recommends including low-income Brazilians in the sustainability debate, and points to arguments and narratives for engaging those who are already living the consequences of climate chaos
Brazil matters strategically in debates about the planet's future because it is the principal legal custodian of important biomes, especially the Amazon rainforest. But there is a second leading role in the Brazilian case: the country has already been hit by a long-lasting climate event, so Brazilians already know the consequences of this phenomenon.
The Brazil of the early 21st century is the likely landscape of the world in the century ahead, when climate phenomena force the uprooting and displacement of large populations toward the peripheries of cities. In the conservative scenario presented in "The Uninhabitable Earth" (published in Brazil by Companhia das Letras, 2019), the journalist David Wallace-Wells lays out scenarios in which climate change will displace between 600 million and 2 billion refugees by the end of the century.
The climate as a topic for intellectuals
If the Earth were a car, scientists would be the headlights, showing that we are speeding toward a cliff. The surprise is that the driver and the passengers (political and business leaders, and society at large) are not reacting as they should, given that the coming catastrophe will affect everyone.
A recent survey coordinated by the agency Purpose and conducted by Behup, a research startup, suggests that poor Brazilians can be mobilized to act in defense of sustainability. They make up more than half of the country's population and know firsthand what the world of the future will look like, because they are already living the effects of the climate catastrophe. But for this to work, we have to start from their references and their experiences of the subject.
A real, present-day, economic problem
Listed below are some insights into how less privileged populations perceive and talk about sustainability.
1) Real and present. In scientific debates about global warming, the consequences will arrive at some point in the future; in working-class Brazil, the problem is palpable and happening today. Flooding, caused by heavier or more irregular rainfall, is the most evident way climate chaos shows itself to these people. And it points to two real problems: the lack of stormwater drainage infrastructure and the lack of regular garbage collection services. Another problem is caused by dry weather, which aggravates respiratory illnesses: relatively easy for medicine to treat, but complicated for those who depend on public health care.
2) Garbage makes the problem tangible. When the urban poor talk about sustainability, the first association is with garbage: garbage collected irregularly by public services in the peripheries and often dumped in the streets; the candy wrapper thrown on the ground; garbage put out at the wrong time and scattered across the street by cats, dogs and other animals. Garbage materializes the issue for those who see it pile up: irregular waste collection, torn garbage bags strewn about by animals, the experience of living in dirty places neglected by those in power, garbage accumulating in unlit spaces that end up occupied by muggers or drug dealers.
3) A metaphor for predatory consumerism. Garbage represents, or serves as a metaphor for, a consumer society that discards what is still useful. Garbage may be something "naturalized" for the urban middle and upper classes, but this is less clear for those who come from a logic of reuse: organic waste feeds the animals, the tin can becomes an oil lamp, the PET bottle has a thousand and one uses. For some respondents in the study, it is morally uncomfortable to discard what can still be reused. Classifying something as "garbage" is a decision, a choice, revealing a perception of waste and of shared responsibility for caring for the place where one lives.
4) Being sustainable is being economical. We usually hear about protecting the environment as something with an altruistic motivation: "safeguarding the future of the children, of the forests," and so on. But that abstraction is not a priority for someone living in vulnerable conditions and worrying about what will happen tomorrow: where food, work and medicine will come from; how to stay safe from crime; what to do about the closed school, for example. For this Brazilian, sustainability is a good deed that brings economic advantage. Plastics and cans can become utensils and toys. Using LED bulbs and controlling water use cuts expenses. Tires, bricks and other demolition materials are cheaper for anyone who wants to build. And finally there is the question of work: collecting recyclable waste is a source of income for those who have no other.
Sustainability is usually debated in intellectualized circles among middle- and upper-class Brazilians. Poor Brazilians are not invited into the conversation, owing to the prejudice that equates low schooling with an inability to think about and understand the world. But in a world with far more poor people than rich ones, this discussion will grow stronger if it engages the thousands of people, in Brazil and around the world, who are already living the consequences of climate chaos.
A case study
In early June, that is, shortly after I wrote this article, I received via WhatsApp the video included below, made by the activist Duda Salabert, a city councilor in Belo Horizonte, about the installation of a mine by the company Tamisa in Serra do Curral, near the Minas Gerais state capital. The video argues that the mining will affect the springs that supply the city's water and will raise dust, causing respiratory problems for the population of Belo Horizonte, particularly for a community/neighborhood called Taquaril, three kilometers from where the project would be built if approved.
The video makes a convincing argument, with drone footage that conveys the distances between the places shown. I was mobilized, so I stopped what I was doing and forwarded the video to... environmentalist friends of mine, apologizing in advance on the assumption that they probably already knew about the situation or had received the video from someone else. But as I wrote the messages, I noticed how the video's argument, in light of what I wrote above, is built to circulate among middle- and upper-class people, mostly well educated and identified with progressive values.
Councilor Duda, at one point in the video, points to the Taquaril community/neighborhood and says that its residents were not consulted yet will directly suffer the environmental impacts of the mining. For the councilor, this attitude constitutes a case of "environmental racism." The argument is convincing and probably sounds "natural" to readers of EL PAÍS, but talking this way:
Equates this poor neighborhood with the natural resource, suggesting passivity on the part of its residents, as if they lacked the capacity, because of little schooling and adverse economic circumstances, to take part in the debate.
In doing so, the video's creators commit the very mistake they are denouncing: failing to involve the residents in the debate.
Debate with the residents of Taquaril, visit the neighborhood, talk to community leaders. But also listen to how ordinary people like them perceive the mining project, including considering the possibility that the mine would open job opportunities for many of these families. And, out of that interested, attentive and sustained conversation, one that seeks to understand the problem from these people's point of view, dialogue with them about the issue, as this article proposes.
The environmental movement is realizing that it needs to dialogue with other audiences if it wants, more than simply being right, to be effective and produce the results that will mitigate climate chaos. The Serra do Curral case in Belo Horizonte shows how urgent this reflection is: if this change of attitude does not happen for a problem unfolding so close to a big city, how will we act on what happens in the country's remote corners?
Juliano Spyer is a digital anthropologist, writer and educator. He holds a master's degree and a doctorate from University College London and is the author of "Povo de Deus: Quem são os evangélicos e por que eles importam" (Geração Editorial), among other books. This text was originally published here.
Primatologist Frans de Waal talks about the intelligence and emotions of apes
The encounter between an elderly chimpanzee, days before her death, and her lifelong friend, an equally aged scientist, is an unforgettable scene: the radiant joy of Mama, 59, as she embraces the primatologist Jan van Hooff, by then in his eighties, is a gesture recognizable to millions of YouTube viewers in every corner of the planet.
The essayist Frans de Waal, author of best sellers such as "The Age of Empathy" and other studies of the behavior and emotions of apes, used the scene as the inspiration and title for his new book, "Mama's Last Hug" (published in Brazil as "O Último Abraço da Matriarca," Zahar, 452 pages).
De Waal was a student of Van Hooff's and knew Mama very well, having studied and followed her over half a century of animal behavior research.
As in his other books, the content is a constant dialogue between animal behavior and that of humans. Chimpanzees and bonobos, which he describes as our closest "relatives," are used to illuminate human behavior and to highlight the traits we have lost or forgotten along the evolutionary road.
Some of them are essential, highly topical qualities, such as tolerance toward individuals who behave differently.
In this interview, he reveals that his next book will deal with gender in primate societies. And he anticipates one conclusion: "I think we humans can learn a lot about tolerance from them."
The magazine National Geographic recently ran a cover about chimpanzees whose headline was "Sapiens?", question mark included. Do you believe the great apes are sapiens? They are very intelligent, and we humans pride ourselves on our intelligence too. But the more we have studied and learned about chimpanzees over the past 25 years, the more we have found manifestations of the same kind of intelligence. For example, chimpanzees are capable of thinking ahead: they can think about the future and plan for it. They also think about the past and remember specific past events. They test things, create tools, and can recognize themselves in a mirror. So there are many signs that they have a high level of intelligence, which sets them apart from other animals.
In your books, you describe various rituals and forms of conflict mediation among chimpanzees, such as grooming after a fight. In what similar ways do humans do this? For example, after a fight chimpanzees kiss and embrace. Usually, within about ten minutes they approach each other, make some contact, and then do affectionate things like grooming. We humans are normally less physical: we apologize, say something, or do something kind, such as bringing a coffee, as a form of reconciliation. Of course, within a family it can also have a physical dimension; it can even be sexual, as happens in certain primate species. And hugging and kissing after conflict are behaviors humans share as well.
So what is the main difference between humans and the other primates? There are many similarities between the basics of human intelligence and that of these animals. There is one area where we differ, which is language. Of course apes communicate, as other animals do; they have signals they make to one another. But symbolic communication, which can develop, change, and vary (after all, humans have so many different languages), is a uniquely human property. And it is a very important capacity, because we can communicate with people at a distance, as we are doing now, about things that are neither here nor there; that is impossible for other animals.
Thinking of the gorilla Koko, who mastered sign language and used it to communicate with humans, would you say she had a human command of language? No, I would not say that. Look, there are many apes today trained to understand sign languages and hand gestures, including symbolic communication. But the results are really disappointing. They can do some things, they can learn a hundred or so symbols, but communication with them remains very limited. It is more limited than what you can have with a child of about two years old. So language experiments with apes are no longer very popular, because they have not produced good results.
Suppose a human couple has a child and at the same moment adopts a baby chimpanzee, deciding to raise the two together as siblings. Up to what point would their development be identical? That is an interesting question, because people have tried it. In the 1950s and 1960s there were families that tried to raise their children alongside baby chimpanzees. The curious thing is that these projects were halted because the human children began imitating the apes, rather than the other way around. The children started behaving like chimpanzees, jumping up and down and grunting like apes, so the programs were stopped. But baby chimpanzees, when raised in a human family, do many of the same things: they watch television, they like playing games. Sometimes they act outside human rules, climbing the curtains or getting up on the roof, things people do not like at all. But in general, when they are young, they behave like children and play like children.
Is it correct to say that only humans kill for reasons such as revenge, hatred, resentment, ambition, envy, and other motives unconnected to food or the survival instinct? I believe that is true, because chimpanzees are very aggressive animals and can sometimes kill one another over power, for example in disputes over leadership of the group, or over territory, when they defend their territories against others. We have another close relative, the bonobo. Bonobos are as close to us as chimpanzees are. They are much friendlier and not as aggressive. But there are primate species that kill over matters other than food, survival, or things like that.
I understand that chimpanzees tend to resolve their conflicts by fighting, while bonobos practice a diplomacy based more on sexuality and affection. Would you say humans have a more developed chimpanzee side, or do we carry traits of both relatives, of both tendencies? We have both sides: we can be erotic and sexual like bonobos, but we can also turn violent like chimpanzees. Among chimpanzees, males are dominant, while bonobos are dominated by females. That is why some people say we are more like chimpanzees. I am not so sure; I believe we have much of the bonobos' empathy and sexuality. So I think we are a mixture of the two species. Beyond that, we have our own evolution, human evolution, which has been under way for a very long time. We have developed new things, such as language and the family model of father, mother, and children. We see that in no other primate.
In your books you show that apes can read the body language of others far better than we humans can. Do you believe that the dominance of verbal language has eroded our ability to read the expressions of the body? It is an interesting question: we humans rely so much on verbal language, and pay so much attention to what a person says, that we often forget how sensitive we are to things like facial expression, tone of voice, and the body. We are in fact very good at reading body language, but we often forget it. For example, when I watch debates between politicians on TV, I often turn off the sound. I do not want to hear what they say, because they are always lying; I just want to see their body language, which is much more informative than verbal language.
And from watching him, would you say Donald Trump is an alpha male, that he behaves like a chimpanzee leader? The problem with that is that I used the expression "alpha male" to describe male chimpanzees, and many of the "alpha males" I know are good leaders: they keep the group together, they unite the parties when they split, they ensure that order is preserved in the society, they have empathy for others. These are qualities many leaders in the human world lack. We sometimes call them "alpha" because they are dominant, they command the political scene, but they do not act like "alpha males" in terms of leadership. Leadership, and this also applies to women, who can be leaders too, means bringing the parties together, keeping them united, and preserving order in the society, and not all "alpha males" are good at that.
Your books usually deal with animal emotions and their relation to human emotions and behavior. How much can we learn from apes, and thereby obtain better behavior in our society? My books do not say how to organize a human society, because I am talking about bonobos, chimpanzees, and other primates. I do not feel we can draw lessons directly from that. But what I can say is that human psychology is very old. We tend to think we invented everything. We did indeed invent many technological things: the cell phone, the airplane, and so on. But our behavior and our psychology are very old. So the message of my books is that many of the tendencies we have are ancestral; they are like those of the primates. And it is in that sense that we can learn from primates. We can learn that in their communities they resolve conflicts, that they are very good at reconciling afterward, at sharing food. These are things we can learn from the animals.
Your book "The Age of Empathy" left me with the impression that you want to empower the bonobo side we humans carry within us. Am I right? Empathy is a very old trait of mammals. Many mammals have empathy; your dog has empathy. Scientists have run experiments: they asked the adults in a family to cry, to observe how the dogs and the children would react. Both react by trying to approach the person who is crying, to console and comfort them. That is an empathic attitude we can observe in all mammals. We humans have an enormous capacity for empathy, but we sometimes forget it. Especially with strangers, with people outside our circle, we sometimes fail to show that kind of empathy.
Speaking of the scene that gives your book its title, the final embrace between the chimpanzee Mama and the scientist she had known all her life: did she know she was dying, that she would die within two weeks? Do chimpanzees face death? In that scene, my professor, Jan van Hooff, at eighty years old, approached the chimpanzee Mama, who was 59 and dying. He went into her cage; she lived in a large area with a big group of chimpanzees but slept in a cage. He entered the cage, something we never, ever do, because apes are much stronger than we are. But he did it because she was dying. And she greeted him with an embrace. He knew she was going to die; she was very weak, and we knew her very well. And she welcomed him at once and embraced him. Professor Van Hooff went in knowing she was dying, but we do not know whether she knew she was going to die. We do not know whether animals have a sense of mortality. She evidently knew she was weak, but we cannot say she was conscious of death. The encounter was an opportunity for the professor to say goodbye to her; we do not know whether she saw that moment the same way. The reason I took that encounter for the title of the book is that the moment, besides moving people deeply, leaves us very surprised: how similar their gestures are to human gestures, how similar their expressions are to human ones. And that reaction from people surprised me. We have been saying for about 50 years that bonobos and chimpanzees are very close to human beings; so why are people still surprised by their emotions and their human-like expressions? That is why I decided to take that scene to explain that all the facial expressions we humans have, as well as all the emotions we have, can be found in our close relatives, the primates.
In your book you tell the story of a chimpanzee mother whose infant dies and who keeps carrying its body for a long period. Did she think he was alive, or was she pretending he was alive? This happens frequently. The bonds between mother and child are very strong. So when the infant dies, the mothers do not abandon them. This is true of humans, of orcas and dolphins, and it occurs among primates. The mothers carry the bodies of their dead babies with them. I think for them it is a way of staying in contact with them. I do believe they know their infants have died; they know the infant is dead, and even so they want to keep them close. I think this is due to the extremely strong bonds between them, and it is a way of making the process of separation gradual.
Could we say that humans show this with photos and other objects? Among humans, we expect a mother, when her child dies, to part with the body. But many mothers have a tendency to hold on, and they probably express this by keeping memories alive. It is never a complete separation. When we lose someone, we never separate from them completely.
You have a book not yet published in Brazil whose title is a question: "Are We Smart Enough to Know How Smart Animals Are?" (2016). What is your answer: are we? There was a long period in animal intelligence research during which we humans presented very simple challenges to animals. For instance, we would put a rat in a box, and the rat had to press a lever over and over to receive rewards, and that was how we tested its intelligence. But the rat is a much more intelligent animal than that; it can do far more than press a lever. So we have not been very smart in the way we test animal intelligence. Especially with apes, elephants, and dolphins, those highly intelligent animals, we should not subject them to simple tests; we should design tests appropriate to their capacities. Sometimes that is very difficult; for example, an elephant's sense of smell is a hundred times better than a dog's, which is a hundred times better than ours. So we have to devise tests that challenge the elephant's sense of smell, but that is very hard for us to create, because we are a very visual species. It is complicated for humans to work at the level of these animals' capacities.
Does being visual and verbal diminish the other dimensions of our intelligence? Yes. For example, the echolocation of bats, which allows them to fly in the dark and catch insects, is a very complex capacity, but we humans are not very interested in it. We are interested in tool use and in languages, because we are very good at those. The things bats do not interest us much, because we lack those capacities. We humans are very anthropocentric; we have a human bias, and we admire how intelligent we are. So we research tool use and the languages of other animals, because those are things we are good at.
Common sense, shaped by the influence of religion, holds that language is a monopoly of man, a gift granted to man alone. Would you say that in the next 25 years we may see surprises in this field, regarding the communicative capacity of other living beings? Animals have been surprising us over the past 25 years. In all sorts of domains, all the studies have demonstrated this. And there are animals with very complex forms of communication, even if they are not like our language but of different kinds. For example, dolphins produce many sounds underwater that we humans have difficulty hearing, but that we can hear and record with sensors, and they reveal complex communication. And who can understand what is going on there? So yes, I believe we will be surprised by the discoveries we will make about the sophistication of other animals' communication, which may not be exactly like human language but may be very complex. So I do not believe we are the only animals capable of communicating complicated things to one another.
You have a very popular video on YouTube showing a monkey that gets angry after receiving a worse reward than another individual for performing the same task. Is fighting for fairness a primate trait before it is a human one? In that video there are two capuchin monkeys, a species found in Brazil; one receives raisins for performing the task and the other receives pieces of cut cucumber. Normally, if you give cucumber to both monkeys, they will be perfectly happy. But if you give raisins to one and cucumber to the other, the one that gets the cucumber will become very angry. We call this inequity aversion, but you could call it a sense of fairness. They are sensitive to what they receive for what they do, compared with what someone else receives. I believe this is the root of the sense of justice in human society. We too get angry if someone receives higher pay for the same work.
Are you already working on a new book? Yes, I am working on a book about gender, the differences between the sexes. In all primates we see differences, as in human societies. I am studying that.
Are there other primate species in which more than two genders can be found? Yes, there are always individuals in primate societies who are different from the others. For example, females that act more like males, or males that act more like females; there are also individuals who fit neither of those stereotypes. So, indeed, the kinds of differences we observe in human society also appear in other animals.
So can we also learn from the other primates about respect for transgender people? I have also written about homosexuality among primates. What is most interesting to me is that they tolerate any behavior, without any problem. They make no fuss about the matter; it is not an important issue. If there is an individual in a society who does not behave like the other males of the group, nobody is bothered by it. I believe we humans can learn a lot about tolerance from them, yes.
Fifty years of patient advocacy, including the shocking discovery of mass graves at Kamloops, have secured once-unthinkable gains.
June 17, 2021
When an Indigenous community in Canada announced recently that it had discovered a mass burial site with the remains of 215 children, the location rang with significance.
Not just because it was on the grounds of a now-shuttered Indian Residential School, whose forcible assimilation of Indigenous children a 2015 truth and reconciliation report called “a key component of a Canadian government policy of cultural genocide.”
That school is in Kamloops, a city in British Columbia from which, 52 years ago, Indigenous leaders started a global campaign to reverse centuries of colonial eradication and reclaim their status as sovereign nations.
Their effort, waged predominantly in courts and international institutions, has accumulated steady gains ever since, coming further than many realize.
It has brought together groups from the Arctic to Australia. Those from British Columbia, in Canada’s mountainous west, have been at the forefront throughout.
Only two years ago, the provincial government there became the world’s first to adopt into law United Nations guidelines for heightened Indigenous sovereignty. On Wednesday, Canada’s Parliament passed a law, now awaiting a final rubber stamp, to extend those measures nationwide.
It was a stunning victory, decades in the making, that activists are working to repeat in New Zealand — and, perhaps one day, in more recalcitrant Australia, Latin America and even the United States.
“There’s been a lot of movement in the field. It’s happening with different layers of courts, with different legislatures,” said John Borrows, a prominent Canadian legal scholar and a member of the Chippewa of the Nawash Unceded First Nation.
The decades-long push for sovereignty has come with a rise in activism, legal campaigning and historical reckonings like the discovery at Kamloops. All serve the movement’s ultimate aim, which is nothing less than overturning colonial conquests that the world has long accepted as foregone.
No one is sure precisely what that will look like or how long it might take. But advances once considered impossible “are happening now,” Dr. Borrows said, “and in an accelerating way.”
A Generational Campaign
The Indigenous leaders who gathered in 1969 had been galvanized by an array of global changes.
The harshest assimilation policies were rolled back in most countries, but their effects remained visible in everyday life. Extractive and infrastructure megaprojects were provoking whole communities in opposition. The civil rights era was energizing a generation.
But two of the greatest motivators were gestures of ostensible reconciliation.
In 1960, world governments near-unanimously backed a United Nations declaration calling to roll back colonialism. European nations began withdrawing overseas, often under pressure from the Cold War powers.
But the declaration excluded the Americas, Australia and New Zealand, where colonization was seen as too deep-rooted to reverse. It was taken as effectively announcing that there would be no place in the modern world for Indigenous peoples.
Then, at the end of the decade, Canada’s progressive government issued a fateful “white paper” announcing that it would dissolve colonial-era policies, including reserves, and integrate Indigenous peoples as equal citizens. It was offered as emancipation.
Other countries were pursuing similar measures, with the United States’ inauspiciously named “termination policy.”
To the government’s shock, Indigenous groups angrily rejected the proposal. Like the United Nations declaration, it implied that colonial-era conquests were to be accepted as foregone.
Indigenous leaders gathered in Kamloops to organize a response. British Columbia was a logical choice. Colonial governments had never signed treaties with its original inhabitants, unlike in other parts of Canada, giving special weight to their claim to live under illegal foreign occupation.
“It’s really Quebec and British Columbia that have been the two epicenters, going back to the ’70s,” said Jérémie Gilbert, a human rights lawyer who works with Indigenous groups. Traditions of civil resistance run deep in both.
The Kamloops group began what became a campaign to impress upon the world that they were sovereign peoples with the rights of any nation, often by working through the law.
They linked up with others around the world, holding the first meeting of The World Council of Indigenous Peoples on Vancouver Island. Its first leader, George Manuel, had passed through the Kamloops residential school as a child.
The council’s charter implicitly treated countries like Canada and Australia as foreign powers. It began lobbying the United Nations to recognize Indigenous rights.
It was nearly a decade before the United Nations so much as established a working group. Court systems were little faster. But the group’s ambitions were sweeping.
Legal principles like terra nullius — “nobody’s land” — had long served to justify colonialism. The activists sought to overturn these while, in parallel, establishing a body of Indigenous law.
“The courts are very important because it’s part of trying to develop our jurisprudence,” Dr. Borrows said.
The movement secured a series of court victories that, over decades, stitched together a legal claim to the land, not just as its owners but as sovereign nations. One, in Canada, established that the government had an obligation to settle Indigenous claims to territory. In Australia, the high court backed a man who argued that his family’s centuries-long use of their land superseded the government’s colonial-era conquest.
Activists focused especially on Canada, Australia and New Zealand, which each draw on a legal system inherited from Britain. Laws and rulings in one can become precedent in the others, making them easier to present to the broader world as a global norm.
Irene Watson, an Australian scholar of international Indigenous law and First Nations member, described this effort, in a 2016 book, as “the development of international standards” that would pressure governments to address “the intergenerational impact of colonialism, which is a phenomenon that has never ended.”
It might even establish a legal claim to nationhood. But it is the international arena that ultimately confers acceptance on any sovereign state.
Steps Toward Sovereignty
By the mid-1990s, the campaign was building momentum.
The United Nations began drafting a declaration of Indigenous rights. Several countries formally apologized, often alongside promises to settle old claims.
This period of truth and reconciliation was meant to address the past and, by educating the broader public, create support for further advances.
Judicial advances have followed a similar process: yearslong efforts that bring incremental gains. But these add up. Governments face growing legal obligations to defer to Indigenous autonomy.
The United States has lagged. Major court rulings have been fewer. The government apologized only in 2010 for “past ill-conceived policies” against Indigenous people and did not acknowledge direct responsibility. Public pressure for reconciliation has been lighter.
Still, efforts are growing. In 2016, activists physically impeded construction of a North Dakota pipeline whose environmental impact, they said, would infringe on Sioux sovereignty. They later persuaded a federal judge to pause the project.
Latin America has often lagged as well, despite growing activism. Militaries in several countries have targeted Indigenous communities in living memory, leaving governments reluctant to self-incriminate.
In 2007, after 40 years of maneuvering, the United Nations adopted the declaration on Indigenous rights. Only the United States, Australia, New Zealand and Canada opposed, saying it elevated some Indigenous claims above those of other citizens. All four later reversed their positions.
“The Declaration’s right to self-determination is not a unilateral right to secede,” Dr. Claire Charters, a New Zealand Māori legal expert, wrote in a legal journal. However, its recognition of “Indigenous peoples’ collective land rights” could be “persuasive” in court systems, which often treat such documents as proof of an international legal principle.
Few have sought formal independence. But an Australian group’s 2013 declaration, brought to the United Nations and the International Court of Justice, inspired several others to follow. All failed. But, by demonstrating widening legal precedent and grass-roots support, they highlighted that full nationhood is not as unthinkable as it once was.
It may not have seemed like a step in that direction when, in 2019, British Columbia enshrined the U.N. declaration’s terms into provincial law.
But Dr. Borrows called its provisions “quite significant,” including one requiring that the government win affirmative consent from Indigenous communities for policies that affect them. Conservatives and legal scholars have argued it would amount to an Indigenous veto, though Justin Trudeau, Canada’s prime minister, and his liberal government dispute this.
Mr. Trudeau promised to pass a similar law nationally in 2015, but faced objections from energy and resource industries that it would allow Indigenous communities to block projects. He continued trying, and Wednesday’s passage in Parliament all but ensures that Canada will fully adopt the U.N. terms.
Mr. Gilbert said that activists’ current focus is “getting this into the national systems.” Though hardly Indigenous independence, it would bring them closer than any step in generations.
As the past 50 years show, this could help pressure others to follow (New Zealand is considered a prime candidate), paving the way for the next round of gradual but quietly historic advances.
It is why, Mr. Gilbert said, “All the eyes are on Canada.”
The period preceding the emergence of behaviourally modern humans was characterised by dramatic climatic and environmental variability – it is these pressures, occurring over hundreds of thousands of years, that shaped human evolution.
New research published today in the Cambridge Archaeological Journal proposes a new theory of human cognitive evolution entitled ‘Complementary Cognition’, which suggests that in adapting to dramatic environmental and climatic variability our ancestors evolved to specialise in different, but complementary, ways of thinking.
Lead author Dr Helen Taylor, Research Associate at the University of Strathclyde and Affiliated Scholar at the McDonald Institute for Archaeological Research, University of Cambridge, explained: “This system of complementary cognition functions in a way that is similar to evolution at the genetic level but, instead of underlying physical adaptation, may underlie our species’ immense ability to create behavioural, cultural and technological adaptations. It provides insights into the evolution of uniquely human adaptations like language, suggesting that this evolved in concert with specialisation in human cognition.”
The theory of complementary cognition proposes that our species adapts and evolves culturally through a system of collective cognitive search, operating alongside genetic search, which enables phenotypic adaptation (Darwin’s theory of evolution through natural selection can be interpreted as a ‘search’ process), and cognitive search, which enables behavioural adaptation.
Dr Taylor continued, “Each of these search systems is essentially a way of adapting using a mixture of building on and exploiting past solutions and exploring to update them; as a consequence, we see evolution in those solutions over time. This is the first study to explore the notion that individual members of our species are neurocognitively specialised in complementary cognitive search strategies.”
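Dr Taylor's description of search, mostly building on and exploiting past solutions while occasionally exploring new ones, so that solutions evolve over time, can be illustrated with a toy optimisation loop. This is purely an illustrative sketch; the study presents no code, and every name and parameter below is invented for the example:

```python
import random

def adaptive_search(fitness, initial, steps=200, explore_rate=0.2, seed=0):
    """Toy 'search' in the article's sense: mostly exploit (small tweaks to
    the best solution found so far), occasionally explore (jump somewhere
    new). Improvements are kept, so the solution 'evolves' over time."""
    rng = random.Random(seed)
    best = initial
    best_fit = fitness(best)
    for _ in range(steps):
        if rng.random() < explore_rate:
            candidate = rng.uniform(-10.0, 10.0)      # explore: try something new
        else:
            candidate = best + rng.gauss(0.0, 0.5)    # exploit: refine the past solution
        f = fitness(candidate)
        if f > best_fit:                              # keep improvements
            best, best_fit = candidate, f
    return best

# The solution gradually converges toward the optimum of the 'environment'
# (here a fitness peak at x = 3.0).
solution = adaptive_search(lambda x: -(x - 3.0) ** 2, initial=-8.0)
```

The point of the sketch is the division of labour the theory emphasises: exploitation alone gets stuck refining a poor starting point, while exploration alone never consolidates gains; the mixture is what lets the solution track a changing landscape.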
Complementary cognition could lie at the core of explaining the exceptional level of cultural adaptation in our species and provides an explanatory framework for the emergence of language. Language can be viewed as evolving both as a means of facilitating cooperative search and as an inheritance mechanism for sharing the more complex results of complementary cognitive search. Language is viewed as an integral part of the system of complementary cognition.
The theory of complementary cognition brings together observations from disparate disciplines, showing that they can be viewed as various faces of the same underlying phenomenon.
Dr Taylor continued: “For example, a form of cognition currently viewed as a disorder, dyslexia, is shown to be a neurocognitive specialisation whose nature in turn predicts that our species evolved in a highly variable environment. This concurs with the conclusions of many other disciplines including palaeoarchaeological evidence confirming that the crucible of our species’ evolution was highly variable.”
Nick Posford, CEO, British Dyslexia Association said, “As the leading charity for dyslexia, we welcome Dr Helen Taylor’s ground-breaking research on the evolution of complementary cognition. Whilst our current education and work environments are often not designed to make the most of dyslexia-associated thinking, we hope this research provides a starting point for further exploration of the economic, cultural and social benefits the whole of society can gain from the unique abilities of people with dyslexia.”
At the same time, this may also provide insights into understanding the kind of cumulative cultural evolution seen in our species. Specialisation in complementary search strategies and cooperatively adapting would have vastly increased the ability of human groups to produce adaptive knowledge, enabling us to continually adapt to highly variable conditions. But in periods of greater stability and abundance when adaptive knowledge did not become obsolete at such a rate, it would have instead accumulated, and as such Complementary Cognition may also be a key factor in explaining cumulative cultural evolution.
Complementary cognition has enabled us to adapt to different environments, and may be at the heart of our species’ success, enabling us to adapt much faster and more effectively than any other highly complex organism. However, this may also be our species’ greatest vulnerability.
Dr Taylor concluded: “The impact of human activity on the environment is the most pressing and stark example of this. The challenge of collaborating and cooperatively adapting at scale creates many difficulties and we may have unwittingly put in place a number of cultural systems and practices, particularly in education, which are undermining our ability to adapt. These self-imposed limitations disrupt our complementary cognitive search capability and may restrict our capacity to find and act upon innovative and creative solutions.”
“Complementary cognition should be seen as a starting point in exploring a rich area of human evolution and as a valuable tool in helping to create an adaptive and sustainable society. Our species may owe our spectacular technological and cultural achievements to neurocognitive specialisation and cooperative cognitive search, but our adaptive success so far may belie the importance of attaining an equilibrium of approaches. If this system becomes maladjusted, it can quickly lead to equally spectacular failures to adapt – and to survive, it is critical that this system be explored and understood further.”
At the mercy of natural selection since the dawn of life, our ancestors adapted, mated and died, passing on tiny genetic mutations that eventually made humans what we are today.
But evolution isn’t bound strictly to genes anymore, a new study suggests. Instead, human culture may be driving evolution faster than genetic mutations can work.
In this conception, evolution no longer requires genetic mutations that confer a survival advantage to be passed on and become widespread. Instead, learned behaviors passed on through culture are the “mutations” that provide survival advantages.
This so-called cultural evolution may now shape humanity’s fate more strongly than natural selection, the researchers argue.
“When a virus attacks a species, it typically becomes immune to that virus through genetic evolution,” study co-author Zach Wood, a postdoctoral researcher in the School of Biology and Ecology at the University of Maine, told Live Science.
Such evolution works slowly, as those who are more susceptible die off and only those who survive pass on their genes.
But nowadays, humans mostly don’t need to adapt to such threats genetically. Instead, we adapt by developing vaccines and other medical interventions, which are not the results of one person’s work but rather of many people building on the accumulated “mutations” of cultural knowledge.
By developing vaccines, human culture improves its collective “immune system,” said study co-author Tim Waring, an associate professor of social-ecological systems modeling at the University of Maine.
And sometimes, cultural evolution can lead to genetic evolution. “The classic example is lactose tolerance,” Waring told Live Science. “Drinking cow’s milk began as a cultural trait that then drove the [genetic] evolution of a group of humans.”
In that case, cultural change preceded genetic change, not the other way around.
The concept of cultural evolution began with the father of evolution himself, Waring said. Charles Darwin understood that behaviors could evolve and be passed to offspring just as physical traits are, but scientists in his day believed that changes in behaviors were inherited. For example, if a mother had a trait that inclined her to teach a daughter to forage for food, she would pass on this inherited trait to her daughter. In turn, her daughter might be more likely to survive, and as a result, that trait would become more common in the population.
Waring and Wood argue in their new study, published June 2 in the journal Proceedings of the Royal Society B, that at some point in human history, culture began to wrest evolutionary control from our DNA. And now, they say, cultural change is allowing us to evolve in ways biological change alone could not.
Here’s why: Culture is group-oriented, and people in those groups talk to, learn from and imitate one another. These group behaviors allow people to pass on adaptations they learned through culture faster than genes can transmit similar survival benefits.
An individual can learn skills and information from a nearly unlimited number of people in a small amount of time and, in turn, spread that information to many others. And the more people available to learn from, the better. Large groups solve problems faster than smaller groups, and intergroup competition stimulates adaptations that might help those groups survive.
As ideas spread, cultures develop new traits.
In contrast, a person inherits genetic information from only two parents and racks up relatively few random mutations in their eggs or sperm, mutations that take about 20 years to be passed on to their small handful of children. That’s just a much slower pace of change.
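As a rough illustration of this speed gap (a toy model of my own, not from the study, with all parameters invented), one can simulate a learned trait spreading through many-to-many social contact alongside an advantageous gene spreading parent-to-child under selection:

```python
import random

def cultural_spread(pop_size=1000, adopters=10, contacts=5,
                    p_learn=0.1, steps=20, seed=0):
    """Toy many-to-many transmission: each step, every non-adopter meets
    `contacts` random people and may learn the trait from any adopter."""
    random.seed(seed)
    has_trait = [True] * adopters + [False] * (pop_size - adopters)
    for _ in range(steps):
        nxt = list(has_trait)
        for i, h in enumerate(has_trait):
            if not h:
                for _ in range(contacts):
                    j = random.randrange(pop_size)
                    if has_trait[j] and random.random() < p_learn:
                        nxt[i] = True
                        break
        has_trait = nxt
    return sum(has_trait) / pop_size

def genetic_spread(p0=0.01, s=0.05, generations=20):
    """Toy parent-to-child inheritance: deterministic allele-frequency
    change under selection advantage s, one update per ~20-year generation."""
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)
    return p

cultural = cultural_spread()  # fraction of adopters after 20 learning steps
genetic = genetic_spread()    # allele frequency after 20 generations
```

With these made-up numbers the learned trait reaches most of the population within 20 steps, while the allele frequency only creeps upward from its starting 1%; the point is the qualitative gap in pace, not the specific values.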
“This theory has been a long time coming,” said Paul Smaldino, an associate professor of cognitive and information sciences at the University of California, Merced, who was not affiliated with this study. “People have been working for a long time to describe how evolutionary biology interacts with culture.”
It’s possible, the researchers suggest, that the appearance of human culture represents a key evolutionary milestone.
“Their big argument is that culture is the next evolutionary transition state,” Smaldino told Live Science.
Throughout the history of life, key transition states have had huge effects on the pace and direction of evolution. The evolution of cells with DNA was a big transitional state, and then when larger cells with organelles and complex internal structures arrived, it changed the game again. Cells coalescing into plants and animals was another big sea change, as was the evolution of sex, the transition to life on land and so on.
Each of these events changed the way evolution acted, and now humans might be in the midst of yet another evolutionary transformation. We might still evolve genetically, but that may not control human survival very much anymore.
“In the very long term, we suggest that humans are evolving from individual genetic organisms to cultural groups which function as superorganisms, similar to ant colonies and beehives,” Waring said in a statement.
But genetics drives bee colonies, while the human superorganism will exist in a category all its own. What that superorganism looks like in the distant future is unclear, but it will likely take a village to figure it out.
In August, Brazil’s National Institute for Space Research (Inpe) is expected to shut down the supercomputer known as Tupã, which is responsible for weather forecasting, issuing climate alerts, and collecting and monitoring data for research and scientific development.
According to the institute, the shutdown, the first in its history, will be carried out for lack of funds. This year Inpe received its smallest budget from the federal government, totaling R$ 44.7 million; R$ 76 million had originally been earmarked. For comparison, the supercomputer alone consumes R$ 5 million a year in electricity.
In response, the Instituto Brasileiro de Proteção Ambiental (Proam) sent a document to the Public Prosecutor’s Office requesting that monitoring be maintained and that an urgent crisis-management plan be drawn up. The same document was also sent to the Federal Court of Accounts (TCU) and to the public defenders’ offices of the Southeast, South and Center-West regions.
“It is unacceptable that at a moment like this, facing the water crisis expected in the second half of the year, with rising energy prices and the risk of water rationing, the supercomputer should be switched off on the grounds of a lack of funds,” says Carlos Bocuhy, president of Proam.
Yara Schaeffer-Novelli, a professor at the University of São Paulo, explains that the shutdown will be extremely damaging to climate studies, making it harder to monitor wildfires, droughts and climate change in Brazil.
An open indigenous parliament, created to give voice and political visibility to the country’s 305 original peoples, is the goal of Parlaíndio, founded this month in Brazil, announced this Wednesday, May 26, and set to hold monthly assemblies.
Parlaíndio brings together Brazilian indigenous leaders and already has a web portal with photos of its leaders and news of assemblies and of events directly or indirectly related to indigenous peoples.
Chief Raoni Metuktire, a prominent Brazilian indigenous leader known worldwide for his fight to preserve the Amazon and its native peoples, is its honorary president, while executive coordination rests with chief Almir Narayamoga Suruí, principal leader of the Paiter Suruí people of Rondônia, internationally recognized for his sustainability projects on indigenous lands.
The first assembly of Parlaíndio Brasil, reports Lusa as cited by TSF, was held virtually last Thursday, May 20. On that occasion, the indigenous leaders discussed the movement’s objectives, its structure and how the monthly assemblies will be run.
Among the main issues the movement will address, again according to Lusa, are deforestation and invasions of indigenous lands, mining and hydroelectric projects on native peoples’ lands, illegal gold prospecting, mercury pollution of rivers and the contamination of indigenous and riverside populations.
Parlaíndio has already taken its first political decision: to file a lawsuit seeking the dismissal of the president of Funai (Fundação Nacional do Índio), the government agency whose mission should be to coordinate and implement policies to protect native peoples.
“It was unanimously approved that Parlaíndio Brasil will file a lawsuit seeking the dismissal of the president of Funai, police commissioner Marcelo Xavier, who at the head of the agency has failed to fulfill its institutional mission of protecting and promoting the rights of the country’s indigenous peoples,” the movement said in a statement.
At issue, the same source notes, is a recent request by the president of Funai to the Federal Police (PF) to open an inquiry against indigenous leaders on the pretext that they defamed Jair Bolsonaro’s government.
“Funai is an agency that should provide assistance, protection and guarantees of the rights of Brazil’s indigenous peoples, and currently it does the opposite. The inquiry, ordered by Funai’s president, amounted to intimidation and criminalization,” explained Almir Suruí, executive coordinator of Parlaíndio Brasil.
He believes the new structure will be important for building a policy of defense of indigenous peoples, after the 1988 Constitution enshrined a set of public policies and rights for indigenous Brazilians. “One of our objectives is to debate the construction of the present and the future based on a careful assessment of the past. We will also discuss public policies and provide input to the organizations that make up the indigenous movement,” he added at the movement’s launch session.
The idea of creating an Indigenous Parliament of Brazil, as the Parlaíndio portal explains, arose at a meeting of indigenous leaders held in October 2017 at the Conselho Indigenista Missionário, a Catholic Church organization that supports indigenous peoples.
According to the same source, Brazil currently has more than 900,000 indigenous people, members of 305 distinct peoples speaking more than 180 languages, according to Parlaíndio data (the subject of Fernando Alves’s Outros Sinais commentary on TSF this Thursday, the 27th).
Ever more poor and indigenous people in Manaus
This news comes at the same time as a warning from a Catholic Franciscan friar, according to whom many indigenous people and others from the interior of Amazonas are arriving in Manaus, the state capital, with nothing to live on.
“We have families in the outskirts who have nothing to live on. Many came from the interior and arrived here hoping to find food in the city. But here they find only hunger and unemployment. To make matters worse, they now have neither a garden to cultivate nor the river to fish,” says Father Paolo Maria Braghini, an Italian Capuchin Franciscan, quoted by Ajuda à Igreja que Sofre (Aid to the Church in Need).
“Amid so much poverty, we chose certain localities on the periphery and, with the help of local community leaders, identified the neediest families,” Friar Paolo explains of how the Franciscan community is trying to ease the situation.
Manaus, one of the main financial, industrial and economic centers of the entire northern region, has more than two million inhabitants and continues to attract people from across the region. The city, which already had many pockets of poverty, saw the situation worsen with the novel coronavirus pandemic and the collapse of its health services.
The poor and indigenous populations of Amazonas were among the sectors hardest hit by the lack of infrastructure. In January, at one of the peaks of the crisis, the bishop of Manaus went so far as to appeal for oxygen to be sent to the hospitals.
In a new study, University of Maine researchers found that culture helps humans adapt to their environment and overcome challenges better and faster than genetics.
After conducting an extensive review of the literature and evidence of long-term human evolution, scientists Tim Waring and Zach Wood concluded that humans are experiencing a “special evolutionary transition” in which the importance of culture, such as learned knowledge, practices and skills, is surpassing the value of genes as the primary driver of human evolution.
Culture is an under-appreciated factor in human evolution, Waring says. Like genes, culture helps people adjust to their environment and meet the challenges of survival and reproduction. Culture, however, does so more effectively than genes because the transfer of knowledge is faster and more flexible than the inheritance of genes, according to Waring and Wood.
Culture is a stronger mechanism of adaptation for a couple of reasons, Waring says. It’s faster: gene transfer occurs only once a generation, while cultural practices can be rapidly learned and frequently updated. Culture is also more flexible than genes: gene transfer is rigid and limited to the genetic information of two parents, while cultural transmission is based on flexible human learning and effectively unlimited with the ability to make use of information from peers and experts far beyond parents. As a result, cultural evolution is a stronger type of adaptation than old genetics.
Waring, an associate professor of social-ecological systems modeling, and Wood, a postdoctoral research associate with the School of Biology and Ecology, have just published their findings in a literature review in the Proceedings of the Royal Society B, the flagship biological research journal of The Royal Society in London.
“This research explains why humans are such a unique species. We evolve both genetically and culturally over time, but we are slowly becoming ever more cultural and ever less genetic,” Waring says.
Culture has influenced how humans survive and evolve for millennia. According to Waring and Wood, the combination of culture and genes has fueled several key adaptations in humans, such as reduced aggression, cooperative inclinations, collaborative abilities and the capacity for social learning. Increasingly, the researchers suggest, human adaptations are steered by culture and require genes to accommodate.
Waring and Wood say culture is also special in one important way: it is strongly group-oriented. Factors like conformity, social identity and shared norms and institutions — factors that have no genetic equivalent — make cultural evolution very group-oriented, according to researchers. Therefore, competition between culturally organized groups propels adaptations such as new cooperative norms and social systems that help groups survive better together.
According to researchers, “culturally organized groups appear to solve adaptive problems more readily than individuals, through the compounding value of social learning and cultural transmission in groups.” Cultural adaptations may also occur faster in larger groups than in small ones.
With groups primarily driving culture and culture now fueling human evolution more than genetics, Waring and Wood found that evolution itself has become more group-oriented.
“In the very long term, we suggest that humans are evolving from individual genetic organisms to cultural groups which function as superorganisms, similar to ant colonies and beehives,” Waring says. “The ‘society as organism’ metaphor is not so metaphorical after all. This insight can help society better understand how individuals can fit into a well-organized and mutually beneficial system. Take the coronavirus pandemic, for example. An effective national epidemic response program is truly a national immune system, and we can therefore learn directly from how immune systems work to improve our COVID response.”
Waring is a member of the Cultural Evolution Society, an international research network that studies the evolution of culture in all species. He applies cultural evolution to the study of sustainability in social-ecological systems and cooperation in organizational evolution.
Wood works in the UMaine Evolutionary Applications Laboratory managed by Michael Kinnison, a professor of evolutionary applications. His research focuses on eco-evolutionary dynamics, particularly rapid evolution during trophic cascades.
Everywhere from business to medicine to the climate, forecasting the future is a complex and absolutely critical job. So how do you do it—and what comes next?
February 26, 2020
Professor of atmospheric science, University of California, Berkeley
Prediction for 2030: We’ll light up the world… safely
I’ve spoken to people who want climate model information, but they’re not really sure what they’re asking me for. So I say to them, “Suppose I tell you that some event will happen with a probability of 60% in 2030. Will that be good enough for you, or will you need 70%? Or would you need 90%? What level of information do you want out of climate model projections in order to be useful?”
I joined Jim Hansen’s group in 1979, and I was there for all the early climate projections. And the way we thought about it then, those things are all still totally there. What we’ve done since then is add richness and higher resolution, but the projections are really grounded in the same kind of data, physics, and observations.
Still, there are things we’re missing. We still don’t have a real theory of precipitation, for example. But there are two exciting things happening there. One is the availability of satellite observations: the cloud data we collect is still not fully utilized. The other is that there used to be no way to get regional precipitation patterns through history, and now there is. Scientists found these caves in China and elsewhere, and they go in, look for a nice little chamber with stalagmites, and then they chop them up and send them back to the lab, where they do fantastic uranium-thorium dating and measure oxygen isotopes in calcium carbonate. From there they can interpret a record of historic rainfall. The data are incredible: we now have over half a million years of precipitation records from all over Asia.
I don’t see us reducing fossil fuels by 2030. I don’t see us reducing CO2 or atmospheric methane. Some 1.2 billion people in the world right now have no access to electricity, so I’m looking forward to the growth of alternative energy in the parts of the world that lack it. That’s important because it brings education, health, everything associated with a Western standard of living. That’s where I’m putting my hopes.
Anne Lise Kjaer
Futurist, Kjaer Global, London
Prediction for 2030: Adults will learn to grasp new ideas
As a kid I wanted to become an archaeologist, and I did in a way. Archaeologists find artifacts from the past and try to connect the dots and tell a story about how the past might have been. We do the same thing as futurists; we use artifacts from the present and try to connect the dots into interesting narratives in the future.
When it comes to the future, you have two choices. You can sit back and think “It’s not happening to me” and build a great big wall to keep out all the bad news. Or you can build windmills and harness the winds of change.
A lot of companies come to us and think they want to hear about the future, but really it’s just an exercise for them—let’s just tick that box, do a report, and put it on our bookshelf.
So we have a little test for them. We do interviews, we ask them questions; then we use a model called a Trend Atlas that considers both the scientific dimensions of society and the social ones. We look at the trends in politics, economics, societal drivers, technology, environment, legislation—how does that fit with what we know currently? We look back maybe 10, 20 years: can we see a little bit of a trend and try to put that into the future?
What’s next? Obviously with technology we can educate much better than we could in the past. But it’s a huge opportunity to educate the parents of the next generation, not just the children. Kids are learning about sustainability goals, but what about the people who actually rule our world?
Coauthor of Superforecasting and professor, University of Pennsylvania
Prediction for 2030: We’ll get better at being uncertain
At the Good Judgment Project, we try to track the accuracy of commentators and experts in domains in which it’s usually thought impossible to track accuracy. You take a big debate and break it down into a series of testable short-term indicators. So you could take a debate over whether strong forms of artificial intelligence are going to cause major dislocations in white-collar labor markets by 2035, 2040, 2050. A lot of discussion already occurs at that level of abstraction—but from our point of view, it’s more useful to break it down and to say: If we were on a long-term trajectory toward an outcome like that, what sorts of things would we expect to observe in the short term? So we started this off in 2015, and in 2016 AlphaGo defeated people in Go. But then other things didn’t happen: driverless Ubers weren’t picking people up for fares in any major American city at the end of 2017. Watson didn’t defeat the world’s best oncologists in a medical diagnosis tournament. So I don’t think we’re on a fast track toward the singularity, put it that way.
Forecasts have the potential to be either self-fulfilling or self-negating—Y2K was arguably a self-negating forecast. But it’s possible to build that into a forecasting tournament by asking conditional forecasting questions: i.e., How likely is X conditional on our doing this or doing that?
What I’ve seen over the last 10 years, and it’s a trend that I expect will continue, is an increasing openness to the quantification of uncertainty. I think there’s a grudging, halting, but cumulative movement toward thinking about uncertainty in more granular and nuanced ways that permit keeping score.
Associate professor of economics, UCLA
Prediction for 2030: We’ll be more—and less—private
When I worked on Uber’s surge pricing algorithm, the problem it was built to solve was very coarse: we were trying to convince drivers to put in extra time when they were most needed. There were predictable times—like New Year’s—when we knew we were going to need a lot of people. The deeper problem was that this was a system with basically no control. It’s like trying to predict the weather. Yes, the amount of weather data that we collect today—temperature, wind speed, barometric pressure, humidity data—is 10,000 times greater than what we were collecting 20 years ago. But we still can’t predict the weather 10,000 times further out than we could back then. And social movements—even in a very specific setting, such as where riders want to go at any given point in time—are, if anything, even more chaotic than weather systems.
These days what I’m doing is a little bit more like forensic economics. We look to see what we can find and predict from people’s movement patterns. We’re just using simple cell-phone data like geolocation, but even just from movement patterns, we can infer salient information and build a psychological dimension of you. What terrifies me is I feel like I have much worse data than Facebook does. So what are they able to understand with their much better information?
I think the next big social tipping point is people actually starting to really care about their privacy. It’ll be like smoking in a restaurant: it will quickly go from causing outrage when people want to stop it to suddenly causing outrage if somebody does it. But at the same time, by 2030 almost every Chinese citizen will be completely genotyped. I don’t quite know how to reconcile the two.
Science fiction and nonfiction author, San Francisco
Prediction for 2030: We’re going to see a lot more humble technology
Every era has its own ideas about the future. Go back to the 1950s and you’ll see that people fantasized about flying cars. Now we imagine bicycles and green cities where cars are limited, or where cars are autonomous. We have really different priorities now, so that works its way into our understanding of the future.
Science fiction writers can’t actually make predictions. I think of science fiction as engaging with questions being raised in the present. But what we can do, even if we can’t say what’s definitely going to happen, is offer a range of scenarios informed by history.
There are a lot of myths about the future that people believe are going to come true right now. I think a lot of people—not just science fiction writers but people who are working on machine learning—believe that relatively soon we’re going to have a human-equivalent brain running on some kind of computing substrate. This is as much a reflection of our time as it is what might actually happen.
It seems unlikely that a human-equivalent brain in a computer is right around the corner. But we live in an era where a lot of us feel like we live inside computers already, for work and everything else. So of course we have fantasies about digitizing our brains and putting our consciousness inside a machine or a robot.
I’m not saying that those things could never happen. But they seem much more closely allied to our fantasies in the present than they do to a real technical breakthrough on the horizon.
We’re going to have to develop much better technologies around disaster relief and emergency response, because we’ll be seeing a lot more floods, fires, storms. So I think there is going to be a lot more work on really humble technologies that allow you to take your community off the grid, or purify your own water. And I don’t mean in a creepy survivalist way; I mean just in a this-is-how-we-are-living-now kind of way.
Associate professor of computer science, Harvard
Prediction for 2030: Humans and machines will make decisions together
In my lab, we’re trying to answer questions like “How might this patient respond to this antidepressant?” or “How might this patient respond to this vasopressor?” So we get as much data as we can from the hospital. For a psychiatric patient, we might have everything about their heart disease, kidney disease, cancer; for a blood pressure management recommendation for the ICU, we have all their oxygen information, their lactate, and more.
Some of it might be relevant to making predictions about their illnesses, some not, and we don’t know which is which. That’s why we ask for the large data set with everything.
There’s been about a decade of work trying to get unsupervised machine-learning models to do a better job at making these predictions, and none worked really well. The breakthrough for us was when we found that all the previous approaches for doing this were wrong in the exact same way. Once we untangled all of this, we came up with a different method.
We also realized that even if our ability to predict what drug is going to work is not always that great, we can more reliably predict what drugs are not going to work, which is almost as valuable.
I’m excited about combining humans and AI to make predictions. Let’s say your AI is right only 70% of the time, and your human is also only right 70% of the time. Combining the two is difficult, but if you can fuse their successes, then you should be able to do better than either system alone. How to do that is a really tough, exciting question.
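One textbook way to make that fusion concrete (my sketch, not the lab’s method): if the two predictors are treated as independent, their probability estimates can be combined by summing log-odds, which sharpens confidence when they agree and lets an uninformative 50% opinion drop out.

```python
import math

def fuse(p_ai: float, p_human: float) -> float:
    """Fuse two independent probability estimates of the same event by
    summing their log-odds (naive Bayes with a uniform prior)."""
    def logit(p):
        return math.log(p / (1 - p))
    combined = logit(p_ai) + logit(p_human)
    return 1 / (1 + math.exp(-combined))

# Two agreeing 70% opinions yield confidence above 70%:
agree = fuse(0.7, 0.7)
# A 50/50 opinion carries no information and leaves the other unchanged:
neutral = fuse(0.5, 0.8)
```

The independence assumption is the catch: a human who has already seen the model’s output is not an independent source, which is part of why fusing the two well is, as the interviewee says, a hard question.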
All these predictive models were built and deployed and people didn’t think enough about potential biases. I’m hopeful that we’re going to have a future where these human-machine teams are making decisions that are better than either alone.
Abdoulaye Banire Diallo
Professor, director of the bioinformatics lab, University of Quebec at Montreal
Prediction for 2030: Machine-based forecasting will be regulated
When a farmer in Quebec decides whether to inseminate a cow or not, it might depend on the expectation of milk that will be produced every day for one year, two years, maybe three years after that. Farms have management systems that capture the data and the environment of the farm. I’m involved in projects that add a layer of genetic and genomic data to help forecasting—to help decision makers like the farmer to have a full picture when they’re thinking about replacing cows, improving management, resilience, and animal welfare.
With the emergence of machine learning and AI, what we’re showing is that we can help tackle problems in a way that hasn’t been done before. We are adapting it to the dairy sector, where we’ve shown that some decisions can be anticipated 18 months in advance just by forecasting based on the integration of this genomic data. I think in some areas such as plant health we have only achieved 10% or 20% of our capacity to improve certain models.
Until now AI and machine learning have been associated with domain expertise. It’s not a public-wide thing. But less than 10 years from now they will need to be regulated. I think there are a lot of challenges for scientists like me to try to make those techniques more explainable, more transparent, and more auditable.
In a race to cure his daughter, a Google programmer enters the world of hyper-personalized drugs.
Erika Check Hayden
February 26, 2020
To create atipeksen, Yu borrowed from recent biotech successes like gene therapy. Some new drugs, including cancer therapies, treat disease by directly manipulating genetic information inside a patient’s cells. Now doctors like Yu find they can alter those treatments as if they were digital programs. Change the code, reprogram the drug, and there’s a chance of treating many genetic diseases, even those as unusual as Ipek’s.
The new strategy could in theory help millions of people living with rare diseases, the vast majority of which are caused by genetic typos and have no treatment. US regulators say last year they fielded more than 80 requests to allow genetic treatments for individuals or very small groups, and that they may take steps to make tailor-made medicines easier to try. New technologies, including custom gene-editing treatments using CRISPR, are coming next.
“I never thought we would be in a position to even contemplate trying to help these patients,” says Stanley Crooke, a biotechnology entrepreneur and founder of Ionis Pharmaceuticals, based in Carlsbad, California. “It’s an astonishing moment.”
Right now, though, insurance companies won’t pay for individualized gene drugs, and no company is making them (though some plan to). Only a few patients have ever gotten them, usually after heroic feats of arm-twisting and fundraising. And it’s no mistake that programmers like Mehmet Kuzu, who works on data privacy, are among the first to pursue individualized drugs. “As computer scientists, they get it. This is all code,” says Ethan Perlstein, chief scientific officer at the Christopher and Dana Reeve Foundation.
A nonprofit, the A-T Children’s Project, funded most of the cost of designing and making Ipek’s drug. For Brad Margus, who created the foundation in 1993 after his two sons were diagnosed with A-T, the change between then and now couldn’t be more dramatic. “We’ve raised so much money, we’ve funded so much research, but it’s so frustrating that the biology just kept getting more and more complex,” he says. “Now, we’re suddenly presented with this opportunity to just fix the problem at its source.”
Ipek was only a few months old when her father began looking for a cure. A geneticist friend sent him a paper describing a possible treatment for her exact form of A-T, and Kuzu flew from Sunnyvale, California, to Los Angeles to meet the scientists behind the research. But they said no one had tried the drug in people: “We need many more years to make this happen,” they told him.
Kuzu didn’t have years. After he returned from Los Angeles, Margus handed him a thumb drive with a video of a talk by Yu, a doctor at Boston Children’s Hospital, who described how he planned to treat a young girl with Batten disease (a different neurodegenerative condition) in what press reports would later dub “a stunning illustration of personalized genomic medicine.” Kuzu realized Yu was using the very same gene technology the Los Angeles scientists had dismissed as a pipe dream.
That technology is called “antisense.” Inside a cell, DNA encodes information to make proteins. Between the DNA and the protein, though, come messenger molecules called RNA that ferry the gene information out of the nucleus. Think of antisense as mirror-image molecules that stick to specific RNA messages, letter for letter, blocking them from being made into proteins. It’s possible to silence a gene this way, and sometimes to overcome errors, too.
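To make the “mirror-image” idea concrete, here is a minimal sketch (my illustration, using a made-up target sequence; real antisense design also involves backbone chemistry and off-target screening) of deriving an antisense sequence from an mRNA target:

```python
# Watson-Crick pairing for RNA: A pairs with U, G pairs with C.
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(mrna: str) -> str:
    """Return the antisense sequence for an mRNA target: complement
    every base, then reverse, since paired nucleic-acid strands run
    antiparallel to each other."""
    return "".join(RNA_COMPLEMENT[base] for base in reversed(mrna.upper()))

# Hypothetical 12-base target region within an mRNA message:
target = "AUGGCGUACUUA"
oligo = antisense(target)  # the oligo sticks to this message, letter for letter
```

Applying the function twice returns the original sequence, which is just the letter-for-letter pairing the article describes seen from the other strand.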
Though the first antisense drugs appeared 20 years ago, the concept achieved its first blockbuster success only in 2016. That’s when a drug called nusinersen, made by Ionis, was approved to treat children with spinal muscular atrophy, a genetic disease that would otherwise kill them by their second birthday.
Yu, a specialist in gene sequencing, had not worked with antisense before, but once he’d identified the genetic error causing Batten disease in his young patient, Mila Makovec, it became apparent to him he didn’t have to stop there. If he knew the gene error, why not create a gene drug? “All of a sudden a lightbulb went off,” Yu says. “Couldn’t one try to reverse this? It was such an appealing idea, and such a simple idea, that we basically just found ourselves unable to let that go.”
Yu admits it was bold to suggest his idea to Mila’s mother, Julia Vitarello. But he was not starting from scratch. In a demonstration of how modular biotech drugs may become, he based milasen on the same chemistry backbone as the Ionis drug, except he made Mila’s particular mutation the genetic target. Where it had taken decades for Ionis to perfect a drug, Yu now set a record: it took only eight months for him to make milasen, try it on animals, and convince the US Food and Drug Administration to let him inject it into Mila’s spine.
“What’s different now is that someone like Tim Yu can develop a drug with no prior familiarity with this technology,” says Art Krieg, chief scientific officer at Checkmate Pharmaceuticals, based in Cambridge, Massachusetts.
As word got out about milasen, Yu heard from more than a hundred families asking for his help. That’s put the Boston doctor in a tough position. Yu has plans to try antisense to treat a dozen kids with different diseases, but he knows it’s not the right approach for everyone, and he’s still learning which diseases might be most amenable. And nothing is ever simple—or cheap. Each new version of a drug can behave differently and requires costly safety tests in animals.
Kuzu had the advantage that the Los Angeles researchers had already shown antisense might work. What’s more, Margus agreed that the A-T Children’s Project would help fund the research. But it wouldn’t be fair to make the treatment just for Ipek if the foundation was paying for it. So Margus and Yu decided to test antisense drugs in the cells of three young A-T patients, including Ipek. Whichever kid’s cells responded best would get picked.
While he waited for the test results, Kuzu raised about $200,000 from friends and coworkers at Google. One day, an email landed in his in-box from another Google employee who was fundraising to help a sick child. As he read it, Kuzu felt a jolt of recognition: his coworker, Jennifer Seth, was also working with Yu.
Seth’s daughter Lydia was born in December 2018. The baby, with beautiful chubby cheeks, carries a mutation that causes seizures and may lead to severe disabilities. Seth’s husband Rohan, a well-connected Silicon Valley entrepreneur, refers to the problem as a “tiny random mutation” in her “source code.” The Seths have raised more than $2 million, much of it from co-workers.
By then, Yu was ready to give Kuzu the good news: Ipek’s cells had responded the best. So last September the family packed up and moved from California to Cambridge, Massachusetts, so Ipek could start getting atipeksen. The toddler got her first dose this January, under general anesthesia, through a lumbar puncture into her spine.
After a year, the Kuzus hope to learn whether the drug is helping. Doctors will track Ipek’s brain volume and measure biomarkers in her cerebrospinal fluid as a readout of how her disease is progressing. And a team at Johns Hopkins will help compare her movements with those of other kids, both with and without A-T, to observe whether the expected disease symptoms are delayed.
One serious challenge facing gene drugs for individuals is that short of a healing miracle, it may ultimately be impossible to be sure they really work. That’s because the speed with which diseases like A-T progress can vary widely from person to person. Proving a drug is effective, or revealing that it’s a dud, almost always requires collecting data from many patients, not just one. “It’s important for parents who are ready to pay anything, try anything, to appreciate that experimental treatments often don’t work,” says Holly Fernandez Lynch, a lawyer and ethicist at the University of Pennsylvania. “There are risks. Trying one could foreclose other options and even hasten death.”
Kuzu says his family weighed the risks and benefits. “Since this is the first time for this kind of drug, we were a little scared,” he says. But, he concluded, “there’s nothing else to do. This is the only thing that might give hope to us and the other families.”
Another obstacle to ultra-personal drugs is that insurance won’t pay for them. And so far, pharmaceutical companies aren’t interested either. They prioritize drugs that can be sold thousands of times, but as far as anyone knows, Ipek is the only person alive with her exact mutation. That leaves families facing extraordinary financial demands that only the wealthy, lucky, or well connected can meet. Developing Ipek’s treatment has already cost $1.9 million, Margus estimates.
Some scientists think agencies such as the US National Institutes of Health should help fund the research, and will press their case at a meeting in Bethesda, Maryland, in April. Help could also come from the Food and Drug Administration, which is developing guidelines that may speed the work of doctors like Yu. The agency will receive updates on Mila and other patients if any of them experience severe side effects.
The FDA is also considering giving doctors more leeway to modify genetic drugs to try in new patients without securing new permissions each time. Peter Marks, director of the FDA’s Center for Biologics Evaluation and Research, likens traditional drug manufacturing to factories that mass-produce identical T-shirts. But, he points out, it’s now possible to order an individual basic T-shirt embroidered with a company logo. So drug manufacturing could become more customized too, Marks believes.
Custom drugs carrying exactly the message a sick kid’s body needs? If we get there, credit will go to companies like Ionis that developed the new types of gene medicine. But it should also go to the Kuzus—and to Brad Margus, Rohan Seth, Julia Vitarello, and all the other parents who are trying to save their kids. In doing so, they are turning hyper-personalized medicine into reality.
Erika Check Hayden is director of the science communication program at the University of California, Santa Cruz.
Cash is gradually dying out. Will we ever have a digital alternative that offers the same mix of convenience and freedom?
January 3, 2020
If you’d rather keep all that to yourself, you’re in luck. The person in the store (or on the street corner) may remember your face, but as long as you didn’t reveal any identifying information, there is nothing that links you to the transaction.
This is a feature of physical cash that payment cards and apps do not have: freedom. Called “bearer instruments,” banknotes and coins are presumed to be owned by whoever holds them. We can use them to transact with another person without a third party getting in the way. Companies cannot build advertising profiles or credit ratings out of our data, and governments cannot track our spending or our movements. And while a credit card can be declined and a check mislaid, handing over money works every time, instantly.
We shouldn’t take this freedom for granted. Much of our commerce now happens online. It relies on banks and financial technology companies to serve as middlemen. Transactions are going digital in the physical world, too: electronic payment tools, from debit cards to Apple Pay to Alipay, are increasingly replacing cash. While notes and coins remain popular in many countries, including the US, Japan, and Germany, in others they are nearing obsolescence.
This trend has civil liberties groups worried. Without cash, there is “no chance for the kind of dignity-preserving privacy that undergirds an open society,” writes Jerry Brito, executive director of Coin Center, a policy advocacy group based in Washington, DC. In a recent report, Brito contends that we must “develop and foster electronic cash” that is as private as physical cash and doesn’t require permission to use.
The central question is who will develop and control the electronic payment systems of the future. Most of the existing ones, like Alipay, Zelle, PayPal, Venmo, and Kenya’s M-Pesa, are run by private firms. Afraid of leaving payments solely in their hands, many governments are looking to develop some sort of electronic stand-in for notes and coins. Meanwhile, advocates of stateless, ownerless cryptocurrencies like Bitcoin say they’re the only solution as surveillance-proof as cash—but are they feasible at large scales?
We tend to take it for granted that new technologies work better than old ones—safer, faster, more accurate, more efficient, more convenient. Purists may extol the virtues of vinyl records, but nobody can dispute that a digital music collection is easier to carry and sounds almost exactly as good. Cash is a paradox—a technology thousands of years old that may just prove impossible to re-create in a more advanced form.
In (government) money we trust?
We call banknotes and coins “cash,” but the term really refers to something more abstract: cash is essentially money that your government owes you. In the old days this was a literal debt. “I promise to pay the bearer on demand the sum of …” still appears on British banknotes, a notional guarantee that the Bank of England will hand over the same value in gold in exchange for your note. Today it represents the more abstract guarantee that you will always be able to use that note to pay for things.
The digits in your bank account, on the other hand, refer to what your bank owes you. When you go to an ATM, you are effectively converting the bank’s promise to pay into a government promise.
Most people would say they trust the government’s promise more, says Gabriel Söderberg, an economist at the Riksbank, the central bank of Sweden. Their bet—correct, in most countries—is that their government is much less likely to go bust.
That’s why it would be a problem if Sweden were to go completely “cashless,” Söderberg says. He and his colleagues fear that if people lose the option to convert their bank money to government money at will and use it to pay for whatever they need, they might start to lose trust in the whole money system. A further worry is that if the private sector is left to dominate digital payments, people who can’t or won’t use these systems could be shut out of the economy.
This is fast becoming more than just a thought experiment in Sweden. Nearly everyone there uses a mobile app called Swish to pay for things. Economists have estimated that retailers in Sweden could completely stop accepting cash by 2023.
Creating an electronic version of Sweden’s sovereign currency—an “e-krona”—could mitigate these problems, Söderberg says. If the central bank were to issue digital money, it would design it to be a public good, not a profit-making product for a corporation. “Easily accessible, simple and user-friendly versions could be developed for those who currently have difficulty with digital technology,” the bank asserted in a November report covering Sweden’s payment landscape.
The Riksbank plans to develop and test an e-krona prototype. It has examined a number of technologies that might underlie it, including cryptocurrency systems like Bitcoin. But the central bank has also called on the Swedish government to lead a broad public inquiry into whether such a system should ever go live. “In the end, this decision is too big for a central bank alone, at least in the Swedish context,” Söderberg says.
The death of financial privacy
China, meanwhile, appears to have made its decision: the digital renminbi is coming. Mu Changchun, head of the People’s Bank of China’s digital currency research institute, said in September that the currency, which the bank has been working on for years, is “close to being out.” In December, a local news report suggested that the PBOC is nearly ready to start tests in the cities of Shenzhen and Suzhou. And the bank has been explicit about its intention to use it to replace banknotes and coins.
Cash is already dying out on its own in China, thanks to Alipay and WeChat Pay, the QR-code-based apps that have become ubiquitous in just a few years. It’s been estimated that mobile payments made up more than 80% of all payments in China in 2018, up from less than 20% in 2013.
It’s not clear how much access the government currently has to transaction data from WeChat Pay and Alipay. Once it issues a sovereign digital currency—which officials say will be compatible with those two services—it will likely have access to a lot more. Martin Chorzempa, a research fellow at the Peterson Institute for International Economics in Washington, DC, told the New York Times in October that the system will give the PBOC “extraordinary power and visibility into the financial system, more than any central bank has today.”
We don’t know for sure what technology the PBOC plans to use as the basis for its digital renminbi, but we have at least two revealing clues. First, the bank has been researching blockchain technology since 2014, and the government has called the development of this technology a priority. Second, Mu said in September that China’s system will bear similarities to Libra, the electronic currency Facebook announced last June. Indeed, PBOC officials have implied in public statements that the unveiling of Libra inspired them to accelerate the development of the digital renminbi, which has been in the works for years.
As currently envisioned, Libra will run on a blockchain, a type of accounting ledger that can be maintained by a network of computers instead of a single central authority. However, it will operate very differently from Bitcoin, the original blockchain system.
The computers in Bitcoin’s network use open-source software to automatically verify and record every single transaction. In the process, they generate a permanent public record of the currency’s entire transaction history: the blockchain. As envisioned, Libra’s network will do something similar. But whereas anyone with a computer and an internet connection can participate anonymously in Bitcoin’s network, the “nodes” that make up Libra’s network will be companies that have been vetted and given membership in a nonprofit association.
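The permanent public record described above can be sketched in a few lines: each block stores the hash of its predecessor, so altering old history breaks every later link. A toy ledger (names and amounts are invented, and it omits signatures, mining, and the network entirely):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents (deterministic JSON) with SHA-256."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    return {"transactions": transactions, "prev_hash": prev_hash}

# Build a tiny three-block chain.
genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
b1 = make_block(["bob pays carol 2"], prev_hash=block_hash(genesis))
b2 = make_block(["carol pays dan 1"], prev_hash=block_hash(b1))
chain = [genesis, b1, b2]

def valid(chain) -> bool:
    """Every block must reference the hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(valid(chain))            # True
genesis["transactions"][0] = "alice pays bob 500"   # tamper with history
print(valid(chain))            # False: b1's prev_hash no longer matches
```

This tamper-evidence is what lets thousands of mutually distrustful computers agree on a single transaction history.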
Unlike Bitcoin, which is notoriously volatile, Libra will be designed to maintain a stable value. To pull this off, the so-called Libra Association will be responsible for maintaining a reserve of government-issued currencies (the latest plan is for it to be half US dollars, with the other half composed of British pounds, euros, Japanese yen, and Singapore dollars). This reserve is supposed to serve as backing for the digital units of value.
Both Libra and the digital renminbi, however, face serious questions about privacy. To start with, it’s not clear if people will be able to use them anonymously.
With Bitcoin, although transactions are public, users don’t have to reveal who they really are; each person’s “address” on the public blockchain is just a random string of letters and numbers. But in recent years, law enforcement officials have grown skilled at combining public blockchain data with other clues to unmask people using cryptocurrencies for illicit purposes. Indeed, in a July blog post, Libra project head David Marcus argued that the currency would be a boon for law enforcement, since it would help “move more cash transactions—where a lot of illicit activities happen—to a digital network.”
As for the Chinese digital currency, Mu has said it will feature some level of anonymity. “We know the demand from the general public is to keep anonymity by using paper money and coins … we will give those people who demand it anonymity,” he said at a November conference in Singapore. “But at the same time we will keep the balance between ‘controllable anonymity’ and anti-money-laundering, CTF [counter-terrorist financing], and also tax issues, online gambling, and any electronic criminal activities,” he added. He did not, however, explain how that “balance” would work.
Sweden and China are leading the charge to issue consumer-focused electronic money, but according to John Kiff, an expert on financial stability for the International Monetary Fund, more than 30 countries have explored or are exploring the idea. In some, the rationale is similar to Sweden’s: dwindling cash and a growing private-sector payments ecosystem. Others are countries where commercial banks have decided not to set up shop. Many see an opportunity to better monitor for illicit transactions. All will have to wrestle with the same thorny privacy issues that Libra and the digital renminbi are raising.
Robleh Ali, a research scientist at MIT’s Digital Currency Initiative, says digital currency systems from central banks may need to be designed so that the government can “consciously blind itself” to the information. Something like that might be technically possible thanks to cutting-edge cryptographic tools like zero-knowledge proofs, which are used in systems like Zcash to shield blockchain transaction information from public view.
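Zero-knowledge proofs themselves are mathematically heavy, but a hash commitment—a much simpler cryptographic building block used alongside them—shows the flavor of hiding a value while staying bound to it. A sketch (this is not Zcash’s actual construction, and the payment string is invented):

```python
import hashlib
import secrets

def commit(value: str) -> tuple[str, bytes]:
    """Commit to `value` without revealing it.

    Returns (commitment, nonce). The commitment can be published; the
    value stays hidden until the nonce is revealed to open it.
    """
    nonce = secrets.token_bytes(16)
    commitment = hashlib.sha256(nonce + value.encode()).hexdigest()
    return commitment, nonce

def verify(commitment: str, value: str, nonce: bytes) -> bool:
    """Check that (value, nonce) matches a previously published commitment."""
    return hashlib.sha256(nonce + value.encode()).hexdigest() == commitment

c, n = commit("pay 25 kronor to merchant 7")
# The payer can later open the commitment; anyone can check it matches,
# but nobody can learn the payment from the commitment alone.
print(verify(c, "pay 25 kronor to merchant 7", n))    # True
print(verify(c, "pay 9999 kronor to merchant 7", n))  # False
```

A system that “consciously blinds itself” would need far more machinery than this, but the core idea—publishing proof of a fact without publishing the fact—is the same.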
However, there’s no evidence that any governments are even thinking about deploying tools like this. And regardless, can any government—even Sweden’s—really be trusted to blind itself?
Cryptocurrency: A workaround for freedom
That’s wishful thinking, says Alex Gladstein, chief strategy officer for the Human Rights Foundation. While you may trust your government or think you’ve got nothing to hide, that might not always remain true. Politics evolves, governments get pushed out by elections or other events, what constitutes a “crime” changes, and civil liberties are not guaranteed. “Financial privacy is not going to be gifted to you by your government, regardless of how ‘free’ they are,” Gladstein says. He’s convinced that it has to come in the form of a stateless, decentralized digital currency like Bitcoin.
In fact, “electronic cash” was what Bitcoin’s still-unknown inventor, the pseudonymous Satoshi Nakamoto, claimed to be trying to create (before disappearing). Eleven years into its life, Nakamoto’s technology still lacks some of the signature features of cash. It is difficult to use, transactions can take more than an hour to process, and the currency’s value can fluctuate wildly. And as already noted, the supposedly anonymous transactions it enables can sometimes be traced.
But in some places people just need something that works, however imperfectly. Take Venezuela. Cash in the crisis-ridden country is scarce, and the Venezuelan bolivar is constantly losing value to hyperinflation. Many Venezuelans seek refuge in US dollars, storing them under the proverbial (and literal) mattress, but that also makes them vulnerable to thieves.
What many people want is access to stable cash in digital form, and there’s no easy way to get that, says Alejandro Machado, cofounder of the Open Money Initiative. Owing to government-imposed capital controls, Venezuelan banks have largely been cut off from foreign banks. And due to restrictions by US financial institutions, digital money services like PayPal and Zelle are inaccessible to most people. So a small number of tech-savvy Venezuelans have turned to a service called LocalBitcoins.
It’s like Craigslist, except that the only things for sale are bitcoins and bolivars. On Venezuela’s LocalBitcoins site, people advertise varying quantities of currency for sale at varying exchange rates. The site holds the money in escrow until trades are complete, and tracks the sellers’ reputations.
It’s not for the masses, but it’s “very effective” for people who can make it work, says Machado. For instance, he and his colleagues met a young woman who mines Bitcoin and keeps her savings in the currency. She doesn’t have a foreign bank account, so she’s willing to deal with the constant fluctuations in Bitcoin’s price. Using LocalBitcoins, she can cash out into bolivars whenever she needs them—to buy groceries, for example. “Niche power users” like this are “leveraging the best features of Bitcoin, which is to be an asset that is permissionless and that is very easy to trade electronically,” Machado says.
However, this is possible only because there are enough people using LocalBitcoins to create what finance people call “local liquidity,” meaning you can easily find a buyer for your bitcoins or bolivars. Bitcoin is the only cryptocurrency that has achieved this in Venezuela, says Machado, and it’s mostly thanks to LocalBitcoins.
This is a long way from the dream of cryptocurrency as a widely used substitute for stable, government-issued money. Most Venezuelans can’t use Bitcoin, and few merchants there even know what it is, much less how to accept it.
Still, it’s a glimpse of what a cryptocurrency can offer—a functional financial system that anyone can join and that offers the kind of freedom cash provides in most other places.
Could something like Bitcoin ever be as easy to use and reliable as today’s cash is for everyone else? The answer is philosophical as well as technical.
To begin with, what does it even mean for something to be like Bitcoin? Central banks and corporations will adapt certain aspects of Bitcoin and apply them to their own ends. Will those be cryptocurrencies? Not according to purists, who say that though Libra or some future central bank-issued digital currency may run on blockchain technology, they won’t be cryptocurrencies because they will be under centralized control.
True cryptocurrencies are “decentralized”—they have no one entity in charge and no single points of failure, no weak spots that an adversary (including a government) could attack. With no middleman like a bank attesting that a transaction took place, each transaction has to be validated by the nodes in a cryptocurrency’s network, which can number many thousands. But this requires an immense expenditure of computing power, and it’s the reason Bitcoin transactions can take more than an hour to settle.
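The “immense expenditure of computing power” mentioned above is proof-of-work: miners race to find a number that makes a block’s hash meet an arbitrary target. A toy version with a difficulty of four leading zero hex digits (real Bitcoin difficulty is vastly higher, which is why settlement is slow and energy-hungry):

```python
import hashlib

def mine(data: str, difficulty: int) -> int:
    """Brute-force a nonce so sha256(data + nonce) starts with
    `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = mine("block data", difficulty=4)
digest = hashlib.sha256(f"block data{nonce}".encode()).hexdigest()
# Finding the nonce took thousands of hash attempts on average;
# verifying it takes exactly one.
print(digest.startswith("0000"))   # True
```

The asymmetry is the point: work is expensive to do and trivial to check, so rewriting history would require redoing the work faster than the honest network.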
A currency like Libra wouldn’t have this problem, because only a few authorized entities would be able to operate nodes. The trade-off is that its users wouldn’t be able to trust those entities to guarantee their privacy, any more than they can trust a bank, a government, or Facebook.
Is it technically possible to achieve Bitcoin’s level of decentralization and the speed, scale, privacy, and ease of use that we’ve come to expect from traditional payment methods? That’s a problem many talented researchers are still trying to crack. But some would argue that shouldn’t necessarily be the goal.
In a recent essay, Jill Carlson, cofounder of the Open Money Initiative, argued that perhaps decentralized cryptocurrency systems were “never supposed to go mainstream.” Rather, they were created explicitly for “censored transactions,” from paying for drugs or sex to supporting political dissidents or getting money out of countries with restrictive currency controls. Their slowness is inherent, not a design flaw; they “forsake scale, speed, and cost in favor of one key feature: censorship resistance.” A world in which they went mainstream would be “a very scary place indeed,” she wrote.
In summary, we have three avenues for the future of digital money, none of which offers the same mix of freedom and ease of use that characterizes cash. Private companies have an obvious incentive to monetize our data and pursue profits over public interest. Digital government money may still be used to track us, even by well-intentioned governments, and for less benign ones it’s a fantastic tool for surveillance. And cryptocurrency can prove useful when freedoms are at risk, but it likely won’t work at scale anytime soon, if ever.
How big a problem is this? That depends on where you live, how much you trust your government and your fellow citizens, and why you wish to use cash. And if you’d rather keep that to yourself, you’re in luck. For now.
Machine-learning systems can be duped or confounded by situations they haven’t seen before. A self-driving car gets flummoxed by a scenario that a human driver could handle easily. An AI system laboriously trained to carry out one task (identifying cats, say) has to be taught all over again to do something else (identifying dogs). In the process, it’s liable to lose some of the expertise it had in the original task. Computer scientists call this problem “catastrophic forgetting.”
These shortcomings have something in common: they exist because AI systems don’t understand causation. They see that some events are associated with other events, but they don’t ascertain which things directly make other things happen. It’s as if you knew that the presence of clouds made rain likelier, but you didn’t know clouds caused rain.
Understanding cause and effect is a big aspect of what we call common sense, and it’s an area in which AI systems today “are clueless,” says Elias Bareinboim. He should know: as the director of the new Causal Artificial Intelligence Lab at Columbia University, he’s at the forefront of efforts to fix this problem.
His idea is to infuse artificial-intelligence research with insights from the relatively new science of causality, a field shaped to a huge extent by Judea Pearl, a Turing Award–winning scholar who considers Bareinboim his protégé.
As Bareinboim and Pearl describe it, AI’s ability to spot correlations—e.g., that clouds make rain more likely—is merely the simplest level of causal reasoning. It’s good enough to have driven the boom in the AI technique known as deep learning over the past decade. Given a great deal of data about familiar situations, this method can lead to very good predictions. A computer can calculate the probability that a patient with certain symptoms has a certain disease, because it has learned just how often thousands or even millions of other people with the same symptoms had that disease.
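The kind of prediction described here boils down to counting co-occurrences. A sketch with invented patient records (the numbers are made up for illustration):

```python
from collections import Counter

# Hypothetical patient records: (has_symptom, has_disease)
records = ([(True, True)] * 30 + [(True, False)] * 10 +
           [(False, True)] * 5 + [(False, False)] * 55)

counts = Counter(records)
with_symptom = counts[(True, True)] + counts[(True, False)]

# P(disease | symptom): among symptomatic patients, how many are sick?
p_disease_given_symptom = counts[(True, True)] / with_symptom
print(p_disease_given_symptom)   # 0.75: 30 of the 40 symptomatic patients
```

Nothing in this calculation knows whether the symptom causes the disease, the disease causes the symptom, or something else causes both—which is exactly the limitation Bareinboim and Pearl are pointing at.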
But there’s a growing consensus that progress in AI will stall if computers don’t get better at wrestling with causation. If machines could grasp that certain things lead to other things, they wouldn’t have to learn everything anew all the time—they could take what they had learned in one domain and apply it to another. And if machines could use common sense we’d be able to put more trust in them to take actions on their own, knowing that they aren’t likely to make dumb errors.
Today’s AI has only a limited ability to infer what will result from a given action. In reinforcement learning, a technique that has allowed machines to master games like chess and Go, a system uses extensive trial and error to discern which moves will essentially cause them to win. But this approach doesn’t work in messier settings in the real world. It doesn’t even leave a machine with a general understanding of how it might play other games.
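The trial-and-error loop described above can be shown at toy scale. This minimal Q-learning sketch (a five-state corridor with a reward at one end—nothing like chess or Go) learns which moves lead to a win purely from experience, with no model of why they work:

```python
import random

random.seed(1)

# Five states in a row; start at 0, reward for reaching state 4.
# Actions: 0 = step left, 1 = step right.
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # value estimates per (state, action)
alpha, gamma, eps = 0.5, 0.9, 0.2           # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < eps:
            a = random.randint(0, 1)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Standard Q-learning update toward reward plus discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)]
print(policy)   # the greedy action in every state is 1 ("right")
```

The agent ends up with values, not understanding: retarget the reward and everything must be relearned from scratch, which is the brittleness the passage describes.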
An even higher level of causal thinking would be the ability to reason about why things happened and ask “what if” questions. A patient dies while in a clinical trial; was it the fault of the experimental medicine or something else? School test scores are falling; what policy changes would most improve them? This kind of reasoning is far beyond the current capability of artificial intelligence.
The dream of endowing computers with causal reasoning drew Bareinboim from Brazil to the United States in 2008, after he completed a master’s in computer science at the Federal University of Rio de Janeiro. He jumped at an opportunity to study under Judea Pearl, a computer scientist and statistician at UCLA. Pearl, 83, is a giant—the giant—of causal inference, and his career helps illustrate why it’s hard to create AI that understands causality.
Even well-trained scientists are apt to misinterpret correlations as signs of causation—or to err in the opposite direction, hesitating to call out causation even when it’s justified. In the 1950s, for example, a few prominent statisticians muddied the waters around whether tobacco caused cancer. They argued that without an experiment randomly assigning people to be smokers or nonsmokers, no one could rule out the possibility that some unknown—stress, perhaps, or some gene—caused people both to smoke and to get lung cancer.
Eventually, the fact that smoking causes cancer was definitively established, but it needn’t have taken so long. Since then, Pearl and other statisticians have devised a mathematical approach to identifying what facts would be required to support a causal claim. Pearl’s method shows that, given the prevalence of smoking and lung cancer, an independent factor causing both would be extremely unlikely.
Conversely, Pearl’s formulas also help identify when correlations can’t be used to determine causation. Bernhard Schölkopf, who researches causal AI techniques as a director at Germany’s Max Planck Institute for Intelligent Systems, points out that you can predict a country’s birth rate if you know its population of storks. That isn’t because storks deliver babies or because babies attract storks, but probably because economic development leads to more babies and more storks. Pearl has helped give statisticians and computer scientists ways of attacking such problems, Schölkopf says.
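Schölkopf’s stork example is easy to simulate: give a hidden “development” factor a hand in both variables and a strong correlation appears even though neither variable affects the other. A sketch with synthetic data (the linear model and noise levels are arbitrary choices for illustration):

```python
import random

random.seed(0)

# Toy model: economic development drives BOTH stork counts and birth
# rates; storks have no direct effect on births, nor vice versa.
n = 10_000
development = [random.gauss(0, 1) for _ in range(n)]
storks = [d + random.gauss(0, 0.5) for d in development]
births = [d + random.gauss(0, 0.5) for d in development]

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(corr(storks, births))   # strongly positive, despite no causal link
```

A purely correlational learner would happily use storks to predict births—and be right—while remaining entirely wrong about what would happen if you intervened on either one.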
Pearl’s work has also led to the development of causal Bayesian networks—software that sifts through large amounts of data to detect which variables appear to have the most influence on other variables. For example, GNS Healthcare, a company in Cambridge, Massachusetts, uses these techniques to advise researchers about experiments that look promising.
In one project, GNS worked with researchers who study multiple myeloma, a kind of blood cancer. The researchers wanted to know why some patients with the disease live longer than others after getting stem-cell transplants, a common form of treatment. The software churned through data with 30,000 variables and pointed to a few that seemed especially likely to be causal. Biostatisticians and experts in the disease zeroed in on one in particular: the level of a certain protein in patients’ bodies. Researchers could then run a targeted clinical trial to see whether patients with the protein did indeed benefit more from the treatment. “It’s way faster than poking here and there in the lab,” says GNS cofounder Iya Khalil.
Nonetheless, the improvements that Pearl and other scholars have achieved in causal theory haven’t yet made many inroads in deep learning, which identifies correlations without too much worry about causation. Bareinboim is working to take the next step: making computers more useful tools for human causal explorations.
Pearl says AI can’t be truly intelligent until it has a rich understanding of cause and effect, which would enable the introspection that is at the core of cognition.
One of Bareinboim’s systems, which is still in beta, can help scientists determine whether they have sufficient data to answer a causal question. Richard McElreath, an anthropologist at the Max Planck Institute for Evolutionary Anthropology, is using the software to guide research into why humans go through menopause (we are the only apes that do).
The hypothesis is that the decline of fertility in older women benefited early human societies because women who put more effort into caring for grandchildren ultimately had more descendants. But what evidence might exist today to support the claim that children do better with grandparents around? Anthropologists can’t just compare the educational or medical outcomes of children who have lived with grandparents and those who haven’t. There are what statisticians call confounding factors: grandmothers might be likelier to live with grandchildren who need the most help. Bareinboim’s software can help McElreath discern which studies about kids who grew up with their grandparents are least riddled with confounding factors and could be valuable in answering his causal query. “It’s a huge step forward,” McElreath says.
The last mile
Bareinboim talks fast and often gestures with two hands in the air, as if he’s trying to balance two sides of a mental equation. It was halfway through the semester when I visited him at Columbia in October, but it seemed as if he had barely moved into his office—hardly anything on the walls, no books on the shelves, only a sleek Mac computer and a whiteboard so dense with equations and diagrams that it looked like a detail from a cartoon about a mad professor.
He shrugged off the provisional state of the room, saying he had been very busy giving talks about both sides of the causal revolution. Bareinboim believes work like his offers the opportunity not just to incorporate causal thinking into machines, but also to improve it in humans.
Getting people to think more carefully about causation isn’t necessarily much easier than teaching it to machines, he says. Researchers in a wide range of disciplines, from molecular biology to public policy, are sometimes content to unearth correlations that are not actually rooted in causal relationships. For instance, some studies suggest drinking alcohol will kill you early, while others indicate that moderate consumption is fine and even beneficial, and still other research has found that heavy drinkers outlive nondrinkers. This phenomenon, known as the “reproducibility crisis,” crops up not only in medicine and nutrition but also in psychology and economics. “You can see the fragility of all these inferences,” says Bareinboim. “We’re flipping results every couple of years.”
He argues that anyone asking “what if”—medical researchers setting up clinical trials, social scientists developing pilot programs, even web publishers preparing A/B tests—should start not merely by gathering data but by using Pearl’s causal logic and software like Bareinboim’s to determine whether the available data could possibly answer a causal hypothesis. Eventually, he envisions this leading to “automated scientist” software: a human could dream up a causal question to go after, and the software would combine causal inference theory with machine-learning techniques to rule out experiments that wouldn’t answer the question. That might save scientists from a huge number of costly dead ends.
Bareinboim described this vision while we were sitting in the lobby of MIT’s Sloan School of Management, after a talk he gave last fall. “We have a building here at MIT with, I don’t know, 200 people,” he said. How do those social scientists, or any scientists anywhere, decide which experiments to pursue and which data points to gather? By following their intuition: “They are trying to see where things will lead, based on their current understanding.”
That’s an inherently limited approach, he said, because human scientists designing an experiment can consider only a handful of variables in their minds at once. A computer, on the other hand, can see the interplay of hundreds or thousands of variables. Encoded with “the basic principles” of Pearl’s causal calculus and able to calculate what might happen with new sets of variables, an automated scientist could suggest exactly which experiments the human researchers should spend their time on. Maybe some public policy that has been shown to work only in Texas could be made to work in California if a few causally relevant factors were better appreciated. Scientists would no longer be “doing experiments in the darkness,” Bareinboim said.
He also doesn’t think it’s that far off: “This is the last mile before the victory.”
Finishing that mile will probably require techniques that are just beginning to be developed. For example, Yoshua Bengio, a computer scientist at the University of Montreal who shared the 2018 Turing Award for his work on deep learning, is trying to get neural networks—the software at the heart of deep learning—to do “meta-learning” and notice the causes of things.
As things stand now, if you wanted a neural network to detect when people are dancing, you’d show it many, many images of dancers. If you wanted it to identify when people are running, you’d show it many, many images of runners. The system would learn to distinguish runners from dancers by identifying features that tend to be different in the images, such as the positions of a person’s hands and arms. But Bengio points out that fundamental knowledge about the world can be gleaned by analyzing the things that are similar or “invariant” across data sets. Maybe a neural network could learn that movements of the legs physically cause both running and dancing. Maybe after seeing these examples and many others that show people only a few feet off the ground, a machine would eventually understand something about gravity and how it limits human movement. Over time, with enough meta-learning about variables that are consistent across data sets, a computer could gain causal knowledge that would be reusable in many domains.
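The invariance intuition can be sketched in a few lines. This is not Bengio's method, just an illustration with invented toy data: a feature whose relationship to the label holds across data sets ("environments") is a better causal candidate than one whose relationship flips.

```python
# Two toy "environments": (leg_movement, background_color, label) rows.
# leg_movement drives the label in both; background_color only appears
# correlated in environment A. All values are made up for illustration.
env_a = [(1.0, 1.0, 1), (0.9, 0.9, 1), (0.1, 0.1, 0), (0.2, 0.2, 0)]
env_b = [(1.0, 0.1, 1), (0.8, 0.0, 1), (0.2, 0.9, 0), (0.1, 1.0, 0)]

def correlation(rows, i):
    """Pearson correlation between feature i and the label."""
    xs = [r[i] for r in rows]
    ys = [float(r[2]) for r in rows]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

results = {}
for i, name in [(0, "leg_movement"), (1, "background_color")]:
    ca, cb = correlation(env_a, i), correlation(env_b, i)
    results[name] = abs(ca - cb) < 0.5   # stable across environments?
    print(f"{name}: corr {ca:+.2f} vs {cb:+.2f} -> invariant={results[name]}")
```

Leg movement stays strongly correlated with the label in both environments; background color flips sign, flagging it as a spurious cue rather than a cause.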
For his part, Pearl says AI can’t be truly intelligent until it has a rich understanding of cause and effect. Although causal reasoning wouldn’t be sufficient for an artificial general intelligence, it’s necessary, he says, because it would enable the introspection that is at the core of cognition. “What if” questions “are the building blocks of science, of moral attitudes, of free will, of consciousness,” Pearl told me.
You can’t draw Pearl into predicting how long it will take for computers to get powerful causal reasoning abilities. “I am not a futurist,” he says. But in any case, he thinks the first move should be to develop machine-learning tools that combine data with available scientific knowledge: “We have a lot of knowledge that resides in the human skull which is not utilized.”
Brian Bergstein, a former editor at MIT Technology Review, is deputy opinion editor at the Boston Globe.
Moore’s argument was an economic one. Integrated circuits, with multiple transistors and other electronic devices interconnected with aluminum metal lines on a tiny square of silicon wafer, had been invented a few years earlier by Robert Noyce at Fairchild Semiconductor. Moore, the company’s R&D director, realized, as he wrote in 1965, that with these new integrated circuits, “the cost per component is nearly inversely proportional to the number of components.” It was a beautiful bargain—in theory, the more transistors you added, the cheaper each one got. Moore also saw that there was plenty of room for engineering advances to increase the number of transistors you could affordably and reliably put on a chip.
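Moore's bargain has a sweet spot: spreading a roughly fixed fabrication cost over more components drives the per-component cost down, but packing in more components also hurts yield. The toy model below is not Moore's actual 1965 analysis; the wafer cost and defect rate are invented numbers, used only to show why the cost curve has a minimum that engineering advances keep pushing outward.

```python
def cost_per_component(n, wafer_cost=10.0, defect_rate=1e-5):
    """Illustrative cost model (hypothetical numbers): a fixed per-chip
    processing cost spread over n components, divided by the fraction
    of chips in which all n components work."""
    yield_fraction = (1 - defect_rate) ** n
    return wafer_cost / n / yield_fraction

# Sweep component counts to find the cheapest design point
best_n = min(range(1000, 200_000, 1000), key=cost_per_component)
print(f"cheapest at ~{best_n} components, "
      f"${cost_per_component(best_n):.2e} each")
```

Lowering the defect rate or the fixed cost shifts the optimum toward more components per chip, which is the economic engine behind the doubling Moore observed.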
Soon these cheaper, more powerful chips would become what economists like to call a general purpose technology—one so fundamental that it spawns all sorts of other innovations and advances in multiple industries. A few years ago, leading economists credited the information technology made possible by integrated circuits with a third of US productivity growth since 1974. Almost every technology we care about, from smartphones to cheap laptops to GPS, is a direct reflection of Moore’s prediction. It has also fueled today’s breakthroughs in artificial intelligence and genetic medicine, by giving machine-learning techniques the ability to chew through massive amounts of data to find answers.
But how did a simple prediction, based on extrapolating from a graph of the number of transistors by year—a graph that at the time had only a few data points—come to define a half-century of progress? In part, at least, because the semiconductor industry decided it would.
Moore wrote that “cramming more components onto integrated circuits,” the title of his 1965 article, would “lead to such wonders as home computers—or at least terminals connected to a central computer—automatic controls for automobiles, and personal portable communications equipment.” In other words, stick to his road map of squeezing ever more transistors onto chips and it would lead you to the promised land. And for the following decades, a booming industry, the government, and armies of academic and industrial researchers poured money and time into upholding Moore’s Law, creating a self-fulfilling prophecy that kept progress on track with uncanny accuracy. Though the pace of progress has slipped in recent years, the most advanced chips today have nearly 50 billion transistors.
Every year since 2001, MIT Technology Review has chosen the 10 most important breakthrough technologies of the year. It’s a list of technologies that, almost without exception, are possible only because of the computation advances described by Moore’s Law.
For some of the items on this year’s list the connection is obvious: consumer devices, including watches and phones, infused with AI; climate-change attribution made possible by improved computer modeling and data gathered from worldwide atmospheric monitoring systems; and cheap, pint-size satellites. Others on the list, including quantum supremacy, molecules discovered using AI, and even anti-aging treatments and hyper-personalized drugs, are due largely to the computational power available to researchers.
But what happens when Moore’s Law inevitably ends? Or what if, as some suspect, it has already died, and we are already running on the fumes of the greatest technology engine of our time?
“It’s over. This year that became really clear,” says Charles Leiserson, a computer scientist at MIT and a pioneer of parallel computing, in which multiple calculations are performed simultaneously. The newest Intel fabrication plant, meant to build chips with minimum feature sizes of 10 nanometers, was much delayed, delivering chips in 2019, five years after the previous generation of chips with 14-nanometer features. Moore’s Law, Leiserson says, was always about the rate of progress, and “we’re no longer on that rate.” Numerous other prominent computer scientists have also declared Moore’s Law dead in recent years. In early 2019, the CEO of the large chipmaker Nvidia agreed.
In truth, it’s been more a gradual decline than a sudden death. Over the decades, some, including Moore himself at times, fretted that they could see the end in sight, as it got harder to make smaller and smaller transistors. In 1999, an Intel researcher worried that the industry’s goal of making transistors smaller than 100 nanometers by 2005 faced fundamental physical problems with “no known solutions,” like the quantum effects of electrons wandering where they shouldn’t be.
For years the chip industry managed to evade these physical roadblocks. New transistor designs were introduced to better corral the electrons. New lithography methods using extreme ultraviolet radiation were invented when the wavelengths of visible light were too long to precisely carve out silicon features of only a few tens of nanometers. But progress grew ever more expensive. Economists at Stanford and MIT have calculated that the research effort going into upholding Moore’s Law has risen by a factor of 18 since 1971.
Likewise, the fabs that make the most advanced chips are becoming prohibitively pricey. The cost of a fab is rising at around 13% a year, and is expected to reach $16 billion or more by 2022. Not coincidentally, the number of companies with plans to make the next generation of chips has now shrunk to only three, down from eight in 2010 and 25 in 2002.
Nonetheless, Intel—one of those three chipmakers—isn’t expecting a funeral for Moore’s Law anytime soon. Jim Keller, who took over as Intel’s head of silicon engineering in 2018, is the man with the job of keeping it alive. He leads a team of some 8,000 hardware engineers and chip designers at Intel. When he joined the company, he says, many were anticipating the end of Moore’s Law. If they were right, he recalls thinking, “that’s a drag” and maybe he had made “a really bad career move.”
But Keller found ample technical opportunities for advances. He points out that there are probably more than a hundred variables involved in keeping Moore’s Law going, each of which provides different benefits and faces its own limits. It means there are many ways to keep doubling the number of devices on a chip—innovations such as 3D architectures and new transistor designs.
These days Keller sounds optimistic. He says he has been hearing about the end of Moore’s Law for his entire career. After a while, he “decided not to worry about it.” He says Intel is on pace for the next 10 years, and he will happily do the math for you: 65 billion (number of transistors) times 32 (if chip density doubles every two years) is 2 trillion transistors. “That’s a 30 times improvement in performance,” he says, adding that if software developers are clever, we could get chips that are a hundred times faster in 10 years.
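Keller's back-of-envelope math is easy to check. Five doublings fit in ten years at one doubling every two years, and 2^5 is 32:

```python
# Keller's arithmetic as quoted above
transistors_now = 65e9        # ~65 billion transistors today
doublings = 10 / 2            # density doubles every two years, over 10 years
future = transistors_now * 2 ** doublings
print(f"{future:.2e} transistors")   # ~2 trillion
```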
Still, even if Intel and the other remaining chipmakers can squeeze out a few more generations of even more advanced microchips, the days when you could reliably count on faster, cheaper chips every couple of years are clearly over. That doesn’t, however, mean the end of computational progress.
Time to panic
Neil Thompson is an economist, but his office is at CSAIL, MIT’s sprawling AI and computer center, surrounded by roboticists and computer scientists, including his collaborator Leiserson. In a new paper, the two document ample room for improving computational performance through better software, algorithms, and specialized chip architecture.
One opportunity is in slimming down so-called software bloat to wring the most out of existing chips. When chips could always be counted on to get faster and more powerful, programmers didn’t need to worry much about writing more efficient code. And they often failed to take full advantage of changes in hardware architecture, such as the multiple cores, or processors, seen in chips used today.
Thompson and his colleagues showed that they could get a computationally intensive calculation to run some 47 times faster just by switching from Python, a popular general-purpose programming language, to the more efficient C. That’s because C, while it requires more work from the programmer, greatly reduces the required number of operations, making a program run much faster. Further tailoring the code to take full advantage of a chip with 18 processing cores sped things up even more. In just 0.41 seconds, the researchers got a result that took seven hours with Python code.
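The scale of their result is hard to reproduce in a few lines, but the principle is not. The snippet below is a small-scale sketch (not the researchers' code, and the matrix size and structure are assumptions for illustration): the same arithmetic, restructured for better memory access and with the inner loop pushed into built-ins, produces identical results with different constant factors.

```python
import random
import time

random.seed(1)
n = 120
A = [[random.random() for _ in range(n)] for _ in range(n)]
B = [[random.random() for _ in range(n)] for _ in range(n)]

def matmul_naive(A, B):
    """Textbook triple loop: the style of unoptimized code the paper
    starts from."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C

def matmul_tuned(A, B):
    """Same arithmetic, restructured: transpose B so each dot product
    reads a contiguous row, and let zip/sum drive the inner loop."""
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt]
            for row in A]

t0 = time.perf_counter()
C1 = matmul_naive(A, B)
t1 = time.perf_counter()
C2 = matmul_tuned(A, B)
t2 = time.perf_counter()
print(f"naive: {t1 - t0:.3f}s  tuned: {t2 - t1:.3f}s")
```

Moving the whole computation to C, and then to all 18 cores, is what took the researchers from seven hours to a fraction of a second.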
That sounds like good news for continuing progress, but Thompson worries it also signals the decline of computers as a general purpose technology. Rather than “lifting all boats,” as Moore’s Law has, by offering ever faster and cheaper chips that were universally available, advances in software and specialized architecture will now start to selectively target specific problems and business opportunities, favoring those with sufficient money and resources.
Indeed, the move to chips designed for specific applications, particularly in AI, is well under way. Deep learning and other AI applications increasingly rely on graphics processing units (GPUs) adapted from gaming, which can handle parallel operations, while companies like Google, Microsoft, and Baidu are designing AI chips for their own particular needs. AI, particularly deep learning, has a huge appetite for computer power, and specialized chips can greatly speed up its performance, says Thompson.
But the trade-off is that specialized chips are less versatile than traditional CPUs. Thompson is concerned that chips for more general computing are becoming a backwater, slowing “the overall pace of computer improvement,” as he writes in an upcoming paper, “The Decline of Computers as a General Purpose Technology.”
At some point, says Erica Fuchs, a professor of engineering and public policy at Carnegie Mellon, those developing AI and other applications will miss the decreases in cost and increases in performance delivered by Moore’s Law. “Maybe in 10 years or 30 years—no one really knows when—you’re going to need a device with that additional computation power,” she says.
The problem, says Fuchs, is that the successors to today’s general purpose chips are unknown and will take years of basic research and development to create. If you’re worried about what will replace Moore’s Law, she suggests, “the moment to panic is now.” There are, she says, “really smart people in AI who aren’t aware of the hardware constraints facing long-term advances in computing.” What’s more, she says, because application-specific chips are proving hugely profitable, there are few incentives to invest in new logic devices and ways of doing computing.
Wanted: A Marshall Plan for chips
In 2018, Fuchs and her CMU colleagues Hassan Khan and David Hounshell wrote a paper tracing the history of Moore’s Law and identifying the changes behind today’s lack of the industry and government collaboration that fostered so much progress in earlier decades. They argued that “the splintering of the technology trajectories and the short-term private profitability of many of these new splinters” means we need to greatly boost public investment in finding the next great computer technologies.
If economists are right, and much of the growth in the 1990s and early 2000s was a result of microchips—and if, as some suggest, the sluggish productivity growth that began in the mid-2000s reflects the slowdown in computational progress—then, says Thompson, “it follows you should invest enormous amounts of money to find the successor technology. We’re not doing it. And it’s a public policy failure.”
There’s no guarantee that such investments will pay off. Quantum computing, carbon nanotube transistors, even spintronics, are enticing possibilities—but none are obvious replacements for the promise that Gordon Moore first saw in a simple integrated circuit. We need the research investments now to find out, though. Because one prediction is pretty much certain to come true: we’re always going to want more computing power.
MANAUS, Brazil (AP) — Rivers around the biggest city in Brazil’s Amazon rainforest have swelled to levels unseen in over a century of record-keeping, according to data published Tuesday by Manaus’ port authorities, straining a society that has grown weary of increasingly frequent flooding.
The Rio Negro was at its highest level since records began in 1902, with a depth of 29.98 meters (98 feet) at the port’s measuring station. The nearby Solimoes and Amazon rivers were also nearing all-time highs, flooding streets and houses in dozens of municipalities and affecting some 450,000 people in the region.
Higher-than-usual precipitation is associated with the La Nina phenomenon, when currents in the central and eastern Pacific Ocean affect global climate patterns. Environmental experts and organizations including the U.S. Environmental Protection Agency and the National Oceanic and Atmospheric Administration say there is strong evidence that human activity and global warming are altering the frequency and intensity of extreme weather events, including La Nina.
Seven of the 10 biggest floods in the Amazon basin have occurred in the past 13 years, data from Brazil’s state-owned Geological Survey shows.
“If we continue to destroy the Amazon the way we do, the climatic anomalies will become more and more accentuated,” said Virgílio Viana, director of the Sustainable Amazon Foundation, a nonprofit. “Greater floods on the one hand, greater droughts on the other.”
Large swaths of Brazil are currently drying up in a severe drought, with a possible shortfall in power generation from the nation’s hydroelectric plants and increased electricity prices, government authorities have warned.
But in Manaus, 66-year-old Julia Simas has water ankle-deep in her home. Simas has lived in the working-class neighborhood of Sao Jorge since 1974 and is used to seeing the river rise and fall with the seasons. Simas likes her neighborhood because it is safe and clean. But the quickening pace of the floods in the last decade has her worried.
“From 1974 until recently, many years passed and we wouldn’t see any water. It was a normal place,” she said.
When the river does overflow its banks and flood her street, she and other residents use boards and beams to build rudimentary scaffolding within their homes to raise their floors above the water.
“I think human beings have contributed a lot (to this situation),” she said. “Nature doesn’t forgive. She comes and doesn’t want to know whether you’re ready to face her or not.”
Flooding also has a significant impact on local industries such as farming and cattle ranching. Many family-run operations have seen their production vanish under water. Others have been unable to reach their shops, offices and market stalls or clients.
“With these floods, we’re out of work,” said Elias Gomes, a 38-year-old electrician in Cacau Pirera, on the other side of the Rio Negro, though he noted he’s been able to earn a bit by transporting neighbors in his small wooden boat.
Gomes is now looking to move to a more densely populated area where floods won’t threaten his livelihood.
Limited access to banking in remote parts of the Amazon can make things worse for residents, who are often unable to get loans or financial compensation for lost production, said Viana, of the Sustainable Amazon Foundation. “This is a clear case of climate injustice: Those who least contributed to global warming and climate change are the most affected.”
Meteorologists say Amazon water levels could continue to rise slightly until late June or July, when floods usually peak.