On Monday, a weighty draft report on how to halt and reverse human-caused global warming will hit the inboxes of government experts. This is the final review before the Intergovernmental Panel on Climate Change (IPCC) issues its official summary of the science.
While part of the brief was to identify barriers to climate action, critics say there is little space given to the obstructive role of fossil fuel lobbying – and that’s a problem.
Robert Brulle, an American sociologist who has long studied institutions that promote climate denial, likened it to “trying to tell the story of Star Wars, but omitting Darth Vader”.
Tweeting in November, Brulle explained he declined an invitation to contribute to the working group three (WG3) report. “It became clear to me that institutionalized efforts to obstruct climate action was a peripheral concern. So I didn’t consider it worth engaging in this effort. It really deserves its own chapter & mention in the summary.”
In an email exchange with Climate Home News, Brulle expressed a hope the final version would nonetheless reflect his feedback. The significance of obstruction efforts should be reflected in the summary for policymakers and not “buried in an obscure part of the report,” he wrote.
His tweet sparked a lively conversation among scientists, with several supporting his concerns and others defending the IPCC, which aims to give policymakers an overview of the scientific consensus.
David Keith, a Harvard researcher into solar geoengineering, agreed the IPCC “tells a bloodless story, an abstract, numb version of the sharp political conflict that will shape climate action”.
Social ecology and ecological economics professor Julia Steinberger, a lead author on WG3, said “there is a lot of self-censorship” within the IPCC. Where authors identify enemies of climate action, like fossil fuel companies, that content is “immediately flagged as political or normative or policy-prescriptive”.
The next set of reports is likely to be “a bit better” at covering the issue than previous efforts, Steinberger added, “but mainly because the world and outside publications have overwhelmingly moved past this, and the IPCC is catching up: not because the IPCC is leading.”
Politics professor Matthew Paterson was a lead author on WG3 for the previous round of assessment reports, published in 2014. He told Climate Home that Brulle is “broadly right” that lobbying hasn’t been given enough attention, although there is a “decent chunk” in the latest draft on corporations fighting for their interests and slowing down climate action.
Paterson said this was partly because the expertise of authors didn’t cover fossil fuel company lobbying and partly because governments would oppose giving the subject greater prominence. “Not just Saudi Arabia,” he said. “They object to everything. But the Americans [and others too]”.
While the IPCC reports are produced by scientists, government representatives negotiate the initial scope and have some influence over how the evidence is summarised before approving them for publication. “There was definitely always a certain adaptation – or an internalised sense of what governments are and aren’t going to accept – in the report,” said Paterson.
The last WG3 report, published in 2014, was nearly 1,500 pages long. Lobbying was not mentioned in its 32-page ‘summary for policymakers’, though lobbying against carbon taxes was mentioned a few times in the full report.
On page 1,184, the report says some companies “promoted climate scepticism by providing financial resources to like-minded think-tanks and politicians”. The report immediately balances this by saying “other fossil fuel companies adopted a more supportive position on climate science”.
One of the co-chairs of WG3, Jim Skea, rejected the criticisms as “completely unfair”. He told Climate Home News: “The IPCC produces reports very slowly because the whole cycle lasts seven years… we can’t respond on a 24/7 news cycle basis to ideas that come up.”
Skea noted there was a chapter on policies and institutions in the 2014 report which covered lobbying from industry and from green campaigners and their influence on climate policy. “The volume of climate change mitigation literature that comes out every year is huge and I would say that the number of references to articles which talk about lobbying of all kinds – including industrial lobbying and whether people had known about the science – it is in there and about the right proportions”, he said.
“We’re not an advocacy organisation, we’re a scientific organisation, it’s not our job to take up arms and take one side or another,” he said. “That’s the strength of the IPCC. If it oversteps its role, it will weaken its influence” and “undermine the scientific statements it makes”.
A broader, long-running criticism of the IPCC is that it downplays subjects like political science, development studies, sociology and anthropology and over-relies on economists and the people who put together ‘integrated assessment models’ (IAMs), which attempt to answer big questions like how the world can keep to 1.5C of global warming.
Paterson said the IPCC is “largely dominated by large-scale modellers or economists and the representation of other sorts of social scientists’ expertise is very thin”. A report he co-authored on the social make-up of that IPCC working group found that nearly half the authors were engineers or economists, but just 15% were from social sciences other than economics. This dominance was sharper among the more powerful authors: of the 35 coordinating lead authors, 20 were economists or engineers, there was one each from political science, geography and law, and none from the humanities.
Wim Carton, a lecturer in the political economy of climate change mitigation at Lund University, said that the IPCC (and scientific research in general) has been caught up in “adulation” of IAMs and this has led to “narrow techno-economic conceptualisations of future mitigation pathways”.
Skea said that there has been lots of material on political science and international relations and even “quite a bit” on moral philosophy. He told Climate Home: “It’s not the case that IPCC is only economics and modelling. Frankly, a lot of that catches attention because these macro numbers are eye-catching. There’s a big difference in the emphasis in [media] coverage of IPCC reports and the balance of materials when you go into the reports themselves.”
According to Skea’s calculations, the big models make up only 6% of the report contents, about a quarter of the summary and the majority of the press coverage. “But there’s an awful lot of bread-and-butter material in IPCC reports which is just about how you get on with it,” he added. “It’s not sexy material but it’s just as important because that’s what needs to be done to mitigate climate change.”
While saying their dominance had been amplified by the media, Skea defended the usefulness of IAMs. “Our audience are governments. Their big question is how you connect all this human activity with actual impacts on the climate. It’s very difficult to make that leap without actually modelling it. You can’t do it with lots of little micro-studies. You need models and you need scenarios to think your way through that connection.”
The IPCC has also been accused of placing too much faith in negative emissions technologies and geo-engineering. Carton calls these technologies ‘carbon unicorns’ because he says they “do not exist at any meaningful scale” and probably never will.
In a recent book chapter, Carton argues: “If one is to believe recent IPCC reports, then gone are the days when the world could resolve the climate crisis merely by reducing emissions. Avoiding global warming in excess of 2°C/1.5°C now also involves a rather more interventionist enterprise: to remove vast amounts of carbon dioxide from the atmosphere, amounts that only increase the longer emissions refuse to fall.”
When asked about carbon capture technologies, Skea said that in terms of deployment, “they haven’t moved on very much” since the last big IPCC report in 2014. He added that carbon capture and storage and bio-energy are “all things that have been done commercially somewhere in the world.”
“What has never been done”, he said, “is to connect the different parts of the system together and run them overall. That’s led many people looking at the literature to conclude that the main barriers to the adoption of some technologies are the lack of policy incentives and the lack of working out good business models to put what would be complex supply chains together – rather than anything that’s standing in the way technically.”
The next set of three IPCC assessment reports was originally due to be published in 2021, but work was delayed by the coronavirus pandemic. Governments and experts will have from 18 January to 14 March to read and comment on the draft for WG3. Dates for a final government review have yet to be set.
Somewhere in Kent, tucked anonymously into acres of warehouses and light-industrial workshops, the first full-service human-composting funeral home in the United States is operational.
After nearly a decade of planning, research and fundraising — not to mention a successful campaign to change state law — Recompose is finally converting people into soil.
Outside, the entrance to Recompose looks like most of its neighbors — just another unit in a tall, almost block-sized building with plain metal siding and big, roll-up warehouse doors. But inside, it feels like an environmentalist’s version of a sleek, futuristic spaceship: spare, calm, utilitarian, with silvery ductwork above, a few soil-working tools (shovels, rakes, pitchforks) on racks, bags of tightly packaged straw neatly stacked on shelves, fern-green walls, potted plants of various sizes.
One immense structure dominates the space, looking like an enormous fragment of white honeycomb: Recompose’s 10 “vessels,” each a hexagon enclosing a steel cylinder full of soil. One day in mid-January, eight decedents were already inside eight vessels, undergoing the process of natural organic reduction (NOR) or, more colloquially, human composting.
One vessel contained the remains of Ernest “Ernie” Brooks II, a renowned underwater photographer. Organic-farming pioneer Robert “Amigo Bob” Cantisano lay in a second. A third held Paulie Bontrager, a committed environmentalist, vegan and nature lover from West Virginia who died unexpectedly while visiting her daughter in Burien.
Charlotte Bontrager, Paulie’s daughter, had read about Recompose a couple of years ago in a newspaper article.
“I discussed it with my mom,” she said. “We talked about how cool it was and why it took so long to get a service like this. I remember her saying: ‘If it’s at all possible when I die, I want to go that way.’ Longevity runs in my family — her uncle died a year ago at 104 — and I said: ‘Oh mom, you’ll be around another 30 years. I’m sure it’ll be in place by then.’”
Two years later, her mother was in a Seattle hospital with a mortal, previously undetected lung condition. Bontrager refused to search for disposition options until her mother had passed. Once she had, at 5:45 a.m., a month before what would’ve been her 75th birthday, Bontrager googled “Seattle” and “human composting” — and found that Recompose was ready.
“My mom was a very humble, loving person and would not want any kind of spotlight,” Bontrager said. “But she’d be thrilled to know she was among this first group of pioneers.”
The first bodies were “laid in” on Dec. 20, 2020, a landmark moment on a nearly 10-year journey for Recompose founder and CEO Katrina Spade. She first began mulling funerary alternatives during a minor mortality crisis of her own, as an architecture student at the University of Massachusetts, Amherst, with a partner and two young children.
Spade researched her options, which were limited to traditional burial (too toxic and expensive), cremation (too carbon-intensive) and rural green burial (too rare and inconvenient for most city dwellers). She started thinking about composting as a kind of soil-based cremation and, in 2013, finished her Master’s thesis: “Of Dirt and Decomposition: Proposing a Place for the Urban Dead.”
Other mileposts followed: feasibility studies in 2015 (with the Department of Forensic Anthropology at Western Carolina University) and 2018 (with soil scientist Lynne Carpenter-Boggs at Washington State University), a push to change state law allowing NOR to be a legal means of disposition for human remains (signed by Gov. Jay Inslee in May 2019) and raising $6.75 million in capital to get Recompose going.
In 2020, two other NOR competitors emerged: Herland Forest, a natural-burial cemetery in Klickitat County with one vessel (which it calls a “cradle”) and Return Home, which plans to open its Auburn facility with dozens of vessels in April.
Recompose costs $5,500 for everything: the body pickup (in King, Pierce and Snohomish counties), the paperwork, the process itself and an optional service. (Body transport from further away can be arranged, for an extra fee, and Recompose has already accepted bodies from California and the East Coast.)
Death care prices in the U.S. tend to be extremely inconsistent and often opaque, with few funeral homes listing costs online — a situation consumer-rights advocates have been shouting about for years. Recompose pricing is transparent and neither especially expensive nor especially cheap. According to a 2020 price survey by local nonprofit the People’s Memorial Association, cremation prices in Washington state vary by 745% (in King County, the range is $525-$4,165) and burial prices by more than 400% (again, in King County, from $1,390 for the most frugal, direct, no-service burial to $11,100 for a complete, high-end funeral service).
The Recompose process takes 30 days in a vessel full of wood chips and straw, then another few weeks in “curing bins,” large boxes (one per person) where soil is allowed to rest and continue exhaling carbon dioxide. Once that process is complete, friends and chosen family can either retrieve the soil themselves, or donate it to an ecological restoration project at Bells Mountain near Vancouver, Washington. So far, most have elected to donate.
Each vessel, Spade explained, is carefully monitored for temperature and moisture content — sensors take temperature readings every 10 minutes — to make sure the microbes inside are getting what they need for safe, efficient composting. Each vessel is slowly rotated a few times during the process. (All compost needs turning.) State regulations say the soil must maintain a temperature of 131 degrees Fahrenheit for 72 hours to safely cook away pathogens like fecal coliform and salmonella. The state also requires Recompose — and a third party — to test for those pathogens in the resulting soil, as well as heavy metals, including arsenic, lead and mercury. (The state also prohibits people who have contracted certain diseases — tuberculosis, prion infections like Creutzfeldt-Jakob disease — from undergoing NOR.)
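The numbers in the state rule above lend themselves to a simple log check. Below is a hypothetical sketch — the function name, data and reading format are illustrative, not Recompose’s actual monitoring software — of verifying that a vessel held at least 131 degrees Fahrenheit for a continuous 72 hours, given readings taken every 10 minutes:

```python
# Hypothetical compliance check for the pathogen-kill rule described
# above: readings arrive every 10 minutes and must stay at or above
# 131 F for one unbroken 72-hour stretch.

READ_INTERVAL_MIN = 10
REQUIRED_HOURS = 72
MIN_TEMP_F = 131.0

def meets_kill_requirement(readings_f):
    """Return True if some run of consecutive readings >= 131 F
    spans at least 72 hours (432 consecutive 10-minute readings)."""
    needed = REQUIRED_HOURS * 60 // READ_INTERVAL_MIN  # 432 readings
    run = 0
    for temp in readings_f:
        run = run + 1 if temp >= MIN_TEMP_F else 0
        if run >= needed:
            return True
    return False

# 72 hours of in-spec readings passes; a single dip resets the clock.
assert meets_kill_requirement([135.0] * 432)
assert not meets_kill_requirement([135.0] * 431 + [120.0])
```

The point of the sketch is the “continuous” requirement: one out-of-range reading restarts the 72-hour count, which is why the run counter resets to zero rather than merely pausing.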
“This is a very controlled process, completely driven by microbes,” Spade said. “It’s fueled by plant material and monitored in a very rigorous way.”
Oxygen is another necessary ingredient. Air is blown into each vessel through one set of tubes while exhaust is released through another set, passing through activated-carbon filters.
Steve Van Slyke, compliance director for the Puget Sound Clean Air Agency, said the emissions and odors from NOR are expected to be minimal compared to other operations the agency reviews, including cremations, demolitions of asbestos-filled buildings and marijuana cultivation. Recompose’s air permit requires no visible emissions from the facility, adequate filters, no detectable odors and independent review by a third party every three months.
Carpenter-Boggs, the soil scientist from WSU, is on hand during the first months of Recompose to keep a careful watch on the soil — and to help Spade and her team care for the dead. (Carpenter-Boggs has served as an unpaid adviser to Recompose for several years but, in her current capacity, she’s working as a paid adviser.) Before each body is laid into its vessel, Spade said, Carpenter-Boggs is usually the one who reminds everybody to take a moment and a few deep breaths. She often recites a poem by the 13th-century Sufi poet Rumi.
“It’s been quite an incredible experience for me,” Carpenter-Boggs said. “I don’t come from the funeral-care world at all and I’ve learned a lot over the past five or six years.”
Friends and chosen family of the deceased can watch that laying-in process over a livestream — or, once coronavirus restrictions are lifted, in person. So far, about 30% of the bereaved have chosen that option, including the Bontrager family, who assembled a soundtrack of their mother’s favorite music. The final song, Charlotte Bontrager said, was “Under the Boardwalk” by the Drifters.
“As I’ve learned more about Recompose, I’ve found it to be a very graceful and beautiful way to go,” Bontrager said. “It’s the natural way, the way every living thing in history has eventually been cared for, from an apple core to a human — you’re not being burned up, not being pumped full of embalming chemicals and taking up space in a container. It seems like a peaceful way for the body to move on to the next phase.”
The idea of artificial intelligence overthrowing humankind has been discussed for decades, and scientists have just delivered their verdict on whether we would be able to control a high-level computer superintelligence. The answer? Almost definitely not.
The catch is that controlling a superintelligence far beyond human comprehension would require a simulation of that superintelligence which we can analyze. But if we are unable to comprehend it, it is impossible to create such a simulation.
Rules such as ‘cause no harm to humans’ cannot be set if we do not understand the kinds of scenarios an AI will come up with, the researchers suggest. Once a computer system is working at a level beyond the scope of our programmers, we can no longer set limits.
“A superintelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’,” the researchers write.
“This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable.”
Part of the team’s reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (so that it halts), or simply loop forever trying to find one.
As Turing proved through some clever mathematics, while we can know the answer for some specific programs, it is logically impossible to find a method that tells us the answer for every potential program that could ever be written. That brings us back to AI, which in a superintelligent state could feasibly hold every possible computer program in its memory at once.
Any program written to stop an AI from harming humans and destroying the world, for example, may reach a conclusion (and halt) or not – it is mathematically impossible for us to be absolutely sure either way, which means the AI cannot be contained.
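Turing’s diagonal argument behind this claim can be sketched in a few lines of Python, assuming for contradiction a hypothetical `halts()` oracle; the names `halts` and `paradox` are illustrative and not from the study:

```python
# Sketch of Turing's diagonalization argument. We pretend a halts()
# oracle exists; Turing proved no total, correct version can.

def halts(program, arg):
    """Pretend oracle: would return True iff program(arg) halts.
    No general implementation is possible."""
    raise NotImplementedError("provably impossible in general")

def paradox(program):
    """Do the opposite of whatever the oracle predicts."""
    if halts(program, program):
        while True:      # oracle said we halt, so loop forever
            pass
    return "halted"      # oracle said we loop, so halt at once

# Either answer halts(paradox, paradox) could give is wrong:
#   True  -> paradox(paradox) loops forever, contradicting the oracle
#   False -> paradox(paradox) halts immediately, contradicting it too
# Hence a containment check built on such an oracle cannot be trusted.
```

The contradiction in the comments is the whole proof: whichever verdict the oracle returns about `paradox` run on itself, `paradox` does the opposite, so no correct oracle can exist.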
“In effect, this makes the containment algorithm unusable,” says computer scientist Iyad Rahwan of the Max-Planck Institute for Human Development in Germany.
The alternative to teaching AI some ethics and telling it not to destroy the world – something which, the researchers say, no algorithm can be absolutely certain of doing – is to limit the capabilities of the superintelligence. It could be cut off from parts of the internet or from certain networks, for example.
The new study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence – the argument being that if we are not going to use it to solve problems beyond the scope of humans, then why create it at all?
If we are going to push ahead with artificial intelligence, we might not even know when a superintelligence beyond our control has arrived, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we are taking.
“A super-intelligent machine that controls the world sounds like science fiction,” says computer scientist Manuel Cebrian of the Max-Planck Institute for Human Development. “But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned them.”
“The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”
Machine learning algorithms serve us the news we read, the ads we see, and in some cases even drive our cars. But there’s an insidious layer to these algorithms: They rely on data collected by and about humans, and they spit our worst biases right back out at us. For example, job candidate screening algorithms may automatically reject names that sound like they belong to nonwhite people, while facial recognition software is often much worse at recognizing women or nonwhite faces than it is at recognizing white male faces. An increasing number of scientists and institutions are waking up to these issues, and speaking out about the potential for AI to cause harm.
Brian Nord is one such researcher weighing his own work against the potential to cause harm with AI algorithms. Nord is a cosmologist at Fermilab and the University of Chicago, where he uses artificial intelligence to study the cosmos, and he’s been researching a concept for a “self-driving telescope” that can write and test hypotheses with the help of a machine learning algorithm. At the same time, he’s struggling with the idea that the algorithms he’s writing may one day be biased against him—and even used against him—and is working to build a coalition of physicists and computer scientists to fight for more oversight in AI algorithm development.
This interview has been edited and condensed for clarity.
Gizmodo: How did you become a physicist interested in AI and its pitfalls?
Brian Nord: My Ph.D. is in cosmology, and when I moved to Fermilab in 2012, I moved into the subfield of strong gravitational lensing. [Editor’s note: Gravitational lenses are places in the night sky where light from distant objects has been bent by the gravitational field of heavy objects in the foreground, making the background objects appear warped and larger.] I spent a few years doing strong lensing science in the traditional way, where we would visually search through terabytes of images, through thousands of candidates of these strong gravitational lenses, because they’re so weird, and no one had figured out a more conventional algorithm to identify them. Around 2015, I got kind of sad at the prospect of only finding these things with my eyes, so I started looking around and found deep learning.
Here we are a few years later—myself and a few other people popularized this idea of using deep learning—and now it’s the standard way to find these objects. People are unlikely to go back to using methods that aren’t deep learning to do galaxy recognition. We got to this point where we saw that deep learning is the thing, and really quickly saw the potential impact of it across astronomy and the sciences. It’s hitting every science now. That is a testament to the promise and peril of this technology, with such a relatively simple tool. Once you have the pieces put together right, you can do a lot of different things easily, without necessarily thinking through the implications.
Gizmodo: So what is deep learning? Why is it good and why is it bad?
BN: Traditional mathematical models (like the F=ma of Newton’s laws) are built by humans to describe patterns in data: We use our current understanding of nature, also known as intuition, to choose the pieces, the shape of these models. This means that they are often limited by what we know or can imagine about a dataset. These models are also typically smaller and are less generally applicable for many problems.
On the other hand, artificial intelligence models can be very large, with many, many degrees of freedom, so they can be made very general and able to describe lots of different data sets. Also, very importantly, they are primarily sculpted by the data that they are exposed to—AI models are shaped by the data with which they are trained. Humans decide what goes into the training set, which is then limited again by what we know or can imagine about that data. It’s not a big jump to see that if you don’t have the right training data, you can fall off the cliff really quickly.
The promise and peril are highly related. In the case of AI, the promise is in the ability to describe data that humans don’t yet know how to describe with our ‘intuitive’ models. But, perilously, the data sets used to train them incorporate our own biases. When it comes to AI recognizing galaxies, we’re risking biased measurements of the universe. When it comes to AI recognizing human faces, when our data sets are biased against Black and Brown faces for example, we risk discrimination that prevents people from using services, that intensifies surveillance apparatus, that jeopardizes human freedoms. It’s critical that we weigh and address these consequences before we imperil people’s lives with our research.
Gizmodo: When did the light bulb go off in your head that AI could be harmful?
BN: I gotta say that it was with the Machine Bias article from ProPublica in 2016, where they discuss recidivism and sentencing procedure in courts. At the time of that article, there was a closed-source algorithm used to make recommendations for sentencing, and judges were allowed to use it. There was no public oversight of this algorithm, which ProPublica found was biased against Black people; people could use algorithms like this willy nilly without accountability. I realized that as a Black man, I had spent the last few years getting excited about neural networks, then saw it quite clearly that these applications that could harm me were already out there, already being used, and were already starting to become embedded in our social structure through the criminal justice system. Then I started paying attention more and more. I realized countries across the world were using surveillance technology, incorporating machine learning algorithms, for widespread oppressive uses.
Gizmodo: How did you react? What did you do?
BN: I didn’t want to reinvent the wheel; I wanted to build a coalition. I started looking into groups like Fairness, Accountability and Transparency in Machine Learning, plus Black in AI, who is focused on building communities of Black researchers in the AI field, but who also has the unique awareness of the problem because we are the people who are affected. I started paying attention to the news and saw that Meredith Whittaker had started a think tank to combat these things, and Joy Buolamwini had helped found the Algorithmic Justice League. I brushed up on what computer scientists were doing and started to look at what physicists were doing, because that’s my principal community.
It became clear to folks like me and Savannah Thais that physicists needed to realize that they have a stake in this game. We get government funding, and we tend to take a fundamental approach to research. If we bring that approach to AI, then we have the potential to affect the foundations of how these algorithms work and impact a broader set of applications. I asked myself and my colleagues what our responsibility in developing these algorithms was and in having some say in how they’re being used down the line.
Gizmodo: How is it going so far?
BN: Currently, we’re going to write a white paper for SNOWMASS, this high-energy physics event. The SNOWMASS process determines the vision that guides the community for about a decade. I started to identify individuals to work with, fellow physicists, and experts who care about the issues, and develop a set of arguments for why physicists from institutions, individuals, and funding agencies should care deeply about these algorithms they’re building and implementing so quickly. It’s a piece that’s asking people to think about how much they are considering the ethical implications of what they’re doing.
We’ve already held a workshop at the University of Chicago where we’ve begun discussing these issues, and at Fermilab we’ve had some initial discussions. But we don’t yet have the critical mass across the field to develop policy. We can’t do it ourselves as physicists; we don’t have backgrounds in social science or technology studies. The right way to do this is to bring physicists together from Fermilab and other institutions with social scientists and ethicists and science and technology studies folks and professionals, and build something from there. The key is going to be through partnership with these other disciplines.
Gizmodo: Why haven’t we reached that critical mass yet?
BN: I think we need to show people, as Angela Davis has said, that our struggle is also their struggle. That’s why I’m talking about coalition building. The thing that affects us also affects them. One way to do this is to clearly lay out the potential harm beyond just race and ethnicity. Recently, there was this discussion of a paper that used neural networks to try and speed up the selection of candidates for Ph.D programs. They trained the algorithm on historical data. So let me be clear, they said here’s a neural network, here’s data on applicants who were denied and accepted to universities. Those applicants were chosen by faculty and people with biases. It should be obvious to anyone developing that algorithm that you’re going to bake in the biases in that context. I hope people will see these things as problems and help build our coalition.
Gizmodo: What is your vision for a future of ethical AI?
BN: What if there were an agency or agencies for algorithmic accountability? I could see these existing at the local level, the national level, and the institutional level. We can’t predict all of the future uses of technology, but we need to be asking questions at the beginning of the processes, not as an afterthought. An agency would help ask these questions and still allow the science to get done, but without endangering people’s lives. Alongside agencies, we need policies at various levels that make a clear decision about how safe the algorithms have to be before they are used on humans or other living things. If I had my druthers, these agencies and policies would be built by an incredibly diverse group of people. We’ve seen instances where a homogeneous group develops an app or technology and didn’t see the things that another group who’s not there would have seen. We need people across the spectrum of experience to participate in designing policies for ethical AI.
Gizmodo: What are your biggest fears about all of this?
BN: My biggest fear is that people who already have access to technology resources will continue to use them to subjugate people who are already oppressed; Pratyusha Kalluri has also advanced this idea of power dynamics. That’s what we’re seeing across the globe. Sure, there are cities that are trying to ban facial recognition, but unless we have a broader coalition, unless we have more cities and institutions willing to take on this thing directly, we’re not going to be able to keep this tool from exacerbating the white supremacy, racism, and misogyny that already exist inside structures today. If we don’t push policy that puts the lives of marginalized people first, then they’re going to continue being oppressed, and it’s going to accelerate.
Gizmodo: How has thinking about AI ethics affected your own research?
BN: I have to question whether I want to do AI work and how I’m going to do it; whether or not it’s the right thing to do to build a certain algorithm. That’s something I have to keep asking myself… Before, it was like, how fast can I discover new things and build technology that can help the world learn something? Now there’s a significant piece of nuance to that. Even the best things for humanity could be used in some of the worst ways. It’s a fundamental rethinking of the order of operations when it comes to my research.
I don’t think it’s weird to think about safety first. We have OSHA and safety groups at institutions who write down lists of things you have to check off before you’re allowed to take out a ladder, for example. Why are we not doing the same thing in AI? A part of the answer is obvious: Not all of us are people who experience the negative effects of these algorithms. But as one of the few Black people at the institutions I work in, I’m aware of it, I’m worried about it, and the scientific community needs to appreciate that my safety matters too, and that my safety concerns don’t end when I walk out of work.
Gizmodo: Anything else?
BN: I’d like to re-emphasize that when you look at some of the research that has come out, like vetting candidates for graduate school, or when you look at the biases of the algorithms used in criminal justice, these are problems being repeated over and over again, with the same biases. It doesn’t take a lot of investigation to see that bias enters these algorithms very quickly. The people developing them should really know better. Maybe there needs to be more educational requirements for algorithm developers to think about these issues before they have the opportunity to unleash them on the world.
This conversation needs to be raised to the level where individuals and institutions consider these issues a priority. Once you’re there, you need people to see that this is an opportunity for leadership. If we can get a grassroots community to help an institution to take the lead on this, it incentivizes a lot of people to start to take action.
And finally, people who have expertise in these areas need to be allowed to speak their minds. We can’t allow our institutions to quiet us so we can’t talk about the issues we’re bringing up. The fact that I have experience as a Black man doing science in America, and the fact that I do AI—that should be appreciated by institutions. It gives them an opportunity to have a unique perspective and take a unique leadership position. I would be worried if individuals felt like they couldn’t speak their mind. If we can’t get these issues out into the sunlight, how will we be able to build out of the darkness?
Ryan F. Mandelbaum – Former Gizmodo physics writer and founder of Birdmodo, now a science communicator specializing in quantum computing and birds
Lucas Stephens, Erle Ellis & Dorian Fuller – 1 October 2020
A revolution in archaeology has exposed the extraordinary extent of human influence over our planet’s past and its future
Lucas Stephens is a senior research analyst at the Environmental Law and Policy Center in Chicago. He was a specialist researcher at the ArchaeoGLOBE project.
Erle Ellis is a professor of geography and environmental systems at the University of Maryland, Baltimore County. He is a member of the Anthropocene Working Group, a fellow of the Global Land Programme, a senior fellow of the Breakthrough Institute, and an advisor to the Nature Needs Half movement. He is the author of Anthropocene: A Very Short Introduction (2018).
Dorian Fuller is professor of archaeobotany at University College London.
Humanity’s transition from hunting and gathering to agriculture is one of the most important developments in human and Earth history. Human societies, plant and animal populations, the makeup of the atmosphere, even the Earth’s surface – all were irreversibly transformed.
When asked about this transition, some people might be able to name the Neolithic Revolution or point to the Fertile Crescent on a map. This widespread understanding is the product of years of toil by archaeologists, who diligently unearthed the sickles, grinding stones and storage vessels that spoke to the birth of new technologies for growing crops and domesticating animals. The story they constructed went something like this: beginning in the Near East some 11,000 years ago, humans discovered how to control the reproduction of wheat and barley, which precipitated a rapid switch to farming. Within 500 to 1,000 years, a scattering of small farming villages sprang up, each with several hundred inhabitants eating bread, chickpeas and lentils, soon also herding sheep and goats in the hills, some keeping cattle.
This sedentary lifestyle spread, as farmers migrated from the Fertile Crescent through Turkey and, from there, over the Bosporus and across the Mediterranean into Europe. They moved east from Iran into South Asia and the Indian subcontinent, and south from the Levant into eastern Africa. As farmers and herders populated new areas, they cleared forests to make fields and brought their animals with them, forever changing local environments. Over time, agricultural advances allowed ever larger and denser settlements to flourish, eventually giving rise to cities and civilisations, such as those in Mesopotamia, Egypt, the Indus and later others throughout the Mediterranean and elsewhere.
For many decades, the study of early agriculture centred on only a few other regions apart from the Fertile Crescent. In China, millet, rice and pigs gave rise to the first Chinese cities and dynasties. In southern Mexico, it was maize, squash and beans that were first cultivated and supported later civilisations such as the Olmecs or the Puebloans of the American Southwest. In Peru, native potato, quinoa and llamas were among species domesticated by 5,000 years ago that made later civilisations in the Andes possible. In each of these regions, the transition to agriculture set off trends of rising human populations and growing settlements that required increasing amounts of wood, clay and other raw materials from the surrounding environments.
Yet for all its sweep and influence, this picture of the spread of agriculture is incomplete. New technologies have changed how archaeology is practised, from the way we examine ancient food scraps at a molecular level, to the use of satellite photography to trace patterns of irrigation across entire landscapes. Recent discoveries are expanding our awareness of just how early, extensive and transformative humans’ use of land has been. The rise of agriculture was not a ‘point in time’ revolution that occurred only in a few regions, but rather a pervasive, socioecological shifting back and forth across fuzzy thresholds in many locations.
Bringing together the collective knowledge of more than 250 archaeologists, the ArchaeoGLOBE project in which we participated is the first global, crowdsourced database of archaeological expertise on land use over the past 10,000 years. It tells a completely different story of Earth’s transformation than is commonly acknowledged in the natural sciences. ArchaeoGLOBE reveals that human societies modified most of Earth’s biosphere much earlier and more profoundly than we thought – an insight that has serious implications for how we understand humanity’s relationship to nature and the planet as a whole.
Just as recent archaeological research has challenged old definitions of agriculture and blurred the lines between farmers and hunter-gatherers, it’s also leading us to rethink what nature means and where it is. The deep roots of how humanity transformed the globe pose a challenge to the emerging Anthropocene paradigm, in which human-caused environmental change is typically seen as a 20th-century or industrial-era phenomenon. Instead, it’s clearer than ever before that most places we think of as ‘pristine’ or ‘untouched’ have long relied on human societies to fill crucial ecological roles. As a consequence, trying to disentangle ‘natural’ ecosystems from those that people have managed for millennia is becoming less and less realistic, let alone desirable.
Our understanding of early agriculture derives mostly from the material remains of food – seeds, other plant remains and animal bones. Archaeologists traditionally document these finds from excavated sites and use them to track the dates and distribution of different people and practices. Over the past several decades, though, practitioners have become more skilled at spotting the earliest signatures of domestication, relying on cutting-edge advances in chemistry, biology, imaging and computer science.
Archaeologists have greatly improved their capacity to trace the evolution of crops, thanks to advances in our capacity to recover minute plant remains – from silica microfossils to attachment scars of cereals, where the seeds attach to the rest of the plant. Along with early crops, agricultural weeds and storage pests such as mice and weevils also appeared. Increasingly, we can identify a broader biotic community that emerged around the first villages and spread with agriculture. For example, weeds that originated in the Fertile Crescent alongside early wheat and barley crops also show up in the earliest agricultural communities in places such as Germany and Pakistan.
Collections of animal bones provide evidence of how herded creatures changed physically through the process of domestication. Butchering marks on bones can help reconstruct culling strategies. From the ages and sizes of animals, archaeologists can deduce the populations of herds in terms of age and sex ratios, all of which reveals how herding differed from hunting. Herding systems themselves also vary, with some focused only on producing meat, and others on milk and wool too.
The British Isles were transformed by imported crops, weeds and livestock from millennia earlier
Measurements of bones and seeds have made great strides with technologies such as geometric morphometrics – complex mathematical shape analysis that allows for a more nuanced understanding of how varieties evolved and moved between regions. Biomolecular methods have also multiplied. The recovery of amino acid profiles from fragmented animal bones, for example, has allowed us to discern which animals they came from, even when they’re too degraded for visual identification. The increasingly sophisticated use and analysis of ancient DNA now allows researchers to track the development and distribution of domesticated animals and crops in great detail.
Archaeologists have also used mass spectrometry, a technique that identifies molecules by ionising them and sorting the ions by mass, to pinpoint which species were cooked together based on the presence of biomolecules such as lipids. Stable isotopes of carbon and nitrogen from animal bones and seeds give insight into where and how plants and animals were managed – allowing us to more fully sketch out ancient foodwebs from soil conditions to human consumption. Strontium isotopes in human and animal bones, meanwhile, allow us to identify migrations across a single organism’s lifetime, revealing more and earlier long-distance interconnections than previously imagined. Radiocarbon dating was already possible in the 1950s – but recent improvements that have reduced sample sizes and error margins allow us to build fine-grained chronologies and directly date individual crops.
With all these fresh data, it’s now possible to tell a much richer, more diverse story about the gradual evolutions and dispersals of early agriculture. By 6,000 years ago, the British Isles were being transformed by an imported collection of crops, weeds and livestock that had originated millennia earlier in the Near East. Similarly, millet, rice and pigs from central China had been spread as far as Thailand by 4,000 years ago, and began transforming much of the region’s tropical woodland to agricultural fields. New stories are constantly emerging too – including that sorghum, a grain crop, was domesticated in the savannahs of eastern Sudan more than 5,000 years ago, before the arrival of domesticated sheep or goats in that area. Once combined with Near Eastern sheep, goats and cattle, agropastoralism spread rapidly throughout most of sub-Saharan Africa by 2,000 years ago.
Advances in the study of plant silica micro-fossils (phytoliths) have helped trace banana cultivation from the Island of New Guinea more than 7,000 years ago – from where it spread through Island Southeast Asia, and eventually across the Indian Ocean to Africa, more than a millennium before Vasco da Gama navigated from Africa to India. These techniques have also revealed unforeseen agricultural origins – such as the forgotten cereal, browntop millet. It was the first staple crop of South India, before it was largely replaced by crops such as sorghum that were translocated from Africa. Many people might be surprised to learn that the early farming tradition in the Mississippi basin relied on pitseed goosefoot, erect knotweed and marsh elder some 3,000-4,000 years ago, long before maize agriculture arrived in the American Midwest.
Archaeologists don’t just study materials painstakingly uncovered in excavations. They also examine landscapes, patterns of settlement, and the built infrastructure of past societies to get a sense of the accumulated changes that humans have made to our environments. They have developed a repertoire of techniques that allow them to study the traces of ancient people on scales much larger than an individual site: from simply walking and documenting the density of broken pottery on the ground, to examining satellite imagery, using lidar (light detection and ranging) and drones to build 3D models, even searching for subsurface magnetic anomalies to plot out the walls of buried cities.
There was usually a long continuum of exploitation, translocation and management of ecosystems
As a result, new revelations about our deep past are constantly emerging. Recent discoveries in southwestern Amazonia showed that people were cultivating squash and manioc more than 10,000 years ago, and maize only a few thousand years later. They did so living in an engineered landscape consisting of thousands of artificial forested islands, within a seasonally flooded savannah.
Some of the most stunning discoveries have come from the application of lidar around Maya cities, buried underneath the tropical canopy in Central America. Lasers can penetrate this canopy to define the shapes of mounds, plazas, ceremonial platforms and long causeways that were previously indistinguishable from the topography of the jungle. A recent example in Mexico pushed back the time period for monumental construction to what we used to consider the very beginning of Maya civilisation – 3,000 years ago – and suggests the monuments were more widespread than previously believed.
These transitions were not linear or absolute. It’s now clear that there was usually a long continuum of exploitation, translocation and management of plants, animals, landforms and ecosystems well before (and often after) domestication occurred. This makes it harder to draw solid lines between hunter-gatherer and farmer societies, or between societies who practised different subsistence strategies. Over archaeological timescales spanning hundreds to thousands of years, land use can be thought of instead as a tapestry of ever-evolving anthroecosystems with higher or lower degrees of transformation – more or less human-shaped, or ‘domesticated’ environments.
In 2003, the climatologist William Ruddiman introduced the ‘early anthropogenic hypothesis’: the idea that agricultural land use began warming Earth’s climate thousands of years ago. While some aspects of this early global climate change remain unsettled among scientists, there’s strong consensus that land-use change was the greatest driver of global climate change until the 1950s, and remains a major driver of climate change today. As a result, global maps of historical changes in land use, and their effects on vegetation cover, soils and greenhouse gas emissions, are a critical component of all contemporary models for forecasting Earth’s future climate.
Deforestation, tilling the land and other agricultural practices alter regional and global climate because they release greenhouse gases from vegetation and soils, as well as altering the exchange of heat and moisture across Earth. These effects reverse when land is abandoned and vegetation recovers or is restored. Early changes in agricultural land use therefore have major implications in understanding climate changes of the past, present and future.
The main global map of historical land use deployed in climate models is HYDE (the History Database of the Global Environment), combining contemporary and historical patterns of land use and population across the planet over the past 12,000 years. Despite this huge span of space and time, with notable exceptions, HYDE is based largely on historical census data that go back to 1960, mostly from Europe.
HYDE’s creator, a collaborator in ArchaeoGLOBE, has long requested help from historians, scientists and archaeologists to build a stronger empirical basis for HYDE’s global maps – especially for the deep past, where data are especially lacking. The data needed to improve the HYDE database exist, but reside in a format that’s difficult to access – the expert knowledge of archaeologists working in sites and regions around the world. The problem is that no single archaeologist has the breadth or time-depth of knowledge required.
Archaeologists typically study individual regions and time periods, and have only background knowledge on wider areas. Research methods and terminology also aren’t standardised worldwide, making syntheses difficult, rare and subjective. To construct a comprehensive global database of past land use, you need to gather information from hundreds of regional specialists and collate it, allowing this mosaic of individual studies to emerge as a single picture. This was exactly what we did for ArchaeoGLOBE.
Earth’s terrestrial ecology was already largely transformed by hunter-gatherers, farmers and pastoralists
In 2018, we surveyed more than 1,300 archaeologists around the world, and synthesised their responses into ArchaeoGLOBE. The format of our questionnaire was based on 10 time-slices from history (from 10,000 years ago, roughly the beginning of agriculture, to 1850 CE, the industrial era in Europe); 146 geographic regions; four levels of land-use prevalence; and five land-use categories (foraging/hunting/gathering/fishing; pastoralism; extensive agriculture; intensive agriculture; urbanism).
We ended up receiving 711 regional assessments from 255 individual archaeologists – resulting in a globally complete, if uneven, map of archaeological knowledge. After synthesis and careful analysis, our results, written up with 117 other co-authors, were published in 2019 in Science. We also made all our data and analysis available online, at every stage of the research process – even before we had finished collecting it – in an effort to stimulate the culture of open knowledge-sharing in archaeology as a discipline.
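The synthesis step behind figures like these can be illustrated with a minimal sketch. The field names, regions and prevalence values below are hypothetical stand-ins, not the actual ArchaeoGLOBE schema: each expert assessment pairs a region and time slice with a prevalence level for one land-use category, and a global picture emerges by tallying how many regions report each category at significant levels over time.

```python
from collections import defaultdict

# One record per expert assessment: (region, years before present,
# land-use category, prevalence). Values are illustrative only.
assessments = [
    ("Fertile Crescent", 10000, "extensive agriculture", "widespread"),
    ("Fertile Crescent", 3000, "intensive agriculture", "widespread"),
    ("British Isles", 6000, "extensive agriculture", "common"),
    ("British Isles", 3000, "pastoralism", "widespread"),
    ("Amazonia SW", 10000, "foraging/hunting/gathering/fishing", "widespread"),
]

# For each (time slice, category) pair, collect the regions that report
# the land use at "common" or "widespread" prevalence.
tally = defaultdict(set)
for region, ybp, category, prevalence in assessments:
    if prevalence in ("common", "widespread"):
        tally[(ybp, category)].add(region)

# Print a simple per-slice summary, most ancient first.
for (ybp, category), regions in sorted(tally.items(), reverse=True):
    print(f"{ybp} BP – {category}: {len(regions)} region(s)")
```

The real project did the equivalent across 146 regions, 10 time slices and 5 land-use categories, which is what allows statements like “more than half of regions assessed were engaged in significant levels of agriculture or pastoralism by 3,000 years ago”.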
The resulting data-trove allows researchers to compare land-use systems over time and in different regions, as well as to aggregate their cumulative, global impacts at different points over the past 10,000 years. When we compared ArchaeoGLOBE results with HYDE, we found that archaeological assessments showed much earlier and more widespread agricultural land use than HYDE suggested – and, therefore, more intensive land use than had been factored into climate change assessments. Indeed, the beginnings of intensive agriculture in ArchaeoGLOBE were earlier than HYDE’s across more than half of Earth’s current agricultural regions, often by 1,000 years or more.
By 3,000 years ago, Earth’s terrestrial ecology was already largely transformed by hunter-gatherers, farmers and pastoralists – with more than half of regions assessed engaged in significant levels of agriculture or pastoralism. For example, the Kopaic Basin in the Greek region of Boeotia was drained and converted from wetland to agricultural land in the 13th century BCE. This plain – roughly 1,500 hectares (15 sq km) in size – surrounded by steep limestone hills, had been a large, shallow lake since the end of the last Ice Age. Late Bronze Age residents of the area, members of what we call the Mycenaean culture, constructed a hydraulic infrastructural system on a massive scale to drain the wetland and claim it for agriculture. They channelised rivers, dug drainage canals, built long dikes and expanded natural sinkholes to direct the water off what would have been nutrient-rich soil. Eventually, when the Mycenaean civilisation collapsed at the end of the Bronze Age, the basin flooded again and returned to its previous wetland state. Legend has it that Heracles filled in the sinkholes as revenge against a local king. The area was not successfully drained again until the 20th century.
These examples highlight a general trend we found: agriculture and pastoralism gradually replaced foraging-hunting-gathering around the world. But the data also show that there were reversals, and that different subsistence economies, from foraging to farming, operated in parallel in some places. Moreover, agriculture and pastoralism are not the only practices that transform environments. Hunter-gatherer land use was already widespread across the globe (82 per cent of regions) by 10,000 years ago. Through the selective harvest and translocation of favoured species, hunting (sometimes to extinction) and the use of fire to dramatically alter landscapes, most of the terrestrial biosphere was already significantly influenced by human activities, even before the domestication of plants and animals.
ArchaeoGLOBE is both a cause and a consequence of a dramatic change in perspective about how early land use produced long-term global environmental change. Archaeological knowledge is increasingly becoming a crucial instrument for understanding humanity’s cumulative effect on ecology and the Earth system, including global changes in climate and biodiversity. As a discipline, the mindset of archaeology stands in contrast to earlier perspectives grounded in the natural sciences, which have long emphasised a dichotomy between humans and nature.
In the ‘pristine myth’ paradigm from the natural sciences, as the geographer William Denevan called it, human societies are recent destroyers, or at the very least disturbers, of a mostly pristine natural world. Denevan was reacting against the portrayal of pre-1492 America as an untouched paradise, and he used the substantial evidence of indigenous landscape modification to argue that the human presence was perhaps more visible in 1492 than 1750. Recent popular conceptions of the Anthropocene risk making a similar mistake, drawing a thin bright line at 1950 and describing what comes after as a new, modern form of ecological disaster. Human changes to the environment are cumulative and were substantial at different scales throughout our history. The deep trajectory of land use revealed by ArchaeoGLOBE runs counter to the idea of pinpointing a single catalytic moment that fundamentally changed the relationship between humanity and the Earth system.
The pristine myth also accounts for why places without contemporary intensive land use are often dubbed ‘wilderness’ – such as areas of the Americas depopulated by the great post-Columbian die-off. Such interpretations, perpetuated by scientists, have long supported colonial narratives in which indigenous hunter-gatherer and even agricultural lands are portrayed as unused and ripe for productive use by colonial settlers.
The notion of a pristine Earth also pervaded the thinking of early conservationists in the United States such as John Muir. They were intent on preserving what they saw as the nobility of nature from a mob of lesser natural life, as were those eager to manage wilderness areas to maintain the trophy animals they enjoyed hunting. For example, the governor of California violently forced Indigenous peoples out of Yosemite Valley in the 19th century, making way for wilderness conservation. These ideas went hand-in-hand with a white supremacist view of humanity that cast immigrants and the poor as a type of invasive species. It was not a great leap of theorising to move from a notion of pristine nature to seeing much of humanity as the opposite – a contaminated, marring mass. In both realms, the human and the natural, the object was to exclude undesirable people to preserve bastions of the unspoilt world. These extreme expressions of a dichotomous view of nature and society are possible only by ignoring the growing evidence of long-term human changes to Earth’s ecology – humans were, and are still, essential components of most ‘natural’ ecosystems.
A clear-eyed appreciation for the deep entanglement of the human and natural worlds is vital
Humans have continually altered biodiversity on many scales. We have changed the local mix of species, their ranges, habitats and niches for thousands of years. Long before agriculture, selective human predation of many non-domesticated species shaped their evolutionary course. Even the relatively small hunter-gatherer populations of the late Pleistocene were capable of negatively affecting animal populations – driving many megafauna and island species extinct or to the point of extinction. But there have also been widespread social and ecological adaptations to these changes: human management can even increase biodiversity of landscapes and can sustain these increases for thousands of years. For example, pastoralism might have helped defer climate-driven aridification of the Sahara, maintaining mixed forests and grassland ecosystems in the region for centuries.
This recognition should cause us to rethink what ‘nature’ and ‘wilderness’ really are. If by ‘nature’ we mean something divorced from or untouched by humans, there’s almost nowhere on Earth where such conditions exist, or have existed for thousands of years. The same can be said of Earth’s climate. If early agricultural land use began warming our climate thousands of years ago, as the early anthropogenic hypothesis suggests, it implies that no ‘natural’ climate has existed for millennia.
A clear-eyed appreciation for the deep entanglement of the human and natural worlds is vital if we are to grapple with the unprecedented ecological challenges of our times. Naively romanticising a pristine Earth, on the other hand, will hold us back. Grasping that nature is inextricably linked with human societies is fundamental to the worldview of many Indigenous cultures – but it remains a novel and often controversial perspective within the natural sciences. Thankfully, it’s now gaining prominence within conservation circles, where it’s shifting attitudes about how to enable sustainable and resilient stewardship of land and ecosystems.
Viewing humans and nature as entwined doesn’t mean that we should shrug our shoulders at current climatic trends, unchecked deforestation, accelerating extinction rates or widespread industrial waste. Indeed, archaeology supplies numerous examples of societal and ecosystem collapse: a warning of what happens if we ignore the consequences of human-caused environmental change.
But ecological crises are not inevitable. Humans have long maintained sustainable environments by adapting and transforming their societies. As our work demonstrates, humans have shaped the ecology of this planet for thousands of years, and continue to shape it.
We live at a unique time in history, in which our awareness of our role in changing the planet is increasing at the precise moment when we’re causing it to change at an alarming rate. It’s ironic that technological advances are simultaneously accelerating both global environmental change and our ability to understand humans’ role in shaping life on Earth. Ultimately, though, a deeper appreciation of how the Earth’s environments are connected to human cultural values helps us make better decisions – and also places the responsibility for the planet’s future squarely on our shoulders.
Marcella Duarte, contributing to Tilt – 5 January 2021, 17:02
It seemed as though 2020 would never end but, technically, it went by faster than usual. And this year will be quicker still. The reason? The Earth has been “spinning” strangely fast lately. Because of that, we may need to set our clocks forward, though you won’t even notice.
Last year saw the shortest day in history, since measurements began some 50 years ago. On 19 July 2020, the planet completed its rotation 1.4602 milliseconds faster than the usual 86,400 seconds (24 hours).
The previous record for the shortest day was set in 2005, and it was beaten 28 times in 2020. And this year should be the fastest in history, because the days of 2021 are expected to be, on average, 0.5 millisecond shorter than normal.
These tiny changes in the length of the day were only discovered after the development of ultra-precise atomic clocks in the 1960s. At first, it was observed that the Earth’s rotation speed, as it turns on its own axis to produce day and night, was slowing year after year.
Since the 1970s, it has been necessary to “add” 27 seconds to International Atomic Time to keep our timekeeping in sync with the slower planet. This is the so-called “leap second”.
These corrections always take place at the end of a half-year, on 31 December or 30 June. This ensures that the Sun is always exactly in the middle of the sky at noon.
The last time it happened was at New Year 2016, when clocks around the world paused for one second to “wait” for the Earth.
Recently, though, the opposite has been happening: the rotation is accelerating. And we may need to “skip” ahead in time to “catch up” with the planet’s movement. It would be the first time in history that a second was deleted from the world’s clocks.
There is an international debate about whether this adjustment is needed, and about the future of timekeeping. Scientists believe that, over the course of 2021, atomic clocks will accumulate a lag of 19 milliseconds.
If no adjustment were made, it would take hundreds of years for an ordinary person to notice the difference. But satellite navigation and communication systems, which rely on the positions of the Earth, the Sun and the stars to work, could be affected much sooner.
Our “timekeepers” are the officials of the International Earth Rotation and Reference Systems Service (IERS) in Paris, France. They monitor the Earth’s rotation and the 260 atomic clocks spread around the world, and announce when a second needs to be added or, eventually, deleted.
Tampering with time can have consequences. When a leap second was added in 2012, major technology platforms of the day, including Linux, Mozilla, Java, Reddit, Foursquare, Yelp and LinkedIn, reported failures.
The Earth’s rotation speed varies constantly, depending on many factors, such as the complex motion of its molten core, the oceans and the atmosphere, as well as gravitational interactions with other celestial bodies, such as the Moon. Global warming, and the consequent melting of the polar ice caps and mountain ice, has also been speeding up the movement.
That is why no two days are exactly the same length. Last Sunday (3 January) lasted “only” 23 hours, 59 minutes and 59.9998927 seconds. Monday (4 January) was lazier, at a little over 24 hours.
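The deficits quoted in the article are simple subtractions from the nominal 86,400-second day. A quick sketch, using only figures given above:

```python
NOMINAL_DAY = 86_400.0  # seconds in a standard 24-hour day

def deficit_ms(actual_seconds: float) -> float:
    """How many milliseconds shorter than nominal a given day was."""
    return (NOMINAL_DAY - actual_seconds) * 1000

# 19 July 2020: the shortest day on record, 1.4602 ms short of 24 hours.
record_day = NOMINAL_DAY - 1.4602e-3

# Sunday 3 January 2021: 23 h 59 min 59.9998927 s.
sunday = 23 * 3600 + 59 * 60 + 59.9998927

print(f"Record day deficit: {deficit_ms(record_day):.4f} ms")
print(f"Sunday's deficit:   {deficit_ms(sunday):.4f} ms")
```

Running this shows Sunday fell only about 0.107 ms short of a full day, two orders of magnitude smaller than a leap second and far below anything a person could perceive.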
Uruguayan Academy of Letters defends Cavani in alleged racism case and laments English federation’s “lack of knowledge” (O Globo)
O Globo, with Reuters – 2 January 2021
The player was punished for using the term “negrito” on his social media account, thanking a friend who congratulated him after the victory over Southampton
On Friday, the Uruguayan Academy of Letters called the three-match ban imposed on Manchester United striker Edinson Cavani by the Football Association (FA), English football's governing body, "ignorant" and a "grave injustice". The punishment was for his use of the term "negrito" to refer to a follower in a social media post.
The 33-year-old Uruguayan used the word "negrito" in an Instagram post after the club's victory over Southampton on November 29, before taking it down and apologizing. He said it was an expression of affection toward a friend.
On Thursday, the FA said the comment was "improper and brought the game into disrepute" and fined Cavani £100,000.
The academy, an association dedicated to protecting and promoting the Spanish spoken in Uruguay, said it "energetically rejected the sanction".
"The English Football Association has committed a grave injustice against the Uruguayan sportsman … and shown its ignorance and error in regulating the use of language, in particular Spanish, without paying attention to all its complexities and contexts," the academy said through its president, Wilfredo Penco. "In the context in which it was written, the only value that can be given to negrito (especially given the diminutive) is an affectionate one."
According to the academy, words referring to skin color, weight and other physical characteristics are frequently used among friends and relatives in Latin America, especially in the diminutive. The body added that the people these expressions are aimed at often do not even have the characteristics in question.
"The use Cavani made of it to address his friend 'pablofer2222' (the account name) has this kind of affectionate tone. Given the context in which it was written, the person to whom it was addressed and the variety of Spanish used, the only value 'negrito' can have is an affectionate one. To insult in Spanish, English or any other language, one must have the capacity to offend the other person, and in that case 'pablofer2222' himself would have expressed his discomfort," the academy concluded.
Cavani: "My heart is at peace"
Cavani commented on the episode on social media and admitted "discomfort" with the situation. He insisted it was never his intention to offend his friend and that the expression he used was one of affection.
"I do not want to dwell too long on this uncomfortable moment. I want to say that I accept the disciplinary sanction, knowing that I am a foreigner to the customs of the English language, but that I do not share the same point of view. I apologize if I offended anyone with an expression of affection toward a friend; that was not my intention. Those who know me know that my efforts are always to seek simple joy and friendship," the player wrote.
"I am grateful for the countless messages of support and affection. My heart is at peace because I know I have always expressed myself with affection, in keeping with my culture and way of life. A sincere hug."
Cavani: Uruguayan federation and national team players defend striker and call for racism penalty to be reviewed (O Globo)
The player was suspended for three matches and fined by the Football Association for writing 'negrito' on his social media
January 4, 2021
The punishment imposed on Edinson Cavani by the Football Association (FA, the body that governs football in England) for his use of the term "negrito" (the diminutive of "negro" in Spanish) on social media remains at the center of an intense debate in Uruguay. After the Uruguayan Academy of Letters expressed solidarity and called the penalty a display of cultural ignorance, the national team's players and the Uruguayan Football Association (AUF) itself came out in the striker's defense.
On Monday, the Uruguayan Footballers' Association published a letter repudiating the FA's decision. The document calls the punishment arbitrary and says the body took a distorted, dogmatic and ethnocentric view of the matter.
"Far from mounting a defense against racism, what the FA committed was a discriminatory act against the culture and way of life of Uruguayans," charged the body that represents the South American country's players.
The document was shared on social media by national team players, among them striker Luis Suárez, of Atlético de Madrid, and captain Diego Godín, a defender for Cagliari in Italy.
Soon afterwards, the Uruguayan federation itself joined the network of support for the striker, an idol of the Celeste. In a statement released on its social media, the body asks the FA to withdraw the penalty imposed on Cavani and reiterates the argument made by the Uruguayan Academy of Letters in seeking to dissociate the term "negrito" from any racist connotation.
"In our Spanish, which differs greatly from the Castilian spoken in other regions of the world, the nicknames negro/a and negrito/a are routinely used as expressions of friendship, affection, closeness and trust, and in no way refer disparagingly or discriminatorily to the race or skin color of the person alluded to," the body argues.
Cavani has already served the first match of his three-game suspension: he was left out of Manchester United's Premier League fixture against Aston Villa last Saturday. In addition to the ban, the player was ordered to pay a fine of £100,000 (about R$700,000). The punishment came after he wrote "Thanks, negrito" in response to a compliment from an Instagram follower.
"A comment posted on the Manchester United player's Instagram page was insulting and/or abusive and/or improper and/or brought the game into disrepute," the FA stated in imposing the penalty.
Although the episode caused great indignation in Uruguay, a majority-white country, Cavani himself did not pursue the case. In his statement, the striker said he was bothered by the situation and did not agree with the punishment, but emphasized that he accepted it.