Tag archive: History of science

When Did the Anthropocene Start? Scientists Closer to Saying When. (N.Y. Times)

nytimes.com


Image credits: Alamy; David Guttenfelder for The New York Times; Getty Images; Ashley Gilbertson for The New York Times; Michael Probst/Associated Press; Getty Images; NASA

A panel of experts has spent more than a decade deliberating on how, and whether, to mark a momentous new epoch in geologic time: our own.

Raymond Zhong

Dec. 17, 2022

The official timeline of Earth’s history — from the oldest rocks to the‌ dinosaurs to the rise of primates, from the Paleozoic to the Jurassic and all points before and since — could soon include the age of nuclear weapons, human-caused climate change and the proliferation of plastics, garbage and concrete across the planet.

In short, the present.

Ten thousand years after our species began forming primitive agrarian societies, a panel of scientists on Saturday took a big step toward declaring a new interval of geologic time: the Anthropocene, the age of humans.

Our current geologic epoch, the Holocene, began 11,700 years ago with the end of the last big ice age. The panel’s roughly three dozen scholars appear close to recommending that, actually, we have spent the past few decades in a brand-new time unit, one characterized by human-induced, planetary-scale changes that are unfinished but very much underway.

“If you were around in 1920, your attitude would have been, ‘Nature’s too big for humans to influence,’” said Colin N. Waters, a geologist and chair of the Anthropocene Working Group, the panel that has been deliberating on the issue since 2009. The past century has upended that thinking, Dr. Waters said. “It’s been a shock event, a bit like an asteroid hitting the planet.”

The working group’s members on Saturday completed the first in a series of internal votes on details including when exactly they believe the Anthropocene began. Once these votes are finished, which could be by spring, the panel will submit its final proposal to three other committees of geologists whose votes will either make the Anthropocene official or reject it.

Sixty percent of each committee will need to approve the group’s proposal for it to advance to the next. If it fails in any of them, the Anthropocene might not have another chance to be ratified for years.
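The ratification procedure described above, a proposal advancing through successive committees, each requiring at least 60 percent approval, can be sketched in a few lines of Python (the vote shares below are invented for illustration):

```python
# Minimal model of the ratification chain: a proposal advances only if
# every committee in sequence approves it with at least 60 percent support.

def ratified(vote_shares: list[float], threshold: float = 0.60) -> bool:
    """vote_shares: the fraction approving in each successive committee."""
    return all(share >= threshold for share in vote_shares)

# Hypothetical outcomes:
print(ratified([0.72, 0.65, 0.61]))  # True: clears every committee
print(ratified([0.72, 0.55, 0.90]))  # False: fails at the second committee
```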

If it makes it all the way, though, geology’s amended timeline would officially recognize that humankind’s effects on the planet had been so consequential as to bring the previous chapter of Earth’s history to a close. It would acknowledge that these effects will be discernible in the rocks for millenniums.

[Chart] Source: Syvitski, et al. (2020); by Mira Rojanasakul/The New York Times

“I teach the history of science — you know, Copernicus, Kepler, Galileo,” said Francine McCarthy, an earth scientist at Brock University in Canada and member of the working group. “We’re actually doing it,” she said. “We’re living the history of science.”

Still, the knives are out for the Anthropocene, even though, or maybe because, we all have such firsthand familiarity with it.

Stanley C. Finney, the secretary general of the International Union of Geological Sciences, fears the Anthropocene has become a way for geologists to make a “political statement.”

Within the vast expanse of geologic time, he notes, the Anthropocene would be a blip of a blip of a blip. Other geologic time units are useful because they orient scientists in stretches of deep time that left no written records and sparse scientific observations. The Anthropocene, by contrast, would be a time in Earth’s history that humans have already been documenting extensively.

“For the human transformation, we don’t need those terminologies — we have exact years,” said Dr. Finney, whose committee would be the last to vote on the working group’s proposal if it gets that far.

Martin J. Head, a working group member and earth scientist at Brock University, argues declining to recognize the Anthropocene would have political reverberations, too.

“People would say, ‘Well, does that then mean the geological community is denying that we have changed the planet drastically?’” he said. “We would have to justify our decision either way.”

Philip L. Gibbard, a geologist at the University of Cambridge, is secretary general of another of the committees that will vote on the working group’s proposal. He has serious concerns about how the proposal is shaping up, concerns he believes the wider geological community shares.

“It won’t get an easy ride,” he said.

Nineteenth-century fossil hunters examining a dinosaur skull. The rock record is full of gaps, “a jigsaw puzzle with many of the parts missing,” one geologist said. Credit: Oxford Science Archive/Print Collector, via Getty Images

Like the zoologists who regulate the names of animal species or the astronomers who decide what counts as a planet, geology’s timekeepers work conservatively, by design. They set classifications that will be reflected in academic studies, museums and textbooks for generations to come.

“Everybody picks on the Anthropocene Working Group because they’ve taken so long,” said Lucy E. Edwards, a retired scientist with the United States Geological Survey. “In geologic time, this isn’t long.”

The geologic time scale divides Earth’s 4.6 billion-year story into grandly named chapters. Like nesting dolls, the chapters contain sub-chapters, which themselves contain sub-sub-chapters. From largest to smallest, the chapters are called eons, eras, periods, epochs and ages.

Right now, according to the current timeline, we are in — deep breath — the Meghalayan Age of the Holocene Epoch of the Quaternary Period of the Cenozoic Era of the Phanerozoic Eon, and have been for 4,200 years.
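That nesting can be written out as a small data structure, using only the ranks and names the article gives (the rendering helper is purely illustrative):

```python
# The nested ranks of the geologic time scale, largest to smallest,
# with our current position as stated in the article.
GEOLOGIC_RANKS = ["eon", "era", "period", "epoch", "age"]
CURRENT_POSITION = {
    "eon": "Phanerozoic",
    "era": "Cenozoic",
    "period": "Quaternary",
    "epoch": "Holocene",
    "age": "Meghalayan",
}

def describe(position: dict) -> str:
    """Render the position as the article phrases it, smallest rank first."""
    parts = [f"the {position[r]} {r.capitalize()}" for r in reversed(GEOLOGIC_RANKS)]
    return " of ".join(parts)

print(describe(CURRENT_POSITION))
# the Meghalayan Age of the Holocene Epoch of the Quaternary Period
# of the Cenozoic Era of the Phanerozoic Eon
```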

Drawing lines in Earth time has never been easy. The rock record is full of gaps, “a jigsaw puzzle with many of the parts missing,” as Dr. Gibbard puts it. And most global-scale changes happen gradually, making it tricky to pinpoint when one chapter ended and the next one began. There haven’t been many moments when the entire planet changed at once.

“If a meteor hits the Yucatán Peninsula, that’s a pretty good marker,” Dr. Edwards said. “But other than that, there’s practically nothing out there in the geologic world that’s the best line.”

The early Cambrian Period, around 540 million years ago, saw Earth explode with an astonishing diversity of animal life, but its precise starting point has been contested for decades. A long controversy led to the redrawing of our current geologic period, the Quaternary, in 2009.

“It’s a messy and disputatious business,” said Jan A. Zalasiewicz, a geologist at the University of Leicester. “And of course, the Anthropocene brings a whole new range of dimensions to the messiness and disputatiousness.”

A nuclear test near the Marshall Islands in 1958. A working group proposed the mid-20th century as the beginning of the Anthropocene, in part because of the plutonium isotopes left by bombs. Credit: Corbis, via Getty Images

It took a decade of debate — in emails, academic articles and meetings in London, Berlin, Oslo and beyond — for the Anthropocene Working Group to nail down a key aspect of its proposal.

In a 29-to-4 vote in 2019, the group agreed to recommend that the Anthropocene began in the mid-20th century. That’s when human populations, economic activity and greenhouse gas emissions began skyrocketing worldwide, leaving indelible traces: plutonium isotopes from nuclear explosions, nitrogen from fertilizers, ash from power plants.

The Anthropocene, like nearly all other geologic time intervals, needs to be defined by a specific physical site, known as a “golden spike,” where the rock record clearly sets it off from the interval before it.

After a yearslong hunt, the working group on Saturday finished voting on nine candidate sites for the Anthropocene. They represent the range of environments into which human effects are etched: a peat bog in Poland, the ice of the Antarctic Peninsula, a bay in Japan, a coral reef off the Louisiana coast.

One site — Crawford Lake in Ontario, Canada — is small enough to walk around in 10 minutes. But it is so deep that the bottom layer of water rarely mixes with the upper layers. Whatever sinks to the floor remains undisturbed, gradually accumulating into a tree-ring-like record of geochemical change.

The working group’s members also voted this month on what rank the Anthropocene should have in the timeline: an epoch, an age of the Holocene, or something else.

The group isn’t disclosing the results of these or the other votes to be held in the coming months until they are all complete and it has finalized its proposal for the next level of timekeepers to ponder. It is then that a far more contentious debate about the Anthropocene could begin.

Many scholars still aren’t sure the mid-20th century cutoff makes sense. It is awkwardly recent, especially for archaeologists and anthropologists who would have to start referring to World War II artifacts as “pre-Anthropocene.”

Crawford Lake, near Milton, Ontario. Its depth makes it a prime site for scientific research. Credit: Conservation Halton

And using nuclear bombs to mark a geologic interval strikes some scientists as abhorrent, or at least beside the point. Radionuclides are a convenient global marker, but they say nothing about climate change or other human effects, said Erle C. Ellis, an ecologist at the University of Maryland, Baltimore County.

Pegging the start to the Industrial Revolution instead might help. But that definition would still leave out millenniums of planet-warping changes from farming and deforestation.

Canonizing the Anthropocene is a call to attention, said Naomi Oreskes, a member of the working group. For geology, but also the wider world.

“I was raised in a generation where we were taught that geology ended when people showed up,” said Dr. Oreskes, a historian of science at Harvard. The Anthropocene announces that “actually, the human impact is part of geology as a science,” she said. It demands we recognize that our influence on the planet is more than surface level.

But Dr. Gibbard of Cambridge fears that, by trying to add the Anthropocene to the geologic time scale, the working group might actually be diminishing the concept’s significance. The timeline’s strict rules force the group to impose a single starting point on a sprawling story, one that has unspooled over different times in different places.

He and others argue the Anthropocene deserves a looser geologic label: an event. Events don’t appear on the timeline; no bureaucracy of scientists regulates them. But they have been transformative for the planet.

Late-Holocene human footprints, at least 2,000 years old, in volcanic ash and mud in Nicaragua. The Anthropocene could mark an official end to the 11,700-year-old Holocene Epoch. Credit: Carl Frank/Science Source

The filling of Earth’s skies with oxygen, roughly 2.1 to 2.4 billion years ago — geologists call that the Great Oxidation Event. Mass extinctions are events, as is the burst of diversity in marine life 460 to 485 million years ago.

The term Anthropocene is already in such wide use by researchers across scientific disciplines that geologists shouldn’t force it into too narrow a definition, said Emlyn Koster, a geologist and former director of the North Carolina Museum of Natural Sciences.

“I always saw it not as an internal geological undertaking,” he said of the Anthropocene panel’s work, “but rather one that could be greatly beneficial to the world at large.”

Raymond Zhong is a climate reporter. He joined The Times in 2017 and was part of the team that won the 2021 Pulitzer Prize in public service for coverage of the coronavirus pandemic. @zhonggg

A version of this article appears in print on Dec. 18, 2022, Section A, Page 1 of the New York edition with the headline: The Next Epoch Of Planet Earth Might Be Today.

Reformation in the Church of Science (The New Atlantis)

thenewatlantis.com

How the truth monopoly was broken up

Andrea Saltelli and Daniel Sarewitz

Spring 2022


We are suffering through a pandemic of lies — or so we hear from leading voices in media, politics, and academia. Our culture is infected by a disease that has many names: fake news, post-truth, misinformation, disinformation, mal-information, anti-science. The affliction, we are told, is a perversion of the proper role of knowledge in a healthy information society.

What is to be done? To restore truth, we need strategies to “get the facts straight.” For example, we need better “science communication,” “independent fact-checking,” and a relentless commitment to exposing and countering falsehoods. This is why the Washington Post fastidiously counted 30,573 “false or misleading claims” by President Trump during his four years in office. Facebook, meanwhile, partners with eighty organizations worldwide to help it flag falsehoods and inform users of the facts. And some disinformation experts recently suggested in the New York Times that the Biden administration should appoint a “reality czar,” a central authority tasked with countering conspiracy theories about Covid and election fraud, who “could become the tip of the spear for the federal government’s response to the reality crisis.”

Such efforts reflect the view that untruth is a plague on our information society, one that can and must be cured. If we pay enough responsible, objective attention to distinguishing what is true from what is not, and thus excise misinformation from the body politic, people can be kept safe from falsehood. Put another way, it is an implicitly Edenic belief in the original purity of the information society, a state we have lapsed from but can yet return to, by the grace of fact-checkers.

We beg to differ. Fake news is not a perversion of the information society but a logical outgrowth of it, a symptom of the decades-long devolution of the traditional authority for governing knowledge and communicating information. That authority has long been held by a small number of institutions. When that kind of monopoly is no longer possible, truth itself must become contested.

This is treacherous terrain. The urge to insist on the integrity of the old order is widespread: Truth is truth, lies are lies, and established authorities must see to it that nobody blurs the two. But we also know from history that what seemed to be stable regimes of truth may collapse, and be replaced. If that is what is happening now, then the challenge is to manage the transition, not to cling to the old order as it dissolves around us.

Truth, New and Improved

The emergence of widespread challenges to the control of information by mainstream social institutions developed in three phases.

First, new technologies of mass communication in the twentieth century — radio, television, and significant improvements in printing, further empowered by new social science methods — enabled the rise of mass-market advertising, which quickly became an essential tool for success in the marketplace. Philosophers like Max Horkheimer and Theodor Adorno were bewildered by a world where, thanks to these new forms of communication, unabashed lies in the interest of selling products could become not just an art but an industry.

The rise of mass marketing created the cultural substrate for the so-called post-truth world we live in now. It normalized the application of hyperbole, superlatives, and untestable claims of superiority to the rhetoric of everyday commerce. What started out as merely a way to sell new and improved soap powder and automobiles amounts today to a rhetorical infrastructure of hype that infects every corner of culture: the way people promote their careers, universities their reputations, governments their programs, and scientists the importance of their latest findings. Whether we’re listening to a food corporation claim that its oatmeal will keep your heart healthy or a university press office herald a new study that will upend everything we know, radical skepticism would seem to be the rational stance for information consumers.

Politics, Scientized

In a second, partly overlapping phase in the twentieth century, science underwent a massive expansion of its role into the domain of public affairs, and thus into highly contestable subject matters. Spurred by a wealth of new instruments for measuring the world and techniques for analyzing the resulting data, policies on agriculture, health, education, poverty, national security, the environment and much more became subject to new types of scientific investigation. As never before, science became part of the language of policymaking, and scientists became advocates for particular policies.

The dissolving boundary between science and politics was on full display by 1958, when the chemist Linus Pauling and physicist Edward Teller debated the risks of nuclear weapons testing on a U.S. television broadcast, a spectacle that mixed scientific claims about fallout risks with theories of international affairs and assertions of personal moral conviction. The debate presaged a radical transformation of science and its social role. Where science was once a rarefied, elite practice largely isolated from society, scientific experts were now mobilized in increasing numbers to form and inform politics and policymaking. Of course, society had long been shaped, sometimes profoundly, by scientific advances. But in the second half of the twentieth century, science programs started to take on a rapidly expanding portfolio of politically divisive issues: determining the cancer-causing potential of food additives, pesticides, and tobacco; devising strategies for the U.S. government in its nuclear arms race against the Soviet Union; informing guidelines for diet, nutrition, and education; predicting future energy supplies, food supplies, and population growth; designing urban renewal programs; choosing nuclear waste disposal sites; and on and on.

Philosopher-mathematicians Silvio Funtowicz and Jerome Ravetz recognized in 1993 that a new kind of science was emerging, which they termed “post-normal science.” This kind of science was inherently contestable, both because it dealt with the irreducible uncertainties of complex and messy problems at the intersection of nature and society, and because it was being used for making decisions that were themselves value-laden and contested. Questions that may sound straightforward, such as “Should women in their forties get regular mammograms?” or “Will genetically modified crops and livestock make food more affordable?” or “Do the benefits of decarbonizing our energy production outweigh the costs?” became the focus of intractable and never-ending scientific and political disputes.

This situation remained reasonably manageable through the 1990s, because science communication was still largely controlled by powerful institutions: governments, corporations, and universities. Even if these institutions were sometimes fiercely at odds, all had a shared interest in maintaining the idea of a unitary science that provided universal truths upon which rational action should be based. Debates between experts may have raged — often without end — but one could still defend the claim that the search for truth was a coherent activity carried out by special experts working in pertinent social institutions, and that the truths emerging from their work would be recognizable and agreed-upon when finally they were determined. Few questioned the fundamental notion that science was necessary and authoritative for determining good policy choices across a wide array of social concerns. The imperative remained to find facts that could inform action — a basic tenet of Enlightenment rationality.

Science, Democratized

The rise of the Internet and social media marks the third phase of the story, and it has now rendered thoroughly implausible any institutional monopoly on factual claims. As we are continuing to see with Covid, the public has instantly available to it a nearly inexhaustible supply of competing and contradictory claims, made by credentialed experts associated with august institutions, about everything from mask efficacy to appropriate social distancing and school closure policies. And many of the targeted consumers of these claims are already conditioned to be highly skeptical of the information they receive from mainstream media.

Today’s information environment certainly invites mischievous seeding of known lies into public discourse. But bad actors are not the most important part of the story. Institutions can no longer maintain their old stance of authoritative certainty about information — the stance they need to justify their actions, or to establish a convincing dividing line between true news and fake news. Claims of disinterest by experts acting on behalf of these institutions are no longer plausible. People are free to decide what information, and in which experts, they want to believe. The Covid lab-leak hypothesis was fake news until that news itself became fake. Fact-checking organizations are themselves now subject to accusations of bias: Recently, Facebook flagged as “false” a story in the esteemed British Medical Journal about a shoddy Covid vaccine trial, and the editors of the journal in turn called Facebook’s fact-checking “inaccurate, incompetent and irresponsible.”

No political system exists without its share of lies, obfuscation, and fake news, as Plato and Machiavelli taught. Yet even those thinkers would be puzzled by the immense power of modern technologies to generate stories. Ideas have become a battlefield, and we are all getting lost in the fog of the truth wars. When everything seems like it can be plausible to someone, the term “fake news” loses its meaning.


The celebrated expedient that an aristocracy has the right and the mission to offer “noble lies” to the citizens for their own good thus looks increasingly impotent. In October 2020, U.S. National Institutes of Health director Francis Collins, a veritable aristocrat of the scientific establishment, sought to delegitimize the recently released Great Barrington Declaration. Crafted by a group he referred to as “fringe epidemiologists” (they were from Harvard, Stanford, and Oxford), the declaration questioned the mainstream lockdown approach to the pandemic, including school and business closures. “There needs to be a quick and devastating published take down,” Collins wrote in an email to fellow aristocrat Anthony Fauci.

But we now live in a moment where suppressing that kind of dissent has become impossible. By May 2021, that “fringe” became part of a new think tank, the Brownstone Institute, founded in reaction to what they describe as “the global crisis created by policy responses to the Covid-19 pandemic.” From this perspective, policies advanced by Collins and Fauci amounted to “a failed experiment in full social and economic control” reflecting “a willingness on the part of the public and officials to relinquish freedom and fundamental human rights in the name of managing a public health crisis.” The Brownstone Institute’s website is a veritable one-stop Internet shopping haven for anyone looking for well-credentialed expert opinions that counter more mainstream expert opinions on Covid.

Similarly, claims that the science around climate change is “settled,” and that therefore the world must collectively work to decarbonize the global energy system by 2050, have engendered a counter-industry of dissenting experts, organizations, and websites.

At this point, one might be forgiven for speculating that the public is being fed such a heavy diet of Covid and climate change precisely because these are problems that have been framed politically as amenable to a scientific treatment. But it seems that the more the authorities insist on the factiness of facts, the more suspect these become to larger and larger portions of the populace.

A Scientific Reformation

The introduction of the printing press in the mid-fifteenth century triggered a revolution in which the Church lost its monopoly on truth. Millions of books were printed in just a few decades after Gutenberg’s innovation. Some people held the printing press responsible for stoking collective economic manias and speculative bubbles. It allowed the widespread distribution of astrological almanacs in Europe, which fed popular hysteria around prophecies of impending doom. And it allowed dissemination of the Malleus Maleficarum, an influential treatise on demonology that contributed to rising persecution of witches.

Though the printing press allowed sanctioned ideas to spread like never before, it also allowed the spread of serious but hitherto suppressed ideas that threatened the legitimacy of the Church. A range of alternative philosophical, moral, and ideological perspectives on Christianity became newly accessible to ever-growing audiences. So did exposés of institutional corruption, such as the practice of indulgences — a market for buying one’s way out of purgatory that earned the Church vast amounts of money. Martin Luther, in particular, understood and exploited the power of the printing press in pursuing his attacks on the Church — one recent historical account, Andrew Pettegree’s book Brand Luther, portrays him as the first mass-market communicator.

“Beginning of the Reformation”: Martin Luther directs the posting of his Ninety-five Theses, protesting the practice of the sale of indulgences, to the door of the castle church in Wittenberg on October 31, 1517.
W. Baron von Löwenstern, 1830 / Library of Congress

To a religious observer living through the beginning of the Reformation, the proliferation of printed material must have appeared unsettling and dangerous: the end of an era, and the beginning of a threatening period of heterodoxy, heresies, and confusion. A person exposed to the rapid, unchecked dispersion of printed matter in the fifteenth century might have called many such publications fake news. Today many would say that it was the Reformation itself that did away with fake news, with the false orthodoxies of a corrupted Church, opening up a competition over ideas that became the foundation of the modern world. Whatever the case, this new world was neither neat nor peaceful, with the religious wars resulting from the Church’s loss of authority over truth continuing until the mid-seventeenth century.

Like the printing press in the fifteenth century, the Internet in the twenty-first has radically transformed and disrupted conventional modes of communication, destroyed the existing structure of authority over truth claims, and opened the door to a period of intense and tumultuous change.

Those who lament the death of truth should instead acknowledge the end of a monopoly system. Science was the pillar of modernity, the new privileged lens to interpret the real world and show a pathway to collective good. Science was not just an ideal but the basis for a regime, a monopoly system. Within this regime, truth was legitimized in particular private and public institutions, especially government agencies, universities, and corporations; it was interpreted and communicated by particular leaders of the scientific community, such as government science advisors, Nobel Prize winners, and the heads of learned societies; it was translated for and delivered to the laity in a wide variety of public and political contexts; it was presumed to point directly toward right action; and it was fetishized by a culture that saw it as single and unitary, something that was delivered by science and could be divorced from the contexts in which it emerged.

Such unitary truths included above all the insistence that the advance of science and technology would guarantee progress and prosperity for everyone — not unlike how the Church’s salvific authority could guarantee a negotiated process for reducing one’s punishment for sins. To achieve this modern paradise, certain subsidiary truths lent support. One, for example, held that economic rationality would illuminate the path to universal betterment, driven by the principle of comparative advantage and the harmony of globalized free markets. Another subsidiary truth expressed the social cost of carbon emissions with absolute precision to the dollar per ton, with the accompanying requirement that humans must control the global climate to the tenth of a degree Celsius. These ideas are self-evidently political, requiring monopolistic control of truth to implement their imputed agendas.

An easy prophecy here is that wars over scientific truth will intensify, as did wars over religious truth after the printing press. Those wars ended with the Peace of Westphalia in 1648, followed, eventually, by the creation of a radically new system of governance, the nation-state, and the collapse of the central authority of the Catholic Church. Will the loss of science’s monopoly over truth lead to political chaos and even bloodshed? The answer largely depends upon the resilience of democratic institutions, and their ability to resist the authoritarian drift that seems to be a consequence of crises such as Covid and climate change, to which simple solutions, and simple truths, do not pertain.

Both the Church and the Protestants enthusiastically adopted the printing press. The Church tried to control it through an index of forbidden books. Protestant print shops adopted a more liberal cultural orientation, one that allowed for competition among diverse ideas about how to express and pursue faith. Today we see a similar dynamic. Mainstream, elite science institutions use the Internet to try to preserve their monopoly over which truths get followed where, but the Internet’s bottom-up, distributed architecture appears to give a decisive advantage to dissenters and their diverse ideologies and perspectives.

Holding on to the idea that science always draws clear boundaries between the true and the false will continue to appeal strongly to many sincere and concerned people. But if, as in the fifteenth century, we are now indeed experiencing a tumultuous transition to a new world of communication, what we may need is a different cultural orientation toward science and technology. The character of this new orientation is only now beginning to emerge, but it will above all have to accommodate the over-abundance of competing truths in human affairs, and create new opportunities for people to forge collective meaning as they seek to manage the complex crises of our day.

How France created the metric system (BBC)

One of the last remaining ‘mètre étalons’, or standard metre bars, can be found below a ground-floor window on the Ministry of Justice in Paris (Credit: PjrTravel/Alamy)

By Madhvi Ramani

24th September 2018

It is one of the most important developments in human history, affecting everything from engineering to international trade to political systems.

On the facade of the Ministry of Justice in Paris, just below a ground-floor window, is a marble shelf engraved with a horizontal line and the word ‘MÈTRE’. It is hardly noticeable in the grand Place Vendôme: in fact, out of all the tourists in the square, I was the only person to stop and consider it. But this shelf is one of the last remaining ‘mètre étalons’ (standard metre bars) that were placed all over the city more than 200 years ago in an attempt to introduce a new, universal system of measurement. And it is just one of many sites in Paris that point to the long and fascinating history of the metric system.

“Measurement is one of the most banal and ordinary things, but it’s actually the things we take for granted that are the most interesting and have such contentious histories,” said Dr Ken Alder, history professor at Northwestern University and author of The Measure of All Things, a book about the creation of the metre. 


We don’t generally notice measurement because it’s pretty much the same everywhere we go. Today, the metric system, which was created in France, is the official system of measurement for every country in the world except three: the United States, Liberia and Myanmar, also known as Burma. And even in those countries, the metric system is still used for purposes such as global trade. But imagine a world where every time you travelled you had to use different conversions for measurements, as we do for currency. This was the case before the French Revolution in the late 18th Century, when weights and measures varied not only from nation to nation, but also within nations. In France alone, at least 250,000 different units of weights and measures were estimated to be in use under the Ancien Régime.

The French Revolution changed all that. During the volatile years between 1789 and 1799, the revolutionaries sought not only to overturn politics by taking power away from the monarchy and the church, but also to fundamentally alter society by overthrowing old traditions and habits. To this end, they introduced, among other things, the Republican Calendar in 1793, which consisted of 10-hour days, with 100 minutes per hour and 100 seconds per minute. Aside from removing religious influence from the calendar, making it difficult for Catholics to keep track of Sundays and saints’ days, this fit with the new government’s aim of introducing decimalisation to France. But while decimal time did not stick, the new decimal system of measurement, which is the basis of the metre and the kilogram, remains with us today.
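The decimal day described above maps directly onto the ordinary clock by simple proportion. A minimal sketch of the conversion (the function name and rounding choice are my own, for illustration):

```python
def to_decimal_time(h, m, s):
    """Convert a 24-hour clock time to Republican decimal time:
    10 hours per day, 100 minutes per hour, 100 seconds per minute."""
    day_fraction = (h * 3600 + m * 60 + s) / 86400   # 86,400 standard seconds per day
    decimal_seconds = round(day_fraction * 100_000)  # 10 * 100 * 100 decimal seconds per day
    dh, rem = divmod(decimal_seconds, 10_000)        # 10,000 decimal seconds per decimal hour
    dm, ds = divmod(rem, 100)
    return dh, dm, ds

print(to_decimal_time(12, 0, 0))   # midday -> (5, 0, 0)
print(to_decimal_time(18, 0, 0))   # 6pm   -> (7, 50, 0)
```

Midday becomes five o'clock decimal: half the day has passed, and half of ten hours is five.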

Prior to the French Revolution, at least 250,000 different units of measurement were used throughout France (Credit: Madhvi Ramani)

The task of coming up with a new system of measurement was given to the nation’s preeminent scientific thinkers of the Enlightenment. These scientists were keen to create a new, uniform set based on reason rather than local authorities and traditions. Therefore, it was determined that the metre was to be based purely on nature. It was to be one 10-millionth of the distance from the North Pole to the equator.

The line of longitude running from the pole to the equator that would be used to determine the length of the new standard was the Paris meridian. This line bisects the centre of the Paris Observatory building in the 14th arrondissement, and is marked by a brass strip laid into the white marble floor of its high-ceilinged Meridian Room, or Cassini Room.

Although the Paris Observatory is not currently open to the public, you can trace the meridian line through the city by looking out for small bronze disks on the ground with the word ARAGO on them, installed by Dutch artist Jan Dibbets in 1994 as a memorial to the French astronomer François Arago. This is the line that two astronomers set out from Paris to measure in 1792. Jean-Baptiste-Joseph Delambre travelled north to Dunkirk while Pierre Méchain travelled south to Barcelona.

Using the latest equipment and the mathematical process of triangulation to measure the meridian arc between these two sea-level locations, and then extrapolating the distance between the North Pole and the equator by extending the arc to an ellipse, the two astronomers aimed to meet back in Paris to come up with the new, universal standard of measurement within one year. It ended up taking seven.
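Once the quadrant was surveyed, the definition itself was simple arithmetic. Using the modern WGS84 estimate of the pole-to-equator meridian quadrant (roughly 10,001,966 m; this figure is an assumption for illustration, not from the article), one can see how close the surveyed metre came:

```python
# Modern estimate of the meridian quadrant (pole to equator), in metres.
# The ~10,001,966 m figure (WGS84) is an assumption for illustration.
quadrant_m = 10_001_966

# The 1790s definition: one metre = one ten-millionth of the quadrant.
metre_by_definition = quadrant_m / 10_000_000
print(metre_by_definition)   # ~1.0002
```

By this reckoning the metre fixed in 1799 turned out about 0.2 mm shorter than the definition implied, a discrepancy owed to the limits of the survey rather than the arithmetic.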

The line of longitude used to determine the length of the metre runs through the centre of the Paris Observatory (Credit: Madhvi Ramani)

As Dr Alder details in his book, measuring this meridian arc during a time of great political and social upheaval proved to be an epic undertaking. The two astronomers were frequently met with suspicion and animosity; they fell in and out of favour with the state; and they were even injured on the job, which involved climbing to high points such as the tops of churches.

The Pantheon, which was originally commissioned by Louis XV to be a church, became the central geodetic station in Paris from whose dome Delambre triangulated all the points around the city. Today, it serves as a mausoleum to heroes of the Republic, such as Voltaire, René Descartes and Victor Hugo. But during Delambre’s time, it served as another kind of mausoleum – a warehouse for all the old weights and measures that had been sent in by towns from all over France in anticipation of the new system.

But despite all the technical mastery and labour that had gone into defining the new measurement, nobody wanted to use it. People were reluctant to give up the old ways of measuring since these were inextricably bound with local rituals, customs and economies. For example, an ell, a measure of cloth, generally equalled the width of local looms, while arable land was often measured in days, referencing the amount of land that a peasant could work during this time.

Paris’ Pantheon once stored different weights and measures sent from all across France in anticipation of the new standardised system (Credit: pocholo/Alamy)

The Paris authorities were so exasperated at the public’s refusal to give up their old measure that they even sent police inspectors to marketplaces to enforce the new system. Eventually, in 1812, Napoleon abandoned the metric system; although it was still taught in school, he largely let people use whichever measures they liked until it was reinstated in 1840. According to Dr Alder, “It took a span of roughly 100 years before almost all French people started using it.”

This was not just due to perseverance on the part of the state. France was quickly advancing into the industrial revolution; mapping required more accuracy for military purposes; and, in 1851, the first of the great World’s Fairs took place, where nations would showcase and compare industrial and scientific knowledge. Of course, it was tricky to do this unless you had clear, standard measures, such as the metre and the kilogram. The Eiffel Tower, for example, was built for the 1889 World’s Fair in Paris and, at around 300m, was at that time the world’s tallest man-made structure.

The metric system was necessary to compare industrial and scientific knowledge – such as the height of the Eiffel Tower – at the World’s Fairs (Credit: robertharding/Alamy)

All of this came together to produce one of the world’s oldest international institutions: The International Bureau of Weights and Measures (BIPM). Located in the quiet Paris suburb of Sèvres, the BIPM is surrounded by landscaped gardens and a park. Its lack of ostentatiousness reminded me again of the mètre étalon in the Place Vendôme; it might be tucked away, but it is fundamental to the world we live in today.

Originally established to preserve international standards, the BIPM promotes the uniformity of seven international units of measurement: the metre, the kilogram, the second, the ampere, the kelvin, the mole and the candela. It is the home of the master platinum standard metre bar that was used to carefully calibrate copies, which were then sent out to various other national capitals. In the 1960s, the BIPM redefined the metre in terms of light, making it more precise than ever. And now, defined by universal laws of physics, it was finally a measure truly based on nature.
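The light-based definition works by fixing the speed of light exactly, so that the metre follows by arithmetic: one metre is the distance light travels in 1/299,792,458 of a second. A sketch of that relationship (using exact fractions to avoid rounding):

```python
from fractions import Fraction

c = 299_792_458   # speed of light in metres per second, exact by definition

# One metre is the distance light travels in 1/299,792,458 of a second,
# so multiplying the speed by that time interval yields exactly one metre.
metre = c * Fraction(1, 299_792_458)
print(metre)   # 1
```

Because the speed is defined rather than measured, the unit can be realised anywhere with a good clock, with no physical bar to consult.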

The International Bureau of Weights and Measures (BIPM) was established to promote the uniformity of international units of measurement (Credit: Chronicle/Alamy)

The building in Sèvres is also home to the original kilogram, which sits under three bell jars in an underground vault and can only be accessed using three different keys, held by three different individuals. The small, cylindrical weight cast in platinum-iridium alloy is also, like the metre, due to be redefined in terms of nature – specifically the quantum-mechanical quantity known as the Planck constant – by the BIPM this November.

“Establishing a new basis for a new definition of the kilogram is a very big technological challenge. [It] was described at one point as the second most difficult experiment in the whole world, the first being discovering the Higgs boson,” said Dr Martin Milton, director of the BIPM, who showed me the lab where the research is being conducted.

As he explained the principle of the Kibble balance and the way in which a mass is weighed against the force of a coil in a magnetic field, I marvelled at the latest scientific engineering before me, the precision and personal effort of all the people who have been working on the kilogram project since it began in 2005 and are now very close to achieving their goal.

The BIPM houses the original standard metre and the original standard kilogram (Credit: Madhvi Ramani)

As with the 18th-Century meridian project, defining measurement continues to be one of our most important and difficult challenges. As I walked further up the hill of the public park that surrounds the BIPM and looked out at the view of Paris, I thought about the structure of measurement underlying the whole city. The machinery used for construction; the trade and commerce happening in the city; the exact quantities of drugs, or radiation for cancer therapy, being delivered in the hospitals.

What started with the metre formed the basis of our modern economy and led to globalisation. It enabled high-precision engineering and continues to be essential for science and research, progressing our understanding of the universe.

CORRECTION: A previous version of this story incorrectly described the placement of the meridian line in the Paris Observatory. We regret the error and have updated the text accordingly.

You may also be interested in:
• How India gave us the zero
• The island that forever changed science
• The clock that changed the meaning of time

‘There is no clear line separating science from pseudoscience’, says Princeton professor (BBC News Brasil)

bbc.com


Carlos Serrano – @carliserrano

BBC News Mundo

12 December 2021

Phrenology
The relationship between genuine knowledge and fringe doctrines is closer than many are willing to accept, says a historian who specialises in the history of science

Flat-Earthers, anti-vaxxers, creationists, astrologers, telepaths, numerologists, homeopaths…

For scientific institutions, these practices and movements fall into the category of “pseudosciences”: doctrines whose adherents regard their foundations as scientific and, from there, build a current that departs from what the academic world normally accepts.

But how do we distinguish science from what merely passes itself off as science?

That task is far more complicated than it seems, according to Michael Gordin, a professor at Princeton University in the United States and a specialist in the history of science. Gordin is the author of the book On the Fringe: Where Science Meets Pseudoscience.

His book details how pseudosciences operate and how, in his view, they are an inevitable consequence of scientific progress.

In an interview with BBC News Mundo (the BBC’s Spanish-language service), Gordin describes the complex relationship between what is considered genuine science and what he calls fringe doctrines.

Michael Gordin
Michael Gordin, author of the book On the Fringe: Where Science Meets Pseudoscience

BBC News Mundo – You say there is no defined line separating science from pseudoscience, but science has a clear, verifiable method. Isn’t that a clear difference from pseudoscience?

Michael Gordin – It is usually believed that science has a single method, but that is not true. Science has many methods. Geologists do their work very differently from theoretical physicists, and molecular biologists from neuroscientists. Some scientists work in the field, observing what happens. Others work in the laboratory, under controlled conditions. Others run simulations. In other words, science has many methods, and they are heterogeneous. Science is dynamic, and that dynamism makes the line hard to define. We can take a concrete example and say whether it is science or pseudoscience. With a concrete example, it is easy.

The problem is that the line is not consistent. When you look at a larger number of cases, there are things that were once considered science and are now considered pseudoscience, such as astrology. And there are subjects such as continental drift, which was initially considered a fringe theory and is now a basic theory of geophysics.

Almost everything considered pseudoscience today was science at some point in the past; it was refuted over time, and those who go on supporting it are regarded as lunatics or charlatans. In other words, the definition of what is science or pseudoscience is dynamic over time. That is one of the reasons this judgement is so difficult.

Astrology
Considered a science in the past, astrology now sits among the pseudosciences – or fringe doctrines, as Michael Gordin calls them

BBC News Mundo – But there are things that do not change over time. For example, 2+2 has always equalled 4. Doesn’t that mean science works from principles that leave no room for interpretation?

Gordin – Well, that is not necessarily so. Two UFOs plus two UFOs are four UFOs.

It is interesting that you chose mathematics, which is in fact not an empirical science, since it does not refer to the external world. It is a set of rules we use to determine certain things.

One of the reasons the distinction is so complicated is that fringe doctrines watch what counts as established science and adapt their arguments and techniques to it.

One example is “scientific creationism”, which holds that the world was created in seven days, 6,000 years ago. There are scientific-creationist publications that include mathematical graphs of the decay rates of various isotopes, in an attempt to prove that the Earth is only 6,000 years old.

It would be convenient to declare that using mathematics and presenting graphs is science, but the reality is that almost every fringe doctrine uses mathematics in some way.

Scientists disagree about the kind of mathematics in play. There are, for example, people who argue that the advanced mathematics used in string theory is no longer scientific, because it has lost empirical verification. It is high-level mathematics, done by PhDs from the best universities, yet there is an internal debate within science, among physicists, over whether it should be considered science at all.

I am not saying everyone should be a creationist, but when quantum mechanics was first proposed, some people said: “this looks very strange”, “it doesn’t fit measurement the way we believe things work”, or “is this really science?”

Flat Earth
In recent years, the idea that the Earth is flat has gained popularity among certain groups

BBC News Mundo – So you are saying that pseudosciences, or fringe doctrines, have some value?

Gordin – The point is that many things we consider innovative come from the margins of orthodox knowledge.

What I am saying comes down to three points: first, there is no clear dividing line; second, understanding what falls on each side of the line requires understanding the context; and third, the normal process of science produces fringe doctrines.

We cannot simply dismiss these doctrines, because they are inevitable. They are a by-product of the way the sciences work.

BBC News Mundo – Does that mean we should be more tolerant of pseudosciences?

Gordin – Scientists, like anyone else, have limited time and energy and cannot investigate everything.

So any time devoted to refuting or denying the legitimacy of a fringe doctrine is time taken away from doing science, and it may not even produce results.

People have been refuting scientific creationism for decades. They have been trying to debunk telepathy for even longer, and it is still with us. There are many kinds of fringe ideas. Some are highly politicised and can even harm public health or the environment. It is to these, in my view, that we need to devote attention and resources, either to eliminate them or at least to explain why they are wrong.

But I don’t think other ideas, such as believing in UFOs, are especially dangerous. I don’t believe even creationism is as dangerous as being anti-vaccine, or as believing that climate change is a hoax.

We should see pseudosciences as something inevitable and approach them pragmatically. We have a limited amount of resources, and we need to choose which doctrines can cause harm and how to confront them.

Should we simply try to reduce the harm they can cause? That is the case with compulsory vaccination, whose aim is to prevent harm without necessarily convincing opponents that they are wrong. Should we try to persuade them that they are wrong? That has to be examined case by case.

Anti-vaccine
In several parts of the world there are groups opposed to the covid-19 vaccines

BBC News Mundo – How, then, should we deal with pseudosciences?

Gordin – One possibility is to recognise that these are people interested in science.

A flat-Earther, for example, is someone interested in the configuration of the Earth. That means it is someone who took an interest in investigating nature and, for some reason, went in the wrong direction.

You can then ask why that happened. You can approach the person and say: “if you don’t believe this evidence, what kind of evidence would you believe?” or “show me your evidence and let’s talk”.

It is something we could do, but is it worth doing? It is a doctrine I don’t consider dangerous. It would be a problem if every government in the world thought the Earth was flat, but I don’t see that risk.

The contemporary version of flat-Earthism emerged about 15 years ago. I believe scholars still don’t understand very well how that happened, or why it happened so fast.

Another thing we can do is not necessarily try to persuade them that they are wrong, because they may not accept it, but rather try to understand how the movement arose and spread. That can guide us in confronting more serious threats.

Calculations
People who believe in fringe doctrines often borrow elements of established science to reach their conclusions

BBC News Mundo – More serious threats such as the anti-vaccine movement…

Gordin – Vaccines were invented in the 18th century, and there have always been people who opposed them, partly because every vaccine carries some risk, however low.

Over time, the way this was handled was to establish an insurance system that basically says: you have to get the vaccine, but if you get it and suffer a bad outcome, we will compensate you for the harm.

I am sure the same will happen with the covid vaccine, though we don’t yet know the full spectrum, or the seriousness, of the harm it might cause. Both the harms and their probability, however, appear to be very low.

As for anti-vaxxers who believe, for example, that the covid vaccine contains a chip, the only action that can be taken for the good of public health is to make vaccination compulsory. That is how polio was eradicated in most of the world, despite the existence of vaccine opponents.

BBC News Mundo – But making it compulsory could lead someone to say that science is being used for political or ideological ends…

Gordin – I am sure that if the state imposes a compulsory vaccine, someone will say that. But this is not about ideology. The state already mandates many things, and some vaccines are already compulsory.

And the state makes all kinds of scientific claims. The teaching of creationism in schools is not allowed, for example, nor is research into human cloning. In other words, the state has intervened many times in scientific disputes, and it tries to do so according to the scientific consensus.

BBC News Mundo – People who embrace pseudosciences do so on the basis of scepticism, which is precisely one of the fundamental values of science. That’s a paradox, isn’t it?

Gordin – That is one of the reasons I believe there is no clear dividing line between science and pseudoscience. Scepticism is a tool we all use. The question is which subjects you are sceptical about, and what could convince you of a particular fact.

In the 19th century there was a great debate over whether atoms really existed. Today, practically no scientist doubts their existence. That is how science works. The focus of scepticism shifts from one place to another over time. When that scepticism is directed at matters already settled, problems sometimes arise, but on some occasions it is necessary.

The essence of Einstein’s theory of relativity is that the ether, the substance through which light waves supposedly travelled, does not exist. To get there, Einstein focused his scepticism on a fundamental postulate, but he did so while arguing that much other knowledge already considered established could be preserved.

So scepticism must have a purpose. If you are sceptical simply for the sake of being sceptical, the process produces no progress.

Woman
Scepticism is one of the basic principles of science

BBC News Mundo – Is it possible that, in the future, what we now consider science will be discarded as pseudoscience?

Gordin – In the future there will be many doctrines regarded as pseudosciences, simply because there are many things we still don’t understand.

There is a great deal we don’t understand about the brain or the environment. In the future, people will look at many of today’s theories and say they are wrong.

But it is not enough for a theory to be incorrect for it to count as pseudoscience. There have to be people who believe it is correct even though the consensus holds it to be mistaken, and scientific institutions have to consider it, for some reason, dangerous.

Pythagoras’ revenge: humans didn’t invent mathematics, it’s what the world is made of (The Conversation)

theconversation.com

Sam Baron – November 21, 2021 11.47pm EST


Many people think that mathematics is a human invention. To this way of thinking, mathematics is like a language: it may describe real things in the world, but it doesn’t “exist” outside the minds of the people who use it.

But the Pythagorean school of thought in ancient Greece held a different view. Its proponents believed reality is fundamentally mathematical.

More than 2,000 years later, philosophers and physicists are starting to take this idea seriously.

As I argue in a new paper, mathematics is an essential component of nature that gives structure to the physical world.

Honeybees and hexagons

Bees in hives produce hexagonal honeycomb. Why?

According to the “honeycomb conjecture” in mathematics, hexagons are the most efficient shape for tiling the plane. If you want to fully cover a surface using tiles of a uniform shape and size, while keeping the total length of the perimeter to a minimum, hexagons are the shape to use.
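The “least perimeter” claim can be checked numerically for the three regular polygons that tile the plane. A small sketch (the function name and comparison are my own illustration, not from the article):

```python
from math import pi, sqrt, tan

def perimeter_for_unit_area(n):
    """Perimeter of a regular n-gon enclosing area 1.
    Uses the standard area formula A = n * s^2 / (4 * tan(pi/n))
    to solve for the side length s when A = 1."""
    side = sqrt(4 * tan(pi / n) / n)
    return n * side

# Only the triangle, square and hexagon can tile the plane with one regular shape.
for n, name in [(3, "triangle"), (4, "square"), (6, "hexagon")]:
    print(f"{name}: perimeter {perimeter_for_unit_area(n):.4f}")
```

The hexagon comes out lowest (about 3.72 versus 4 for the square and about 4.56 for the triangle), so it encloses a given area with the least wall, which is exactly the economy Darwin attributed to the bees.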

The hexagonal pattern of honeycomb is the most efficient way to cover a space in identical tiles. Sam Baron, Author provided

Charles Darwin reasoned that bees have evolved to use this shape because it produces the largest cells to store honey for the smallest input of energy to produce wax.

The honeycomb conjecture was first proposed in ancient times, but was only proved in 1999 by mathematician Thomas Hales.

Cicadas and prime numbers

Here’s another example. There are two subspecies of North American periodical cicadas that live most of their lives in the ground. Then, every 13 or 17 years (depending on the subspecies), the cicadas emerge in great swarms for a period of around two weeks.

Why is it 13 and 17 years? Why not 12 and 14? Or 16 and 18?

One explanation appeals to the fact that 13 and 17 are prime numbers.

Some cicadas have evolved to emerge from the ground at intervals of a prime number of years, possibly to avoid predators with life cycles of different lengths. Michael Kropiewnicki / Pexels

Imagine the cicadas have a range of predators that also spend most of their lives in the ground. The cicadas need to come out of the ground when their predators are lying dormant.

Suppose there are predators with life cycles of 2, 3, 4, 5, 6, 7, 8 and 9 years. What is the best way to avoid them all?

Well, compare a 13-year life cycle and a 12-year life cycle. When a cicada with a 12-year life cycle comes out of the ground, the 2-year, 3-year and 4-year predators will also be out of the ground, because 2, 3 and 4 all divide evenly into 12.

When a cicada with a 13-year life cycle comes out of the ground, none of its predators will be out of the ground, because none of 2, 3, 4, 5, 6, 7, 8 or 9 divides evenly into 13. The same is true for 17.
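The divisibility argument can be made concrete with least common multiples: the first year a k-year predator coincides with the cicadas is lcm(cycle, k), and a prime cycle pushes every such coincidence as far out as possible. This small script is my own illustration, not from the article:

```python
from math import lcm  # available since Python 3.9

predator_cycles = range(2, 10)   # hypothetical predators with 2- to 9-year cycles

for cycle in (12, 13, 16, 17):
    # Earliest year in which this cicada cycle lines up with any predator.
    first_overlap = min(lcm(cycle, p) for p in predator_cycles)
    print(f"{cycle}-year cicadas first meet a predator after {first_overlap} years")
```

A 12-year cicada meets predators on its very first emergence, while the 13- and 17-year cycles defer the first clash to 26 and 34 years respectively, two full generations away.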

P1–P9 represent cycling predators. The number-line represents years. The highlighted gaps show how 13 and 17-year cicadas manage to avoid their predators. Sam Baron, Author provided

It seems these cicadas have evolved to exploit basic facts about numbers.

Creation or discovery?

Once we start looking, it is easy to find other examples. From the shape of soap films, to gear design in engines, to the location and size of the gaps in the rings of Saturn, mathematics is everywhere.

If mathematics explains so many things we see around us, then it is unlikely that mathematics is something we’ve created. The alternative is that mathematical facts are discovered: not just by humans, but by insects, soap bubbles, combustion engines and planets.

What did Plato think?

But if we are discovering something, what is it?

The ancient Greek philosopher Plato had an answer. He thought mathematics describes objects that really exist.

For Plato, these objects included numbers and geometric shapes. Today, we might add more complicated mathematical objects such as groups, categories, functions, fields and rings to the list.

For Plato, numbers existed in a realm separate from the physical world. Geralt / Pixabay

Plato also maintained that mathematical objects exist outside of space and time. But such a view only deepens the mystery of how mathematics explains anything.

Explanation involves showing how one thing in the world depends on another. If mathematical objects exist in a realm apart from the world we live in, they don’t seem capable of relating to anything physical.

Enter Pythagoreanism

The ancient Pythagoreans agreed with Plato that mathematics describes a world of objects. But, unlike Plato, they didn’t think mathematical objects exist beyond space and time.

Instead, they believed physical reality is made of mathematical objects in the same way matter is made of atoms.

If reality is made of mathematical objects, it’s easy to see how mathematics might play a role in explaining the world around us.

Pythagorean pie: the world is made of mathematics plus matter. Sam Baron, Author provided

In the past decade, two physicists have mounted significant defences of the Pythagorean position: Swedish-US cosmologist Max Tegmark and Australian physicist-philosopher Jane McDonnell.

Tegmark argues reality just is one big mathematical object. If that seems weird, think about the idea that reality is a simulation. A simulation is a computer program, which is a kind of mathematical object.

McDonnell’s view is more radical. She thinks reality is made of mathematical objects and minds. Mathematics is how the Universe, which is conscious, comes to know itself.

I defend a different view: the world has two parts, mathematics and matter. Mathematics gives matter its form, and matter gives mathematics its substance.

Mathematical objects provide a structural framework for the physical world.

The future of mathematics

It makes sense that Pythagoreanism is being rediscovered in physics.

In the past century physics has become more and more mathematical, turning to seemingly abstract fields of inquiry such as group theory and differential geometry in an effort to explain the physical world.

As the boundary between physics and mathematics blurs, it becomes harder to say which parts of the world are physical and which are mathematical.

But it is strange that Pythagoreanism has been neglected by philosophers for so long.

I believe that is about to change. The time has arrived for a Pythagorean revolution, one that promises to radically alter our understanding of reality.

theconversation.com

Is mathematics real? A viral TikTok video raises a legitimate question with exciting answers (The Conversation)

Daniel Mansfield – August 31, 2020 1.41am EDT


While filming herself getting ready for work recently, TikTok user @gracie.ham reached deep into the ancient foundations of mathematics and found an absolute gem of a question:

How could someone come up with a concept like algebra?

She also asked what the ancient Greek philosopher Pythagoras might have used mathematics for, and other questions that revolve around the age-old conundrum of whether mathematics is “real” or something humans just made up.

Many responded negatively to the post, but others — including mathematicians like me — found the questions quite insightful.

Is mathematics real?

Philosophers and mathematicians have been arguing over this for centuries. Some believe mathematics is universal; others consider it only as real as anything else humans have invented.

Thanks to @gracie.ham, Twitter users have now vigorously joined the debate.

For me, part of the answer lies in history.

From one perspective, mathematics is a universal language used to describe the world around us. For instance, two apples plus three apples is always five apples, regardless of your point of view.

But mathematics is also a language used by humans, so it is not independent of culture. History shows us that different cultures had their own understanding of mathematics.

Unfortunately, most of this ancient understanding is now lost. In just about every ancient culture, a few scattered texts are all that remain of their scientific knowledge.

However, there is one ancient culture that left behind an absolute abundance of texts.

Babylonian algebra

Buried in the deserts of modern Iraq, clay tablets from ancient Babylon have survived intact for about 4,000 years.

These tablets are slowly being translated and what we have learned so far is that the Babylonians were practical people who were highly numerate and knew how to solve sophisticated problems with numbers.

Their arithmetic was different from ours, though. They didn’t use zero or negative numbers. They even mapped out the motion of the planets without using calculus as we do.

Of particular importance for @gracie.ham’s question about the origins of algebra is that they knew that the numbers 3, 4 and 5 correspond to the lengths of the sides and diagonal of a rectangle. They also knew these numbers satisfied the fundamental relation 3² + 4² = 5² that ensures the sides are perpendicular.

No theorems were harmed (or used) in the construction of this rectangle.

The Babylonians did all this without modern algebraic concepts. We would express a more general version of the same idea using Pythagoras’ theorem: any right-angled triangle with sides of length a and b and hypotenuse c satisfies a² + b² = c².
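As an aside not in the original article, the 3-4-5 relation is easy to check computationally. A minimal Python sketch (the function name and brute-force search are my own illustration, not anything the Babylonians or the article describe):

```python
def is_pythagorean_triple(a: int, b: int, c: int) -> bool:
    """Check the relation a^2 + b^2 = c^2 that guarantees a right angle."""
    return a * a + b * b == c * c

# The Babylonian 3-4-5 rectangle: sides 3 and 4, diagonal 5.
assert is_pythagorean_triple(3, 4, 5)

# A few more triples, found by brute force over small side lengths.
triples = [
    (a, b, c)
    for c in range(1, 30)
    for b in range(1, c)
    for a in range(1, b + 1)
    if a * a + b * b == c * c
]
print(triples[:3])  # (3, 4, 5), (6, 8, 10), (5, 12, 13)
```

The same check underlies the fire-altar construction described below: measuring sides of 3 and 4 and a diagonal of 5 forces the corner to be a right angle.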

The Babylonian perspective omits algebraic variables, theorems, axioms and proofs not because they were ignorant but because these ideas had not yet developed. In short, these social constructs began more than 1,000 years later, in ancient Greece. The Babylonians happily and productively did mathematics and solved problems without any of these relatively modern notions.

What was it all for?

@gracie.ham also asks how Pythagoras came up with his theorem. The short answer is: he didn’t.

Pythagoras of Samos (c. 570-495 BC) probably heard about the idea we now associate with his name while he was in Egypt. He may have been the person to introduce it to Greece, but we don’t really know.

Pythagoras didn’t use his theorem for anything practical. He was primarily interested in numerology and the mysticism of numbers, rather than the applications of mathematics.


Without modern tools, how do you make right angles just right? Ancient Hindu religious texts give instructions for making a rectangular fire altar using the 3-4-5 configuration with sides of length 3 and 4, and diagonal length 5. These measurements ensure that the altar has right angles in each corner.

A rectangular fire altar. Madhu K / Wikipedia, CC BY-SA

Big questions

In the 19th century, the German mathematician Leopold Kronecker said “God made the integers, all else is the work of man”. I agree with that sentiment, at least for the positive integers — the whole numbers we count with — because the Babylonians didn’t believe in zero or negative numbers.

Mathematics has been happening for a very, very long time. Long before ancient Greece and Pythagoras.

Is it real? Most cultures agree about some basics, like the positive integers and the 3-4-5 right triangle. Just about everything else in mathematics is determined by the society in which you live.

This Year Will End Eventually. Document It While You Can (New York Times)

nytimes.com

Lesley M. M. Blume

Museums are working overtime to collect artifacts and ephemera from the pandemic and the racial justice movement — and they need your help.

A journal submitted to the Autry Museum by Tanya Gibb, who came down with Covid-19 symptoms on March 5. The donor thought the canceled plans were also representative of the pandemic.
Credit…The Autry Museum of the American West

July 14, 2020, 5:00 a.m. ET

A few weeks ago, a nerdy joke went viral on Twitter: Future historians will be asked which quarter of 2020 they specialize in.

As museum curators and archivists stare down one of the most daunting challenges of their careers — telling the story of the pandemic, followed by severe economic collapse and a nationwide social justice movement — they are imploring individuals across the country to preserve personal materials for posterity, and for possible inclusion in museum archives. It’s an all-hands-on-deck effort, they say.

“Our cultural seismology is being revealed,” said Anthea M. Hartig, the director of the Smithsonian’s National Museum of American History. Of these three earth-shaking events, she said, “The confluence is unlike most anything we’ve seen.”

Museums, she said, are grappling “with the need to comprehend multiple pandemics at once.”

Last August, Dr. Erik Blutinger joined the staff of Mt. Sinai Queens as an emergency medicine physician. He knew that his first year after residency would be intense, but nothing could have prepared him for the trial-by-fire that was Covid-19.

Aware that he was at the epicenter not only of a global pandemic, but of history, Dr. Blutinger, 34, began to take iPhone videos of the scenes in his hospital, which was one of New York City’s hardest hit during the early days of the crisis.

“Everyone is Covid positive in these hallways,” he told the camera in one April 9 recording which has since been posted on the Mount Sinai YouTube channel, showing the emergency room hallways filled with hissing oxygen tanks, and the surge tents set up outside the building. “All you hear is oxygen. I’m seeing young patients, old patients, people of all age ranges, who are just incredibly sick.”

He estimated that he has recorded over 50 video diaries in total.

In Louisville, Ky., during the protests and unrest that followed the killings of George Floyd and Breonna Taylor, a local filmmaker, Milas Norris, rushed to the streets to shoot footage using a Sony camera and a drone.

“It was pretty chaotic,” said Mr. Norris, 24, describing police in riot gear, explosions, and gas and pepper bullets. He said that at first he didn’t know what he would do with the footage; he has since edited and posted some of it on his Instagram and Facebook accounts. “I just knew that I had to document and see what exactly was happening on the front lines.”

NPR producer Nina Gregory collects "personal ambi," or ambient noise from her home in Hollywood, Calif. "It's another form of diary," she said.
Credit…Kemper Bates

About 2,000 miles west, in Los Angeles, NPR producer Nina Gregory, 45, had set up recording equipment on the front patio of her Hollywood home. In March and April, she recorded the absence of city noise. “The sound of birds was so loud it was pinging red on my levels,” she said.

Soon the sounds of nature were replaced by the sounds of helicopters from the Los Angeles Police Department hovering overhead, and the sounds of protesters and police convoys moving through her neighborhood. She recorded all this for her personal records.

“It’s another form of diary,” she said.

Museums have indicated that these kinds of private recordings have critical value as public historical materials. All of us, curators say, are field collectors now.

In the spirit of preservation, Ms. Hartig of the National Museum of American History and museum collectors across the country have begun avid campaigns to “collect the moment.”

“I do think it’s a national reckoning project,” she said. There are “a multitude of ways in which we need to document and understand — and make history a service. This is one of our highest callings.”

Some museums have assembled rapid-response field collecting teams to identify and secure storytelling objects and materials. Perhaps the most widely publicized task force, assembled by a coalition of three Smithsonian museums, dispatched curators to Lafayette Square in Washington, D.C., to identify protest signs for possible eventual collection.

A demonstrator who was photographed by Jason Spear of the National Museum of African American History and Culture in Lafayette Square in June. Mr. Spear is part of the rapid response team working to identify protest signs for possible future collection.
Credit…Jason Spear/NMAAHC Public Affairs Specialist

The collecting task force went into action after June 1, when President Trump ordered Lafayette Square cleared of protesters so he could pose for photos in front of St. John’s Episcopal Church, clutching a Bible. Shield-bearing officers and mounted police assailed peaceful protesters there with smoke canisters, pepper bullets, flash grenades and chemical spray. The White House subsequently ordered the construction of an 8-foot-high chain-link fence around the perimeter, which protesters covered in art and artifacts.

Moving immediately to preserve these materials — many of which were made of paper and vulnerable to the elements — amounted to a curatorial emergency for the Smithsonian’s archivists.

Yet with many museums still closed, or in the earliest stages of reopening, curatorial teams largely cannot yet bring most objects into their facilities. It is falling to individuals to become their own interim museums and archives.

While some curators are loath to suggest a laundry list of items that we should be saving — they say that they don’t want to manipulate the documentation of history, but take their cues from the communities they document — many are imploring us to see historical value in the everyday objects of right now.

“Whatever we’re taking to be ordinary within this abnormal moment can, in fact, serve as an extraordinary artifact to our children’s children,” said Tyree Boyd-Pates, an associate curator at the Autry Museum of the American West, which is asking the public to consider submitting materials such as journal entries, selfies and even sign-of-the-times social media posts (say, a tweet about someone’s quest for toilet paper — screengrab those, he said).

Credit…Lisa Herndon/The Schomburg Center for Research in Black Culture

To this end, curators said, don’t be so quick to edit and delete your cellphone photos right now. “Snapshots are valuable,” said Kevin Young, the director of New York City’s Schomburg Center for Research in Black Culture. “We might look back at one and say, ‘This picture tells more than we thought at the time.’”

At the National Civil Rights Museum in Memphis, the curatorial team will be evaluating and collecting protest materials such as placards, photos, videos and personalized masks — and the personal stories behind them.

“One activist found a tear-gas canister, and he gave it to us,” said Noelle Trent, a director at the museum. “We’re going to have to figure out how to collect items from the opposing side: We have to have the racist posters, the ‘Make America Great’ stuff. We’re going to need that at some point. The danger is that if we don’t have somebody preserving it, they will say this situation was not as bad.”

And there is perhaps no article more representative of this year than the mask, which has “become a really powerful visual symbol,” said Margaret K. Hofer, the vice president and museum director of the New-York Historical Society, which has identified around 25 masks that the museum will collect, including an N95 mask worn by a nurse in the Samaritan’s Purse emergency field hospital set up in New York’s Central Park in the spring. (The museum also collected a set of field hospital scrubs, and a cowbell that the medical team rang whenever they discharged a patient.)

A cowbell that was rung at the Samaritan’s Purse field hospital in Central Park each time a Covid patient was discharged is now in the archives of the New-York Historical Society.
Credit…New-York Historical Society

“The meaning of masks has shifted over the course of these past several months,” Ms. Hofer said. “Early on, the ones we were collecting were being sewn by people who were trying to aid medical workers, when there were all those fears about shortage of P.P.E. — last-resort masks. And they’ve more recently become a political statement.”

Curators say that recording the personal stories behind photos, videos and objects is just as crucial as the objects themselves — and the more personal, the better. Museums rely on objects to elicit an emotional reaction from visitors, and that sort of personal connection requires knowing the object’s back story.

“For us, really the artifact is just a metaphor, and behind that artifact are these voices, and this humanity,” said Aaron Bryant, who curates photography and visual culture at the Smithsonian’s National Museum of African American History and Culture, and who is leading the Smithsonian’s ongoing collection response in Lafayette Square.

Curatorial teams from many museums are offering to interview donors about their materials and experiences, and they encourage donors to include detailed descriptions and back stories when submitting objects and records for consideration. Many are also collecting oral histories of the moment.

Many museums have put out calls for submissions on social media and are directing would-be donors to submission forms on their websites. The National Museum of African American History and Culture site has a thorough form that covers items’ significance, dimensions, condition and materials. The Civil Rights Museum is looking for “archival materials, books, photographs, clothing/textiles, audio visual materials, fine art and historic objects” that share civil rights history. The New-York Historical Society is seeking Black Lives Matter protest materials.

“We review material, we talk about it, and we respond to everyone,” said William S. Pretzer, a senior curator of history at the National Museum of African American History and Culture. “We can’t collect everything, but we’re not limiting ourselves to anything.”

Gathering materials from some communities is proving challenging, and curators are strategizing collection from individuals who may be unlikely to offer materials to historical institutions.

An anti-racism poster by 14-year-old Kyra Yip. It will be on display at New York’s Museum of Chinese in America when they reopen.
Credit…Kyra Yip

“A lot of our critical collecting and gathering of diverse stories we’ve been able to do because of directed outreach,” said Ms. Hofer of the New-York Historical Society. “We’re trying to capture the experience of all aspects of all populations in the city, including people experiencing homelessness and the incarcerated.”

“We want to make the barrier to entry on this very low,” said Nancy Yao Maasbach, the president of New York’s Museum of Chinese in America, which began collecting materials relating to pandemic-related racist attacks on Asians and Asian-Americans in late winter, and personal testimonies about experiences during the pandemic and protests. Because museums may not necessarily be obvious repositories for many immigrant communities, Ms. Maasbach said, the museum is making translators available to those who want to tell their stories.

“We’re trying to make sure we’re being accessible in creating this record,” Ms. Maasbach said.

Curators recognize that their story-of-2020 collecting will continue for years; we are in the midst of ongoing events. They are asking us to continue to document the subsequent chapters — and to be as posterity-minded as one can be when it comes to ephemera.

“We don’t know what the puzzle looks like yet,” said Ms. Hartig of the National Museum of American History. “Yet we know that each of these pieces might be an important one.”

Some museums are exhibiting submitted and accepted items right away on websites or on social media; others are planning virtual and physical exhibits for as early as this autumn. The Eiteljorg Museum of American Indians and Western Art, for example, is collecting masks and oral history testimonies from Native American communities and is considering the creation of a “rapid response gallery,” said the museum’s vice president and chief curator, Elisa G. Phelps.

“If art is being sparked by something very timely, we want to have a place where we can showcase works and photos,” she said, adding that this process differed from “the elaborate, formal exhibit development process.”

Some donors, however, may not view their materials once they become part of institutionalized history — at least not right away. Even though Dr. Blutinger said that he sees the historical value of his emergency room video diaries, he has yet to revisit the peak-crisis videos himself.

“I’m almost scared to look back at them,” he said. “I’m worried that they’ll reignite a set of emotions that I’ve managed to tuck away. I’m sure one day I’ll look back and perhaps open up one or two clips, but I have never watched any of them all the way through.”

Lesley M.M. Blume is a journalist, historian and the author of “Fallout: The Hiroshima Cover-Up and the Reporter Who Revealed It to the World,” which will be published on August 4.

Book Review: Why Science Denialism Persists (Undark)


Two new books explore what motivates people to reject science — and why it’s so hard to shake deep-seated beliefs.

By Elizabeth Svoboda – 05.22.2020

To hear some experts tell it, science denial is mostly a contemporary phenomenon, with climate change deniers and vaccine skeptics at the vanguard. Yet the story of Galileo Galilei reveals just how far back denial’s lineage stretches.

BOOK REVIEW “Galileo and the Science Deniers,” by Mario Livio (Simon & Schuster, 304 pages).

Years of astronomical sightings and calculations had convinced Galileo that the Earth, rather than sitting at the center of things, revolved around a larger body, the sun. But when he laid out his findings in widely shared texts, as astrophysicist Mario Livio writes in “Galileo and the Science Deniers,” the ossified Catholic Church leadership — heavily invested in older Earth-centric theories — aimed its ire in his direction.

Rather than revise their own maps of reality to include his discoveries, clerics labeled him a heretic and banned his writings. He spent the last years of his life under house arrest, hemmed in by his own insistence on the expansiveness of the cosmos.

Nearly 400 years later, the legacy of denial remains intact in some respects. Scientists who publish research about climate change or the safety of genetically modified crops still encounter the same kind of pushback from deniers that Galileo did. Yet denialism has also sprouted some distinctly modern features: As Alan Levinovitz points out in “Natural: How Faith in Nature’s Goodness Leads to Harmful Fads, Unjust Laws, and Flawed Science,” sometimes we ourselves can become unwitting purveyors of denial, falling prey to flawed or false beliefs we may not realize we’re holding.

Levinovitz passionately protests the common assumption that natural things are inherently better than unnatural ones. Not only do people automatically tend to conclude that organic foods are healthier; many also choose “natural” or “alternative” methods of cancer treatment over proven chemotherapy regimens. Medication-free childbirth, meanwhile, is now considered the gold standard in many societies, despite mixed evidence of its health benefits for mothers and babies.

BOOK REVIEW “Natural: How Faith in Nature’s Goodness Leads to Harmful Fads, Unjust Laws, and Flawed Science,” by Alan Levinovitz (Beacon Press, 264 pages).

“What someone calls ‘natural’ may be good,” writes Levinovitz, a religion professor at James Madison University, “but the association is by no means necessary, or even likely.” Weaving real-life examples with vivid retellings of ancient myths about nature’s power, he demonstrates that our pro-natural bias is so pervasive that we often lose the ability to see it — or to admit the legitimacy of science that contradicts it.

From this perspective, science denial starts to look like a stunted outgrowth of what we typically consider common sense. In Galileo’s time, people thought it perfectly sensible that the planet they inhabited was at the center of everything. Today, it might seem equally sensible that it’s always better to choose natural products over artificial ones, or that a plant burger ingredient called “soy leghemoglobin” is suspect because it’s genetically engineered and can’t be sourced in the wild. Yet in these cases, what we think of as common sense turns out to be humbug.

In exploring the past and present of anti-science bias, Livio and Levinovitz show how deniers’ basic toolbox has not changed much through the centuries. Practitioners marshal arguments that appeal to our tendency to think in dichotomies: wrong or right, saved or damned, pure or tainted. Food is either nourishing manna from the earth or processed, artificial junk. The Catholic Church touted its own supreme authority while casting Galileo as an unregenerate apostate.

In the realm of denialism, Levinovitz writes, “simplicity and homogeneity take precedence over diversity, complexity, and change. Righteous laws and rituals are universal. Disobedience is sacrilege.”

The very language of pro-nature, anti-science arguments, Levinovitz argues, is structured to play up this us-versus-them credo. Monikers like Frankenfood — often used to describe genetically modified (GM) crops — frame the entire GM food industry as monstrous, a deviation from the supposed order of things. And in some circles, he writes, the word “unnatural” has come to be almost a synonym for “moral deficiency.” Not only is such black-and-white rhetoric seductive, it can give deniers the heady sense that they occupy the moral high ground.

Both pro-natural bias and the Church’s crusade against Galileo reflect the human penchant to fit new information into an existing framework. Rather than scrapping or changing that framework, we try to jerry-rig it to make it function. Some of the jerry-rigging examples the authors describe are more toxic than others: Opting for so-called natural foods despite dubious science on their benefits, for instance, is less harmful than denying evidence of a human-caused climate crisis.

What’s more, many people actually tend to cling harder to their beliefs in the face of contradictory evidence. Studies confirm that facts and reality aren’t likely to sway most people’s pre-existing views. This is as true now as it was at the close of the Renaissance, as shown by some extremists’ stubborn denial that the Covid-19 virus is dangerous.

In the realm of denialism, “simplicity and homogeneity take precedence over diversity, complexity, and change.”

In one of his book’s most compelling chapters, Livio takes us inside a panel of theologians that convened in 1616 to rule on whether the sun was at the center of things. None of Galileo’s incisive arguments swayed their thinking one iota. “This proposition is foolish and absurd in philosophy,” the theologians wrote, “and formally heretical, since it explicitly contradicts in many places the sense of Holy Scripture.” Cardinal Bellarmino warned Galileo that if he did not renounce his heliocentric views, he could be thrown into prison.

Galileo’s discoveries threatened to topple a superstructure that the Church had spent hundreds of years buttressing. In making their case against him, his critics liked to cite a passage from Psalm 93: “The world also is established that it cannot be moved.”

Galileo refused to cave. In his 1632 book, “Dialogue Concerning the Two Chief World Systems,” he did give the views of Pope Urban VIII an airing: He repeated Urban’s statement that no human could ever hope to decode the workings of the universe. But Livio slyly points out that Galileo put these words in the mouth of a ridiculous character named Simplicio. It was a slight Urban would not forgive. “May God forgive Signor Galilei,” he intoned, “for having meddled with these subjects.”

At the close of his 1633 Inquisition trial, Galileo was forced to declare that he abandoned any belief that the Earth revolved around the sun. “I abjure, curse, and detest the above-mentioned errors and heresies.” He swore that he would never again say “anything which might cause a similar suspicion about me.” Yet as he left the courtroom, legend goes, he muttered to himself “E pur si muove” (And yet it moves).

In the face of science denial, Livio observes, people have taken up “And yet it moves” as a rallying cry: a reminder that no matter how strong our prejudices or presuppositions, the facts always remain the same. But in today’s “post-truth era,” as political theorist John Keane calls it, with little agreement on what defines a reliable source, even the idea of an inescapable what is seems to have receded from view.

Levinovitz’s own evolution in writing “Natural” reveals how hard it can be to elevate facts above all, even for avowed anti-deniers. When he began his research, he picked off instances of pro-natural bias as if they were clay pigeons, confident in the rigor of his approach. “Confronted with a false faith, I had resolved that it was wholly evil,” he reflects.

Yet he later concedes that a favoritism toward nature is logical in domains like sports, which celebrate the potential of the human body in its unaltered form. He also accepts one expert’s point that it makes sense to buy organic if the pesticides used are less dangerous to farm workers than conventional ones. By the end of the book, he finds himself in a more nuanced place: “The art of celebrating humanity and nature,” he concludes, depends on “having the courage to embrace paradox.” His quest to puncture the myth of the natural turns out to have been dogmatic in its own way.

In acknowledging this, Levinovitz hits on something important. When deniers take up arms, it’s tempting to follow their lead: to use science to build an open-and-shut case that strikes with the finality of a courtroom witness pointing out a killer.

But as Galileo knew — and as Levinovitz ultimately concedes — science, in its endlessly unspooling grandeur, tends to resist any conclusion that smacks of the absolute. “What only science can promise,” Livio writes, “is a continuous, midcourse self-correction, as additional experimental and observational evidence accumulates, and new theoretical ideas emerge.”

In their skepticism of pat answers, these books bolster the case that science’s strength is in its flexibility — its willingness to leave room for iteration, for correction, for innovation. Science is an imperfect vehicle, as any truth-seeking discipline must be. And yet, as Galileo would have noted, it moves.

Elizabeth Svoboda is a science writer based in San Jose, California. Her most recent book for children is “The Life Heroic.”


A Giant Bumptious Litter: Donna Haraway on Truth, Technology, and Resisting Extinction (Logic)

Issue 9 / Nature December 07, 2019

Donna Haraway in her home in Santa Cruz. A still from Donna Haraway: Story Telling for Earthly Survival, a film by Fabrizio Terranova.

The history of philosophy is also a story about real estate.

Driving into Santa Cruz to visit Donna Haraway, we can’t help feeling that we were born too late. The metal sculpture of a donkey standing on Haraway’s front porch, the dogs that scramble to her front door barking when we ring the bell, and the big black rooster strutting in the coop out back — the entire setting evokes an era of freedom and creativity that postwar wealth made possible in Northern California.

Here was a counterculture whose language and sensibility the tech industry sometimes adopts, but whose practitioners it has mostly priced out. Haraway, who came to the University of California, Santa Cruz in 1980 to take up the first tenured professorship in feminist theory in the US, still conveys the sense of a wide‑open world.

Haraway was part of an influential cohort of feminist scholars who trained as scientists before turning to the philosophy of science in order to investigate how beliefs about gender shaped the production of knowledge about nature. Her most famous text remains “A Cyborg Manifesto,” published in 1985. It began with an assignment on feminist strategy for the Socialist Review after the election of Ronald Reagan and grew into an oracular meditation on how cybernetics and digitization had changed what it meant to be male or female — or, really, any kind of person. It gained such a cult following that Hari Kunzru, profiling her for Wired years later, wrote: “To boho twentysomethings, her name has the kind of cachet usually reserved for techno acts or new phenethylamines.”

The cyborg vision of gender as changing and changeable was radically new. Her map of how information technology linked people around the world into new chains of affiliation, exploitation, and solidarity feels prescient at a time when an Instagram influencer in Berlin can line the pockets of Silicon Valley executives by using a phone assembled in China that contains cobalt mined in Congo to access a platform moderated by Filipinas.

Haraway’s other most influential text may be an essay that appeared a few years later, on what she called “situated knowledges.” The idea, developed in conversation with feminist philosophers and activists such as Nancy Hartsock, concerns how truth is made. Concrete practices of particular people make truth, Haraway argued. The scientists in a laboratory don’t simply observe or conduct experiments on a cell, for instance, but co-create what a cell is by seeing, measuring, naming, and manipulating it. Ideas like these have a long history in American pragmatism. But they became politically explosive during the so-called Science Wars of the 1990s — a series of public debates among “scientific realists” and “postmodernists” with echoes in controversies about bias and objectivity in academia today.

Haraway’s more recent work has turned to human-animal relations and the climate crisis. She is a capacious “yes, and” thinker, the kind of leftist feminist who believes that the best thinking is done collectively. She is constantly citing other people, including graduate students, and giving credit to them. A recent documentary about her life and work by the Italian filmmaker Fabrizio Terranova, Story Telling for Earthly Survival, captures this sense of commitment, as well as her extraordinary intellectual agility and inventiveness.

At her home in Santa Cruz, we talked about her memories of the Science Wars and how they speak to our current “post-truth” moment, her views on contemporary climate activism and the Green New Deal, and why play is essential for politics.

Let’s begin at the beginning. Can you tell us a little bit about your childhood? 

I grew up in Denver, in the kind of white, middle-class neighborhood where people had gotten mortgages to build housing after the war. My father was a sportswriter. When I was eleven or twelve years old, I probably saw seventy baseball games a year. I learned to score as I learned to read.

My father never really wanted to do the editorials or the critical pieces exposing the industry’s financial corruption or what have you. He wanted to write game stories and he had a wonderful way with language. He was in no way a scholar — in fact he was in no way an intellectual — but he loved to tell stories and write them. I think I was interested in that as well — in words and the sensuality of words.

The other giant area of childhood storytelling was Catholicism. I was way too pious a little girl, completely inside of the colors and the rituals and the stories of saints and the rest of it. I ate and drank a sensual Catholicism that I think was rare in my generation. Very not Protestant. It was quirky then; it’s quirky now. And it shaped me. 

How so? 

One of the ways that it shaped me was through my love of biology as a materialist, sensual, fleshly being in the world as well as a knowledge-seeking apparatus. It shaped me in my sense that I saw biology simultaneously as a discourse and profoundly of the world. The Word and the flesh. 

Many of my colleagues in the History of Consciousness department, which comes much later in the story, were deeply engaged with Roland Barthes and with that kind of semiotics. I was very unconvinced and alienated from those thinkers because they were so profoundly Protestant in their secularized versions. They were so profoundly committed to the disjunction between the signifier and signified — so committed to a doctrine of the sign that is anti-Catholic, not just non-Catholic. The secularized sacramentalism that just drips from my work is against the doctrine of the sign that I felt was the orthodoxy in History of Consciousness. So Catholicism offered an alternative structure of affect. It was both profoundly theoretical and really intimate.

Did you start studying biology as an undergraduate? 

I got a scholarship that allowed me to go to Colorado College. It was a really good liberal arts school. I was there from 1962 to 1966 and I triple majored in philosophy and literature and zoology, which I regarded as branches of the same subject. They never cleanly separated. Then I got a Fulbright to go to Paris. Then I went to Yale to study cell, molecular, and developmental biology.

Did you get into politics at Yale? Or were you already political when you arrived? 

The politics came before that — probably from my Colorado College days, which were influenced by the civil rights movement. But it was at Yale that several things converged. I arrived in the fall of 1967, and a lot was happening.

New Haven in those years was full of very active politics. There was the antiwar movement. There was anti-chemical and anti-biological warfare activism among both the faculty and the graduate students in the science departments. There was Science for the People [a left-wing science organization] and the arrival of that wave of the women’s movement. My lover, Jaye Miller, who became my first husband, was gay, and gay liberation was just then emerging. There were ongoing anti-racist struggles: the Black Panther Party was very active in New Haven. 

Jaye and I were part of a commune where one of the members and her lover were Black Panthers. Gayle was a welfare rights activist and the mother of a young child, and her lover was named Sylvester. We had gotten the house for the commune from the university at a very low rent because we were officially an “experiment in Christian living.” It was a very interesting group of people! There was a five-year-old kid who lived in the commune, and he idolized Sylvester. He would clomp up the back stairs wearing these little combat boots yelling, “Power to the people! Power! Power!” It made our white downstairs neighbors nervous. They didn’t much like us anyway. It was very funny. 

Did this political climate influence your doctoral research at Yale?

I ended up writing on the ways that metaphors shape experimental practice in the laboratory. I was writing about the experience of the coming-into-being of organisms in the situated interactions of the laboratory. In a profound sense, such organisms are made but not made up. It’s not a relativist position at all; it’s a materialist position. It’s about what I later learned to call “situated knowledges.” It was in the doing of biology that this became more and more evident. 

How did these ideas go over with your labmates and colleagues?

It was never a friendly way of talking for my biology colleagues, who always felt that this verged way too far in the direction of relativism. 

It’s not that the words I was using were hard. It’s that the ideas were received with great suspicion. And I think that goes back to our discussion a few minutes ago about semiotics: I was trying to insist that the gapping of the signifier and the signified does not really determine what’s going on. 

But let’s face it: I was never very good in the lab! My lab work was appalling. Everything I ever touched died or got infected. I did not have good hands, and I didn’t have good passion. I was always more interested in the discourse, if you will. 

But you found a supervisor who was open to that? 

Yes, Evelyn Hutchinson. He was an ecologist and a man of letters and a man who had had a long history of making space for heterodox women. And I was only a tiny bit heterodox. Other women he had given space to were way more out there than me. Evelyn was also the one who got us our house for our “experiment in Christian living.” 

God bless. What happened after Yale?

Jaye got a job at the University of Hawaii teaching world history and I went as this funny thing called a “faculty wife.” I had an odd ontological status. I got a job there in the general science department. Jaye and I were also faculty advisers for something called New College, which was an experimental liberal-arts part of the university that lasted for several years. 

It was a good experience. Jaye and I got a divorce in that period but never really quite separated because we couldn’t figure out who got the camera and who got the sewing machine. That was the full extent of our property in those days. We were both part of a commune in Honolulu. 

Then one night, Jaye’s boss in the history department insisted that we go out drinking with him, at which point he attacked us both sexually and personally in a drunken, homophobic, and misogynist rant. And very shortly after that, Jaye was denied tenure. Both of us felt stunned and hurt. So I applied for a job in the History of Science department at Johns Hopkins, and Jaye applied for a job at the University of Texas in Houston. 

Baltimore and the Thickness of Worlding

How was Hopkins? 

History of Science was not a field I knew anything about, and the people who hired me knew that perfectly well. Therefore they assigned me to teach the incoming graduate seminar: Introduction to the History of Science. It was a good way to learn it! 

Hopkins was also where I met my current partner, Rusten. He was a graduate student in the History of Science department, where I was a baby assistant professor. (Today I would be fired and sued for sexual harassment — but that’s a whole other conversation.) 

Who were some of the other people who became important to you at Hopkins?

[The feminist philosopher] Nancy Hartsock and I shaped each other quite a bit in those years. We were part of the Marxist feminist scene in Baltimore. We played squash a lot — squash was a really intense part of our friendship. Her lover was a Marxist lover of Lenin; he gave lectures in town. 

In the mid-to-late 1970s, Nancy and I started the women’s studies program at Hopkins together. At the time, she was doing her article that became her book on feminist materialism, [Money, Sex, and Power: Toward a Feminist Historical Materialism]. It was very formative for me.

Those were also the years that Nancy and Sandra Harding and Patricia Hill Collins and Dorothy Smith were inventing feminist standpoint theory. I think all of us were already reaching toward those ideas, which we then consolidated as theoretical proposals to a larger community. The process was both individual and collective. We were putting these ideas together out of our struggles with our own work. You write in a closed room while tearing your hair out of your head — it was individual in that sense. But then it clicks, and the words come, and you consolidate theoretical proposals that you bring to your community. In that sense, it was a profoundly collective way of thinking with each other, and within the intensities of the social movements of the late 1960s and early 1970s. 

The ideas that you and other feminist philosophers were developing challenged many dominant assumptions about what truth is, where it comes from, and how it functions. More recently, in the era of Trump, we are often told we are living in a time of “post-truth” — and some critics have blamed philosophers like yourselves for creating the environment of “relativism” in which “post-truth” flourishes. How do you respond to that?

Our view was never that truth is just a question of which perspective you see it from. “Truth is perspectival” was never our position. We were against that. Feminist standpoint theory was always anti-perspectival. So was the Cyborg Manifesto, situated knowledges, [the philosopher] Bruno Latour’s notions of actor-network theory, and so on.

“Post-truth” gives up on materialism. It gives up on what I’ve called semiotic materialism: the idea that materialism is always situated meaning-making and never simply representation. These are not questions of perspective. They are questions of worlding and all of the thickness of that. Discourse is not just ideas and language. Discourse is bodily. It’s not embodied, as if it were stuck in a body. It’s bodily and it’s bodying, it’s worlding. This is the opposite of post-truth. This is about getting a grip on how strong knowledge claims are not just possible but necessary — worth living and dying for. 

When you, Latour, and others were criticized for “relativism,” particularly during the so-called Science Wars of the 1990s, was that how you responded? And could your critics understand your response?

Bruno and I were at a conference together in Brazil once. Which reminds me: If people want to criticize us, it ought to be for the amount of jet fuel involved in making and spreading these ideas! Not for leading the way to post-truth. We’re guilty on the carbon footprint issue, and Skyping doesn’t help, because I know what the carbon footprint of the cloud is. 

Anyhow. We were at this conference in Brazil. It was a bunch of primate field biologists, plus me and Bruno. And Stephen Glickman, a really cool biologist, a man we both love, who taught at UC Berkeley for years and studied hyenas, took us aside privately. He said, “Now, I don’t want to embarrass you. But do you believe in reality?” 

We were both kind of shocked by the question. First, we were shocked that it was a question of belief, which is a Protestant question. A confessional question. The idea that reality is a question of belief is a barely secularized legacy of the religious wars. In fact, reality is a matter of worlding and inhabiting. It is a matter of testing the holding-ness of things. Do things hold or not? 

Take evolution. The notion that you would or would not “believe” in evolution already gives away the game. If you say, “Of course I believe in evolution,” you have lost, because you have entered the semiotics of representationalism — and post-truth, frankly. You have entered an arena where these are all just matters of internal conviction and have nothing to do with the world. You have left the domain of worlding. 

The Science Warriors who attacked us during the Science Wars were determined to paint us as social constructionists — that all truth is purely socially constructed. And I think we walked into that. We invited those misreadings in a range of ways. We could have been more careful about listening and engaging more slowly. It was all too easy to read us in the way the Science Warriors did. Then the right wing took the Science Wars and ran with it, which eventually helped nourish the whole fake-news discourse.

Your opponents in the Science Wars championed “objectivity” over what they considered your “relativism.” Were you trying to stake out a position between those two terms? Or did you reject the idea that either of those terms even had a stable meaning?

Both terms inhabit the same ontological and epistemological frame — a frame that my colleagues and I have tried to make hard to inhabit. Sandra Harding insisted on “strong objectivity,” and my idiom was “situated knowledges.” We have tried to deauthorize the kind of possessive individualism that sees the world as units plus relations. You take the units, you mix them up with relations, you come up with results. Units plus relations equal the world. 

People like me say, “No thank you: it’s relationality all the way down.” You don’t have units plus relations. You just have relations. You have worlding. The whole story is about gerunds — worlding, bodying, everything-ing. The layers are inherited from other layers, temporalities, scales of time and space, which don’t nest neatly but have oddly configured geometries. Nothing starts from scratch. But the play — I think the concept of play is incredibly important in all of this — proposes something new, whether it’s the play of a couple of dogs or the play of scientists in the field. 

This is not about the opposition between objectivity and relativism. It’s about the thickness of worlding. It’s also about being of and for some worlds and not others; it’s about materialist commitment in many senses.

To this day I know only one or two scientists who like talking this way. And there are good reasons why scientists remain very wary of this kind of language. I belong to the Defend Science movement and in most public circumstances I will speak softly about my own ontological and epistemological commitments. I will use representational language. I will defend less-than-strong objectivity because I think we have to, situationally. 

Is that bad faith? Not exactly. It’s related to [what the postcolonial theorist Gayatri Chakravorty Spivak has called] “strategic essentialism.” There is a strategic use to speaking the same idiom as the people that you are sharing the room with. You craft a good-enough idiom so you can work on something together. I won’t always insist on what I think might be a stronger apparatus. I go with what we can make happen in the room together. And then we go further tomorrow.

In the struggles around climate change, for example, you have to join with your allies to block the cynical, well-funded, exterminationist machine that is rampant on the earth. I think my colleagues and I are doing that. We have not shut up, or given up on the apparatus that we developed. But one can foreground and background what is most salient depending on the historical conjuncture.

Santa Cruz and Cyborgs

To return to your own biography, tell us a bit about how and why you left Hopkins for Santa Cruz. 

Nancy Hartsock and I applied for a feminist theory job in the History of Consciousness department at UC Santa Cruz together. We wanted to share it. Everybody assumed we were lovers, which we weren’t, ever. We were told by the search committee that they couldn’t consider a joint application because they had just gotten this job okayed and it was the first tenured position in feminist theory in the country. They didn’t want to do anything further to jeopardize it. Nancy ended up deciding that she wanted to stay in Baltimore anyway, so I applied solo and got the job. And I was fired from Hopkins and hired by Santa Cruz in the same week — and for exactly the same papers.

What were the papers?

The long one was called “Signs of Dominance.” It was from a Marxist feminist perspective, and it was regarded as too political. Even though it appeared in a major journal, the person in charge of my personnel case at Hopkins told me to white it out from my CV. 

The other one was a short piece on [the poet and novelist] Marge Piercy and [feminist theorist] Shulamith Firestone in Women: a Journal of Liberation. And I was told to white that out, too. Those two papers embarrassed my colleagues and they were quite explicit about it, which was kind of amazing. Fortunately, the people at History of Consciousness loved those same papers, and the set of commitments that went with them. 

You arrived in Santa Cruz in 1980, and it was there that you wrote the Cyborg Manifesto. Tell us a bit about its origins.

It had a very particular birth. There was a journal called the Socialist Review, which had formerly been called Socialist Revolution. Jeff Escoffier, one of the editors, asked five of us to write no more than five pages each on Marxist feminism, and what future we anticipated for it. 

This was just after the election of Ronald Reagan. The future we anticipated was a hard right turn. It was the definitive end of the 1960s. Around the same time, Jeff asked me if I would represent Socialist Review at a conference of New and Old Lefts in Cavtat in Yugoslavia [now Croatia]. I said yes, and I wrote a little paper on reproductive biotechnology. A bunch of us descended on Cavtat, and there were relatively few women. So we rather quickly found one another and formed alliances with the women staff who were doing all of the reproductive labor, taking care of us. We ended up setting aside our papers and pronouncing on various feminist topics. It was really fun and quite exciting. 

Out of that experience, I came back to Santa Cruz and wrote the Cyborg Manifesto. It turned out not to be five pages, but a whole coming to terms with what had happened to me in those years from 1980 to the time it came out in 1985.

The manifesto ended up focusing a lot on cybernetics and networking technologies. Did this reflect the influence of nearby Silicon Valley? Were you close with people working in those fields?

It’s part of the air you breathe here. But the real tech alliances in my life come from my partner Rusten and his friends and colleagues, because he worked as a freelance software designer. He did contract work for Hewlett Packard for years. He had a long history in that world: when he was only fourteen, he got a job programming on punch cards for companies in Seattle. 

The Cyborg Manifesto was the first paper I ever wrote on a computer screen. We had an old HP-86, and I printed it on one of those daisy-wheel printers, one I could never get rid of and that nobody ever wanted. It ended up in some dump, God help us all.

The Cyborg Manifesto had such a tremendous impact, and continues to. What did you make of its reception?

People read it as they do. Sometimes I find it interesting. But sometimes I just want to jump into a foxhole and pull the cover over me. 

In the manifesto, you distinguish yourself from two other socialist feminist positions. The first is the techno-optimist position that embraces aggressive technological interventions in order to modify human biology. This is often associated with Shulamith Firestone’s book The Dialectic of Sex (1970), and in particular her proposal for “artificial wombs” that could reproduce humans outside of a woman’s body.

Yes, although Firestone gets slotted into a quite narrow, blissed-out techno-bunny role, as if all her work was about reproduction without wombs. She is remembered for one technological proposal, but her critique of the historical materialist conditions of mothering and reproduction was very deep and broad.

You also make some criticisms of the ideas associated with Italian autonomist feminists and the Wages for Housework campaign. You suggest that they overextend the category of “labor.”

Wages for Housework was very important. And I’m always in favor of working by addition not subtraction. I’m always in favor of enlarging the litter. Let’s watch the attachments and detachments, the compositions and decompositions, as the litter proliferates. Labor is an important category with a strong history, and Wages for Housework enlarged it.

But in thinkers with Marxist roots, there’s also a tendency to make the category of labor do too much work. A great deal of what goes on needs to be thickly described with categories other than labor — or in interesting kinds of entanglement with labor. 

What other categories would you want to add?

Play is one. Labor is so tied to functionality, whereas play is a category of non-functionality. 

Play captures a lot of what goes on in the world. There is a kind of raw opportunism in biology and chemistry, where things work stochastically to form emergent systematicities. It’s not a matter of direct functionality. We need to develop practices for thinking about those forms of activity that are not caught by functionality, those which propose the possible-but-not-yet, or that which is not-yet but still open. 

It seems to me that our politics these days require us to give each other the heart to do just that. To figure out how, with each other, we can open up possibilities for what can still be. And we can't do that in a negative mood. We can't do that if we do nothing but critique. We need critique; we absolutely need it. But it's not going to open up the sense of what might yet be. It's not going to open up the sense of that which is not yet possible but profoundly needed.

The established disorder of our present era is not necessary. It exists. But it’s not necessary. 

Playing Against Double Death

What might some of those practices for opening up new possibilities look like?

Through playful engagement with each other, we get a hint about what can still be and learn how to make it stronger. We see that in all occupations. Historically, the Greenham Common women were fabulous at this. [Eds.: The Greenham Common Women’s Peace Camp was a series of protests against nuclear weapons at a Royal Air Force base in England, beginning in 1981.] More recently, you saw it with the Dakota Access Pipeline occupation. 

The degree to which people in these occupations play is a crucial part of how they generate a new political imagination, which in turn points to the kind of work that needs to be done. They open up the imagination of something that is not what [the ethnographer] Deborah Bird Rose calls “double death” — extermination, extraction, genocide. 

Now, we are facing a world with all three of those things. We are facing the production of systemic homelessness. The way that flowers aren’t blooming at the right time, and so insects can’t feed their babies and can’t travel because the timing is all screwed up, is a kind of forced homelessness. It’s a kind of forced migration, in time and space. 

This is also happening in the human world in spades. In regions like the Middle East and Central America, we are seeing forced displacement, some of which is climate migration. The drought in the Northern Triangle countries of Central America — Honduras, Guatemala, El Salvador — is driving people off their land. 

So it’s not a humanist question. It’s a multi-kind and multi-species question.

In the Cyborg Manifesto, you use the ideas of “the homework economy” and the “integrated circuit” to explore the various ways that information technology was restructuring labor in the early 1980s to be more precarious, more global, and more feminized. Do climate change and the ecological catastrophes you’re describing change how you think about those forces? 

Yes and no. The theories that I developed in that period emerged from a particular historical conjuncture. If I were mapping the integrated circuit today, it would have different parameters than the map that I made in the early 1980s. And surely the questions of immigration, exterminism, and extractivism would have to be deeply engaged. The problem of rebuilding place-based lives would have to get more attention.

The Cyborg Manifesto was written within the context of the hard-right turn of the 1980s. But the hard-right turn was one thing; the hard-fascist turn of the late 2010s is another. It’s not the same as Reagan. The presidents of Colombia, Hungary, Brazil, Egypt, India, the United States — we are looking at a new fascist capitalism, which requires reworking the ideas of the early 1980s for them to make sense.

So there are continuities between now and the map I made then, a lot of continuities. But there are also some pretty serious inflection points, particularly when it comes to developments in digital technologies that are playing into the new fascism.

Could you say more about those developments?

If the public-private dichotomy was old-fashioned in 1980, by 2019 I don’t even know what to call it. We have to try to rebuild some sense of a public. But how can you rebuild a public in the face of nearly total surveillance? And this surveillance doesn’t even have a single center. There is no eye in the sky.

Then we have the ongoing enclosure of the commons. Capitalism produces new forms of value and then encloses those forms of value — the digital is an especially good example of that. This involves the monetization of practically everything we do. And it’s not like we are ignorant of this dynamic. We know what’s going on. We just don’t have a clue how to get a grip on it. 

One attempt to update the ideas of the Cyborg Manifesto has come from the “xenofeminists” of the international collective Laboria Cuboniks. I believe some of them have described themselves as your “disobedient daughters.”

Overstating things, that’s not my feminism.

Why not?

I’m not very interested in those discussions, frankly. It’s not what I’m doing. It’s not what makes me vital now. In a moment of ecological urgency, I’m more engaged in questions of multispecies environmental and reproductive justice. Those questions certainly involve issues of digital and robotic and machine cultures, but they aren’t at the center of my attention.

What is at the center of my attention are land and water sovereignty struggles, such as those over the Dakota Access Pipeline, over coal mining on the Black Mesa plateau, over extractionism everywhere. My attention is centered on the extermination and extinction crises happening at a worldwide level, on human and nonhuman displacement and homelessness. That’s where my energies are. My feminism is in these other places and corridors.

Do you think the cyborg is still a useful figure?

I think so. The cyborg has turned out to be rather deathless. Cyborgs keep reappearing in my life as well as other people’s lives. 

The cyborg remains a wily trickster figure. And, you know, they're also kind of old-fashioned. They're hardly up-to-the-minute. They're rather klutzy, a bit like R2-D2 or a pacemaker. Maybe the embodied digitality of us now is not especially well captured by the cyborg. So I'm not sure. But, yeah, I think cyborgs are still in the litter. I just think we need a giant bumptious litter whelped by a whole lot of really badass bitches — some of whom are men!

Mourning Without Despair

You mentioned that your current work is more focused on environmental issues. How are you thinking about the role of technology in mitigating or adapting to climate change — or fighting extractivism and extermination?

There is no homogeneous socialist position on this question. I’m very pro-technology, but I belong to a crowd that is quite skeptical of the projects of what we might call the “techno-fix,” in part because of their profound immersion in technocapitalism and their disengagement from communities of practice. 

Those communities may need other kinds of technologies than those promised by the techno-fix: different kinds of mortgage instruments, say, or re-engineered water systems. I’m against the kind of techno-fixes that are abstracted from place and tied up with huge amounts of technocapital. This seems to include most geoengineering projects and imaginations. 

So when I see massive solar fields and wind farms I feel conflicted, because on the one hand they may be better than fracking in Monterey County — but only maybe. Because I also know where the rare earth minerals required for renewable energy technologies come from and under what conditions. We still aren’t doing the whole supply-chain analysis of our technologies. So I think we have a long way to go in socialist understanding of these matters. 

One tendency within socialist thought believes that socialists can simply seize capitalist technology and put it to different purposes — that you take the forces of production, build new relations around them, and you’re done. This approach is also associated with a Promethean, even utopian approach to technology. Socialist techno-utopianism has been around forever, but it has its own adherents today, such as those who advocate for “Fully Automated Luxury Communism.” I wonder how you see that particular lineage of socialist thinking about technology.

I think very few people are that simplistic, actually. In various moments we might make proclamations that come down that way. But for most people, our socialisms, and the approaches with which socialists can ally, are richer and more varied. 

When you talk to the Indigenous activists of the Black Mesa Water Coalition, for example, they have a complex sense around solar arrays and coal plants and water engineering and art practices and community movements. They have very rich articulated alliances and separations around all of this. 

Socialists aren’t the only ones who have been techno-utopian, of course. A far more prominent and more influential strand of techno-utopianism has come from the figures around the Bay Area counterculture associated with the Whole Earth Catalog, in particular Stewart Brand, who went on to play important intellectual and cultural roles in Silicon Valley.

They are not friends. They are not allies. I’m avoiding calling them enemies because I’m leaving open the possibility of their being able to learn or change, though I’m not optimistic. I think they occupy the position of the “god trick.” [Eds.: The “god trick” is an idea introduced by Haraway that refers to the traditional view of objectivity as a transcendent “gaze from nowhere.”] I think they are blissed out by their own privileged positions and have no idea what their own positionality in the world really is. And I think they cause a lot of harm, both ideologically and technically. 

How so?

They get a lot of publicity. They take up a lot of the air in the room. 

It’s not that I think they’re horrible people. There should be space for people pushing new technologies. But I don’t see nearly enough attention given to what kinds of technological innovation are really needed to produce viable local and regional energy systems that don’t depend on species-destroying solar farms and wind farms that require giant land grabs in the desert.

The kinds of conversations around technology that I think we need are those among folks who know how to write law and policy, folks who know how to do material science, folks who are interested in architecture and park design, and folks who are involved in land struggles and solidarity movements. I want to see us do much savvier scientific, technological, and political thinking with each other, and I want to see it get press. The Stewart Brand types are never going there. 

Do you see clear limitations in their worldviews and their politics?

They remain remarkably humanist in their orientation, in their cognitive apparatus, and in their vision of the world. They also have an almost Peter Pan quality. They never quite grew up. They say, “If it’s broken, fix it.” 

This comes from an incapacity to mourn and an incapacity to be finite. I mean that psychoanalytically: an incapacity to understand that there is no status quo ante, to understand that death and loss are real. Only within that understanding is it possible to open up to a kind of vitality that isn’t double death, that isn’t extermination, and which doesn’t yearn for transcendence, yearn for the fix.

There’s not much mourning with the Stewart Brand types. There’s not much felt loss of the already disappeared, the already dead — the disappeared of Argentina, the disappeared of the caravans, the disappeared of the species that will not come back. You can try to do as much resurrection biology as you want to. But any of the biologists who are actually involved in the work are very clear that there is no resurrection. 

You have also been critical of the Anthropocene, as a proposed new geological epoch defined by human influence on the earth. Do you see the idea of the Anthropocene as having similar limitations?

I think the Anthropocene framework has been a fertile container for quite a lot, actually. The Anthropocene has turned out to be a rather capacious territory for incorporating people in struggle. There are a lot of interesting collaborations with artists and scientists and activists going on.

The main thing that’s too bad about the term is that it perpetuates the misunderstanding that what has happened is a human species act, as if human beings as a species necessarily exterminate every planet we dare to live on. As if we can’t stop our productive and reproductive excesses. 

Extractivism and exterminationism are not human species acts. They come from a situated historical conjuncture of about five hundred years in duration that begins with the invention of the plantation and the subsequent modeling of industrial capitalism. It is a situated historical conjuncture that has had devastating effects even while it has created astonishing wealth. 

To define this as a human species act affects the way a lot of scientists think about the Anthropocene. My scientist colleagues and friends really do continue to think of it as something human beings can’t stop doing, even while they understand my historical critique and agree with a lot of it. 

It’s a little bit like the relativism versus objectivity problem. The old languages have a deep grip. The situated historical way of thinking is not instinctual for Western science, whose offspring are numerous. 

Are there alternatives that you think could work better than the Anthropocene?

There are plenty of other ways of thinking. Take climate change. Now, climate change is a necessary and essential category. But if you go to the circumpolar North as a Southern scientist wanting to collaborate with Indigenous people on climate change — on questions of changes in the sea ice, for example, or changes in the hunting and subsistence base — the limitations of that category will be profound. That’s because it fails to engage with the Indigenous categories that are actually active on the ground. 

There is an Inuktitut word, “sila.” In an Anglophone lexicon, “sila” will be translated as “weather.” But in fact, it’s much more complicated. In the circumpolar North, climate change is a concept that collects a lot of stuff that the Southern scientist won’t understand. So the Southern scientist who wants to collaborate on climate change finds it almost impossible to build a contact zone. 

Anyway, there are plenty of other ways of thinking about shared contemporary problems. But they require building contact zones between cognitive apparatuses, out of which neither will leave the same as they were before. These are the kinds of encounters that need to be happening more.

A final question. Have you been following the revival of socialism, and socialist feminism, over the past few years? 

Yes.

What do you make of it? I mean, socialist feminism is becoming so mainstream that even Harper’s Bazaar is running essays on “emotional labor.”

I’m really pleased! The old lady is happy. I like the resurgence of socialism. For all the horror of Trump, it has released us. A whole lot of things are now being seriously considered, including mass nonviolent social resistance. So I am not in a state of cynicism or despair.

An excerpted version of this interview originally appeared in The Guardian.

https://logicmag.io/nature/a-giant-bumptious-litter/

Canada's Inuit: a long journey home (Estadão)

The Economist

March 4, 2015 | 3:00 a.m.

Skeletons were recently discovered in a French museum, but the path to repatriating them is not easy

In August 1880, eight Inuit from Canada's northeastern coast agreed to travel to Europe to be exhibited in a human zoo. Shortly afterward, before they could return home, they died of smallpox. The skeletons of Abraham Ulrikab and most of his companions were recently discovered, fully mounted for display, in the storerooms of a French museum. Inuit elders want the remains of their people, even those who died in the 19th and 20th centuries far from the northern hunting grounds, brought back to their homeland. But that will take a long time.

The government of Nunatsiavut, an Inuit region of northern Labrador created in 2005, has already recovered human remains from museums in Chicago and Newfoundland. David Lough, Nunatsiavut's deputy minister of culture, is not sure how many more remain to be claimed. But he believes that over 500 years of contact between Labrador and the outside world, many people and artifacts ended up on the other side of the ocean. Nancy Columbia was part of a group charged with presenting Inuit culture at the Chicago World's Fair, and went on to Hollywood, where she starred in Westerns as a Native American princess.


The government is seeking descendants to decide what will be done

Until recently, museums resisted returning human remains, in the name of science and cultural preservation. The British Museum's Egyptian mummies and the Amazonian tsantsas (shrunken heads) at Oxford's Pitt Rivers Museum are among the most important pieces in their collections. But, under pressure from indigenous groups, they have begun to yield. The UN Declaration on the Rights of Indigenous Peoples, adopted in 2007, enshrines the right to reclaim human remains, as does legislation in Britain, Australia and the United States (though not Canada). Dozens of museums (including the British Museum and the Pitt Rivers) have drawn up repatriation policies and ethical codes on the treatment of human remains. The Musée de l'Homme in France, where the skeletons of Abraham Ulrikab and his companions are kept, intends to return them, says France Rivet, author of a new book on the group's saga. "They are just waiting for a request from Canada," she says.

The request has not come, says Lough, partly because "the Inuit want everyone to be consulted." The fragile situation of Inuit communities makes that difficult. Hebron, the Ulrikab family's homeland, was founded by Moravian missionaries. But the settlement was abandoned in 1959, when the mission closed, and the family's descendants scattered. They must be found to help decide where the remains should be buried and what kind of ceremony should be held. Nakvak, the place of origin of other members of the original group, now lies within Torngat Mountains National Park, and there are bureaucratic obstacles to using it as a burial site.

Only after the Inuit decide what to do with the remains can negotiations begin between the governments of Canada and France over their return and the payment of repatriation costs. In 2013, Stephen Harper, Canada's prime minister, and President François Hollande of France agreed to collaborate on repatriation. But South Africa waited eight years for Saartjie Baartman, the "Hottentot Venus," after Nelson Mandela requested her return in 1994. For Abraham Ulrikab and his friends, at least, the journey home has begun.

© 2015 THE ECONOMIST NEWSPAPER LIMITED. ALL RIGHTS RESERVED. TRANSLATED BY ANNA CAPOVILLA, PUBLISHED UNDER LICENSE. THE ORIGINAL ENGLISH TEXT IS AT WWW.ECONOMIST.COM