Thu 25 Mar 2021 14.00 GMT Last modified on Thu 25 Mar 2021 16.44 GMT
The embattled indigenous peoples of Latin America are by far the best guardians of the region’s forests, according to a UN report, with deforestation rates up to 50% lower in their territories than elsewhere.
Protecting the vast forests is vital to tackling the climate crisis and plummeting populations of wildlife, and the report found that recognising the rights of indigenous and tribal peoples to their land is one of the most cost-effective actions. The report also calls for the peoples to be paid for the environmental benefits their stewardship provides, and for funding for the revitalisation of their ancestral knowledge of living in harmony with nature.
However, the demand for beef, soy, timber, oil and minerals means the threats to indigenous peoples and their forest homes are rising. Hundreds of community leaders have been killed because of disputes over land in recent years and the Covid-19 pandemic has added to the dangers forest peoples face.
Demands by indigenous peoples for their rights have become increasingly visible in recent years, the report said, but this has come with increasing persecution, racism, and assassinations. Supporting these peoples to protect the forests is particularly crucial now with scientists warning that the Amazon is nearing a tipping point where it switches from rainforest to savannah, risking the release of billions of tonnes of carbon into the atmosphere.
The report was produced by the UN Food and Agriculture Organization and the Fund for the Development of Indigenous Peoples of Latin America and the Caribbean (Filac), based on a review of more than 300 studies.
“Almost half of the intact forests in the Amazon basin are in indigenous territories and the evidence of their vital role in forest protection is crystal clear,” said the president of Filac, Myrna Cunningham, an indigenous woman from Nicaragua. “While the area of intact forest declined by only 5% between 2000 and 2016 in the region’s indigenous areas, in the non-indigenous areas it fell by 11%. This is why [indigenous peoples’] voice and vision should be taken into account in all global initiatives relating to climate change, biodiversity and forestry.”
“Indigenous peoples have a different concept of forests,” she said. “They are not seen as a place where you take out resources to increase your money – they are seen as a space where we live and that is given to us to protect for the next generations.”
Indigenous and tribal territories contain about a third of all the carbon stored in the forests of Latin America, said Julio Berdegué, the FAO’s Regional Representative: “These peoples are rich when it comes to culture, knowledge, and natural resources, but some of the poorest when it comes to incomes and access to services.” Supporting them would also help avoid new pandemics, he said, as these are most often the result of the destruction of nature.
“Even under siege from Covid-19 and a frightening rise in invasions from outsiders, we remain the ones who can stop the destruction of our forests and their biodiverse treasures,” said José Gregorio Diaz Mirabal, indigenous leader of an umbrella group, the Coordinator of the Indigenous Organizations of the Amazon Basin. He said the report’s evidence supports his call for climate funds to go directly to indigenous peoples and not governments vulnerable to corruption.
The report found the best forest protection was provided by peoples with collective legal titles to their lands. A 12-year study in the Bolivian, Brazilian and Colombian Amazon found deforestation rates in such territories were only one-half to one-third of those in other, similar forests. Even though indigenous territories cover 28% of the Amazon basin, they generated only 2.6% of the region’s carbon emissions, the report said.
Indigenous peoples occupy 400m hectares of land in the region, but there is no legal recognition of their property rights in a third of this area. “While the impact of guaranteeing tenure security is great, the cost is very low,” the report said, needing less than $45 per hectare for the mapping, negotiation and legal work required.
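The scale of that claim can be checked with simple arithmetic. The following back-of-envelope estimate uses only the figures cited here (400m hectares occupied, a third without legal recognition, under $45 per hectare); the resulting total is an illustration, not a number from the report:

```python
# Back-of-envelope estimate from the figures quoted in the report:
# 400 million hectares occupied, one third lacking legal recognition,
# and under $45/hectare for mapping, negotiation and legal work.
total_hectares = 400_000_000
untitled_hectares = total_hectares / 3   # ~133 million ha without secure tenure
cost_per_hectare = 45                    # upper bound, in US dollars

total_cost = untitled_hectares * cost_per_hectare
print(f"Upper-bound cost: ${total_cost / 1e9:.1f} billion")  # roughly $6.0 billion
```

On those numbers, securing tenure across every untitled territory in the region would cost on the order of $6bn in total, a fraction of what is spent annually on climate mitigation.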
The report said it would cost many times more to prevent carbon emissions from fossil fuel burning using carbon capture and storage technology on power plants. The granting of land rights to indigenous people has increased over the last 20 years, Cunningham said, but has slowed down in recent years.
Paying indigenous and tribal communities for the environmental services of their territories has reduced deforestation in countries including Ecuador, Mexico, and Peru. Berdegué said such programmes could attract hundreds of millions of dollars per year from international sources.
The need for protection is urgent, the report said, with annual deforestation rates in Brazil’s indigenous territories rising from 10,000 hectares in 2017 to 43,000 hectares in 2019. In January, indigenous leaders urged the international criminal court to investigate Brazil’s president, Jair Bolsonaro, over his dismantling of environmental policies and violations of indigenous rights.
Elsewhere, the area of large intact forests in indigenous territories fell between 2000 and 2016, with 59% lost in Paraguay, 42% in Nicaragua, 30% in Honduras and 20% in Bolivia. Mining and oil concessions now overlay almost a quarter of the land in Amazon basin indigenous and tribal territories, the report said.
The U.S. National Academy of Sciences has published a new report (“Reflecting Sunlight”) on the topic of geoengineering (that is, the deliberate manipulation of the global Earth environment in an effort to offset the effects of human carbon pollution-caused climate change). While I am, in full disclosure, a member of the Academy, I offer the following comments in an entirely independent capacity:
Let me start by congratulating the authors on their comprehensive assessment of the science. It is solid, as we would expect given the expertise of the author team and reviewers, and the science underlying geoengineering is the true remit of the study. Chris Field, the lead author, is eminently qualified to lead the effort, and did a good job making sure that the intricacies of the science are covered, including the substantial uncertainties and caveats when it comes to the potential environmental impacts of some of the riskier geoengineering strategies (e.g. stratospheric sulphate aerosol injection to block out sunlight).
I like the fact that there is a discussion of the importance of labels and terminology and how these can shape public perception. For example, the oft-used term “solar radiation management” is not favored by the report authors, as it can be misleading (we don’t have our hand on a dial that controls solar output). On the other hand, I think the term they do choose to use, “solar geoengineering”, is still potentially problematic, because it still implies we’re directly modifying solar output, and that’s not the case. We’re talking about messing with Earth’s atmospheric chemistry; we’re not dialing down the sun, even though many of the modeling experiments assume that’s what we’re doing. It’s a bit of a bait and switch. Even the title of the report, “Reflecting Sunlight”, falls victim to this biased framing.
“They don’t actually put aerosols in the atmosphere. They turn down the Sun to mimic geoengineering. You might think that is relatively unimportant . . . [but] controlling the Sun is effectively a perfect knob. We know almost precisely how a reduction in solar flux will project onto the energy balance of a planet. Aerosol-climate interactions are much more complex.”
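The distinction matters quantitatively. In the idealized modeling experiments, “turning down the Sun” maps cleanly onto the planetary energy balance. A toy zero-dimensional calculation (an illustrative sketch of my own, not taken from the report) shows just how directly a cut in solar flux translates into cooling:

```python
# Toy zero-dimensional energy-balance model (illustrative sketch only,
# not from the NAS report). It shows why a uniform reduction in solar
# flux is a "perfect knob": equilibrium temperature follows directly
# from T = [S * (1 - albedo) / (4 * sigma)] ** 0.25.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
ALBEDO = 0.3      # planetary albedo
S0 = 1361.0       # solar constant, W m^-2

def equilibrium_temp(solar_flux: float) -> float:
    """Effective radiating temperature (K) for a given solar constant."""
    absorbed = solar_flux * (1 - ALBEDO) / 4.0
    return (absorbed / SIGMA) ** 0.25

baseline = equilibrium_temp(S0)        # ~255 K effective temperature
dimmed = equilibrium_temp(0.99 * S0)   # "turn down the Sun" by 1%
print(f"1% solar dimming cools the planet by {baseline - dimmed:.2f} K")
```

No comparably clean formula exists for stratospheric aerosols, whose radiative effects depend on particle size, distribution, chemistry and circulation; that is precisely the complexity the quote above is pointing at.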
I have a deeper and more substantive concern though, and it really is about the entire framing of the report. A report like this is as much about the policy message it conveys as it is about the scientific assessment, for it will be used immediately by policy advocates. And here I’m honestly troubled at the fodder it provides for mis-framing of the risks.
I recognize that the authors are dealing with a contentious and still much-debated topic, and it’s a challenge to represent the full range of views within the community, but the opening of the report itself, in my view, really puts a thumb on the scales. It falls victim to the moral hazard that I warn about in “The New Climate War” when it states, as justification for potentially considering implementing these geoengineering schemes:
But despite overwhelming evidence that the climate crisis is real and pressing, emissions of greenhouse gases continue to increase, with global emissions of fossil carbon dioxide rising 10.8 percent from 2010 through 2019. The total for 2020 is on track to decrease in response to decreased economic activity related to the COVID-19 pandemic. The pandemic is thus providing frustrating confirmation of the fact that the world has made little progress in separating economic activity from carbon dioxide emissions.
First of all, the discussion of carbon emissions reductions there is misleading. Emissions flattened in the years before the pandemic, and the International Energy Agency (IEA) specifically attributed that flattening to a decrease in global carbon emissions from the power generation sector. Those reductions have continued, and contributed at least partly to the 7% decrease in global emissions last year. We will certainly need policy interventions favoring further decarbonization to maintain that level of decrease year after year, but if we can do that, we remain on a path to limiting warming below dangerous levels (a decent chance of staying below 1.5C and a very good chance of staying below 2C) without resorting to very risky geoengineering schemes. It is a matter of political willpower, not technology: we already have the technology necessary to decarbonize our economy.
The authors are basically arguing that because carbon reductions haven’t been great enough (thanks to successful opposition by polluters and their advocates) we should consider geoengineering. That framing (unintentionally, I realize) provides precisely the crutch that polluters are looking for.
As I explain in the book:
A fundamental problem with geoengineering is that it presents what is known as a moral hazard, namely, a scenario in which one party (e.g., the fossil fuel industry) promotes actions that are risky for another party (e.g., the rest of us), but seemingly advantageous to itself. Geoengineering provides a potential crutch for beneficiaries of our continued dependence on fossil fuels. Why threaten our economy with draconian regulations on carbon when we have a cheap alternative? The two main problems with that argument are that (1) climate change poses a far greater threat to our economy than decarbonization, and (2) geoengineering is hardly cheap—it comes with great potential harm.
So, in short, this report is somewhat of a mixed bag. The scientific assessment and discussion are solid, and the detailed report addresses uncertainties and caveats. But the spin in the opening falls victim to the moral hazard and will provide fodder for geoengineering advocates to use in leveraging policy decision-making.
Despite warnings, American and European officials gave up leverage that could have guaranteed access for billions of people. That risks prolonging the pandemic.
In the coming days, a patent will finally be issued on a five-year-old invention, a feat of molecular engineering that is at the heart of at least five major Covid-19 vaccines. And the United States government will control that patent.
The new patent presents an opportunity — and some argue the last best chance — to exact leverage over the drug companies producing the vaccines and pressure them to expand access to less affluent countries.
The question is whether the government will do anything at all.
The rapid development of Covid-19 vaccines, achieved at record speed and financed by massive public funding in the United States, the European Union and Britain, represents a great triumph of the pandemic. Governments partnered with drugmakers, pouring in billions of dollars to procure raw materials, finance clinical trials and retrofit factories. Billions more were committed to buy the finished product.
But this Western success has created stark inequity. Residents of wealthy and middle-income countries have received about 90 percent of the nearly 400 million vaccine doses delivered so far. Under current projections, many in the rest of the world will have to wait years.
Growing numbers of health officials and advocacy groups worldwide are calling for Western governments to use aggressive powers — most of them rarely or never used before — to force companies to publish vaccine recipes, share their know-how and ramp up manufacturing. Public health advocates have pleaded for help, including asking the Biden administration to use its patent to push for broader vaccine access.
Governments have resisted. By partnering with drug companies, Western leaders bought their way to the front of the line. But they also ignored years of warnings — and explicit calls from the World Health Organization — to include contract language that would have guaranteed doses for poor countries or encouraged companies to share their knowledge and the patents they control.
“It was like a run on toilet paper. Everybody was like, ‘Get out of my way. I’m gonna get that last package of Charmin,’” said Gregg Gonsalves, a Yale epidemiologist. “We just ran for the doses.”
The prospect of billions of people waiting years to be vaccinated poses a health threat to even the richest countries. One example: In Britain, where the vaccine rollout has been strong, health officials are tracking a virus variant that emerged in South Africa, where vaccine coverage is weak. That variant may be able to blunt the effect of vaccines, meaning even vaccinated people might get sick.
Western health officials said they never intended to exclude others. But with their own countries facing massive death tolls, the focus was at home. Patent sharing, they said, simply never came up.
“It was U.S.-centric. It wasn’t anti-global,” said Moncef Slaoui, who was the chief scientific adviser for Operation Warp Speed, a Trump administration program that funded the search for vaccines in the United States. “Everybody was in agreement that vaccine doses, once the U.S. is served, will go elsewhere.”
President Biden and Ursula von der Leyen, the president of the European Union’s executive branch, are reluctant to change course. Mr. Biden has promised to help an Indian company produce about 1 billion doses by the end of 2022 and his administration has donated doses to Mexico and Canada. But he has made it clear that his focus is at home.
“We’re going to start off making sure Americans are taken care of first,” Mr. Biden said recently. “But we’re then going to try and help the rest of the world.”
Pressuring companies to share patents could be seen as undermining innovation, sabotaging drugmakers or picking drawn-out and expensive fights with the very companies digging a way out of the pandemic.
As rich countries fight to keep things as they are, others like South Africa and India have taken the battle to the World Trade Organization, seeking a waiver on patent restrictions for Covid-19 vaccines.
Russia and China, meanwhile, have promised to fill the void as part of their vaccine diplomacy. The Gamaleya Institute in Moscow, for example, has entered into partnerships with producers from Kazakhstan to South Korea, according to data from Airfinity, a science analytics company, and UNICEF. Chinese vaccine makers have reached similar deals in the United Arab Emirates, Brazil and Indonesia.
Addressing patents would not, by itself, solve the vaccine imbalance. Retrofitting or constructing factories would take time. More raw materials would need to be manufactured. Regulators would have to approve new assembly lines.
And as with cooking a complicated dish, giving someone a list of ingredients is no substitute for showing them how to make it.
To address these problems, the World Health Organization created a technology pool last year to encourage companies to share know-how with manufacturers in lower-income nations.
Not a single vaccine company has signed up.
“The problem is that the companies don’t want to do it. And the government is just not very tough with the companies,” said James Love, who leads Knowledge Ecology International, a nonprofit.
Drug company executives told European lawmakers recently that they were licensing their vaccines as quickly as possible, but that finding partners with the right technology was challenging.
“They don’t have the equipment,” Moderna’s chief executive, Stéphane Bancel, said. “There is no capacity.”
But manufacturers from Canada to Bangladesh say they can make vaccines — they just lack patent licensing deals. When the price is right, companies have shared secrets with new manufacturers in just months, ramping up production and retrofitting factories.
It helps when the government sweetens the deal. Earlier this month, Mr. Biden announced that the pharmaceutical giant Merck would help make vaccines for its competitor Johnson & Johnson. The government pressured Johnson & Johnson to accept the help and is using wartime procurement powers to secure supplies for the company. It will also pay to retrofit Merck’s production line, with an eye toward making vaccines available to every adult in the United States by May.
Despite the hefty government funding, drug companies control nearly all of the intellectual property and stand to make fortunes off the vaccines. A critical exception is the patent expected to be approved soon — a government-led discovery for manipulating a key coronavirus protein.
This breakthrough, at the center of the 2020 race for a vaccine, actually came years earlier in a National Institutes of Health lab, where an American scientist named Dr. Barney Graham was in pursuit of a medical moonshot.
‘We’d already done everything’
For years, Dr. Graham specialized in the kind of long, expensive research that only governments bankroll. He searched for a key to unlock universal vaccines — genetic blueprints to be used against any of the roughly two dozen viral families that infect humans. When a new virus emerged, scientists could simply tweak the code and quickly make a vaccine.
In 2016, while working on Middle East Respiratory Syndrome, another coronavirus known as MERS, he and his colleagues developed a way to swap a pair of amino acids in the coronavirus spike protein. That bit of molecular engineering, they realized, could be used to develop effective vaccines against any coronavirus. The government, along with its partners at Dartmouth College and the Scripps Research Institute, filed for a patent, which will be issued this month.
When Chinese scientists published the genetic code of the new coronavirus in January 2020, Dr. Graham’s team had their cookbook ready.
“We kind of knew exactly what we had to do,” said Jason McLellan, one of the inventors, who now works at the University of Texas at Austin. “We’d already done everything.”
Dr. Graham was already working with Moderna on a vaccine for another virus when the outbreak in China inspired his team to change focus. “We just flipped it to coronavirus and said, ‘How fast can we go?’” Dr. Graham recalled.
Within a few days, they emailed the vaccine’s genetic blueprint to Moderna to begin manufacturing. By late February, Moderna had produced enough vaccines for government-run clinical trials.
“We did the front end. They did the middle. And we did the back end,” Dr. Graham said.
Exactly who holds patents for which vaccines won’t be sorted out for months or years. But it is clear now that several of today’s vaccines — including those from Moderna, Johnson & Johnson, Novavax, CureVac and Pfizer-BioNTech — rely on the 2016 invention. Of those, only BioNTech has paid the U.S. government to license the technology. The patent is scheduled to be issued March 30.
Patent lawyers and public health advocates say it’s likely that other companies will either have to negotiate a licensing agreement with the government, or face the prospect of a lawsuit worth billions. The government filed such a lawsuit in 2019 against the drugmaker Gilead over H.I.V. medication.
This gives the Biden administration leverage to force companies to share technology and expand worldwide production, said Christopher J. Morten, a New York University law professor specializing in medical patents.
“We can do this the hard way, where we sue you for patent infringement,” he said the government could assert. “Or just play nice with us and license your tech.”
The National Institutes of Health declined to comment on its discussions with the drugmakers but said it did not anticipate a dispute over patent infringement. None of the drug companies responded to repeated questions about the 2016 patent.
Experts said the government has stronger leverage over the Moderna vaccine, which was almost entirely funded by taxpayers. New mRNA vaccines, such as Moderna’s, are easier to manufacture than vaccines that rely on live viruses. Scientists compare the technology to an old-fashioned cassette player: try one tape, and if it’s not right, just pop in another.
Moderna expects $18.4 billion in vaccine sales this year, but it is the delivery system — the cassette player — that is its most prized secret. Disclosing it could mean giving away the key to the company’s future.
“There should be no division in order to win this battle,” President Emmanuel Macron of France said.
Yet European governments had backed their own champions. The European Investment Bank lent nearly $120 million to BioNTech, a German company, and Germany bought a $360 million stake in the biotech firm CureVac after reports that it was being lured to the United States.
“We funded the research, on both sides of the Atlantic,” said Udo Bullmann, a German member of the European Parliament. “You could have agreed on a paragraph that says ‘You are obliged to give it to poor countries in a way that they can afford it.’ Of course you could have.”
A People’s Vaccine
In May, the leaders of Pakistan, Ghana, South Africa and others called for governments to support a “people’s vaccine” that could be quickly manufactured and given for free.
They urged the governing body of the World Health Organization to treat vaccines as “global public goods.”
Though such a declaration would have had no teeth, the Trump administration moved swiftly to block it. Intent on protecting intellectual property, the government said calls for equitable access to vaccines and treatments sent “the wrong message to innovators.”
World leaders ultimately approved a watered-down declaration that recognized extensive immunization — not the vaccines themselves — as a global public good.
That same month, the World Health Organization launched the technology-access pool and called on governments to include clauses in their drug contracts guaranteeing equitable distribution. But the world’s richest nations roundly ignored the call.
In the United States, Operation Warp Speed went on a summertime spending spree, disbursing over $10 billion to handpicked companies and absorbing the financial risks of bringing a vaccine to market.
“Our role was to enable the private sector to be successful,” said Paul Mango, a top adviser to the then health secretary, Alex M. Azar II.
The deals came with few strings attached.
Large chunks of the contracts are redacted and some remain secret. But public records show that the government used unusual contracts that omitted its right to take over intellectual property or influence the price and availability of vaccines. They did not let the government compel companies to share their technology.
British and other European leaders made similar concessions as they ordered enough doses to vaccinate their populations multiple times over.
“You have to write the rules of the game, and the place to do that would have been these funding contracts,” said Ellen ’t Hoen, the director of Medicines Law and Policy, an international research group.
By comparison, one of the world’s largest health financiers, the Bill & Melinda Gates Foundation, includes grant language requiring equitable access to vaccines. As leverage, the organization retains some right to the intellectual property.
Dr. Slaoui, who came to Warp Speed after leading research and development at GlaxoSmithKline, is sympathetic to this idea. But it would have been impractical to demand patent concessions and still deliver on the program’s primary goals of speed and volume, he said.
“I can guarantee you that the agreements with the companies would have been much more complex and taken a much longer time,” he said. The European Union, for example, haggled over price and liability provisions, which delayed the rollout.
In some ways, this was a trip down a well-trodden path. When the H1N1 “swine flu” pandemic broke out in 2009, the wealthiest countries cornered the global vaccine market and all but locked out the rest of the world.
Experts said at the time that this was a chance to rethink the approach. But the swine flu pandemic fizzled and governments ended up destroying the vaccines they had hoarded. They then forgot to prepare for the future.
The International View
For months, the United States and European Union have blocked a proposal at the World Trade Organization that would waive intellectual property rights for Covid-19 vaccines and treatments. The application, put forward by South Africa and India with support from most developing nations, has been bogged down in procedural hearings.
“Every minute we are deadlocked in the negotiating room, people are dying,” said Mustaqeem De Gama, a South African diplomat who is involved in the talks.
But in Brussels and Washington, leaders are still worried about undermining innovation.
During the presidential campaign, Mr. Biden’s team gathered top intellectual property lawyers to discuss ways to increase vaccine production.
“They were planning on taking the international view on things,” said Ana Santos Rutschman, a Saint Louis University law professor who participated in the sessions.
Most of the options were politically thorny. Among them was the use of a federal law allowing the government to seize a company’s patent and give it to another in order to increase supply. Former campaign advisers say the Biden camp was lukewarm to this proposal and others that called for a broader exercise of its powers.
The administration has instead promised to give $4 billion to Covax, the global vaccine alliance. The European Union has given nearly $1 billion so far. But Covax aims to vaccinate only 20 percent of people in the world’s poorest countries this year, and faces a $2 billion shortfall even to accomplish that.
Dr. Graham, the N.I.H. scientist whose team cracked the coronavirus vaccine code for Moderna, said that pandemic preparedness and vaccine development should be international collaborations, not competitions.
“A lot of this would not have happened unless there was a big infusion of government money,” he said.
But governments cannot afford to sabotage companies that need profit to survive.
Dr. Graham has largely moved on from studying the coronavirus. He is searching for a universal flu vaccine, a silver bullet that could prevent all strains of the disease without an annual tweak.
Though he was vaccinated through work, he spent the early part of the year trying to get his wife and grown children onto waiting lists — an ordeal that even one of the key inventors had to endure. “You can imagine how aggravating that is,” he said.
Matina Stevis-Gridneff and Monika Pronczuk contributed reporting.
Wed 17 Mar 2021 07.01 GMT Last modified on Thu 18 Mar 2021 14.38 GMT
A remarkable new study on how whales behaved when attacked by humans in the 19th century has implications for the way they react to changes wreaked by humans in the 21st century.
The paper, published by the Royal Society on Wednesday, is authored by Hal Whitehead and Luke Rendell, pre-eminent scientists working with cetaceans, and Tim D Smith, a data scientist, and their research addresses an age-old question: if whales are so smart, why did they hang around to be killed? The answer? They didn’t.
Using newly digitised logbooks detailing the hunting of sperm whales in the north Pacific, the authors discovered that within just a few years, the strike rate of the whalers’ harpoons fell by 58%. This simple fact leads to an astonishing conclusion: that information about what was happening to them was being collectively shared among the whales, who made vital changes to their behaviour. As their culture made fatal first contact with ours, they learned quickly from their mistakes.
“Sperm whales have a traditional way of reacting to attacks from orca,” notes Hal Whitehead, who spoke to the Guardian from his house overlooking the ocean in Halifax, Nova Scotia, where he teaches at Dalhousie University. Before humans, orca were their only predators, against whom sperm whales form defensive circles, their powerful tails held outwards to keep their assailants at bay. But such techniques “just made it easier for the whalers to slaughter them”, says Whitehead.
It was a frighteningly rapid killing, and it accompanied other threats to the ironically named Pacific. From whaling and sealing stations to missionary bases, western culture was imported to an ocean that had remained largely untouched. As Herman Melville, himself a whaler in the Pacific in 1841, would write in Moby-Dick (1851): “The moot point is, whether Leviathan can long endure so wide a chase, and so remorseless a havoc.”
Sperm whales are highly socialised animals, able to communicate over great distances. They associate in clans defined by the dialect pattern of their sonar clicks. Their culture is matrilineal, and information about the new dangers may have been passed on in the same way whale matriarchs share knowledge about feeding grounds. Sperm whales also possess the largest brain on the planet. It is not hard to imagine that they understood what was happening to them.
The hunters themselves noticed the whales’ efforts to escape. They saw that the animals appeared to communicate the threat within their attacked groups. Abandoning their usual defensive formations, the whales swam upwind to escape the hunters’ ships, themselves wind-powered. “This was cultural evolution, much too fast for genetic evolution,” says Whitehead.
And in turn, it evokes another irony. Now, just as whales are beginning to recover from the industrial destruction wrought by 20th-century whaling fleets – whose steamships and grenade harpoons no whale could evade – they face new threats created by our technology. “They’re having to learn not to get hit by ships, cope with the depredations of longline fishing, the changing source of their food due to climate change,” says Whitehead. Perhaps the greatest modern peril is noise pollution, one they can do nothing to evade.
Whitehead and Rendell have written persuasively of whale culture, expressed in localised feeding techniques as whales adapt to shifting sources, or in subtle changes in humpback song whose meaning remains mysterious. The same sort of urgent social learning the animals experienced in the whale wars of two centuries ago is reflected in the way they negotiate today’s uncertain world and what we’ve done to it.
As Whitehead observes, whale culture is many millions of years older than ours. Perhaps we need to learn from them as they learned from us. After all, it was the whales that provoked Melville to his prophesies in Moby-Dick. “We account the whale immortal in his species, however perishable in individuality,” he wrote, “and if ever the world is to be again flooded … then the eternal whale will still survive, and … spout his frothed defiance to the skies.”
• This article was amended on 18 March 2021 to make clear the status of “Dalhousie” as a university, not a placename.
In the grasslands of Ethiopia, scientists were amazed to find a striking example of inter-species collaboration. Ethiopian wolves were seen casually strolling among herds of gelada monkeys, which you would expect to flee from such a predator. But the monkeys seem to tolerate the wolves' presence and show no fear of them. The wolves, in turn, ignore the geladas' potential as meals, preferring to linger around the herd because it helps them catch more rodents. This odd relationship resembles the ancient domestication of dogs or cats by humans, some researchers say.
Live and let live
Gelada monkeys (Theropithecus gelada) look a lot like baboons. These primates are known to live in close-knit family groups, but can also live as part of shockingly vast communities consisting of hundreds of individuals. They live peacefully even in the most numerous communities, a relatively rare achievement in the wilds of Africa.
Geladas are graminivores: grass makes up about 90% of their diet. They're essentially the only living primates that subsist almost entirely on grass, a trait more commonly seen in ungulates like deer and cattle.
While the primates congregate in huge herds, munching on grass for hours on end, the shrewd (and endangered) Ethiopian wolf (Canis simensis) mingles with the geladas. Usually, the wolves travel in zigzags, sprinting when they sense prey within their grasp. Around the geladas, though, the wolves roam casually, careful not to startle the herd.
Researchers at Dartmouth College observed the dynamics between the species for a new study. They conclude that the Ethiopian wolf is not interested in geladas as food, although it has no qualms about hunting juvenile sheep and goats. The monkeys seem to know this: they don't appear to feel threatened in the predators' presence. But why is that?
After following Ethiopian wolves for 17 days, the researchers found that those individuals which hunted rodents within a gelada herd were successful 67% of the time, compared to a success rate of only 25% when they prowled on their own. The findings were reported in the Journal of Mammalogy.
“For Ethiopian wolves, establishing proximity to geladas as foraging commensals could be an adaptive strategy to elevate foraging success. The novel dynamics documented here shed light on the ecological circumstances that contribute to the stability of mixed groups of predators and potential prey,” wrote the authors of the new study.
For now, it's not clear what makes the wolves more successful when hunting within gelada groups. The monkeys' incessant grazing might be flushing rodents out of their burrows, but that's just an unverified hypothesis at the moment. Alternatively, the monkeys might be providing cover for the wolves, distracting the rodents from the predator in their midst.
Sometimes, a wolf will attack a young gelada. In one such instance, the other monkeys in the herd quickly attacked the wolf, forcing it to drop the infant. After the wolf was driven away, it was never allowed in the midst of the herd again. Other wolves seem to understand this dynamic very well and will resist the temptation of a quick gelada meal in favor of better dividends in the long run.
The researchers say that the Ethiopian wolf might also be hanging around other species, such as cattle, to hunt more rodents. It's also possible that other predator species are doing something similar without our having noticed yet.
What’s intriguing is that the gradual toleration between the two species is very similar to the domestication process performed by humans on dogs. The first wolves began to be domesticated by humans sometime between 40,000 and 11,000 years ago, but the details pertaining to how this happened are not clear. According to one hypothesis, wolves started hanging around humans, who would leave large carcasses behind them after each big hunt. Gradually, the two species became more accustomed to one another. Later, wolves may have helped humans on the hunt, cementing the relationship between the two.
Could the same thing be happening in Ethiopia's grasslands? Given a couple of thousand years, could we see geladas keeping wolves as pets? That would be quite the sight, but it's rather unlikely. The monkeys don't seem to derive any benefit from tolerating the wolves, and without a two-way exchange of value between the two species, domestication is unlikely to happen.
What's more, the Ethiopian wolf might go extinct before there's any reasonable time for domestication to play out. Researchers estimate that only about 450 adult Ethiopian wolves are left in the wild. Continuing habitat loss to high-altitude subsistence agriculture is the biggest current threat to the species.
Joaquin Quiñonero Candela, a director of AI at Facebook, was apologizing to his audience.
It was March 23, 2018, just days after the revelation that Cambridge Analytica, a consultancy that worked on Donald Trump’s 2016 presidential election campaign, had surreptitiously siphoned the personal data of tens of millions of Americans from their Facebook accounts in an attempt to influence how they voted. It was the biggest privacy breach in Facebook’s history, and Quiñonero had been previously scheduled to speak at a conference on, among other things, “the intersection of AI, ethics, and privacy” at the company. He considered canceling, but after debating it with his communications director, he’d kept his allotted time.
As he stepped up to face the room, he began with an admission. “I’ve just had the hardest five days in my tenure at Facebook,” he remembers saying. “If there’s criticism, I’ll accept it.”
The Cambridge Analytica scandal would kick off Facebook’s largest publicity crisis ever. It compounded fears that the algorithms that determine what people see on the platform were amplifying fake news and hate speech, and that Russian hackers had weaponized them to try to sway the election in Trump’s favor. Millions began deleting the app; employees left in protest; the company’s market capitalization plunged by more than $100 billion after its July earnings call.
In the ensuing months, Mark Zuckerberg began his own apologizing. He apologized for not taking “a broad enough view” of Facebook’s responsibilities, and for his mistakes as a CEO. Internally, Sheryl Sandberg, the chief operating officer, kicked off a two-year civil rights audit to recommend ways the company could prevent the use of its platform to undermine democracy.
Finally, Mike Schroepfer, Facebook’s chief technology officer, asked Quiñonero to start a team with a directive that was a little vague: to examine the societal impact of the company’s algorithms. The group named itself the Society and AI Lab (SAIL); last year it combined with another team working on issues of data privacy to form Responsible AI.
Quiñonero was a natural pick for the job. He, as much as anybody, was the one responsible for Facebook’s position as an AI powerhouse. In his six years at Facebook, he’d created some of the first algorithms for targeting users with content precisely tailored to their interests, and then he’d diffused those algorithms across the company. Now his mandate would be to make them less harmful.
Facebook has consistently pointed to the efforts by Quiñonero and others as it seeks to repair its reputation. It regularly trots out various leaders to speak to the media about the ongoing reforms. In May of 2019, it granted a series of interviews with Schroepfer to the New York Times, which rewarded the company with a humanizing profile of a sensitive, well-intentioned executive striving to overcome the technical challenges of filtering out misinformation and hate speech from a stream of content that amounted to billions of pieces a day. These challenges are so hard that they make Schroepfer emotional, wrote the Times: "Sometimes that brings him to tears."
In the spring of 2020, it was apparently my turn. Ari Entin, Facebook’s AI communications director, asked in an email if I wanted to take a deeper look at the company’s AI work. After talking to several of its AI leaders, I decided to focus on Quiñonero. Entin happily obliged. As not only the leader of the Responsible AI team but also the man who had made Facebook into an AI-driven company, Quiñonero was a solid choice to use as a poster boy.
He seemed a natural choice of subject to me, too. In the years since he’d formed his team following the Cambridge Analytica scandal, concerns about the spread of lies and hate speech on Facebook had only grown. In late 2018 the company admitted that this activity had helped fuel a genocidal anti-Muslim campaign in Myanmar for several years. In 2020 Facebook started belatedly taking action against Holocaust deniers, anti-vaxxers, and the conspiracy movement QAnon. All these dangerous falsehoods were metastasizing thanks to the AI capabilities Quiñonero had helped build. The algorithms that underpin Facebook’s business weren’t created to filter out what was false or inflammatory; they were designed to make people share and engage with as much content as possible by showing them things they were most likely to be outraged or titillated by. Fixing this problem, to me, seemed like core Responsible AI territory.
I began video-calling Quiñonero regularly. I also spoke to Facebook executives, current and former employees, industry peers, and external experts. Many spoke on condition of anonymity because they’d signed nondisclosure agreements or feared retaliation. I wanted to know: What was Quiñonero’s team doing to rein in the hate and lies on its platform?
But Entin and Quiñonero had a different agenda. Each time I tried to bring up these topics, my requests to speak about them were dropped or redirected. They only wanted to discuss the Responsible AI team’s plan to tackle one specific kind of problem: AI bias, in which algorithms discriminate against particular user groups. An example would be an ad-targeting algorithm that shows certain job or housing opportunities to white people but not to minorities.
By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.
The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.
In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.
“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley who collaborates with Facebook to understand image- and video-based misinformation on the platform.
“They always do just enough to be able to put the press release out. But with a few exceptions, I don’t think it’s actually translated into better policies. They’re never really dealing with the fundamental problems.”
In March of 2012, Quiñonero visited a friend in the Bay Area. At the time, he was a manager in Microsoft Research’s UK office, leading a team using machine learning to get more visitors to click on ads displayed by the company’s search engine, Bing. His expertise was rare, and the team was less than a year old. Machine learning, a subset of AI, had yet to prove itself as a solution to large-scale industry problems. Few tech giants had invested in the technology.
Quiñonero’s friend wanted to show off his new employer, one of the hottest startups in Silicon Valley: Facebook, then eight years old and already with close to a billion monthly active users (i.e., those who have logged in at least once in the past 30 days). As Quiñonero walked around its Menlo Park headquarters, he watched a lone engineer make a major update to the website, something that would have involved significant red tape at Microsoft. It was a memorable introduction to Zuckerberg’s “Move fast and break things” ethos. Quiñonero was awestruck by the possibilities. Within a week, he had been through interviews and signed an offer to join the company.
His arrival couldn’t have been better timed. Facebook’s ads service was in the middle of a rapid expansion as the company was preparing for its May IPO. The goal was to increase revenue and take on Google, which had the lion’s share of the online advertising market. Machine learning, which could predict which ads would resonate best with which users and thus make them more effective, could be the perfect tool. Shortly after starting, Quiñonero was promoted to managing a team similar to the one he’d led at Microsoft.
Unlike traditional algorithms, which are hard-coded by engineers, machine-learning algorithms “train” on input data to learn the correlations within it. The trained algorithm, known as a machine-learning model, can then automate future decisions. An algorithm trained on ad click data, for example, might learn that women click on ads for yoga leggings more often than men. The resultant model will then serve more of those ads to women. Today at an AI-based company like Facebook, engineers generate countless models with slight variations to see which one performs best on a given problem.
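To make the idea concrete, here is a toy sketch of the kind of model described above. The data, function names, and groupings are hypothetical illustrations, not Facebook's actual system: a "model" here is simply a table of click rates learned from logged impressions.

```python
# Toy illustration (not Facebook's code): "train" on ad-click data by
# learning the click rate per user group, then serve more ads to the
# group with the highest learned rate.
from collections import defaultdict

def train_click_model(click_log):
    """click_log: list of (group, clicked) pairs -> dict of group: click rate."""
    clicks = defaultdict(int)
    shown = defaultdict(int)
    for group, clicked in click_log:
        shown[group] += 1
        clicks[group] += int(clicked)
    return {g: clicks[g] / shown[g] for g in shown}

def preferred_group(model):
    """The group the trained model would target with more of these ads."""
    return max(model, key=model.get)

# Hypothetical log in which women clicked yoga-legging ads more often.
log = [("women", True), ("women", True), ("women", False),
       ("men", False), ("men", True), ("men", False)]
model = train_click_model(log)
print(preferred_group(model))  # women
```

A real ranking model learns from thousands of signals rather than one coarse label, but the loop is the same: past behavior in, predicted engagement out.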
Facebook’s massive amounts of user data gave Quiñonero a big advantage. His team could develop models that learned to infer the existence not only of broad categories like “women” and “men,” but of very fine-grained categories like “women between 25 and 34 who liked Facebook pages related to yoga,” and targeted ads to them. The finer-grained the targeting, the better the chance of a click, which would give advertisers more bang for their buck.
Within a year his team had developed these models, as well as the tools for designing and deploying new ones faster. Before, it had taken Quiñonero’s engineers six to eight weeks to build, train, and test a new model. Now it took only one.
News of the success spread quickly. The team that worked on determining which posts individual Facebook users would see on their personal news feeds wanted to apply the same techniques. Just as algorithms could be trained to predict who would click what ad, they could also be trained to predict who would like or share what post, and then give those posts more prominence. If the model determined that a person really liked dogs, for instance, friends’ posts about dogs would appear higher up on that user’s news feed.
Quiñonero’s success with the news feed—coupled with impressive new AI research being conducted outside the company—caught the attention of Zuckerberg and Schroepfer. Facebook now had just over 1 billion users, making it more than eight times larger than any other social network, but they wanted to know how to continue that growth. The executives decided to invest heavily in AI, internet connectivity, and virtual reality.
They created two AI teams. One was FAIR, a fundamental research lab that would advance the technology’s state-of-the-art capabilities. The other, Applied Machine Learning (AML), would integrate those capabilities into Facebook’s products and services. In December 2013, after months of courting and persuasion, the executives recruited Yann LeCun, one of the biggest names in the field, to lead FAIR. Three months later, Quiñonero was promoted again, this time to lead AML. (It was later renamed FAIAR, pronounced “fire.”)
“That’s how you know what’s on his mind. I was always, for a couple of years, a few steps from Mark’s desk.”
Joaquin Quiñonero Candela
In his new role, Quiñonero built a new model-development platform for anyone at Facebook to access. Called FBLearner Flow, it allowed engineers with little AI experience to train and deploy machine-learning models within days. By mid-2016, it was in use by more than a quarter of Facebook’s engineering team and had already been used to train over a million models, including models for image recognition, ad targeting, and content moderation.
Zuckerberg’s obsession with getting the whole world to use Facebook had found a powerful new weapon. Teams had previously used design tactics, like experimenting with the content and frequency of notifications, to try to hook users more effectively. Their goal, among other things, was to increase a metric called L6/7, the fraction of people who logged in to Facebook six of the previous seven days. L6/7 is just one of myriad ways in which Facebook has measured “engagement”—the propensity of people to use its platform in any way, whether it’s by posting things, commenting on them, liking or sharing them, or just looking at them. Now every user interaction once analyzed by engineers was being analyzed by algorithms. Those algorithms were creating much faster, more personalized feedback loops for tweaking and tailoring each user’s news feed to keep nudging up engagement numbers.
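As an illustration only (the L6/7 name comes from Facebook, but the computation below is a hypothetical reconstruction), a metric like that might be computed as the fraction of users who logged in on at least six of the previous seven days:

```python
# Hypothetical sketch of an L6/7-style engagement metric.
def l6_of_7(login_days_by_user):
    """login_days_by_user: user -> set of days (0-6) on which they logged in.
    Returns the fraction of users active on at least six of the seven days."""
    qualifying = sum(1 for days in login_days_by_user.values() if len(days) >= 6)
    return qualifying / len(login_days_by_user)

users = {
    "a": {0, 1, 2, 3, 4, 5},     # six days: counts
    "b": {0, 1, 2, 3, 4, 5, 6},  # seven days: counts
    "c": {1, 3},                 # two days: does not count
    "d": {0, 2, 4, 6},           # four days: does not count
}
print(l6_of_7(users))  # 0.5
```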
Zuckerberg, who sat in the center of Building 20, the main office at the Menlo Park headquarters, placed the new FAIR and AML teams beside him. Many of the original AI hires were so close that his desk and theirs were practically touching. It was “the inner sanctum,” says a former leader in the AI org (the branch of Facebook that contains all its AI teams), who recalls the CEO shuffling people in and out of his vicinity as they gained or lost his favor. “That’s how you know what’s on his mind,” says Quiñonero. “I was always, for a couple of years, a few steps from Mark’s desk.”
With new machine-learning models coming online daily, the company created a new system to track their impact and maximize user engagement. The process is still the same today. Teams train up a new machine-learning model on FBLearner, whether to change the ranking order of posts or to better catch content that violates Facebook’s community standards (its rules on what is and isn’t allowed on the platform). Then they test the new model on a small subset of Facebook’s users to measure how it changes engagement metrics, such as the number of likes, comments, and shares, says Krishna Gade, who served as the engineering manager for news feed from 2016 to 2018.
If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored. On Twitter, Gade explained that his engineers would get notifications every few days when metrics such as likes or comments were down. Then they’d decipher what had caused the problem and whether any models needed retraining.
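The test-then-deploy gate Gade describes can be sketched in a few lines; the tolerance threshold here is purely an assumption for illustration, not a figure from Facebook:

```python
# Illustrative sketch of the deployment gate: a candidate model is kept
# only if engagement on the test cohort doesn't fall too far below the
# current baseline. The 1% tolerance is an assumed, hypothetical value.
def should_deploy(baseline_engagement, candidate_engagement, max_drop=0.01):
    """Discard the candidate if engagement drops by more than max_drop."""
    return candidate_engagement >= baseline_engagement * (1 - max_drop)

print(should_deploy(100.0, 99.5))  # True: within the tolerated drop
print(should_deploy(100.0, 95.0))  # False: reduces engagement too much
```

The asymmetry is the point: the gate measures engagement, not accuracy or harm, so a model can pass while making the feed more inflammatory.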
But this approach soon caused issues. The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff. Sometimes this inflames existing political tensions. The most devastating example to date is the case of Myanmar, where viral fake news and hate speech about the Rohingya Muslim minority escalated the country’s religious conflict into a full-blown genocide. Facebook admitted in 2018, after years of downplaying its role, that it had not done enough “to help prevent our platform from being used to foment division and incite offline violence.”
While Facebook may have been oblivious to these consequences in the beginning, it was studying them by 2016. In an internal presentation from that year, reviewed by the Wall Street Journal, a company researcher, Monica Lee, found that Facebook was not only hosting a large number of extremist groups but also promoting them to its users: “64% of all extremist group joins are due to our recommendation tools,” the presentation said, predominantly thanks to the models behind the “Groups You Should Join” and “Discover” features.
“The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?”
A former AI researcher who joined in 2018
In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded.
Since then, other employees have corroborated these findings. A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says.
The researcher’s team also found that users with a tendency to post or engage with melancholy content—a possible sign of depression—could easily spiral into consuming increasingly negative material that risked further worsening their mental health. The team proposed tweaking the content-ranking models for these users to stop maximizing engagement alone, so they would be shown less of the depressing stuff. “The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?” he remembers. (A Facebook spokesperson said she could not find documentation for this proposal.)
But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down.
One such project heavily pushed by company leaders involved predicting whether a user might be at risk for something several people had already done: livestreaming their own suicide on Facebook Live. The task involved building a model to analyze the comments that other users were posting on a video after it had gone live, and bringing at-risk users to the attention of trained Facebook community reviewers who could call local emergency responders to perform a wellness check. It didn’t require any changes to content-ranking models, had negligible impact on engagement, and effectively fended off negative press. It was also nearly impossible, says the researcher: “It’s more of a PR stunt. The efficacy of trying to determine if somebody is going to kill themselves in the next 30 seconds, based on the first 10 seconds of video analysis—you’re not going to be very effective.”
Facebook disputes this characterization, saying the team that worked on this effort has since successfully predicted which users were at risk and increased the number of wellness checks performed. But the company does not release data on the accuracy of its predictions or how many wellness checks turned out to be real emergencies.
That former employee, meanwhile, no longer lets his daughter use Facebook.
Quiñonero should have been perfectly placed to tackle these problems when he created the SAIL (later Responsible AI) team in April 2018. His time as the director of Applied Machine Learning had made him intimately familiar with the company’s algorithms, especially the ones used for recommending posts, ads, and other content to users.
It also seemed that Facebook was ready to take these problems seriously. Whereas previous efforts to work on them had been scattered across the company, Quiñonero was now being granted a centralized team with leeway in his mandate to work on whatever he saw fit at the intersection of AI and society.
At the time, Quiñonero was engaging in his own reeducation about how to be a responsible technologist. The field of AI research was paying growing attention to problems of AI bias and accountability in the wake of high-profile studies showing that, for example, an algorithm was scoring Black defendants as more likely to be rearrested than white defendants who’d been arrested for the same or a more serious offense. Quiñonero began studying the scientific literature on algorithmic fairness, reading books on ethical engineering and the history of technology, and speaking with civil rights experts and moral philosophers.
Over the many hours I spent with him, I could tell he took this seriously. He had joined Facebook amid the Arab Spring, a series of revolutions against oppressive Middle Eastern regimes. Experts had lauded social media for spreading the information that fueled the uprisings and giving people tools to organize. Born in Spain but raised in Morocco, where he’d seen the suppression of free speech firsthand, Quiñonero felt an intense connection to Facebook’s potential as a force for good.
Six years later, Cambridge Analytica had threatened to overturn this promise. The controversy forced him to confront his faith in the company and examine what staying would mean for his integrity. “I think what happens to most people who work at Facebook—and definitely has been my story—is that there’s no boundary between Facebook and me,” he says. “It’s extremely personal.” But he chose to stay, and to head SAIL, because he believed he could do more for the world by helping turn the company around than by leaving it behind.
“I think if you’re at a company like Facebook, especially over the last few years, you really realize the impact that your products have on people’s lives—on what they think, how they communicate, how they interact with each other,” says Quiñonero’s longtime friend Zoubin Ghahramani, who helps lead the Google Brain team. “I know Joaquin cares deeply about all aspects of this. As somebody who strives to achieve better and improve things, he sees the important role that he can have in shaping both the thinking and the policies around responsible AI.”
At first, SAIL had only five people, who came from different parts of the company but were all interested in the societal impact of algorithms. One founding member, Isabel Kloumann, a research scientist who’d come from the company’s core data science team, brought with her an initial version of a tool to measure the bias in AI models.
The team also brainstormed many other ideas for projects. The former leader in the AI org, who was present for some of the early meetings of SAIL, recalls one proposal for combating polarization. It involved using sentiment analysis, a form of machine learning that interprets opinion in bits of text, to better identify comments that expressed extreme points of view. These comments wouldn’t be deleted, but they would be hidden by default with an option to reveal them, thus limiting the number of people who saw them.
And there were discussions about what role SAIL could play within Facebook and how it should evolve over time. The sentiment was that the team would first produce responsible-AI guidelines to tell the product teams what they should or should not do. But the hope was that it would ultimately serve as the company’s central hub for evaluating AI projects and stopping those that didn’t follow the guidelines.
Former employees described, however, how hard it could be to get buy-in or financial support when the work didn’t directly improve Facebook’s growth. By its nature, the team was not thinking about growth, and in some cases it was proposing ideas antithetical to growth. As a result, it received few resources and languished. Many of its ideas stayed largely academic.
On August 29, 2018, that suddenly changed. In the ramp-up to the US midterm elections, President Donald Trump and other Republican leaders ratcheted up accusations that Facebook, Twitter, and Google had anti-conservative bias. They claimed that Facebook’s moderators in particular, in applying the community standards, were suppressing conservative voices more than liberal ones. This charge would later be debunked, but the hashtag #StopTheBias, fueled by a Trump tweet, was rapidly spreading on social media.
For Trump, it was the latest effort to sow distrust in the country’s mainstream information distribution channels. For Zuckerberg, it threatened to alienate Facebook’s conservative US users and make the company more vulnerable to regulation from a Republican-led government. In other words, it threatened the company’s growth.
Facebook did not grant me an interview with Zuckerberg, but previous reporting has shown how he increasingly pandered to Trump and the Republican leadership. After Trump was elected, Joel Kaplan, Facebook's VP of global public policy and its highest-ranking Republican, advised Zuckerberg to tread carefully in the new political environment.
On September 20, 2018, three weeks after Trump’s #StopTheBias tweet, Zuckerberg held a meeting with Quiñonero for the first time since SAIL’s creation. He wanted to know everything Quiñonero had learned about AI bias and how to quash it in Facebook’s content-moderation models. By the end of the meeting, one thing was clear: AI bias was now Quiñonero’s top priority. “The leadership has been very, very pushy about making sure we scale this aggressively,” says Rachad Alao, the engineering director of Responsible AI who joined in April 2019.
It was a win for everybody in the room. Zuckerberg got a way to ward off charges of anti-conservative bias. And Quiñonero now had more money and a bigger team to make the overall Facebook experience better for users. They could build upon Kloumann’s existing tool in order to measure and correct the alleged anti-conservative bias in content-moderation models, as well as to correct other types of bias in the vast majority of models across the platform.
This could help prevent the platform from unintentionally discriminating against certain users. By then, Facebook already had thousands of models running concurrently, and almost none had been measured for bias. That would get it into legal trouble a few months later with the US Department of Housing and Urban Development (HUD), which alleged that the company’s algorithms were inferring “protected” attributes like race from users’ data and showing them ads for housing based on those attributes—an illegal form of discrimination. (The lawsuit is still pending.) Schroepfer also predicted that Congress would soon pass laws to regulate algorithmic discrimination, so Facebook needed to make headway on these efforts anyway.
(Facebook disputes the idea that it pursued its work on AI bias to protect growth or in anticipation of regulation. “We built the Responsible AI team because it was the right thing to do,” a spokesperson said.)
But narrowing SAIL’s focus to algorithmic fairness would sideline all Facebook’s other long-standing algorithmic problems. Its content-recommendation models would continue pushing posts, news, and groups to users in an effort to maximize engagement, rewarding extremist content and contributing to increasingly fractured political discourse.
Zuckerberg even admitted this. Two months after the meeting with Quiñonero, in a public note outlining Facebook’s plans for content moderation, he illustrated the harmful effects of the company’s engagement strategy with a simplified chart. It showed that the more likely a post is to violate Facebook’s community standards, the more user engagement it receives, because the algorithms that maximize engagement reward inflammatory content.
But then he showed another chart with the inverse relationship. Rather than rewarding content that came close to violating the community standards, Zuckerberg wrote, Facebook could choose to start “penalizing” it, giving it “less distribution and engagement” rather than more. How would this be done? With more AI. Facebook would develop better content-moderation models to detect this “borderline content” so it could be retroactively pushed lower in the news feed to snuff out its virality, he said.
The problem is that for all Zuckerberg’s promises, this strategy is tenuous at best.
Misinformation and hate speech constantly evolve. New falsehoods spring up; new people and groups become targets. To catch things before they go viral, content-moderation models must be able to identify new unwanted content with high accuracy. But machine-learning models do not work that way. An algorithm that has learned to recognize Holocaust denial can’t immediately spot, say, Rohingya genocide denial. It must be trained on thousands, often even millions, of examples of a new type of content before learning to filter it out. Even then, users can quickly learn to outwit the model by doing things like changing the wording of a post or replacing incendiary phrases with euphemisms, making their message illegible to the AI while still obvious to a human. This is why new conspiracy theories can rapidly spiral out of control, and partly why, even after such content is banned, forms of it can persist on the platform.
In his New York Times profile, Schroepfer acknowledged these limitations of the company’s content-moderation strategy. “Every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up—and are thus not caught,” wrote the Times. “It’s never going to go to zero,” Schroepfer told the publication.
Meanwhile, the algorithms that recommend this content still work to maximize engagement. This means every toxic post that escapes the content-moderation filters will continue to be pushed higher up the news feed and promoted to reach a larger audience. Indeed, a study from New York University recently found that among partisan publishers’ Facebook pages, those that regularly posted political misinformation received the most engagement in the lead-up to the 2020 US presidential election and the Capitol riots. “That just kind of got me,” says a former employee who worked on integrity issues from 2018 to 2019. “We fully acknowledged [this], and yet we’re still increasing engagement.”
But Quiñonero’s SAIL team wasn’t working on this problem. Because of Kaplan’s and Zuckerberg’s worries about alienating conservatives, the team stayed focused on bias. And even after it merged into the bigger Responsible AI team, it was never mandated to work on content-recommendation systems that might limit the spread of misinformation. Nor has any other team, as I confirmed after Entin and another spokesperson gave me a full list of all Facebook’s other initiatives on integrity issues—the company’s umbrella term for problems including misinformation, hate speech, and polarization.
A Facebook spokesperson said, “The work isn’t done by one specific team because that’s not how the company operates.” It is instead distributed among the teams that have the specific expertise to tackle how content ranking affects misinformation for their part of the platform, she said. But Schroepfer told me precisely the opposite in an earlier interview. I had asked him why he had created a centralized Responsible AI team instead of directing existing teams to make progress on the issue. He said it was “best practice” at the company.
“[If] it’s an important area, we need to move fast on it, it’s not well-defined, [we create] a dedicated team and get the right leadership,” he said. “As an area grows and matures, you’ll see the product teams take on more work, but the central team is still needed because you need to stay up with state-of-the-art work.”
When I described the Responsible AI team’s work to other experts on AI ethics and human rights, they noted the incongruity between the problems it was tackling and those, like misinformation, for which Facebook is most notorious. “This seems to be so oddly removed from Facebook as a product—the things Facebook builds and the questions about impact on the world that Facebook faces,” said Rumman Chowdhury, whose startup, Parity, advises firms on the responsible use of AI, and was acquired by Twitter after our interview. I had shown Chowdhury the Quiñonero team’s documentation detailing its work. “I find it surprising that we’re going to talk about inclusivity, fairness, equity, and not talk about the very real issues happening today,” she said.
“It seems like the ‘responsible AI’ framing is completely subjective to what a company decides it wants to care about. It’s like, ‘We’ll make up the terms and then we’ll follow them,’” says Ellery Roberts Biddle, the editorial director of Ranking Digital Rights, a nonprofit that studies the impact of tech companies on human rights. “I don’t even understand what they mean when they talk about fairness. Do they think it’s fair to recommend that people join extremist groups, like the ones that stormed the Capitol? If everyone gets the recommendation, does that mean it was fair?”
“We’re at a place where there’s one genocide [Myanmar] that the UN has, with a lot of evidence, been able to specifically point to Facebook and to the way that the platform promotes content,” Biddle adds. “How much higher can the stakes get?”
Over the last two years, Quiñonero’s team has built out Kloumann’s original tool, called Fairness Flow. It allows engineers to measure the accuracy of machine-learning models for different user groups. They can compare a face-detection model’s accuracy across different ages, genders, and skin tones, or a speech-recognition algorithm’s accuracy across different languages, dialects, and accents.
Fairness Flow also comes with a set of guidelines to help engineers understand what it means to train a “fair” model. One of the thornier problems with making algorithms fair is that there are different definitions of fairness, which can be mutually incompatible. Fairness Flow lists four definitions that engineers can use according to which suits their purpose best, such as whether a speech-recognition model recognizes all accents with equal accuracy or with a minimum threshold of accuracy.
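Fairness Flow itself is internal to Facebook and its interface is not public, but the kind of audit the article describes can be sketched in a few lines. Everything below is invented for illustration — the function names, the toy predictions, the two accent groups, and the thresholds — and it shows the two fairness definitions mentioned above: a minimum accuracy floor for every group versus roughly equal accuracy across groups.

```python
# Hypothetical sketch of a per-group accuracy audit in the spirit of
# Fairness Flow. All names and data here are invented for illustration;
# the real tool's API is not public.

def group_accuracies(predictions, labels, groups):
    """Return the accuracy of a classifier separately for each group."""
    totals, correct = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        if pred == label:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

def passes_min_threshold(accuracies, threshold):
    """Fairness definition 1: every group meets a minimum accuracy."""
    return all(acc >= threshold for acc in accuracies.values())

def passes_equal_accuracy(accuracies, tolerance):
    """Fairness definition 2: group accuracies lie within a tolerance."""
    values = list(accuracies.values())
    return max(values) - min(values) <= tolerance

# Toy audit of a speech-recognition model across two invented accents.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 0, 1, 1, 0]
accent = ["a", "a", "a", "a", "b", "b", "b", "b"]

accs = group_accuracies(preds, labels, accent)
print(accs)
print(passes_min_threshold(accs, 0.7))
print(passes_equal_accuracy(accs, 0.05))
```

Which definition an engineer picks changes the verdict: a model can clear a 70% floor for every accent while still showing a gap between accents that the equal-accuracy test would reject, which is why the guidelines matter as much as the measurements.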
But testing algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.
This last problem came to the fore when the company had to deal with allegations of anti-conservative bias.
In 2014, Kaplan was promoted from US policy head to global vice president for policy, and he began playing a more heavy-handed role in content moderation and decisions about how to rank posts in users’ news feeds. After Republicans started voicing claims of anti-conservative bias in 2016, his team began manually reviewing the impact of misinformation-detection models on users to ensure—among other things—that they didn’t disproportionately penalize conservatives.
All Facebook users have some 200 “traits” attached to their profile. These include various dimensions submitted by users or estimated by machine-learning models, such as race, political and religious leanings, socioeconomic class, and level of education. Kaplan’s team began using the traits to assemble custom user segments that reflected largely conservative interests: users who engaged with conservative content, groups, and pages, for example. Then they’d run special analyses to see how content-moderation decisions would affect posts from those segments, according to a former researcher whose work was subject to those reviews.
The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.
But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.
This happened countless other times—and not just for content moderation. In 2020, the Washington Post reported that Kaplan’s team had undermined efforts to mitigate election interference and polarization within Facebook, saying they could contribute to anti-conservative bias. In 2018, it used the same argument to shelve a project to edit Facebook’s recommendation models even though researchers believed it would reduce divisiveness on the platform, according to the Wall Street Journal. His claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook’s data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.
And ahead of the 2020 election, Facebook policy executives used this excuse, according to the New York Times, to veto or weaken several proposals that would have reduced the spread of hateful and damaging content.
Facebook disputed the Wall Street Journal’s reporting in a follow-up blog post, and challenged the New York Times’s characterization in an interview with the publication. A spokesperson for Kaplan’s team also denied to me that this was a pattern of behavior, saying the cases reported by the Post, the Journal, and the Times were “all individual instances that we believe are then mischaracterized.” He declined to comment about the retraining of misinformation models on the record.
Many of these incidents happened before Fairness Flow was adopted. But they show how Facebook’s pursuit of fairness in the service of growth had already come at a steep cost to progress on the platform’s other challenges. And if engineers used the definition of fairness that Kaplan’s team had adopted, Fairness Flow could simply systematize behavior that rewarded misinformation instead of helping to combat it.
Often “the whole fairness thing” came into play only as a convenient way to maintain the status quo, the former researcher says: “It seems to fly in the face of the things that Mark was saying publicly in terms of being fair and equitable.”
The last time I spoke with Quiñonero was a month after the US Capitol riots. I wanted to know how the storming of Congress had affected his thinking and the direction of his work.
In the video call, it was as it always was: Quiñonero dialing in from his home office in one window and Entin, his PR handler, in another. I asked Quiñonero what role he felt Facebook had played in the riots and whether it changed the task he saw for Responsible AI. After a long pause, he sidestepped the question, launching into a description of recent work he’d done to promote greater diversity and inclusion among the AI teams.
I asked him the question again. His Facebook Portal camera, which uses computer-vision algorithms to track the speaker, began to slowly zoom in on his face as he grew still. “I don’t know that I have an easy answer to that question, Karen,” he said. “It’s an extremely difficult question to ask me.”
Entin, who’d been rapidly pacing with a stoic poker face, grabbed a red stress ball.
I asked Quiñonero why his team hadn’t previously looked at ways to edit Facebook’s content-ranking models to tamp down misinformation and extremism. He told me it was the job of other teams (though none, as I confirmed, have been mandated to work on that task). “It’s not feasible for the Responsible AI team to study all those things ourselves,” he said. When I asked whether he would consider having his team tackle those issues in the future, he vaguely admitted, “I would agree with you that that is going to be the scope of these types of conversations.”
Near the end of our hour-long interview, he began to emphasize that AI was often unfairly painted as “the culprit.” Regardless of whether Facebook used AI or not, he said, people would still spew lies and hate speech, and that content would still spread across the platform.
I pressed him one more time. Certainly he couldn’t believe that algorithms had done absolutely nothing to change the nature of these issues, I said.
“I don’t know,” he said with a halting stutter. Then he repeated, with more conviction: “That’s my honest answer. Honest to God. I don’t know.”
Corrections: We amended a line that suggested that Joel Kaplan, Facebook’s vice president of global policy, had used Fairness Flow. He has not. But members of his team have used the notion of fairness to request the retraining of misinformation models in ways that directly contradict Responsible AI’s guidelines. We also clarified when Rachad Alao, the engineering director of Responsible AI, joined the company.
Last Updated: March 10, 2021 at 5:59 p.m. ET First Published: March 10, 2021 at 8:28 a.m. ET By
Vincent H. Smith and Eric J. Belasco
Congress has reduced risk by underwriting crop prices and cash revenues
Bill Gates is now the largest owner of farmland in the U.S. having made substantial investments in at least 19 states throughout the country. He has apparently followed the advice of another wealthy investor, Warren Buffett, who in a February 24, 2014 letter to investors described farmland as an investment that has “no downside and potentially substantial upside.”
There is a simple explanation for this affection for agricultural assets. Since the early 1980s, Congress has consistently succumbed to pressures from farm interest groups to remove as much risk as possible from agricultural enterprises by using taxpayer funds to underwrite crop prices and cash revenues.
Over the years, three trends in farm subsidy programs have emerged.
The first and most visible is the expansion of the federally supported crop insurance program, which has grown from less than $200 million in 1981 to over $8 billion in 2021. In 1980, only a few crops were covered and the government’s goal was just to pay for administrative costs. Today taxpayers pay over two-thirds of the total cost of the insurance programs that protect farmers against drops in prices and yields for hundreds of commodities ranging from organic oranges to GMO soybeans.
The second trend is the continuation of longstanding programs to protect farmers against relatively low revenues because of price declines and lower-than-average crop yields. The subsidies, which on average cost taxpayers over $5 billion a year, are targeted to major Corn Belt crops such as soybeans and wheat. Also included are other commodities such as peanuts, cotton and rice, which are grown in congressionally powerful districts in Georgia, the Carolinas, Texas, Arkansas, Mississippi and California.
The third, more recent trend is a return over the past four years to a 1970s practice: annual ad hoc “one off” programs justified by political expediency with support from the White House and Congress. These expenditures were $5.1 billion in 2018, $14.7 billion in 2019, and over $32 billion in 2020, of which $29 billion came from COVID relief funds authorized in the CARES Act. An additional $13 billion for farm subsidies was later included in the December 2020 stimulus bill.
If you are wondering why so many different subsidy programs are used to compensate farmers multiple times for the same price drops and other revenue losses, you are not alone. Our research indicates that many owners of large farms collect taxpayer dollars from all three sources. For many of the farms ranked in the top 10% in terms of sales, recent annual payments exceeded a quarter of a million dollars.
Farms with average or modest sales received much less. Their subsidies ranged from close to zero for small farms to a few thousand dollars for average-sized operations.
So what does all this have to do with Bill Gates, Warren Buffett and their love of farmland as an investment? In a financial environment in which real interest rates have been near zero or negative for almost two decades, the annual average inflation-adjusted (real) rate of return in agriculture (over 80% of which consists of land) has been about 5% for the past 30 years, despite some ups and downs, as this chart shows. It is a very solid investment for an owner who can hold on to farmland for the long term.
The overwhelming majority of farm owners can manage that because they have substantial amounts of equity (the sector-wide debt-to-equity ratio has been less than 14% for many years) and receive significant revenue from other sources.
Thus for almost all farm owners, and especially the largest 10% whose net equity averages over $6 million, as Buffett observed, there is little or no risk and lots of potential gain in owning and investing in agricultural land.
Returns from agricultural land stem from two sources: asset appreciation — increases in land prices, which account for the majority of the gains — and net cash income from operating the land. As is well known, farmland prices are closely tied to expected future revenue. And that revenue includes generous subsidies, which have averaged 17% of annual net cash incomes over the past 50 years. In addition, Congress often provides substantial additional one-off payments in years when net cash income is likely to be lower than average, as in 2000 and 2001 when grain prices were relatively low and in 2019 and 2020.
It is possible for small-scale investors to buy shares in real-estate investment trusts (REITs) that own and manage agricultural land. However, as with all such investments, how a REIT is managed can be a substantive source of risk unrelated to the underlying value of the land assets, not all of which may be farm land.
Thanks to Congress, and ultimately to the average, less affluent American taxpayer, farmers and other agricultural landowners get a steady and substantial return on their investments through subsidies that consistently guarantee and increase those revenues.
While many agricultural support programs are meant to “save the family farm,” the largest beneficiaries of agricultural subsidies are the richest landowners with the largest farms who, like Bill Gates and Warren Buffett, are scarcely in any need of taxpayer handouts.
Many actions that would be considered heinous by humans — cannibalism, eating offspring, torture and rape — have been observed in the animal kingdom. Most (but not all) eyebrow-raising behaviors among animals have an evolutionary underpinning.
“In sober truth,” wrote the British philosopher John Stuart Mill, “nearly all the things which men are hanged or imprisoned for doing to one another, are nature’s everyday performances.” While it is true that rape, torture and murder are more commonplace in the animal kingdom than they are in human civilization, our fellow creatures almost always seem to have some kind of evolutionary justification for their actions — one that we Homo sapiens lack.
Cats, for instance, are known to toy with small birds and rodents before finally killing them. Although it is easy to conclude that this makes the popular pet a born sadist, some zoologists have proposed that exhausting prey is the safest way of catching them. Similarly, it’s tempting to describe the way African lions and bottlenose dolphins –– large, social mammals –– commit infanticide (the killing of young offspring), as possibly psychopathic. Interestingly, experts suspect that these creatures are in fact doing themselves a favor; by killing offspring, adult males are making their female partners available to mate again.
These behaviors, which initially may seem symptomatic of some sinister psychological defect, turn out to be nothing more than different examples of the kind of selfishness that evolution is full of. Well played, Mother Nature.
But what if harming others is of no benefit to the assailant? In the human world, senseless destruction features on virtually every evening news program. In the animal world, where the laws of nature –– so we’ve been taught –– don’t allow for moral crises, it’s a different story. By all accounts, such undermining behavior shouldn’t be able to occur. Yet it does, and it’s as puzzling to biologists as the existence of somebody like Ted Bundy or Adolf Hitler has been to theodicists –– those who follow a philosophy of religion that ponders why God permits evil.
Cains and Abels
According to Charles Darwin’s theory of evolution, genes that increase an organism’s ability to survive are passed down, while those that don’t are not. Although Darwin remains an important reference point for how humans interpret the natural world, he is not infallible. During the 1960s, biologist W.D. Hamilton proposed that On the Origin of Species failed to account for the persistence of two kinds of traits that didn’t directly benefit the animal in question: altruism and spite.
The first of these two patterns –– altruism –– was incorporated into Darwin’s theory of evolution when researchers uncovered its evolutionary benefits. One would think that creatures are hardwired to avoid self-sacrifice, but this is not the case. The common vampire bat shares its food with roostmates whose hunt ended in failure. Recently, Antarctic plunder fish have been found to guard the nests of others if they are left unprotected. In both of these cases, altruistic behavior is put on display when the indirect benefit to relatives of the animal in question outweighs the direct cost incurred by that animal.
In Search of Spite
The second animal behavior –– spite –– continues to be difficult to make sense of. For humans, the concept is familiar yet elusive, perhaps understood best through the Biblical story of Cain and Abel or the writings of Fyodor Dostoevsky. Although a number of prominent evolutionary biologists –– from Frans de Waal to members of the West Group at the University of Oxford’s Department of Zoology –– have made entire careers out of studying the overlap between animal and human behavior, even they warn against the stubborn tendency to anthropomorphize nonhuman subjects.
As Edward O. Wilson put it in his book, “The Insect Societies,” spite refers to any “behavior that gains nothing or may even diminish the fitness of the individual performing the act, but is definitely harmful to the fitness of another.” Wilson’s definition, which is generally accepted by biologists, allows researchers to study spite’s occurrence in an objective, non-anthropomorphized manner. It initially drew academic attention to species of fish and birds that destroyed the eggs (hatched or unhatched) of rival nests, all at no apparent benefit to themselves.
Emphasis on “apparent,” though, because –– as those lions and dolphins demonstrated earlier –– certain actions and consequences aren’t always what we think they are. In their research, biologists Andy Gardner and Stuart West maintain that many of the animal behaviors which were once thought spiteful are now understood as selfish. Not in the direct sense of the word (approaching another nest often leads to brutal clashes with its guardian), but an indirect one: With fewer generational competitors, the murderer’s own offspring are more likely to thrive.
For a specific action to be considered true spite, a few more conditions have to be met. The cost incurred by the party acting out the behavior must be “smaller than the product of the negative benefit to the recipient and negative relatedness of the recipient to the actor,” Gardner and West wrote in Current Biology. In other words, a creature can be considered truly spiteful only if the harm it inflicts on others, weighted by how unrelated those others are, outweighs the cost of inflicting it. So far, true spite has been observed only rarely in the animal kingdom, and mostly occurs among smaller creatures.
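Gardner and West’s condition is an extension of Hamilton’s rule from inclusive-fitness theory. A sketch of the algebra, using conventional textbook symbols rather than their exact notation:

```latex
% Hamilton's rule: a social behavior is favored by selection when
\[
  rb - c > 0 ,
\]
% where $c$ is the fitness cost to the actor, $b$ the fitness effect on
% the recipient, and $r$ the relatedness of the recipient to the actor.
% True spite is the case $b < 0$ (the recipient is harmed) and $r < 0$
% (the recipient is less related to the actor than a random member of
% the population), so the product $rb$ is positive and the rule becomes
\[
  c < rb , \qquad b < 0 ,\; r < 0 .
\]
```

On this reading, spite can still pay off in inclusive-fitness terms: harming sufficiently unrelated individuals indirectly favors the actor’s own relatives, which is why the rare confirmed cases are so hard to distinguish from indirect selfishness.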
The larvae of polyembryonic parasitoid wasps, which hatch from eggs that are laid on top of caterpillar eggs, occasionally develop into adults that are not just infertile but have a habit of eating other larvae. From an evolutionary perspective, developing into this infertile form is not a smart move for the wasp because it cannot pass on its genes to the next generation. Nor does it help the creature’s relatives survive, as they are then at risk of being eaten.
That doesn’t mean spite is relegated to the world of insects. It also pops up among monkeys, where it tends to manifest in more recognizable forms. In a 2016 study, Harvard University psychology researchers Kristin Leimgruber and Alexandra Rosati separated chimpanzees and capuchins from the rest of the group during feeding time and gave them the option to take away everyone’s food. While the chimps only ever denied food to those who violated their group’s social norms, the capuchins often acted simply out of spite. As Leimgruber explains: “Our study provides the first evidence of a non-human primate choosing to punish others simply because they have more. This sort of ‘if I can’t have it, no one can’ response is consistent with psychological spite, a behavior previously believed unique to humans.”
Beyond the Dark Tetrad
Of course, spite isn’t the only type of complex and curiously human behavior for which the principles of evolution have not produced an easily discoverable (or digestible) answer. Just as confounding are the four components of the Dark Tetrad — a model for categorizing malevolent behaviors, assembled by personality psychologist Delroy Paulhus. The framework’s traits include narcissism, Machiavellianism, psychopathy and everyday sadism.
Traces of all four have been found inside the animal kingdom. The intertribal warfare among chimpanzees is, first and foremost, a means of controlling resources. At the same time, many appear to actively enjoy partaking in hyperviolent patrols. Elsewhere, primate researchers who have made advances in the assessment of great ape psychology suggest the existence of psychopathic personality types. As for Machiavellianism, the willingness to hurt relatives in order to protect oneself has been observed in both rhesus macaques and Nile tilapia.
Although the reasons for certain types of animal behavior are still debated, the nature of these discussions tends to be markedly different from discourse around, say, the motivations of serial killers. And often, researchers have a solid understanding of the motivations and feelings of their own study subjects but not those outside of their purview. Regardless of whether the academic community is talking about humans or animals, however, the underlying conviction guiding the conversation — that every action, no matter how upsetting or inexplicable, must have a logical explanation — is one and the same.
Cal Newport explains how Slack and Gmail are making us miserable — and what to do about it.
Friday, March 5th, 2021
Well, I’m Ezra Klein. Welcome to “The Ezra Klein Show.”
Before we get into it, a bit of housekeeping. We are looking for an associate producer. That job is still open, but not for much longer. If you have two years of audio experience and want to work on the show, go check out the link to the job listing in the show notes.

But to the show today. I want to begin here with a concept that’s going to be important throughout the episode — the hyperactive hive mind. That’s the idea at the center of Cal Newport’s new book, “A World Without Email.” And it’s the idea he says is at the center of how a lot of us are working and living these days. He defines the hyperactive hive mind as a workflow centered on ongoing conversation fueled by unstructured and unscheduled messages delivered through digital communication tools, like email and instant messenger. It’s a bit of a mouthful, but if you’re someone working in an office, maybe a remote one now, where there’s just a constant stream of digital work-like chatter that you kind of always need to be keeping up with, but that you also sense is distracting you from doing your work, from seeing your family, and from just relaxing, then you’re in a hyperactive hive mind. And a lot of us — not all of us, but a lot of us — are in this now.

I’ve been a fan of Newport’s work for years, going back to his book, “Deep Work.” Newport has been circling this idea that all of the digital wonder around us has come with a cost. We’re losing our ability to concentrate. These remarkable vistas of information that have been opened to us have also been polluted by endless distraction. And so, we’re not benefiting from any of this the way we thought we would. Instead of getting more done in less time, we feel like we have less time than ever and are never getting enough done. It’s really weird. Something is wrong here. And one reason I like Newport’s work is I think he is right on this.
I think we have a lot of trouble seeing the cost of technology, at least when that technology comes with a lot of good, as the internet and digital communication, of course, does. But we have to be able to step back and look at it, because the way we adopt a technology at the beginning — particularly when it is harnessed to firms trying to sell it all to us — is never going to be the way we ultimately should use it.

But the weakness, I would say, of Newport’s previous books — a weakness he agrees with — is that they were about individuals. They were sometimes the equivalent of giving diet advice to somebody who lives in the chips and cookies aisle of the supermarket. There’s not a lot you can do around that much temptation, and even more so when your built environment is decided for you, when so many choices about how you have to work and what you have to be part of are already made for you.

But this book is a step forward in that way. This book is about systems, and in particular, about workplaces. Newport is making a radical argument here: that companies that obsess about efficiency, that think of themselves as rational economic actors, are utterly failing to question and experiment with their own workflows — the fundamental nature of how they do their business. And in that, they are making their employees unhappy. They are making their products worse, and they are just contributing to an overall degradation of society. It’s a pretty stunning indictment. I’m not sure I agree with all of it. But I think there’s really something to it.

As always, my email is firstname.lastname@example.org. Always interested to know who you’d like to see on the show next, so send me your guest suggestions. Here’s Cal Newport.
So this is a book about how the information technology revolution went wrong in the workplace. What went wrong?
Well, once we had the arrival of email in the workplace, it very quickly gave rise to a really new way of organizing large groups of people to work together. It’s what I call the hyperactive hive mind. But essentially, we said, OK, now that we have low friction, low cost digital communication, we can just figure things out on the fly. We’ll plug everyone into an inbox, or later, into a Slack channel, and ad hoc unstructured back and forth messages, just figure things out with people as you need them. And that swept basically the entire knowledge sector. And I think that ended up being a disaster.
Why? What is your evidence it’s a disaster?
Well, I have two main threads. So the first thread of evidence is that it makes it essentially impossible to work. And essentially, the culprit here is network switching. Human brains take a long time to switch. If you’re going to put your target of attention on one thing and then switch it to a new target, that takes a while, right? There are biological things going on here. You have to suppress some networks. You have to amplify other networks. It takes some time. When you glance at an inbox or when you glance at a Slack channel, as you’re required to do constantly if back and forth messaging is how you organize most of your work, you begin to trigger all these network shifts, so all of these complex biological cascades initiate. And you see all these unresolved issues and things you can’t get back to. And then if you wrench your attention back to what you were trying to do, it creates this whole pile-up in your brain, which we experience as a loss of cognitive function. We also feel frustrated. We feel tired. We feel anxious. Because the human brain can’t do it. And so essentially, the hyperactive hive mind, on paper, had this really good attribute, which is it’s flexible and it’s easy and it’s cheap. You just kind of figure things out on the fly. But the biological reality is it made us really bad at doing our work. And then we have the second thread, which I think had been somewhat unexplored, which is this way of working makes us miserable. It just clashes with our fundamental human wiring to have this nonstop piling up of communication from our tribe members that we can’t keep up with. And it hits all of these deeply rooted social networks in our brain that tell us to take this type of thing seriously. No matter how much the frontal cortex tells us it’s OK, we don’t have to answer these emails right away, there’s a deeper part of our brain that’s worried. And so it makes us miserable, and it makes us terrible at work. But other than that, though, it’s been pretty good.
I want to pick up on this question of whether or not it’s making us miserable. Because one way of looking at this is that it is a triumph of workers who don’t want to work all that hard and want lots of opportunities for distraction over bosses who want them to work really hard. So Slack is just an amazingly deceptive piece of enterprise software, in my mind. I was at an organization where we didn’t have it. And then I helped bring it to that organization. And now, it’s completely clear to me that Slack makes organizations less effective. It’s very well built to help workers slack off, right? To help me slack off. I enjoy slacking off on Slack. I mean, it’s literally right there in the name. It’s called Slack. And they’ve made all these wonderful — you can put GIFs in so easily and little reaction emoji. It’s a great way to bullshit around the water cooler digitally. And so there’s one perspective on this, which is that we’re seeing a failure, and then another that we’re seeing a kind of success of people taking their time back and having more socializing at work. Why should that not be the attitude or conceptual frame I put around this?
Well, no, I think you’re getting at some truth there. I had a recent New Yorker piece that was titled, “Slack is the right tool for the wrong way to work,” where I was trying to really grapple with this notion that there’s a reason why Slack is popular, and there’s also a reason why we hate it. It’s serving two purposes, which kind of complicates the story. I think it’s absolutely true that one of the benefits of the hive mind is it gives you obfuscation. So say you don’t want to work as hard. Let’s say I don’t want to do as much, or I’m in a situation maybe where I can’t work as hard. There is an obfuscation you can get, because the hive mind is so ambiguous and ad hoc and on-demand that you can basically generate smokescreens by rapid responses and being active on the Slack channels. And there’s also a social component to it. And I think those are both really interesting aspects of the hive mind. But I don’t think either justifies the hive mind as the right way to work.
A point you make in the book is productivity growth across the economy is not way better today than it was before the widespread adoption of email or before the widespread adoption of Slack. One might have thought that speeding communication would make it so we could get a lot more done a lot quicker. That does not appear to be happening. What problem does interoffice communication solve, and at what point does it become too much?
Well, so what Slack was trying to do — or at least, this was my argument in that piece — is, Slack said, OK, if we’re going to use the hyperactive hive mind as our primary workflow — that is, if we’re just going to work things out on the fly with back and forth messaging, email is not that great at it. We can do it better with Slack. So when I called Slack the right tool for the wrong way to work, I mean it’s a tool that is optimized. If we’re going to do the hive mind, this is a better tool for implementing constant chatter than email was, which is why we both love and hate it. We love it because if our organization runs on constant chatter, it does a better job as a tool of that than an inbox does with email. We hate it because this way of working has fundamental issues. But if we go back in time, what problem was email solving? I mean, my ultimate argument is that the original rise, which I document, came from the reality that having fast but asynchronous communication was sort of a productivity silver bullet. It was an issue that arose once large offices emerged in mid-century, this notion that you might have 1,000 people working in a non-industrial manner for the same company. How do they communicate? The interoffice telephone introduced a synchronous option, but there’s a lot of overhead to getting someone on the phone at the same time. Memos and mail carts gave us an asynchronous option, but they were slow. There were people involved. You had to put things on carts. It could take all day. So email was solving a really real problem: I want to do asynchronous communication. I want to do it fast and with low overhead. But once it was there, in a way that was unintentional and unplanned (no one thought this was a good way to work), it spiraled us into this hyperactive hive mind, where we basically threw out any other processes or structures for organizing our work and said, why don’t we just figure it out on the fly?
And there’s a lot of reasons why that happened. But what I want to underscore here is that shift was unintentional and unplanned. We live in this hive mind not because some corporate consultant said this will make us more productive. It’s actually a lot more accidental.
From an economic perspective, what you’re positing here is not just a very big market failure, but a really big failure of firm organization and management. What you’re saying is that the people in charge of these firms, certainly the people in charge of the digital structure internally at these firms, have actually failed at a very profound level. They’ve brought in these tools. These tools have gotten out of control. They’re reducing worker productivity and firm productivity. They’re reducing worker happiness and firm overall happiness. All that seems basically true to me, but then what is your explanation for why so very, very few major firms have come up with some really, really aggressively alternative way to work? If this is all working so badly, why is it spreading so ubiquitously?
This was one of the big ideas I did some original reporting on for the book. We have a big explanation for this from the late management theorist Peter Drucker, who coined the term “knowledge work” and really helped American industry in particular understand how this type of work was different from industrial work. He sort of set the trajectories in place. One of the big ideas he emphasized was autonomy. Knowledge workers, unlike industrial workers, need autonomy in how they get their work done. You cannot tell them how to work, how to organize themselves productively. So he was really pushing autonomy. He introduced this very influential notion of management by objectives. Don’t tell me how to work, just give me clear objectives, and leave it up to me how to actually get things done. And there’s a lot of truth in that, right? I mean, he was right in the sense that you can’t tell an ad copywriter or a computer programmer how to write ad copy or how to program a computer in the way that you could go to an assembly line in a car plant, because he used to study GM, and say, OK, here’s the step-by-step process for building a steering wheel. So he was right about that. But I think it went too far. My argument is that we are so insistent on autonomy in how we execute work, we accidentally expanded that envelope to mean autonomy in how we also organize our work, how we assign our work, how we figure out who should be working on what. And so we fell into this autonomy trap where we feel, as managers or entrepreneurs or people who run companies, like, look, it’s not our job to try to figure out the best way to organize work. We’ll just let individuals do that. And when you leave it entirely up to the individuals, you end up with the hyperactive hive mind, because it’s kind of the easiest, least common denominator thing; if you have no other control, that’s where we’re going to end up.
So I think we’re in a trap because we took Drucker’s autonomy maybe a little bit too literally.
I want to try out an alternative explanation that I’ve been thinking about. And this one comes more from the incentives of enterprise software companies, like Slack, or Microsoft in making Teams. Or, I guess, Facebook has Blue Jeans as their Zoom competitor, and so on and so forth. Which is that you might think the way productivity software, firm-level productivity software, gets marketed is that you go to the people who run IT for a big firm and you show some studies about how your software will make the firm work better, and they compare that to the other people trying to sell them something and then go with you if your studies are best. But actually, particularly once you hit a critical mass of other firms using something, there’s actually pressure from employees. And the employee pressure comes from: I would enjoy this software, it would be good for me. We would prefer it. I remember pushing for Gmail at The Washington Post because we were using Lotus Notes at that point, or Lotus mail, whatever the Lotus-level mail software was. And of course, Gmail made it easier to be on email all the time. And so, there’s a funny way in which what we think of as enterprise software is actually sold, for the ones that are the real winners in the space, through employee demands. But the incentives are misaligned. Because what you’re actually trying to do is win over employees, and you’re going to do that through software that’s more fun to use.
That actually just underscores this interesting autonomy trap we’re in. I mean, imagine a car factory asking, what might be the more fun way to build the cars, right? In other sectors, people are more process-engineering focused, right? What’s the evidence? What’s the best way to do this? And in the knowledge sector, you can imagine a similar thought about how brains should collaborate: what’s the right way for brains to work, how much work should be on everyone’s plate, where should we store things, what’s the right way to communicate? Should it be back and forth messages? Should it be more synchronous meetings? You would think that we could be doing tons of thinking and engineering like that. But we don’t, because we’re in this autonomy trap. We’re like, look, that’s not up to us. We put up the OKRs. You guys figure out how to work. And if you tell us you think Slack is more fun, then maybe we’ll buy Slack. But if you step back, I think the metaphorical house is on fire here. We’re at a point now where it’s completely common in a lot of knowledge work companies that not only do you spend a lot of time doing things like email and meetings, you now spend all of your time doing that, every working hour. And actual work has to get done in these hidden second shifts that happen in the morning or happen in the evening, which creates all of these unexpected inequities. I mean, the fact that that is happening now should be setting alarm bells ringing, but instead, we’re like, it’s busy. It’s modern times. We’re high tech. That’s just what life is like. We have acceded to it, which I find surprising.
So there’s a thread here that I think is interesting. So you go back to more of the period you’re talking about. Well, let’s call it the early 2000s. So now you’re seeing the very sharp rise of your Googles. Apple’s already pretty big, but you begin to see Facebook, et cetera. And you remember all this. There was a real vogue for, can you believe all these Silicon Valley firms have ping pong tables? Just like, it’s ping pong tables everywhere. And, right, Google had all of these features done on their workplace culture. And there were slides in a bunch of the offices and on-site laundry and these beautiful lunches with fancy chefs and cafeterias. Initially, this was all presented as paradise for a worker. And then, slowly, this alternative narrative began to take hold, which is, no, this is actually a quite insidious kind of trap. This is a way of making workers spend all of their time at work. It’s a way of making it so people don’t go home easily at night. It’s a way of blurring the lines between what is fun and social and community, which we normally think of as not happening in your office, and what is your office. And it’s a way of getting people to put in 10-, 12-hour days. And a lot of the software that emerges out of these companies and out of this period actually seems to me to take that physical insight, that by blurring the line of fun at work, you could allow work to colonize spaces it hadn’t colonized before, and it becomes a software insight. And so then, as you say, things that look like fun at the front end, right — we can chatter with our employees all day — now begin to overwhelm things that actually would have been more fun or more restful or more fulfilling. Like, you have Slack pings hitting your phone at night when you’re supposed to be with your family, or you’re sitting with your friends, and you’re looking at your phone because you’re just so used to being in that constant communication.
That the blending of work and fun, which I do think of as a distinctive work culture thing of our era, has actually been really toxic for real fun — and maybe for work, too.
Well, it certainly doesn’t help. And I agree that it’s really a culture of 20- to 30-somethings living in the Bay Area during a certain period, who had emerged with this lifestyle that was entirely integrated with the digital, especially once you get post-smartphone, post-constant connectivity. And you do see that trend move into these tools. But there are also countervailing trends. So I’ll give you a counterexample. I was fascinated, working on the book, by this notion of extreme programming. So it’s a workplace methodology, and the guy who was telling me about it is a real zealot. His company had been bought by Google, and he had gotten disillusioned that Google wasn’t hardcore enough about his methodology. So he left to start his own lab. But if we think about extreme programming as an extreme case study, what they do in these shops is all built around: OK, we have brains that can produce good code. If that’s really what we want to maximize, how do we do it? So there’s no email, there’s no Slack. You come in, you sit at a screen with another programmer. If you have two brains working on the same thing, you push each other, and you get more insights. But also, you take fewer breaks. You slack less, right? Their project leads handle all communication on their behalf. You have no inbox, you have no whatever, and they just code. And it’s so intense that they’re done by 3:00 or 4:00. And there would be no notion that you would stay there late. It would be impossible to. We work really hard, and then when we’re done, we’re done. They said when people are newly hired here, they end up having to go home and take naps for the first couple of weeks, just to adjust to the load. Now that is rightfully called extreme, but what boggles my mind is why aren’t there dozens and dozens of experiments with all these different ways of working? Clearly, you can change the way you work. When you start thinking about, OK, how do you get value out of human minds?
How do you stop the human mind from burning out? How do we stop people from being miserable? There are all of these options. And the fact that it’s so unexplored, that something like extreme programming is this weird outlier case study, to me, I think that’s very striking, right? I mean, to me, it’s a revolution waiting to happen. We’ve seen this in past intersections of technology and commerce, that there are these long-simmering revolutions, where we’re not doing things the way that would be smart. We’re doing what’s convenient. We’re doing what the momentum pushes us toward. We’re following inertia. And then, overnight, suddenly, we have electric motors in factories. Overnight, they don’t build cars with the craft method anymore. They do it with the assembly line. So these tend to be discontinuities, right, these kinds of jumps. I just think something like this is coming for knowledge work. This constant back and forth chatter doesn’t make a lot of sense. And so something has to change.
Let me pick up on the cars example. I love the way you tell the very oft told story of Henry Ford and the Model T and the assembly line. Because I’ve read a version of that story I don’t know how many dozen times in productivity and management and innovation books. But it often feels like there was bespoke artisanal car manufacturing, and then all of a sudden, here comes Henry Ford and the Model T. And you focus on what is happening between those two moments, right? This period when Ford is experimenting, how difficult the experimentation must have been, how frustrating it must have been, and that there are a bunch of experiments that failed. Can you talk a little bit about that, the path from one to the other?
Yeah, I think it’s very, very illustrative. So, Ford, when he was first running his factory, in the early days, let’s say, of the Highland Park factory, the craft method did dominate, right? So they took this bespoke method, where just some craftsmen would build a car. And the way they scaled it is they just had more teams working on more cars. They put them up on sawhorses, and you would surround it, you and five other guys, and you would build a car. And so he started experimenting. OK, this seems like it’s not that fast. And so he went through a whole series of experiments, which I thought were really interesting once you uncover them. They tried lots of things. Like, what if we have one guy who is the wheel guy, and he just goes from sawhorse to sawhorse and puts on the wheels? Well, what if we put the materials in the ceiling so that they can come down chutes, and come right down to where you are without having to take up space on the floor? Well, what if we have a whole team that moves from car to car? So he was doing all of these experiments to try to figure out, is there a better way to actually take all this material and, on the other end, have a car built? And the two things I like to emphasize are, one, the way they were building cars before was very easy and very convenient and very natural. And we actually see this story come up a lot in the history of industrial manufacturing: when you had early factories, you built things the way that was convenient and natural, because it seemed too foreboding to try to figure out something else, right? And, two, it was a huge pain to get past that. It was all those experiments, but the assembly line was a huge pain. Once it got running, they had to hire a lot more people. They had to spend a lot more money. I’m sure no one who was an investor in Ford liked the notion. Like, you’re doing what?
We’re going to double the number of floor managers who don’t build things, but just watch things? And it would get stuck all the time. When you’re trying to figure out how to make this thing work, if the steering wheel guy is a little bit too slow, the whole assembly line would stop. So it was really inconvenient. It was a pain, and it cost more money at first. But it was 10 to 100x more productive once they figured it out. Which, to me, is a good metaphor for how we gravitate towards what’s easy and convenient, and it can be a pain to move to what works better at first. There is an upfront cost to figuring out, let’s say, better ways of producing things.
So you’ve been studying this over the course of your last two or three books. You’ve been circling this book, I would say. And for this book, you’ve spoken to a lot of firms that were trying to change the way they worked pretty radically. They’re the exceptions. And then I’m sure you’ve spoken to a lot of people in firms that weren’t. What is your explanation for why firms are so loath to experiment? Is it just the Peter Drucker thing at this point? Or do you see more happening in terms of the status quo bias, the lock-in, the power dynamics of firms that make this kind of experimentation hard for managers to try?
So there were three hypotheses on the table I was looking at. There’s the Peter Drucker autonomy trap. There is the — it’s just been hard, right? Let’s call this the Henry Ford lesson, right, that it’s actually a real pain to figure out what works better. This is convenient, this is cheap. When I was interviewing Gloria Mark, she told me about how, when she was in the computer-supported collaborative work scene back in the early 1990s and computer networks were new, there was all this exciting research about, look at all these tools we’re going to build that are going to sit on networks, and we can access them on networks, and it’s going to make our work so much more effective and productive. And she said the whole field basically went away once email spread, because it was just cheaper to buy an email server. It’s like, look, we can just do this all with file attachments and CCs, and it’s fine. We don’t need it. And then the third reason would be power dynamics, right? Which is something I heard hypothesized a lot: that maybe, for a boss or something, this gives them more power. It could be a productivity power play, like I’ll get more out of my workers. Or it could be a sort of egotistic self-regard, I like people answering me, sort of power plays. All three hypotheses play a role. As far as I can tell, though, it’s a combination of the first two that probably plays the biggest role. So, the bosses, managers, C-suites, at all these levels, I think there’s this growing awareness that this is terrible. It’s a terrible way to work. Our output as a company is lower, and employees turn over and leave the workforce because it makes them miserable. So the power dynamics didn’t turn out to be as important as I once suspected. But I think it’s a combination of the autonomy bias and just the fact that it’s hard. For the companies I document that do replace the hyperactive hive mind with more bespoke processes that reduce all this constant back and forth, it wasn’t easy to do.
It’s like figuring out how to make the assembly line work. There’s going to be false starts. There’s going to be experiments. It’s going to cost more overhead. Bad things are going to happen temporarily. And you have to be willing to go through that. And that’s a big hurdle.
So one of the obvious objections to your theory here is that if this is a market failure, if most firms are running this wrong, then it should be relatively easy to correct, in the sense that firms will emerge that are working off of a more Cal Newportian theory of the case. And they will come to overwhelm the market, because their productivity will be higher, their output will be better. They will get better employees, because it’ll be more fun to work there. When I read through the book, it obviously seems some of these firms are more fun, right? So you spend some time in firms that have shorter work weeks. You have firms that have way better work-life balance. I know some of those firms, and they don’t dominate their industry. Their practices are not spreading like wildfire. And that implies to me that something is wrong somewhere in the model. Because if this is such an economic drag, or at least, such a drag on worker happiness, then there should be a really huge competitive advantage to the firms who have figured out a better way or who are wandering around it. What’s your theory there?
I think it’s coming. There is a huge competitive advantage. It’s why I think we’re going to experience a punctuated equilibrium here. The shift is going to seem practically overnight when it does come. And there are a couple of reasons to believe it’s coming. One I like to emphasize is that the timeline here is not unusual. I mean, how long did it take from the beginning of industrial car manufacturing to the change that was the assembly line? It was about 20 to 25 years. We’ve had email as a large presence for about 20 to 25 years. If you look at the electric dynamo and its integration into factory construction, it took about 50 years, even after we had generators that could generate electricity and we had electric motors. And clearly, the right thing to do was to put electric motors on the factory equipment, as opposed to having all these overhead cams and belts that were powered off of old steam engines. It still took 50 or 60 years until there was this moment where, OK, everything shifted over. And there were a lot of reasons for that, about inertia and infrastructure that had already been invested in. So my argument is, you basically should hold me to this, right? So I’m making a falsifiable claim — this is my Karl Popper moment here. I’m saying, let’s look in five years. I think we’re going to see a big difference. Now partially what I’ve noticed is, between when I started talking to people about this for my 2016 book, “Deep Work,” and now, there’s a notable shift in some of the CEOs I talk to. There’s a notable shift in some of the investors I talk to. This is on the radar, I should say, of these communities. Because they’re beginning to realize there might be hundreds of billions of dollars of GDP on the table, and that is a really rich pie. There’s been a lot of investment activity in the last couple of years in companies that are trying to help better extract this value.
In the conclusion of my book, I quote, anonymously, a relatively well known CEO, who says that figuring out how to get past the hive mind and have much more sustainable, productive ways of working is going to be the moonshot of the next decade. He calls it the moonshot because there is so much value there, but also it’s going to require so much energy to figure it out. So I would say five years from now, things will look different. And that’s a falsifiable hypothesis. I mean, if we’re in the same place five years from now, then maybe not. But we’re basically on track. This is a very normal timeline in technology and commerce. A new technology comes, and we do what’s easiest. Then we finally have this moment of punctuated equilibrium. We’re like, OK, enough is enough, and we shift to a different phase. [MUSIC PLAYING]
One of the things that I think about in the difficulty here because we’ve known each other a long time, and you know that I’m a believer in the Cal Newport oeuvre on these subjects. I care about deep work. Back when I was at Vox, we had a little deep work icon you could put on in Slack. And you’d be doing deep work, and nobody should bother you.
That’s a very ironic thing you just said, by the way, a deep work icon on Slack.
Listen, it’s all ironic. I’m aware of that. One of the things that I notice in myself as a worker — and others, for that matter, too, but I’ll be the example here — is that as much as I know I get more done if I don’t flick over to Twitter, if I don’t flick over to Slack or my email, and I use Freedom and cut myself off from those things when I’m trying to get things done, there’s still a big part of me that wants to. And one of the tricky parts of this is that it’s not one of these things that is good for us and also feels good when we do it. It’s incredibly tiring to work in a sustained, focused way without getting those little dopamine hits of distraction. And the more often you get those little hits, the more you crave them. I mean, this is part of “Deep Work,” that you begin to train your brain to demand these little bits of feedback. And so it becomes very hard to change the way your firm works, or to even just change the way you work, not because you don’t think you should, but because you are so trained to do the other thing, right? You’ve come to expect it. Then once you do it, you kind of fall back into old patterns. I’m curious how you think about that part of it, that retraining of our own expectations and rhythms.
Well, so one of the changes in my thinking, let’s say between “Deep Work” and this book, is in how I think about the individual. I think one of the issues people had — let’s say you read something like “Deep Work.” You’re like, OK, I get it. Concentration produces more than non-concentration. I should try to spend more time in deep work. And so then, as an individual, you try to put more time on that. And you’re talking about how that’s very difficult. Well, that’s difficult in part not because of a failure of will on your part as an individual, but because it is a necessity of this underlying hyperactive hive mind workflow that this inbox is where everything’s happening. There are people who need you. Everything you’re involved in is taking place in that inbox. This back and forth messaging is how this is getting figured out and that is getting resolved and how this issue is also getting handled. And so this urge to go back and check, I think we too often think of it as a failure of will, but it’s a failure of workflow. And it’s the reason why I think a lot of people had a hard time executing the ideas of deep work. It’s the reason why I think moves to have email-free Fridays, or to have better norms about response times, have failed to really calm any issues with inbox or email overload: because this is where the work happens, and when you’re away from it, it causes problems. This is my big revelation: we can’t solve these problems in the inbox. We have to solve these problems below the inbox. We actually have to go and take the implicit work processes that are generating all these back and forth messages and this expectation of ad hoc, unstructured communication, and we have to replace them with things that generate many fewer messages. We need to make the inbox a lot less interesting. I think that’s more important than trying to convince people to ignore the interesting nature of the inbox.
And so that’s something I’ve really been thinking about. Because it’s not helping to keep all of our focus — and by “our,” I just mean the culture that deals with email overload — on hacks and tips and how to better engage with your inbox. The problem, I think, is below.
And one of the difficulties here, too, is that there are some — advantages may not be exactly the right word, but benefits that come out of being personally engaged and sorting through the information flow. So I believe — you can tell me if I’m wrong. I believe I make an anonymous appearance in this book. And there’s this moment where you say I was talking to the editor-in-chief of a new media, a new journalism company.
This is you, yes, OK.
It is me, yeah. And I was saying to him, why didn’t you just have somebody checking Twitter on behalf of your staff and telling them if anything interesting is coming? And you say, well, this unnamed journalism EIC had never thought of this before and thought, well, what if — and that’s actually not how I remember that conversation. I’m going to give you some shit about this. The way I remember it, what I said — and it’s true I thought about that, that’s not a lie — is that the difficulty with having somebody else check Twitter on my behalf is that I am the one doing the information processing. Only I know what I find interesting. And only I see the things in it that I will see. And even worse for journalists — and this might be distinctive to my industry, but it is a problem in my industry — Twitter is an important place where you build your own brand. So I think, collectively, it would make sense if we weren’t all herding on there and thinking the same way and talking to each other. But for any individual to leave is a little bit irrational, because you deprive yourself of mindshare among the people who could give you future jobs, and of the ways your peers understand you as fitting into the firmament, which is very important for the future of your career. And so this is a situation where not every journalist, but a lot of journalists I know, do not like how much time they spend on Twitter. There’s a lot of talk about the health of the site, all of that. And people drop off, and then they come back, because to not be there feels like it has worse consequences, even though to be there is very unpleasant. So I want to hear your response to my more nuanced explanation of why journalists are on Twitter.
Yeah, no, I remember you having that response, and I still don’t buy it. I think it’s — [LAUGHTER] I think Twitter is melting journalist brains. I mean —
I’m not arguing that.
Yeah, it’s making journalists miserable. I still hold by my original stance. Like, there’s got to be a way that the — I mean, you mentioned it was like breaking news was important. And hearing from sources was important, so that went over to email a little bit. And that’s where I figured —
No, I don’t think — I will say I don’t think the breaking news function is that important. I think a lot of journalists will tell you it is, but I don’t agree with them on that.
I think it’s actually more esoteric things one sees that can be important.
Right, but at the time, I think the breaking news was a thing that — and I think we’ve, in general, as a culture, evolved on that, because we realized, like, oh wait, we’re not getting on-the-ground AP reports from Twitter. We’re getting a lot of randomness and a lot of false information, too. I would still argue there’s got to be a way — I mean, this is like digital minimalism 101. So let’s say there is something about direct encounter with the esoterica of Twitter that helps you sort of gain a better zeitgeist understanding of cultural trends, which will then inform your writing. OK, let’s say we buy that premise. Minimalism would say, great. What’s the right way to get that benefit while minimizing the cost? It would probably be like, I have my Twitter hour, where I go. The thing that I think was killer for a lot of journalists is this notion of, I always am on this thing, and I’m always checking this thing. And Twitter has its own emotional issues. It has its own issues like you’ve talked about — I heard you talk about this with Zeynep Tufekci recently on your podcast. It has idea-herding issues, but it also has the issues I talk about, which is that it significantly reduces your cognitive capacity. You can’t think as clearly. You feel tired. You feel anxious. The work you produce as a journalist — all of that is worse as well. When I was doing the digital minimalism promotion a couple of years ago, there was one interview — I’ll leave this anonymous, and it’s not you, I will say that — that I did with a well-known journalist. And this journalist’s producer admitted to me, I didn’t really have you on for the audience. I wanted the host to hear these ideas, because I think this person is going insane. I have to get them off of Twitter.
Did it work?
Oh, no. Oh, no. It got worse.
[LAUGHS] You say something, though, around this issue that I think is really wise, which is that one thing a lot of these mediums do is make us all think we should be generalists. They make us all think that we should and can do everything. Something about the way Twitter does news makes it feel like you should be on top of everything. And something I try very hard to do as a journalist is say, there are some things that I’m just not going to know that much about, because I need to know a lot about the things I write on. And so I need to let other things pass me by. But in general, you have a section of the book — this is more towards the end — where you talk about specialization as an answer here, and how one of the odd effects of hyperactive hive mind thinking is that it has cut against specialization. Could you talk a little bit about specialization, why you think we’ve lost it, and what kinds of ways we could get it back?
One of the claims I try to back up in the book is that when you remove the friction required to communicate with people inside your organization, both the amount and the diversity of things on people’s plates that they have to deal with explode. Right? So now you just have many more things you have to do, some of it administrative and some of it non-administrative. But if you just look at the sheer variety of things that the knowledge worker has on their proverbial task list — and I say proverbial because they probably don’t actually have a real task list; it’s probably all just jumbled in their inbox, which is its own issue — it’s huge, right? So there’s a really interesting notion from the literature on this, and it’s this idea of the diminishment of intellectual specialization. It’s a term that was coined by an economist named Peter Sassone, who was at Georgia Tech. And he wrote this paper back in the ’90s that I cite all the time, because I think it’s just really fascinating. He studied an earlier technology arriving: he had five companies, 20 departments within these companies, adopting the personal computer. So this would have been the late ’80s. So not email, but we can extrapolate from this. And what he documented happened in these companies is that these computers had “time-saving” software, word processors and early email and these types of things. And so these companies said, this is great. We can fire support staff. We don’t need a typing pool. We don’t need secretaries. Everything is now easy enough, the friction’s low enough, that the executives or the employees themselves can just do the work. The problem was that all this work now shifted onto their plates, so that people who maybe were doing five main things for the company now had 15 things on their plate, and they could get less of the original value-producing work done.
So they had to hire more of these higher-priced employees to actually keep up with the same amount of output. And Sassone crunched the numbers and found that, after all this was done, their salary costs ended up 15 percent higher. So they cut the salaries of support staff, but then they had to add more of these higher-priced salaries because people were less productive, and they ended up worse off than they were before. He called this the diminishment of intellectual specialization. I think this is something that’s really being amplified right now in our age of the hyperactive hive mind. Every unit in your company, every vendor, every client, every other team that might need your time and attention can just easily grab you, grab that time and attention, put more and more things on your plate. It makes everyone’s life a little bit easier in the moment. But we get so much less done of the primary things that originally produced value that you’re not actually getting ahead. In the end, you’re producing less. So I think this notion that we all do a lot more, that we all can do a lot more, is not necessarily compatible with trying to get the most out of people. And I’m going to really argue that we need to return to much more specialization. I do very few things.
One of my criticisms of some of your past books — and we’ve talked about this — is that they felt to me very much about the individual creator, that it felt sometimes like you were really creating a structure that made sense for Cal Newport, university professor, or even maybe Ezra Klein, article writer. But there were managers in this world, there were collaborative workers in this world, and it wouldn’t work for them. You have more on that in this book, in a way that I find persuasive. And something you talk about here is that management has to be about more than responsiveness, and that one of the things happening with a lot of these tools is they are changing the expectations placed on managers. They are changing how responsive their employees expect them to be. They are changing the work that management is actually able to do, and so probably degrading, or at least changing, the way firms are managed. Can you talk a little bit about this from the manager’s perspective?
Yeah, and there’s research on this. I found this interesting study where they could look at inbox levels. Like, how much email are managers having to answer? And they could correlate this with what they call leadership activities. These are the types of activities that are important for getting the most out of your team: moving your team to where it needs to be, seeing issues that are coming down the road and making sure you get around them, giving the support that individual team members need to thrive. All these leadership activities significantly decrease as you increase the amount of email that managers have to answer. And what these researchers documented is that as the email load increases, managers retreat into a task-oriented productivity mode. They’re just like human network routers: I’m just trying to take care of the small things that come at me via email, answering questions, moving things around. A lot of the managers I talked to when I was working on this book just have this vision of themselves as, I’m like an operator. Little questions and concerns come to me, and I try to answer them as quickly as possible. And one of the big points is, that’s not really good management. There’s some of that you have to figure out how to do. Of course, questions need to be answered. But if all you’re doing is trying to keep up with a hyperactive hive mind flow of all these ongoing conversations, the really important stuff doesn’t happen. Managers, too, need to be able to do one thing at a time and give things the attention they deserve. And that’s basically impossible if the hyperactive hive mind is the main way that your team coordinates and organizes. [MUSIC PLAYING]
So I want to ask a little bit about solutions here. You go into some granular detail on the different ways different firms end up doing Trello boards and other things, but I want to talk about it at a higher level. Let me start here. Let’s say you are somebody running an existing firm right now. You’re not starting something new. You have 100 employees who are used to certain ways of doing things. You have all the accoutrements of modern enterprise software. You have Slack, you have Gmail. You’re an advertising firm, a media firm, whatever it might be. Where do you start implementing the ideas of this book?
Well, so the big idea is, whether you name them or not, you have processes that repeatedly happen to produce the stuff that has to happen in your company. Now, if you don’t have names for them, if you haven’t thought about them, you’re probably implementing most of these processes with the hyperactive hive mind. Just, let’s figure it out on the fly. So the first step is just to identify what these things are. We have a deal-with-client-questions process. We have an article-production process. We have a strategizing-for-future-business-moves process, right? You name them. You see what they are. What are the things that we actually do on a repeated basis? And then what I recommend is that, process by process, you say, OK, how do we actually want to implement how this happens? And the metric that I push is not how much time is it going to take, or how hard is this particular method, but to what degree can we minimize unscheduled back-and-forth communication? How can we implement this particular process, like responding to client questions, producing articles, whatever it is, in a way that does not require the sort of asynchronous back-and-forth messaging that, in turn, will require check after check after check to keep that ping-pong ball bouncing? Once you know that what you’re looking at is processes, and what you’re trying to do is reduce unscheduled back-and-forth messaging, it opens up endless innovations. Like, oh, there are all sorts of different ways we might do this, right? But if you don’t have the right metric in mind, if you’re not looking at the right target, you’re just going to get stuck looking at these overcrowded email inboxes and sending around memos about, let’s have better norms on response times, or let’s write better subject lines, or something like that. You’re putting your energy into the wrong place. So that’s the process-oriented thinking: optimize the processes one by one.
Back-and-forth messages, that’s the killer. That’s what we want to reduce. You just do that, and you’ll begin to see, I think, almost immediate results. It reduces the pressure on the inbox, as opposed to just having better organizational tactics for dealing with the inbox.
And how about if you’re somebody starting a new firm, or at a new firm? If you buy the Cal Newport theory that there are huge gains to be unlocked by building a radically different culture of communication and process, how do you unlock them? How do you keep focus on that, particularly when people are going to come in expecting it to work the way they’ve known other places to work?
It’s not easy. I mean, first, there’s a general culture that you want to try to instill, which is a culture that thinks about tools like email as being great for sending information. I’d rather send you a file with an email than a fax machine. But they’re terrible for interaction. We should not be trying to collaborate or coordinate with back-and-forth messages. Two, you really have to separate execution from how we organize the work. Execution has to be really autonomous. You have to be very careful that you’re not stepping on the toes of creative, skilled professionals about how they actually write their ad copy or how they actually write their code. Making that sacrosanct is what allows knowledge work to be much more satisfying and meaningful, and allows us to avoid the drudgery that industrial work fell into. You’re putting your focus on the workflows that organize that work. What are the processes by which information moves, we make decisions, we agree on things? Where do files go? Where do we take them from? So make sure that execution is sacrosanct; it’s all of the organization around the execution that you’re trying to optimize. And then, three, lead by example. So even when it’s really convenient for you to just grab that person, be like, OK, let me not do that. Let me try to think about these processes. And I document somewhat in the book what it’s like to try to get these things in place. They need buy-in. They have to be bottom-up. Everyone involved in the process has to be involved in making it. And you have to have a culture of evolution: it’s not quite working, let’s tweak it. So you put those things into place, and it’s still not easy. But, again, it was a pain to build the assembly line. So at least there are incentives to push you through that pain.
And one of the things that is a little bit counterintuitive about this book is that, I think among people building new things, meetings, whether in-person meetings or phone meetings, have a really bad reputation. I often say to people, like, let’s try to just make this an email, which means I have a lot of emails bouncing back and forth. You have a somewhat higher opinion of what it means to save more things for meetings than I think the dominant culture holds. So if you were to preach the value of actual meetings, as opposed to having things be done through written communication, how would you tell a CEO, or tell a CEO to tell their employees, that they should think about meetings with a little more affection, and email with a little less?
Well, any time you have to make a decision or have back and forth — there’s interaction that has to occur — real time is exponentially better than asynchronous, right? It’s better to be able to just talk with you on the phone or on Zoom or in person and go back and forth. The number of bits of information that can be established in a back-and-forth conversation is of a different order of magnitude than when you’re in a purely linguistic medium. Like, I put some text in an email, it goes to you. Later that day, you send an email back that has some more text. That type of asynchronous communication has huge overheads, and it’s not very effective. So I’m a huge believer in real-time interaction as a highly effective and efficient way to get things done and to reach decisions. The problem people have with meetings is that they’re not coupled with well-thought-through processes, right? So look at a software development firm, where they think a lot about this type of stuff. If it’s a software development team that’s running an agile methodology like Scrum, they will have these daily stand-up meetings. They only last 20 minutes. They fit very clearly into an overall structure of how tasks are identified, assigned, and reviewed, right? So they have these 20-minute meetings in which, incredibly efficiently, people figure out: here’s what I did. Here’s what I’m working on. Here’s what I need from you, and when I need it by. Great, we’re on the same page. Go, right? It’s a meeting done well. That’s way more effective than trying to do that over email. What happens, I think, in a lot of hyperactive hive mind-style knowledge firms is that we throw meetings at issues as a proxy for productivity. I don’t really want to think about this. If I put a meeting on my calendar, then at least I know that has to happen, so at least I won’t forget it.
I think meetings are often used because people don’t have systems where they trust themselves to remember or make progress on things. Like, well, if it’s a recurring meeting, then at least I look at my calendar. They’re not tied to other processes. They’re not tied to optimized ways to get things done. So meetings not connected to processes can make work really unbearable. I think a lot of pandemic workers have discovered that doing Zoom all day long can’t possibly be the best way to organize work. But a meeting tied to a really smart process can actually save you a lot of time.
I guess that’s a good place to come to a close. So, at the end of the show, I always ask for a couple of different book recommendations, and let me start here. What’s a book that’s done the most to inspire your work and your explorations?
Well, it probably depends on the topic, but when it came to these explorations of email, I was really taken by a lot of the 20th-century techno-determinists. There were all these interesting philosophy-of-technology thinkers in the 20th century who were really trying to understand the way that if you introduce a new technology into an ecosystem, it can actually really unsettle that ecosystem in ways that are unpredictable and unintentional. And that opened up a lot for me, because it got me out of this mindset of, well, if we’re all doing email, it must be because it’s helping somebody. There must be a reason why we’re doing this. It’s got to be, maybe, adversaries versus the good guys, and what’s the battle going on. But the idea that technology itself can just have these ecological effects I think is really important. So probably Lewis Mumford’s “Technics and Civilization” — that’s an early 20th-century book that really pushed those ideas. I think that’s really interesting. A lot of Neil Postman — Postman was a very famous techno-determinist. I actually cite a speech from Postman at the end of the book that was influential to me. It wasn’t a book that he wrote; it was a summary of his thoughts on technology. And it’s really rich, and I put it in the citations in the book. But that’s where he made really clear this notion that technology is not additive, it’s ecological. Once you got the printing press, it was not just the Middle Ages plus printing presses. It was an entirely different world. And that notion really shaped the way I thought about email. The arrival of email did not give us the 1990 office plus email. It gave us an entirely different notion of what work meant. And so any of these writers working in this vein of technological determinism were very influential. I think it comes through in a lot of my thinking.
You talk a lot about the difference between the kinds of products one creates in the hyperactive work worlds many of us exist in and those created in the slower, more thoughtful, more deeply creative spaces of “Deep Work.” What’s a fiction book or piece of art that you think shows what it looks like when deep work works, the kind of thing that you’re not going to be able to do checking Twitter every couple of minutes?
Well, basically, any award-caliber literary fiction has to be created in that mindset. So whatever your favorite award-caliber literary novel is — there’s really no way to produce real insight in writing at that level without the ability to be alone with your own thoughts, observing the world, letting that percolate, and trying to craft and move and work with it. But I’ll say it’s not a book, it’s a video, one I actually wrote a blog post about not too long ago. It’s about a stone carver, a young woman — I think she’s based near you, actually, in the Bay Area. It was just this video put up on Vimeo that captured what it is to carve a statue out of stone. And something about that was really affecting to me. All day long, she’s looking at the stone, and she has the bust, and she’s manipulating the material, manipulating the real world. And it’s in this warehouse, and the doors open out onto some trees or something like that. I don’t know — there was something very affecting to me about that story. It’s someone who is 100 percent in the world of trying to take this block of stone and, from it, make manifest some sort of intention that exists just in their mind. I mean, that’s human depth personified, and the opposite, I would say, of Slack.
So my son just came home and is crying in the background. So this final one feels apropos. What’s your favorite children’s book?
When my first kid was born, my literary agent sent me a bunch of books, and there’s one that all of my kids have loved. It’s called “Andrew Henry’s Meadow.” It’s an older book, beautifully illustrated. The premise is that there’s this young boy who builds things, and it feels like he’s not appreciated by his family, so he leaves. And all the kids follow him, across the creek and through the woods, to Andrew Henry’s meadow, where they build these elaborate, beautifully illustrated houses. There’s a castle, and there’s a tree house, all built from found objects. And then the parents realize at some point that the kids are gone, and they’re all panicking. They go and find them. And when they finally bring them back, they make a space for Andrew Henry in the basement to be able to build his contraptions. Kids love it because of the illustrations; it somehow just gets into the psyche of kids. But there’s a nice message lurking in there, too. I’ve always liked that message of understanding what it is that drives your kids and then making room for it. So that’s my underground favorite, because almost no one’s heard of it. And we’ve gone through a couple of copies now.
Cal Newport, thank you very much.
Thanks, Ezra. [MUSIC PLAYING]
That is the show. Thank you for listening. I always appreciate you being here. Give us a review on whatever podcast app you’re listening on if you’re enjoying it, or send it to a friend. “The Ezra Klein Show” is a production of New York Times Opinion. It is produced by Roge Karma and Jeff Geld, fact-checked by Michelle Harris, original music by Isaac Jones, and mixing by Jeff Geld.
We were promised, with the internet, a productivity revolution. We were told that we’d get more done, in less time, with less stress. Instead, we got always-on communication, the dissolution of the boundaries between work and home, the feeling of constantly being behind, lackluster productivity numbers, and, to be fair, reaction GIFs. What went wrong?
Cal Newport is a computer scientist at Georgetown and the author of books trying to figure that out. At the center of his work is the idea that the technologies billed as offering us more productive, happier, socially rich lives have left us more exhausted, empty and stressed out than ever. He’s doing something not enough people do: questioning whether this was all worth it.
My critique of Newport’s work has always been that it focuses too much on the individual: Telling someone whose workplace communicates exclusively via Slack and email to be a “digital minimalist” is like telling someone who lives in a candy store to diet. But his new book, “A World Without Email,” is all about systems — specifically, the systems that govern how we work. In it, Newport makes a radical argument: We are living through a massive, rolling failure of markets and firms to rethink work for the digital age. But that can change. We can change it.
On Monday, the U.S. reached a heartbreaking 500,000 deaths from COVID-19.
But widespread death from COVID-19 isn’t necessarily inevitable.
Data from Johns Hopkins University shows that some countries have had few cases and fewer deaths per capita. The U.S. has had 152 deaths per 100,000 people, for example, versus 0.03 in Burundi and 0.04 in Taiwan.
There are many reasons for these differences among countries, but a study in The Lancet Planetary Health published last month suggests that a key factor may be cultural.
The study looks at “loose” nations — those with relaxed social norms and fewer rules and restrictions — and “tight” nations, those with stricter rules and restrictions and harsher disciplinary measures. And it found that “loose” nations had five times more cases (7,132 cases per million people versus 1,428 per million) and over eight times more deaths from COVID-19 (183 deaths per million people versus 21 per million) than “tight” countries during the first ten months of the pandemic.
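As a quick check, the multiples the study reports follow directly from the per-million figures quoted above. This is a minimal arithmetic sketch; the variable names are my own, not from the study:

```python
# Per-million figures for "loose" vs. "tight" nations, as quoted above.
loose_cases, tight_cases = 7132, 1428    # COVID-19 cases per million people
loose_deaths, tight_deaths = 183, 21     # COVID-19 deaths per million people

case_ratio = loose_cases / tight_cases    # "five times more cases"
death_ratio = loose_deaths / tight_deaths # "over eight times more deaths"

print(round(case_ratio, 1), round(death_ratio, 1))  # → 5.0 8.7
```

The case ratio rounds to 5.0 and the death ratio to about 8.7, matching the "five times" and "over eight times" figures in the article.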
Study author Michele Gelfand, a cultural psychologist, says her past research suggested that tight cultures may be better equipped to respond to a global pandemic than loose cultures, because their citizens may be more willing to cooperate with rules, and that the pandemic “is the first time we have been able to examine how countries around the world respond to the same collective threat simultaneously.”
For the Lancet article, the researchers examined data from 57 countries in the fall of 2020 using the online database “Our World in Data,” which provides daily updates on COVID-19 cases and deaths. They paired this information with previous research classifying each of the countries on a scale of cultural tightness or looseness. Results revealed that nations categorized as looser — like the U.S., Brazil and Spain — experienced significantly more cases and deaths from COVID-19 by October 2020 than countries like South Korea, Taiwan and Singapore, which have much tighter cultures.
NPR talks to Gelfand about the findings and about how understanding the concepts of “looser” and “tighter” nations might lead to measures that help prevent COVID-19 cases and deaths as the pandemic continues.
This interview has been edited for length and clarity.
How did your past research bring you to your current findings about the pandemic?
One of the things I’ve been looking at for many years is how strictly cultures abide by social norms. All cultures have social norms that are kind of unwritten rules for social behavior. We don’t face backward in elevators. We don’t start singing loudly in movie theaters. And we behave this way because it helps us to coordinate with other human beings, to help our societies function. [Norms] are really the glue that keep us together.
One thing we learned during our earlier work is that some cultures abide by social norms quite strictly. And these differences are not random. Tight cultures tend to have had a lot of threat in their histories from Mother Nature, like disasters, famine and pathogen outbreaks, and non-natural threats such as invasions on their territory. And the idea is when you have a lot of collective threat you need strict rules. They help people coordinate and predict each other’s behavior. So, in a sense, you can think about it from an evolutionary perspective that following rules helps us to survive chaos and crisis.
Can you change a culture to make it tighter?
Yes, but you need leadership to tell you this is a really dangerous situation. And you need people, from the bottom up, being willing to sacrifice some of their freedom for rules that keep the whole country safe. And that’s what happened in New Zealand, where they had few cases and few deaths per million, and where they’re really very egalitarian. My interpretation is that people said, “Look, we all have to follow the rules to keep people safe.”
Can you give us some examples of how tight and loose cultures operate when there’s not a pandemic going on?
Tight cultures have a lot of order and discipline — they have a lot less crime and more monitoring of [citizens’] behavior and [more] security personnel and police per capita. Loose cultures struggle with order.
Loose cultures corner the market on openness toward people from different races and religion and are far more creative in terms of idea generation and ability to think outside the box. Tight cultures struggle with openness.
Do you think it’s possible to tighten up as needed?
Yeah, absolutely. I mean I would call that ambidexterity — the ability to tighten up when there’s an objective threat and to loosen up when the threat is diminished. People who don’t like the idea of tightening would need to understand that this is temporary and the quicker we tighten the quicker it will reduce the threat and the quicker we can get back to our freedom-loving behavior.
I imagine people are worried, though, about long-term consequences of tightening up.
We shouldn’t confuse authoritarianism with tightness.
Following rules in terms of wearing masks and social distancing will help get us back faster to opening up the economy and to saving our freedom. And we can also look to other cultures that have been able to open up with greater success, like Taiwan for example. Increased self-regulation and [adherence to] physical distancing, wearing masks and avoiding large crowds allowed the country to keep both the infection and mortality rates low without shutting down the economy entirely. We need to think of this as being situation-specific in terms of following certain types of rules.
It requires using cultural intelligence to understand when we deploy tightness and when we deploy looseness. And my optimistic view is that we’re going to learn how to communicate about threats better, how to nudge people to follow rules, so that people understand the danger but also feel empowered to deal with it.
[In the U.S., for example, we] need to have national unity to cope with collective threat so that we are prepared as a nation to come together like we have in the past during other collective threats, such as after September 11.
Fran Kritz is a health policy reporter based in Washington, D.C., who has contributed to The Washington Post and Kaiser Health News. Find her on Twitter: @fkritz
We’re one step closer to officially moving up hurricane season. The National Hurricane Center announced Tuesday that it would formally start issuing its hurricane season tropical weather outlooks on May 15 this year, bumping it up from the traditional start of hurricane season on June 1. The move comes after a recent spate of early season storms has raked the Atlantic.
Atlantic hurricane season runs from June 1 to November 30. That’s when conditions are most conducive to storm formation owing to warm air and water temperatures. (The Pacific Ocean has its own hurricane season, which covers the same timeframe, but since waters are colder, fewer hurricanes tend to form there than in the Atlantic.)
Storms have begun forming in the Atlantic earlier as ocean and air temperatures have increased due to climate change. Last year, Hurricane Arthur roared to life off the East Coast on May 16. That storm made 2020 the sixth hurricane season in a row to have a storm that formed earlier than the June 1 official start date. While the National Oceanic and Atmospheric Administration won’t be moving up the start of the season just yet, the earlier outlooks address the recent history.
“In the last decade, 10 storms have formed in the weeks before the traditional start of the season, which is a big jump,” said Sean Sublette, a meteorologist at Climate Central, who pointed out that the 1960s through 2010s saw between one and three storms each decade before the June 1 start date on average.
It might be tempting to ascribe this earlier season entirely to climate change warming the Atlantic. But technology also has a role to play, with more observations along the coast as well as satellites that can spot storms far out to sea.
“I would caution that we can’t just go, ‘hah, the planet’s warming, we’ve had to move the entire season!’” Sublette said. “I don’t think there’s solid ground for attribution of how much of one there is over the other. Weather folks can sit around and debate that for a while.”
Earlier storms don’t necessarily mean more harmful ones, either. In fact, hurricanes earlier in the season tend to be weaker than the monsters that form in August and September when hurricane season is at its peak. But regardless of their strength, these earlier storms have generated discussion inside the NHC on whether to move up the official start date for the season, when the agency usually puts out two reports per day on hurricane activity. Tuesday’s step is not an official announcement of this decision, but an acknowledgement of the increased attention on early hurricanes.
“I would say that [Tuesday’s announcement] is the National Hurricane Center being proactive,” Sublette said. “Like hey, we know that the last few years it’s been a little busier in May than we’ve seen in the past five decades, and we know there is an awareness now, so we’re going to start issuing these reports early.”
While the jury is still out on whether climate change is pushing the season earlier, research has shown that the strongest hurricanes are becoming more common, and that climate change is likely playing a role. A study published last year found the odds of a storm becoming a major hurricane—those Category 3 or stronger—have increased 49% in the basin since satellite monitoring began in earnest four decades ago. And when storms make landfall, sea level rise allows them to do more damage. So regardless of whether climate change is pushing Atlantic hurricane season earlier or not, the risks are increasing. Now, at least, we’ll have better warnings before early storms do hit.