Tag archive: Modeling

How science is trying to predict 'black swan' events (BBC News Brasil)

bbc.com


Analía Llorente

BBC News Mundo

4 October 2021

Scene from the film 'Black Swan'

What do the rise of the internet, the September 11, 2001 attacks and the 2008 economic crisis have in common?

They were extremely rare and surprising events that had a strong impact on history.

Events of this kind are often called “black swans”.

Some argue that the recent covid-19 pandemic can also be considered one, but not everyone agrees.

The “black swan theory” was developed in 2007 by Nassim Taleb, the Lebanese-American professor, writer and former trader.

It has three components, as Taleb himself explained in an article in the American newspaper The New York Times that same year:

– First, it is an outlier: it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility.

– Second, it carries an extreme impact.

– Third, despite its outlier status, human nature leads us to concoct explanations for its occurrence after the fact, making it seem explainable and predictable.

Taleb’s thesis is usually associated with economics, but it applies to any field.

And since the consequences tend to be catastrophic, it is important to accept that a “black swan” event is possible — which is why it is necessary to have a plan for dealing with one.

In short, the “black swan” is a metaphor for something unpredictable and very strange, yet not impossible.

Why are they called that?

In the late 17th century, European ships set out on the adventure of exploring Australia.

In 1697, while sailing the waters of an unknown river in the southwest of Western Australia, the Dutch captain Willem de Vlamingh spotted several black swans, possibly becoming the first European to observe them.

Vlamingh consequently named the river Zwaanenrivier (Swan River, in Dutch) after the great number of black swans there.

It was an unexpected and novel event. Until then, science had only recorded white swans.

The first known reference to the term “black swan” in the sense of something rare comes from a line by the Roman poet Decimus Junius Juvenal (60-128).

Desperate to find a wife with all the “right qualities” of his day, he wrote in Latin that such a woman was rara avis in terris, nigroque simillima cygno (“a rare bird in these lands, very much like a black swan”), as the Oxford dictionary details.

That is because at the time, and for roughly the next 1,600 years, black swans, as far as Europeans knew, did not exist.

Predicting 'black swans'

A group of scientists at Stanford University, in the United States, is working to predict the unpredictable.

That is, to anticipate “black swans” — not the birds, but the strange events that happen in history.

Although their primary analysis was based on three different natural environments, the computational method they created can be applied to any field, including economics and politics.

“By analyzing long-term data from three ecosystems, we were able to show that fluctuations occurring across different biological species are statistically the same across different ecosystems,” said Samuel Bray, a research assistant in the lab of Bo Wang, professor of bioengineering at Stanford University.

“This suggests there are certain universal processes we can draw on to predict this kind of extreme behavior,” Bray added, as published on the university’s website.

To develop the forecasting method, the researchers looked for biological systems that had experienced “black swan” events and examined the contexts in which those events occurred.

They then drew on ecosystems that had been closely monitored for many years.

The examples included an eight-year study of Baltic Sea plankton, with species levels measured twice a week; carbon measurements from a Harvard University forest in the US, collected every 30 minutes since 1991; and monthly measurements of barnacles, algae and mussels on the New Zealand coast taken over more than 20 years, as detailed in the study published in the journal PLOS Computational Biology.

The researchers applied to these data sets the physics behind avalanches and earthquakes, which, like “black swans”, exhibit extreme, sudden, short-lived behavior.

From this analysis, the researchers developed a forecasting method for “black swan” events that is flexible across species and time spans and can also work with data that are far less detailed and messier.

They were subsequently able to accurately forecast extreme events that occurred in those systems.
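The article does not spell out the mathematics, but the general recipe it describes (measure fluctuations around a trend, fit their heavy tail, and extrapolate beyond anything observed so far) can be sketched roughly as follows. This is a minimal illustration in Python, not the authors' code: the data are synthetic, and the tail fit is a standard Hill estimator standing in for whatever the Stanford team actually used.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for an ecosystem time series (e.g., twice-weekly
# plankton abundance); real inputs would come from the monitored data.
series = rng.lognormal(mean=0.0, sigma=1.0, size=2000)

# Fluctuations relative to a running trend.
trend = np.convolve(series, np.ones(50) / 50, mode="same")
fluct = np.abs(series - trend)

# Hill estimator of the power-law tail exponent of the k largest fluctuations.
k = 100
tail = np.sort(fluct)[-k:]                     # tail[0] is the tail threshold
alpha = 1.0 / np.mean(np.log(tail / tail[0]))

# Extrapolate past the historical record: the chance of a fluctuation
# twice as large as the biggest one ever observed.
p_exceed = (k / len(fluct)) * (2 * tail[-1] / tail[0]) ** (-alpha)
print(f"tail exponent ~ {alpha:.2f}; P(fluctuation > 2*max) ~ {p_exceed:.2e}")
```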

Until now, “methods have relied on what we have already seen to predict what might happen in the future, which is why they tend to miss ‘black swan’ events,” says Wang.

But this new mechanism is different, according to the Stanford professor, “because it assumes we are only seeing part of the world.”

“It extrapolates a little of what is missing, and that helps enormously in terms of forecasting,” he adds.

So could “black swans” be detected in other areas, such as finance or economics?

“We applied our method to stock market fluctuations and it worked very well,” Wang told BBC News Mundo, the BBC’s Spanish-language news service, by email.

The researchers analyzed the Nasdaq, the Dow Jones Industrial Average and the S&P 500.

“Although the market’s main trend is long-term exponential growth, the fluctuations around that trend follow the same average trajectories and scales we saw in the ecological systems,” he explains.

But “although the similarities between stock market and ecological fluctuations are interesting, our forecasting method is most useful in cases where data are scarce and fluctuations often exceed the historical record (which is not the case for the stock market),” Wang cautions.

So we will have to stay alert to find out whether the next “black swan” catches us by surprise… or perhaps not.

Humans Have Broken One of The Natural Power Laws Governing Earth’s Oceans (Science Alert)

sciencealert.com

Tessa Koumoundouros – 12 NOVEMBER 2021

(Má Li Huang Mù/EyeEm/Getty Images)

Just as with planetary or molecular systems, mathematical laws can be found that accurately describe and allow for predictions in chaotically dynamic ecosystems too – at least, if we zoom out enough.

But as humans are now having such a destructive impact on the life we share our planet with, we’re throwing even these once natural universalities into disarray.

“Humans have impacted the ocean in a more dramatic fashion than merely capturing fish,” explained marine ecologist Ryan Heneghan from the Queensland University of Technology.

“It seems that we have broken the size spectrum – one of the largest power law distributions known in nature.”

The power law can be used to describe many things in biology, from patterns of cascading neural activity to the foraging journeys of various species. It holds when one quantity varies as a fixed power of another, so that a proportional change in one always corresponds to a proportional change in the other, whatever the starting point.

In the case of one particular power law, first described in a 1972 paper led by Raymond W. Sheldon and now known as the ‘Sheldon spectrum’, the two quantities are an organism’s body size and its abundance: the larger organisms get, the consistently fewer individuals there are within a given size class.

For example, krill are about 12 orders of magnitude (a factor of 10¹²) smaller than tuna, but they are also about 12 orders of magnitude more abundant. So, hypothetically, all the tuna flesh in the world combined (the tuna biomass) is roughly the same amount (to within an order of magnitude, at least) as all the krill biomass in the world.
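One common way to formalize the Sheldon spectrum is that abundance density falls off roughly as the inverse square of body mass, which makes the total biomass in every logarithmic size bin the same. A quick numerical check of that bookkeeping (Python; the normalization constant is arbitrary and the size range purely illustrative):

```python
import numpy as np

C = 1.0  # arbitrary normalization of the abundance density

def biomass_in_bin(m_lo):
    """Biomass in the logarithmic size bin [m_lo, 10*m_lo], assuming
    abundance density n(m) = C * m**-2 (a Sheldon-like spectrum)."""
    m = np.logspace(np.log10(m_lo), np.log10(10 * m_lo), 10_000)
    f = m * (C * m**-2.0)  # mass x abundance density = biomass density
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(m))  # trapezoidal integral

for m_lo in [1e-12, 1e-6, 1.0, 1e6]:  # from microbe-ish to whale-ish masses
    print(f"bin starting at {m_lo:g}: biomass = {biomass_in_bin(m_lo):.3f}")
# Every bin holds the same biomass, C * ln(10) = 2.303..., i.e. a flat spectrum.
```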

Since it was first proposed in 1972, scientists had only tested this natural scaling pattern within limited groups of species in aquatic environments, at relatively small scales. From marine plankton to freshwater fish, the pattern held true – the biomass of larger, less abundant species was roughly equivalent to the biomass of the smaller yet more abundant species.

Now, Max Planck Institute ecologist Ian Hatton and colleagues have looked to see if this law also reflects what’s happening on a global scale. 

“One of the biggest challenges to comparing organisms spanning bacteria to whales is the enormous differences in scale,” says Hatton.

“The ratio of their masses is equivalent to that between a human being and the entire Earth. We estimated organisms at the small end of the scale from more than 200,000 water samples collected globally, but larger marine life required completely different methods.”

Using historical data, the team confirmed the Sheldon spectrum fit this relationship globally for pre-industrial oceanic conditions (before 1850). Across 12 groups of sea life, including bacteria, algae, zooplankton, fish and mammals, over 33,000 grid points of the global ocean, roughly equal amounts of biomass occurred in each size category of organism.

“We were amazed to see that each order of magnitude size class contains approximately 1 gigaton of biomass globally,” says McGill University geoscientist Eric Galbraith.

""(Ian Hatton et al, Science Advances, 2021)

Hatton and team discussed possible explanations for this, including limitations set by factors such as predator-prey interactions, metabolism, growth rates, reproduction and mortality. Many of these factors also scale with an organism’s size. But they’re all speculation at this point.

“The fact that marine life is evenly distributed across sizes is remarkable,” said Galbraith. “We don’t understand why it would need to be this way – why couldn’t there be much more small things than large things? Or an ideal size that lies in the middle? In that sense, the results highlight how much we don’t understand about the ecosystem.”

There were two exceptions to the rule however, at both extremes of the size scale examined. Bacteria were more abundant than the law predicted, and whales far less. Again, why is a complete mystery.

The researchers then compared these findings to the same analysis applied to present day samples and data. While the power law still mostly applied, there was a stark disruption to its pattern evident with larger organisms.

“Human impacts appear to have significantly truncated the upper one-third of the spectrum,” the team wrote in their paper. “Humans have not merely replaced the ocean’s top predators but have instead, through the cumulative impact of the past two centuries, fundamentally altered the flow of energy through the ecosystem.”

""(Ian Hatton et al, Science Advances, 2021)

While fish make up less than 3 percent of annual human food consumption, the team found we’ve reduced fish and marine mammal biomass by 60 percent since the 1800s. It’s even worse for Earth’s most giant living animals – historical hunting has left us with a 90 percent reduction of whales.

This really highlights the inefficiency of industrial fishing, Galbraith notes. Our current strategies waste orders of magnitude more biomass, and the energy it holds, than we actually consume. Nor have we replaced the role that lost biomass once played, despite humans now being one of the largest vertebrate species by biomass.

Around 2.7 gigatonnes have been lost from the largest species groups in the oceans, whereas humans make up around 0.4 gigatonnes. Further work is needed to understand how this massive loss in biomass affects the oceans, the team wrote.

“The good news is that we can reverse the imbalance we’ve created, by reducing the number of active fishing vessels around the world,” Galbraith says. “Reducing overfishing will also help make fisheries more profitable and sustainable – it’s a potential win-win, if we can get our act together.”

Their research was published in Science Advances.

A real-time revolution will up-end the practice of macroeconomics (The Economist)

economist.com

The Economist Oct 23rd 2021


DOES ANYONE really understand what is going on in the world economy? The pandemic has made plenty of observers look clueless. Few predicted $80 oil, let alone fleets of container ships waiting outside Californian and Chinese ports. As covid-19 let rip in 2020, forecasters overestimated how high unemployment would be by the end of the year. Today prices are rising faster than expected and nobody is sure if inflation and wages will spiral upward. For all their equations and theories, economists are often fumbling in the dark, with too little information to pick the policies that would maximise jobs and growth.

Yet, as we report this week, the age of bewilderment is starting to give way to greater enlightenment. The world is on the brink of a real-time revolution in economics, as the quality and timeliness of information are transformed. Big firms from Amazon to Netflix already use instant data to monitor grocery deliveries and how many people are glued to “Squid Game”. The pandemic has led governments and central banks to experiment, from monitoring restaurant bookings to tracking card payments. The results are still rudimentary, but as digital devices, sensors and fast payments become ubiquitous, the ability to observe the economy accurately and speedily will improve. That holds open the promise of better public-sector decision-making—as well as the temptation for governments to meddle.

The desire for better economic data is hardly new. America’s GNP estimates date to 1934 and initially came with a 13-month time lag. In the 1950s a young Alan Greenspan monitored freight-car traffic to arrive at early estimates of steel production. Ever since Walmart pioneered supply-chain management in the 1980s private-sector bosses have seen timely data as a source of competitive advantage. But the public sector has been slow to reform how it works. The official figures that economists track—think of GDP or employment—come with lags of weeks or months and are often revised dramatically. Productivity takes years to calculate accurately. It is only a slight exaggeration to say that central banks are flying blind.

Bad and late data can lead to policy errors that cost millions of jobs and trillions of dollars in lost output. The financial crisis would have been a lot less harmful had the Federal Reserve cut interest rates to near zero in December 2007, when America entered recession, rather than in December 2008, when economists at last saw it in the numbers. Patchy data about a vast informal economy and rotten banks have made it harder for India’s policymakers to end their country’s lost decade of low growth. The European Central Bank wrongly raised interest rates in 2011 amid a temporary burst of inflation, sending the euro area back into recession. The Bank of England may be about to make a similar mistake today.

The pandemic has, however, become a catalyst for change. Without the time to wait for official surveys to reveal the effects of the virus or lockdowns, governments and central banks have experimented, tracking mobile phones, contactless payments and the real-time use of aircraft engines. Instead of locking themselves in their studies for years writing the next “General Theory”, today’s star economists, such as Raj Chetty at Harvard University, run well-staffed labs that crunch numbers. Firms such as JPMorgan Chase have opened up treasure chests of data on bank balances and credit-card bills, helping reveal whether people are spending cash or hoarding it.

These trends will intensify as technology permeates the economy. A larger share of spending is shifting online and transactions are being processed faster. Real-time payments grew by 41% in 2020, according to McKinsey, a consultancy (India registered 25.6bn such transactions). More machines and objects are being fitted with sensors, including individual shipping containers that could make sense of supply-chain blockages. Govcoins, or central-bank digital currencies (CBDCs), which China is already piloting and over 50 other countries are considering, might soon provide a goldmine of real-time detail about how the economy works.

Timely data would cut the risk of policy cock-ups—it would be easier to judge, say, if a dip in activity was becoming a slump. And the levers governments can pull will improve, too. Central bankers reckon it takes 18 months or more for a change in interest rates to take full effect. But Hong Kong is trying out cash handouts in digital wallets that expire if they are not spent quickly. CBDCs might allow interest rates to fall deeply negative. Good data during crises could let support be precisely targeted; imagine loans only for firms with robust balance-sheets but a temporary liquidity problem. Instead of wasteful universal welfare payments made through social-security bureaucracies, the poor could enjoy instant income top-ups if they lost their job, paid into digital wallets without any paperwork.

The real-time revolution promises to make economic decisions more accurate, transparent and rules-based. But it also brings dangers. New indicators may be misinterpreted: is a global recession starting or is Uber just losing market share? They are not as representative or free from bias as the painstaking surveys by statistical agencies. Big firms could hoard data, giving them an undue advantage. Private firms such as Facebook, which launched a digital wallet this week, may one day have more insight into consumer spending than the Fed does.

Know thyself

The biggest danger is hubris. With a panopticon of the economy, it will be tempting for politicians and officials to imagine they can see far into the future, or to mould society according to their preferences and favour particular groups. This is the dream of the Chinese Communist Party, which seeks to engage in a form of digital central planning.

In fact no amount of data can reliably predict the future. Unfathomably complex, dynamic economies rely not on Big Brother but on the spontaneous behaviour of millions of independent firms and consumers. Instant economics isn’t about clairvoyance or omniscience. Instead its promise is prosaic but transformative: better, timelier and more rational decision-making. ■

economist.com

Enter third-wave economics

Oct 23rd 2021


AS PART OF his plan for socialism in the early 1970s, Salvador Allende created Project Cybersyn. The Chilean president’s idea was to offer bureaucrats unprecedented insight into the country’s economy. Managers would feed information from factories and fields into a central database. In an operations room bureaucrats could see if production was rising in the metals sector but falling on farms, or what was happening to wages in mining. They would quickly be able to analyse the impact of a tweak to regulations or production quotas.

Cybersyn never got off the ground. But something curiously similar has emerged in Salina, a small city in Kansas. Salina311, a local paper, has started publishing a “community dashboard” for the area, with rapid-fire data on local retail prices, the number of job vacancies and more—in effect, an electrocardiogram of the economy.

What is true in Salina is true for a growing number of national governments. When the pandemic started last year bureaucrats began studying dashboards of “high-frequency” data, such as daily airport passengers and hour-by-hour credit-card spending. In recent weeks they have turned to new high-frequency sources, to get a better sense of where labour shortages are worst or to estimate which commodity price is next in line to soar. Economists have seized on these new data sets, producing a research boom (see chart 1). In the process, they are influencing policy as never before.

This fast-paced economics involves three big changes. First, it draws on data that are not only abundant but also directly relevant to real-world problems. When policymakers are trying to understand what lockdowns do to leisure spending they look at live restaurant reservations; when they want to get a handle on supply-chain bottlenecks they look at day-by-day movements of ships. Troves of timely, granular data are to economics what the microscope was to biology, opening a new way of looking at the world.

Second, the economists using the data are keener on influencing public policy. More of them do quick-and-dirty research in response to new policies. Academics have flocked to Twitter to engage in debate.

And, third, this new type of economics involves little theory. Practitioners claim to let the information speak for itself. Raj Chetty, a Harvard professor and one of the pioneers, has suggested that controversies between economists should be little different from disagreements among doctors about whether coffee is bad for you: a matter purely of evidence. All this is causing controversy among dismal scientists, not least because some, such as Mr Chetty, have done better from the shift than others: a few superstars dominate the field.

Their emerging discipline might be called “third wave” economics. The first wave emerged with Adam Smith and the “Wealth of Nations”, published in 1776. Economics mainly involved books or papers written by one person, focusing on some big theoretical question. Smith sought to tear down the monopolistic habits of 18th-century Europe. In the 20th century John Maynard Keynes wanted people to think differently about the government’s role in managing the economic cycle. Milton Friedman aimed to eliminate many of the responsibilities that politicians, following Keynes’s ideas, had arrogated to themselves.

All three men had a big impact on policies—as late as 1850 Smith was quoted 30 times in Parliament—but in a diffuse way. Data were scarce. Even by the 1970s more than half of economics papers focused on theory alone, suggests a study published in 2012 by Daniel Hamermesh, an economist.

That changed with the second wave of economics. By 2011 purely theoretical papers accounted for only 19% of publications. The growth of official statistics gave wonks more data to work with. More powerful computers made it easier to spot patterns and ascribe causality (this year’s Nobel prize was awarded for the practice of identifying cause and effect). The average number of authors per paper rose, as the complexity of the analysis increased (see chart 2). Economists had greater involvement in policy: rich-world governments began using cost-benefit analysis for infrastructure decisions from the 1950s.

Second-wave economics nonetheless remained constrained by data. Most national statistics are published with lags of months or years. “The traditional government statistics weren’t really all that helpful—by the time they came out, the data were stale,” says Michael Faulkender, an assistant treasury secretary in Washington at the start of the pandemic. The quality of official local economic data is mixed, at best; they do a poor job of covering the housing market and consumer spending. National statistics came into being at a time when the average economy looked more industrial, and less service-based, than it does now. The Standard Industrial Classification, introduced in 1937-38 and still in use with updates, divides manufacturing into 24 subsections, but the entire financial industry into just three.

The mists of time

Especially in times of rapid change, policymakers have operated in a fog. “If you look at the data right now…we are not in what would normally be characterised as a recession,” argued Edward Lazear, then chairman of the White House Council of Economic Advisers, in May 2008. Five months later, after Lehman Brothers had collapsed, the IMF noted that America was “not necessarily” heading for a deep recession. In fact America had entered a recession in December 2007. In 2007-09 there was no surge in economics publications. Economists’ recommendations for policy were mostly based on judgment, theory and a cursory reading of national statistics.

The gap between official data and what is happening in the real economy can still be glaring. Walk around a Walmart in Kansas and many items, from pet food to bottled water, are in short supply. Yet some national statistics fail to show such problems. Dean Baker of the Centre for Economic and Policy Research, using official data, points out that American real inventories, excluding cars and farm products, are barely lower than before the pandemic.

There were hints of an economics third wave before the pandemic. Some economists were finding new, extremely detailed streams of data, such as anonymised tax records and location information from mobile phones. The analysis of these giant data sets requires the creation of what are in effect industrial labs, teams of economists who clean and probe the numbers. Susan Athey, a trailblazer in applying modern computational methods in economics, has 20 or so non-faculty researchers at her Stanford lab (Mr Chetty’s team boasts similar numbers). Of the 20 economists with the most cited new work during the pandemic, three run industrial labs.

More data sprouted from firms. Visa and Square record spending patterns, Apple and Google track movements, and security companies know when people go in and out of buildings. “Computers are in the middle of every economic arrangement, so naturally things are recorded,” says Jon Levin of Stanford’s Graduate School of Business. Jamie Dimon, the boss of JPMorgan Chase, a bank, is an unlikely hero of the emergence of third-wave economics. In 2015 he helped set up an institute at his bank which tapped into data from its network to analyse questions about consumer finances and small businesses.

The Brexit referendum of June 2016 was the first big event when real-time data were put to the test. The British government and investors needed to get a sense of this unusual shock long before Britain’s official GDP numbers came out. They scraped web pages for telltale signs such as restaurant reservations and the number of supermarkets offering discounts—and concluded, correctly, that though the economy was slowing, it was far from the catastrophe that many forecasters had predicted.
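A stylized version of that kind of nowcast, in which a handful of scraped weekly indicators are standardized against their pre-shock history and averaged, sign-adjusted, into a composite activity index, might look like the sketch below (Python; the series names and numbers are invented for illustration, and real trackers used many more inputs):

```python
import numpy as np

# Invented weekly indicator levels around a shock (weeks 5-7).
indicators = {
    "restaurant_reservations": np.array([100, 102, 98, 101, 95, 93, 94]),
    "supermarkets_discounting": np.array([40, 41, 39, 40, 47, 50, 49]),
}
# More discounting signals weakness, so that series enters with a negative sign.
signs = {"restaurant_reservations": +1, "supermarkets_discounting": -1}

BASELINE_WEEKS = 4  # pre-shock weeks used to standardize each series

def zscore(x):
    base = x[:BASELINE_WEEKS]
    return (x - base.mean()) / base.std(ddof=1)

# Composite index: sign-adjusted average of the standardized series.
composite = np.mean(
    [signs[name] * zscore(series) for name, series in indicators.items()],
    axis=0,
)
print(np.round(composite, 2))
# Clearly negative post-shock values read as "slowing", not "collapsing".
```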

Real-time data might have remained a niche pursuit for longer were it not for the pandemic. Chinese firms have long produced granular high-frequency data on everything from cinema visits to the number of glasses of beer that people are drinking daily. Beer-and-movie statistics are a useful cross-check against sometimes dodgy official figures. China-watchers turned to them in January 2020, when lockdowns began in Hubei province. The numbers showed that the world’s second-largest economy was heading for a slump. And they made it clear to economists elsewhere how useful such data could be.

Vast and fast

In the early days of the pandemic Google started releasing anonymised data on people’s physical movements; this has helped researchers produce a day-by-day measure of the severity of lockdowns (see chart 3). OpenTable, a booking platform, started publishing daily information on restaurant reservations. America’s Census Bureau quickly introduced a weekly survey of households, asking them questions ranging from their employment status to whether they could afford to pay the rent.

In May 2020 Jose Maria Barrero, Nick Bloom and Steven Davis, three economists, began a monthly survey of American business practices and work habits. Working-age Americans are paid to answer questions on how often they plan to visit the office, say, or how they would prefer to greet a work colleague. “People often complete a survey during their lunch break,” says Mr Bloom, of Stanford University. “They sit there with a sandwich, answer some questions, and that pays for their lunch.”

Demand for research to understand a confusing economic situation jumped. The first analysis of America’s $600 weekly boost to unemployment insurance, implemented in March 2020, was published in weeks. The British government knew by October 2020 that a scheme to subsidise restaurant attendance in August 2020 had probably boosted covid infections. Many apparently self-evident things about the pandemic—that the economy collapsed in March 2020, that the poor have suffered more than the rich, or that the shift to working from home is turning out better than expected—only seem obvious because of rapid-fire economic research.

It is harder to quantify the policy impact. Some economists scoff at the notion that their research has influenced politicians’ pandemic response. Many studies using real-time data suggested that the Paycheck Protection Programme, an effort to channel money to American small firms, was doing less good than hoped. Yet small-business lobbyists ensured that politicians did not get rid of it for months. Tyler Cowen, of George Mason University, points out that the most significant contribution of economists during the pandemic involved recommending early pledges to buy vaccines—based on older research, not real-time data.

Still, Mr Faulkender says that the special support for restaurants that was included in America’s stimulus was influenced by a weak recovery in the industry seen in the OpenTable data. Research by Mr Chetty in early 2021 found that stimulus cheques sent in December boosted spending by lower-income households, but not much for richer households. He claims this informed the decision to place stronger income limits on the stimulus cheques sent in March.

Shaping the economic conversation

As for the Federal Reserve, in May 2020 the Dallas and New York regional Feds and James Stock, a Harvard economist, created an activity index using data from SafeGraph, a data provider that tracks mobility using mobile-phone pings. The St Louis Fed used data from Homebase to track employment numbers daily. Both showed shortfalls of economic activity in advance of official data. This led the Fed to communicate its doveish policy stance faster.

Speedy data also helped frame debate. Everyone realised the world was in a deep recession much sooner than they had in 2007-09. In the IMF’s overviews of the global economy in 2009, 40% of the papers cited had been published in 2008-09. In the overview published in October 2020, by contrast, over half the citations were for papers published that year.

The third wave of economics has been better for some practitioners than others. As lockdowns began, many male economists found themselves at home with no teaching responsibilities and more time to do research. Female ones often picked up the slack of child care. A paper in Covid Economics, a rapid-fire journal, finds that female authors accounted for 12% of economics working-paper submissions during the pandemic, compared with 20% before. Economists lucky enough to have researched topics before the pandemic which became hot, from home-working to welfare policy, were suddenly in demand.

There are also deeper shifts in the value placed on different sorts of research. The Economist has examined rankings of economists from IDEAS RePEC, a database of research, and citation data from Google Scholar. We divided economists into three groups: “lone wolves” (who publish with fewer than one unique co-author per paper on average); “collaborators” (those who tend to work with more than one unique co-author per paper, usually two to four people); and “lab leaders” (researchers who run a large team of dedicated assistants). We then looked at the top ten economists in each group, as measured by RePEC author rankings for the past ten years.
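In code, the grouping rule The Economist describes can be written down directly. The sketch below is hypothetical (Python): the lone-wolf cutoff follows the article's definition, while the lab-leader threshold is an assumption, since the piece defines that group by team size rather than by a co-author count.

```python
# Hypothetical classifier for the three groups described above.
# Each paper is (author, co-authors on that paper); the sample data are invented.
papers = [
    ("A. Lone",   set()),
    ("A. Lone",   {"B. Once"}),
    ("C. Collab", {"D. Peer", "E. Peer"}),
    ("C. Collab", {"F. Peer", "G. Peer", "H. Peer"}),
]

def classify(author, papers, lab_threshold=10):
    coauthor_sets = [co for a, co in papers if a == author]
    unique = set().union(*coauthor_sets)
    per_paper = len(unique) / len(coauthor_sets)
    if per_paper < 1:
        return "lone wolf"          # fewer than one unique co-author per paper
    if per_paper > lab_threshold:   # assumed stand-in for running a large team
        return "lab leader"
    return "collaborator"

for name in ("A. Lone", "C. Collab"):
    print(name, "->", classify(name, papers))  # lone wolf, collaborator
```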

Collaborators performed far ahead of the other two groups during the pandemic (see chart 4). Lone wolves did worst: working with large data sets benefits from a division of labour. Why collaborators did better than lab leaders is less clear. They may have been more nimble in working with those best suited for the problems at hand; lab leaders are stuck with a fixed group of co-authors and assistants.

The most popular types of research highlight another aspect of the third wave: its usefulness for business. Scott Baker, another economist, and Messrs Bloom and Davis—three of the top four authors during the pandemic compared with the year before—are all “collaborators” and use daily newspaper data to study markets. Their uncertainty index has been used by hedge funds to understand the drivers of asset prices. The research by Messrs Bloom and Davis on working from home has also gained attention from businesses seeking insight on the transition to remote work.

But does it work in theory?

Not everyone likes where the discipline is going. When economists say that their fellows are turning into data scientists, it is not meant as a compliment. A kinder interpretation is that the shift to data-heavy work is correcting a historical imbalance. “The most important problem with macro over the past few decades has been that it has been too theoretical,” says Jón Steinsson of the University of California, Berkeley, in an essay published in July. A better balance with data improves theory. Half of the recent Nobel prize went for the application of new empirical methods to labour economics; the other half was for the statistical theory around such methods.

Some critics question the quality of many real-time sources. High-frequency data are less accurate at estimating levels (for example, the total value of GDP) than they are at estimating changes, and in particular turning-points (such as when growth turns into recession). In a recent review of real-time indicators Samuel Tombs of Pantheon Macroeconomics, a consultancy, pointed out that OpenTable data tended to exaggerate the rebound in restaurant attendance last year.

Others have worries about the new incentives facing economists. Researchers now race to post a working paper with America’s National Bureau of Economic Research in order to stake their claim to an area of study or to influence policymakers. The downside is that consumers of fast-food academic research often treat it as if it is as rigorous as the slow-cooked sort—papers which comply with the old-fashioned publication process involving endless seminars and peer review. A number of papers using high-frequency data which generated lots of clicks, including one which claimed that a motorcycle rally in South Dakota had caused a spike in covid cases, have since been called into question.

Whatever the concerns, the pandemic has given economists a new lease of life. During the Chilean coup of 1973 members of the armed forces broke into Cybersyn’s operations room and smashed up the slides of graphs—not only because it was Allende’s creation, but because the idea of an electrocardiogram of the economy just seemed a bit weird. Third-wave economics is still unusual, but ever less odd. ■

Physics meets democracy in this modeling study (Science Daily)

A new paper explores how the opinions of an electorate may be reflected in a mathematical model ‘inspired by models of simple magnetic systems’

Date: October 8, 2021

Source: University at Buffalo

Summary: A study leverages concepts from physics to model how campaign strategies influence the opinions of an electorate in a two-party system.


A study in the journal Physica A leverages concepts from physics to model how campaign strategies influence the opinions of an electorate in a two-party system.

Researchers created a numerical model that describes how external influences, modeled as a random field, shift the views of potential voters as they interact with each other in different political environments.

The model accounts for the behavior of conformists (people whose views align with the views of the majority in a social network); contrarians (people whose views oppose the views of the majority); and inflexibles (people who will not change their opinions).

“The interplay between these behaviors allows us to create electorates with diverse behaviors interacting in environments with different levels of dominance by political parties,” says first author Mukesh Tiwari, PhD, associate professor at the Dhirubhai Ambani Institute of Information and Communication Technology.

“We are able to model the behavior and conflicts of democracies, and capture different types of behavior that we see in elections,” says senior author Surajit Sen, PhD, professor of physics in the University at Buffalo College of Arts and Sciences.

Sen and Tiwari conducted the study with Xiguang Yang, a former UB physics student. Jacob Neiheisel, PhD, associate professor of political science at UB, provided feedback to the team, but was not an author of the research. The study was published online in Physica A in July and will appear in the journal’s Nov. 15 volume.

The model described in the paper has broad similarities to the random field Ising model, and “is inspired by models of simple magnetic systems,” Sen says.
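The paper's exact equations aren't reproduced here, but a minimal model in that family (binary opinions on a lattice, with conformists, contrarians and inflexibles updating under a random external field that stands in for campaign influence) can be sketched as follows. All parameters are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50                                            # 50x50 grid of voters
spins = rng.choice([-1, 1], size=(N, N))          # opinion: party -1 or +1
types = rng.choice(["conformist", "contrarian", "inflexible"],
                   size=(N, N), p=[0.8, 0.1, 0.1])
field = 0.5 * rng.normal(size=(N, N))             # random "campaign" field

def neighbour_sum(s, i, j):
    """Sum of the four nearest-neighbour opinions (periodic boundaries)."""
    return (s[(i - 1) % N, j] + s[(i + 1) % N, j] +
            s[i, (j - 1) % N] + s[i, (j + 1) % N])

for _ in range(20 * N * N):                       # asynchronous updates
    i, j = rng.integers(N, size=2)
    if types[i, j] == "inflexible":
        continue                                  # never changes opinion
    pull = neighbour_sum(spins, i, j) + field[i, j]
    if types[i, j] == "contrarian":
        pull = -pull                              # opposes the local majority
    spins[i, j] = 1 if pull > 0 else -1           # zero-temperature dynamics

print("vote share for party +1:", (spins == 1).mean())
```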

The team used this model to explore a variety of scenarios involving different types of political environments and electorates.

Among key findings, as the authors write in the abstract: “In an electorate with only conformist agents, short-duration high-impact campaigns are highly effective. … In electorates with both conformist and contrarian agents and varying level(s) of dominance due to local factors, short-term campaigns are effective only in the case of fragile dominance of a single party. Strong local dominance is relatively difficult to influence and long-term campaigns with strategies aimed to impact local level politics are seen to be more effective.”

“I think it’s exciting that physicists are thinking about social dynamics. I love the big tent,” Neiheisel says, noting that one advantage of modeling is that it could enable researchers to explore how opinions might change over many election cycles — the type of longitudinal data that’s very difficult to collect.

Mathematical modeling has some limitations: “The real world is messy, and I think we should embrace that to the extent that we can, and models don’t capture all of this messiness,” Neiheisel says.

But Neiheisel was excited when the physicists approached him to talk about the new paper. He says the model provides “an interesting window” into processes associated with opinion dynamics and campaign effects, accurately capturing a number of effects in a “neat way.”

“The complex dynamics of strongly interacting, nonlinear and disordered systems have been a topic of interest for a long time,” Tiwari says. “There is a lot of merit in studying social systems through mathematical and computational models. These models provide insight into short- and long-term behavior. However, such endeavors can only be successful when social scientists and physicists come together to collaborate.”



Journal Reference:

  1. Mukesh Tiwari, Xiguang Yang, Surajit Sen. Modeling the nonlinear effects of opinion kinematics in elections: A simple Ising model with random field based study. Physica A: Statistical Mechanics and its Applications, 2021; 582: 126287 DOI: 10.1016/j.physa.2021.126287

5 Economists Redefining… Everything. Oh Yes, And They’re Women (Forbes)

forbes.com

Avivah Wittenberg-Cox

May 31, 2020, 09:56am EDT


Five female economists.
From top left: Mariana Mazzucato, Carlota Perez, Kate Raworth, Stephanie Kelton, Esther Duflo. 20-first

Few economists become household names. Last century, it was John Maynard Keynes or Milton Friedman. Today, Thomas Piketty has become the economists’ poster-boy. Yet listen to the buzz, and it is five female economists who deserve our attention. They are revolutionising their field by questioning the meaning of everything from ‘value’ and ‘debt’ to ‘growth’ and ‘GDP.’ Esther Duflo, Stephanie Kelton, Mariana Mazzucato, Carlota Perez and Kate Raworth are united in one thing: their amazement at the way economics has been defined and debated to date. Their incredulity is palpable.

It reminds me of many women I’ve seen emerge into power over the past decade. Like Rebecca Henderson, a Management and Strategy professor at Harvard Business School and author of the new Reimagining Capitalism in a World on Fire. “It’s odd to finally make it to the inner circle,” she says, “and discover just how strangely the world is being run.” When women finally make it to the pinnacle of many professions, they often discover a world more wart-covered frog than handsome prince. Like Dorothy in The Wizard of Oz, when they get a glimpse behind the curtain, they discover the machinery of power can be more bluster than substance. As newcomers to the game, they can often see this more clearly than the long-term players. Henderson cites Tom Toro’s cartoon as her mantra. A group in rags sit around a fire with the ruins of civilisation in the background. “Yes, the planet got destroyed” says a man in a disheveled suit, “but for a beautiful moment in time we created a lot of value for shareholders.”

You get the same sense when you listen to the female economists throwing themselves into the still very male-dominated economics field. A kind of collective ‘you’re kidding me, right?’ These five female economists are letting the secret out – and inviting people to flip the priorities. A growing number are listening – even the Pope (see below).

All question concepts long considered sacrosanct. Here are four messages they share:

Get Over It – Challenge the Orthodoxy

Described as “one of the most forward-thinking economists of our times,” Mariana Mazzucato is foremost among the flame throwers. A professor at University College London and the Founder/Director of the UCL Institute for Innovation and Public Purpose, she asks fundamental questions about how ‘value’ has been defined, who decides what that means, and who gets to measure it. Her TED talk, provocatively titled “What is economic value? And who creates it?”, lays down the gauntlet. “If some people are value creators,” she asks, “what does that make everyone else? The couch potatoes? The value extractors? The value destroyers?” She wants to make economics explicitly serve the people, rather than explain their servitude.

Stephanie Kelton takes on our approach to debt and spoofs the simplistic metaphors, like comparing national income and expenditure to ‘family budgets’ in an attempt to prove how dangerous debt is. In her upcoming book, The Deficit Myth (June 2020), she argues they are not at all similar; what household can print additional money, or set interest rates? Debt should be rebranded as a strategic investment in the future. Deficits can be used in ways good or bad but are themselves a neutral and powerful policy tool. “They can fund unjust wars that destabilize the world and cost millions their lives,” she writes, “or they can be used to sustain life and build a more just economy that works for the many and not just the few.” Like all the economists profiled here, she’s pointing at the mind and the meaning behind the money.

Get Green Growth – Reshaping Growth Beyond GDP

Kate Raworth, a Senior Research Associate at Oxford University’s Environmental Change Institute, is the author of Doughnut Economics. She challenges our obsession with growth, and its outdated measures. The concept of Gross Domestic Product (GDP) was created in the 1930s and is being applied in the 21st century to an economy ten times larger. GDP’s limited scope (e.g. ignoring the value of unpaid labour like housework and parenting, or making no distinction between revenues from weapons or water) has kept us “financially, politically and socially addicted to growth” without integrating its costs on people and planet. She is pushing for new visual maps and metaphors to represent sustainable growth that doesn’t compromise future generations. What this means is moving away from the linear, upward-moving line of ‘progress’ ingrained in us all, to a “regenerative and distributive” model designed to engage everyone and shaped like … a doughnut (food and babies figure prominently in these women’s metaphors).

Carlota Perez doesn’t want to stop or slow growth, she wants to dematerialize it. “Green won’t spread by guilt and fear, we need aspiration and desire,” she says. Her push is towards a redefinition of the ‘good life’ and the need for “smart green growth” to be fuelled by a desire for new, attractive and aspirational lifestyles. Lives will be built on a circular economy that multiplies services and intangibles which offer limitless (and less environmentally harmful) growth. She points to every technological revolution creating new lifestyles. She says we can see it emerging, as it has in the past, among the educated, the wealthy and the young: more services rather than more things, active and creative work, a focus on health and care, a move to solar power, intense use of the internet, a preference for customisation over conformity, renting vs owning, and recycling over waste. As these new lifestyles become widespread, they offer immense opportunities for innovation and new jobs to service them.

Get Good Government – The Strategic Role of the State

All these economists want the state to play a major role. Women understand viscerally how reliant the underdogs of any system are on the inclusivity of the rules of the game. “It shapes the context to create a positive sum game” for both the public and business, says Perez. You need an active state to “tilt the playing field toward social good.” Perez outlines five technological revolutions, starting with the industrial one. She suggests we’re halfway through the fifth, the age of Tech & Information. Studying the repetitive arcs of each revolution enables us to see the opportunity of the extraordinary moment we are in. It’s the moment to shape the future for centuries to come. But she balances economic sustainability with the need for social sustainability, warning that one without the other is asking for trouble.

Mariana Mazzucato challenges governments to be more ambitious. They gain confidence and public trust by remembering and communicating what they are there to do. In her mind that is ensuring the public good. This takes vision and strategy, two ingredients she says are too often sorely lacking. Especially post-COVID, purpose needs to be the driver determining the ‘directionality’ of focus, investments and public/ private partnerships. Governments should be using their power – both of investment and procurement – to orient efforts towards the big challenges on our horizon, not just the immediate short-term recovery. They should be putting conditions on the massive financial bail outs they are currently handing out. She points to the contrast in imagination and impact between airline bailouts in Austria and the UK. The Austrian airlines are getting government aid on the condition they meet agreed emissions targets. The UK is supporting airlines without any conditionality, a huge missed opportunity to move towards larger, broader goals of building a better and greener economy out of the crisis.

Get Real – Beyond the Formulae and Into the Field

All of these economists also argue for getting out of the theories and into the field. They reject the idea of nerdy theoretical calculations done within the confines of a university tower and challenge economists to experiment and test their formulae in the real world.

Esther Duflo, Professor of Poverty Alleviation and Development Economics at MIT, is the major proponent of bringing what is accepted practice in medicine to the field of economics: field trials with randomised control groups. She rails against the billions poured into aid without any actual understanding or measurement of the returns. She gently accuses us of being no better with our 21st century approaches to problems like immunisation, education or malaria than any medieval doctor, throwing money and solutions at things with no idea of their impact. She and her husband, Abhijit Banerjee, have pioneered randomised control trials across hundreds of locations in different countries of the world, winning a Nobel Prize for Economics in 2019 for the insights.

They test, for example, how to get people to use bed nets against malaria. Nets are a highly effective preventive measure, but getting people to acquire and use them has been a hard nut to crack. Duflo set up experiments to answer the conundrums: If people have to pay for nets, will they value them more? If they are free, will they use them? If they get them free once, will this discourage future purchases? As it turns out, based on these comparisons, take-up is best if nets are initially given: “people don’t get used to handouts, they get used to nets,” and people will buy them – and use them – once they understand their effectiveness. Hence, she concludes, we can target policy and money towards impact.
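The statistics behind such a comparison are simple in outline: randomize households into free-net and paid-net arms, then test whether the difference in take-up is larger than chance alone would produce. A minimal sketch (Python; the counts are invented for illustration, not Duflo's data):

```python
from math import sqrt

# Invented counts: households using a bed net in each randomized arm.
free_n, free_using = 500, 310   # arm 1: nets given for free
paid_n, paid_using = 500, 140   # arm 2: nets sold at a subsidized price

p_free, p_paid = free_using / free_n, paid_using / paid_n
diff = p_free - p_paid

# Two-proportion z-test for the difference in take-up rates.
p_pool = (free_using + paid_using) / (free_n + paid_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / free_n + 1 / paid_n))
z = diff / se
print(f"take-up difference = {diff:.2f}, z = {z:.1f}")  # large z: a real effect
```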

Mazzucato is also hands-on with a number of governments around the world, including Denmark, the UK, Austria, South Africa and even the Vatican, where she has just signed up for weekly calls contributing to a post-Covid policy. ‘I believe [her vision] can help to think about the future,’ Pope Francis said after reading her book, The Value of Everything: Making and Taking in the Global Economy. No one can accuse her of being stuck in an ivory tower. Like Duflo, she is elbow-deep in creating new answers to seemingly intractable problems.

She warns that we don’t want to go back to normal after Covid-19. Normal was what got us here. Instead, she invites governments to use the crisis to embed ‘directionality’ towards more equitable public good into their recovery strategies and investments. Her approach is to define ambitious ‘missions’ which can focus minds and bring together broad coalitions of stakeholders to create solutions to support them. The original NASA mission to the moon is an obvious precursor model. Why, anyone listening to her comes away thinking, did we forget purpose in our public spending? And why, when so much commercial innovation and profit has grown out of government basic research spending, don’t a greater share of the fruits of success return to promote the greater good?

Economics has long remained a stubbornly male domain and men continue to dominate mainstream thinking. Yet, over time, ideas once considered without value become increasingly visible. The move from outlandish to acceptable to policy is often accelerated by crisis. Emerging from this crisis, five smart economists are offering an innovative range of new ideas about a greener, healthier and more inclusive way forward. Oh, and they happen to be women.

How big science failed to unlock the mysteries of the human brain (MIT Technology Review)

technologyreview.com

Large, expensive efforts to map the brain started a decade ago but have largely fallen short. It’s a good reminder of just how complex this organ is.

Emily Mullin

August 25, 2021


In September 2011, a group of neuroscientists and nanoscientists gathered at a picturesque estate in the English countryside for a symposium meant to bring their two fields together. 

At the meeting, Columbia University neurobiologist Rafael Yuste and Harvard geneticist George Church made a not-so-modest proposal: to map the activity of the entire human brain at the level of individual neurons and detail how those cells form circuits. That knowledge could be harnessed to treat brain disorders like Alzheimer’s, autism, schizophrenia, depression, and traumatic brain injury. And it would help answer one of the great questions of science: How does the brain bring about consciousness? 

Yuste, Church, and their colleagues drafted a proposal that would later be published in the journal Neuron. Their ambition was extreme: “a large-scale, international public effort, the Brain Activity Map Project, aimed at reconstructing the full record of neural activity across complete neural circuits.” Like the Human Genome Project a decade earlier, they wrote, the brain project would lead to “entirely new industries and commercial ventures.” 

New technologies would be needed to achieve that goal, and that’s where the nanoscientists came in. At the time, researchers could record activity from just a few hundred neurons at once—but with around 86 billion neurons in the human brain, it was akin to “watching a TV one pixel at a time,” Yuste recalled in 2017. The researchers proposed tools to measure “every spike from every neuron” in an attempt to understand how the firing of these neurons produced complex thoughts. 

The audacious proposal intrigued the Obama administration and laid the foundation for the multi-year Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, announced in April 2013. President Obama called it the “next great American project.” 

But it wasn’t the first audacious brain venture. In fact, a few years earlier, Henry Markram, a neuroscientist at the École Polytechnique Fédérale de Lausanne in Switzerland, had set an even loftier goal: to make a computer simulation of a living human brain. Markram wanted to build a fully digital, three-dimensional model at the resolution of the individual cell, tracing all of those cells’ many connections. “We can do it within 10 years,” he boasted during a 2009 TED talk.

In January 2013, a few months before the American project was announced, the EU awarded Markram $1.3 billion to build his brain model. The US and EU projects sparked similar large-scale research efforts in countries including Japan, Australia, Canada, China, South Korea, and Israel. A new era of neuroscience had begun. 

An impossible dream?

A decade later, the US project is winding down, and the EU project faces its deadline to build a digital brain. So how did it go? Have we begun to unwrap the secrets of the human brain? Or have we spent a decade and billions of dollars chasing a vision that remains as elusive as ever? 

From the beginning, both projects had critics.

EU scientists worried about the costs of the Markram scheme and thought it would squeeze out other neuroscience research. And even at the original 2011 meeting in which Yuste and Church presented their ambitious vision, many of their colleagues argued it simply wasn’t possible to map the complex firings of billions of human neurons. Others said it was feasible but would cost too much money and generate more data than researchers would know what to do with. 

In a blistering article appearing in Scientific American in 2013, Partha Mitra, a neuroscientist at the Cold Spring Harbor Laboratory, warned against the “irrational exuberance” behind the Brain Activity Map and questioned whether its overall goal was meaningful. 

Even if it were possible to record all spikes from all neurons at once, he argued, a brain doesn’t exist in isolation: in order to properly connect the dots, you’d need to simultaneously record external stimuli that the brain is exposed to, as well as the behavior of the organism. And he reasoned that we need to understand the brain at a macroscopic level before trying to decode what the firings of individual neurons mean.  

Others had concerns about the impact of centralizing control over these fields. Cornelia Bargmann, a neuroscientist at Rockefeller University, worried that it would crowd out research spearheaded by individual investigators. (Bargmann was soon tapped to co-lead the BRAIN Initiative’s working group.)

While the US initiative sought input from scientists to guide its direction, the EU project was decidedly more top-down, with Markram at the helm. But as Noah Hutton documents in his 2020 film In Silico, Markram’s grand plans soon unraveled. As an undergraduate studying neuroscience, Hutton had been assigned to read Markram’s papers and was impressed by his proposal to simulate the human brain; when he started making documentary films, he decided to chronicle the effort. He soon realized, however, that the billion-dollar enterprise was characterized more by infighting and shifting goals than by breakthrough science.

In Silico shows Markram as a charismatic leader who needed to make bold claims about the future of neuroscience to attract the funding to carry out his particular vision. But the project was troubled from the outset by a major issue: there isn’t a single, agreed-upon theory of how the brain works, and not everyone in the field agreed that building a simulated brain was the best way to study it. It didn’t take long for those differences to arise in the EU project. 

In 2014, hundreds of experts across Europe penned a letter citing concerns about oversight, funding mechanisms, and transparency in the Human Brain Project. The scientists felt Markram’s aim was premature and too narrow and would exclude funding for researchers who sought other ways to study the brain. 

“What struck me was, if he was successful and turned it on and the simulated brain worked, what have you learned?” Terry Sejnowski, a computational neuroscientist at the Salk Institute who served on the advisory committee for the BRAIN Initiative, told me. “The simulation is just as complicated as the brain.” 

The Human Brain Project’s board of directors voted to change its organization and leadership in early 2015, replacing a three-member executive committee led by Markram with a 22-member governing board. Christoph Ebell, a Swiss entrepreneur with a background in science diplomacy, was appointed executive director. “When I took over, the project was at a crisis point,” he says. “People were openly wondering if the project was going to go forward.”

But a few years later he was out too, after a “strategic disagreement” with the project’s host institution. The project is now focused on providing a new computational research infrastructure to help neuroscientists store, process, and analyze large amounts of data—unsystematic data collection has been an issue for the field—and develop 3D brain atlases and software for creating simulations.

The US BRAIN Initiative, meanwhile, underwent its own changes. Early on, in 2014, responding to the concerns of scientists and acknowledging the limits of what was possible, it evolved into something more pragmatic, focusing on developing technologies to probe the brain. 

New day

Those changes have finally started to produce results—even if they weren’t the ones that the founders of each of the large brain projects had originally envisaged. 

Last year, the Human Brain Project released a 3D digital map that integrates different aspects of human brain organization at the millimeter and micrometer level. It’s essentially a Google Earth for the brain. 

And earlier this year Alipasha Vaziri, a neuroscientist funded by the BRAIN Initiative, and his team at Rockefeller University reported in a preprint paper that they’d simultaneously recorded the activity of more than a million neurons across the mouse cortex. It’s the largest recording of animal cortical activity yet made, if far from listening to all 86 billion neurons in the human brain as the original Brain Activity Map hoped.

The US effort has also shown some progress in its attempt to build new tools to study the brain. It has sped up the development of optogenetics, an approach that uses light to control neurons, and its funding has led to new high-density silicon electrodes capable of recording from hundreds of neurons simultaneously. And it has arguably accelerated the development of single-cell sequencing. In September, researchers using these advances will publish a detailed classification of cell types in the mouse and human motor cortexes—the biggest single output from the BRAIN Initiative to date.

While these are all important steps forward, though, they’re far from the initial grand ambitions. 

Lasting legacy

We are now heading into the last phase of these projects—the EU effort will conclude in 2023, while the US initiative is expected to have funding through 2026. What happens in these next years will determine just how much impact they’ll have on the field of neuroscience.

When I asked Ebell what he sees as the biggest accomplishment of the Human Brain Project, he didn’t name any one scientific achievement. Instead, he pointed to EBRAINS, a platform launched in April of this year to help neuroscientists work with neurological data, perform modeling, and simulate brain function. It offers researchers a wide range of data and connects many of the most advanced European lab facilities, supercomputing centers, clinics, and technology hubs in one system. 

“If you ask me ‘Are you happy with how it turned out?’ I would say yes,” Ebell said. “Has it led to the breakthroughs that some have expected in terms of gaining a completely new understanding of the brain? Perhaps not.” 

Katrin Amunts, a neuroscientist at the University of Düsseldorf, who has been the Human Brain Project’s scientific research director since 2016, says that while Markram’s dream of simulating the human brain hasn’t been realized yet, it is getting closer. “We will use the last three years to make such simulations happen,” she says. But it won’t be a big, single model—instead, several simulation approaches will be needed to understand the brain in all its complexity. 

Meanwhile, the BRAIN Initiative has provided more than 900 grants to researchers so far, totaling around $2 billion. The National Institutes of Health is projected to spend nearly $6 billion on the project by the time it concludes. 

For the final phase of the BRAIN Initiative, scientists will attempt to understand how brain circuits work by diagramming connected neurons. But claims for what can be achieved are far more restrained than in the project’s early days. The researchers now realize that understanding the brain will be an ongoing task—it’s not something that can be finalized by a project’s deadline, even if that project meets its specific goals.

“With a brand-new tool or a fabulous new microscope, you know when you’ve got it. If you’re talking about understanding how a piece of the brain works or how the brain actually does a task, it’s much more difficult to know what success is,” says Eve Marder, a neuroscientist at Brandeis University. “And success for one person would be just the beginning of the story for another person.” 

Yuste and his colleagues were right that new tools and techniques would be needed to study the brain in a more meaningful way. Now, scientists will have to figure out how to use them. But instead of answering the question of consciousness, developing these methods has, if anything, only opened up more questions about the brain—and shown just how complex it is. 

“I have to be honest,” says Yuste. “We had higher hopes.”

Emily Mullin is a freelance journalist based in Pittsburgh who focuses on biotechnology.

The one number you need to know about climate change (MIT Technology Review)

technologyreview.com

David Rotman – April 24, 2019

The social cost of carbon could guide us toward intelligent policies – if only we knew what it was.

In contrast to the existential angst currently in fashion around climate change, there’s a cold-eyed calculation that its advocates, mostly economists, like to call the most important number you’ve never heard of.

It’s the social cost of carbon. It reflects the global damage of emitting one ton of carbon dioxide into the sky, accounting for its impact in the form of warming temperatures and rising sea levels. Economists, who have squabbled over the right number for a decade, see it as a powerful policy tool that could bring rationality to climate decisions. It’s what we should be willing to pay to avoid emitting that one more ton of carbon.


For most of us, it’s a way to grasp how much our carbon emissions will affect the world’s health, agriculture, and economy for the next several hundred years. Maximilian Auffhammer, an economist at the University of California, Berkeley, describes it this way: it’s approximately the damage done by driving from San Francisco to Chicago, assuming that about a ton of carbon dioxide spits out of the tailpipe over those 2,000 miles.

Common estimates of the social cost of that ton are $40 to $50. The cost of the fuel for the journey in an average car is currently around $225. In other words, you’d pay roughly 20% more to take the social cost of the trip into account.
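
Since the arithmetic is easy to lose in the prose, here is a quick back-of-the-envelope check in Python, using only the figures quoted above:

# Back-of-the-envelope check of the San Francisco-Chicago example,
# using the article's figures: ~1 ton of CO2 emitted, a $40-$50
# social cost per ton (midpoint $45), and about $225 of fuel.
fuel_cost = 225.0           # dollars of fuel, SF to Chicago
social_cost_per_ton = 45.0  # midpoint of the $40-$50 range
tons_emitted = 1.0          # roughly one ton of CO2 from the tailpipe

social_cost = social_cost_per_ton * tons_emitted
print(f"Social cost of the trip: ${social_cost:.0f}")
print(f"Premium over the fuel bill: {social_cost / fuel_cost:.0%}")  # -> 20%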

The number is contentious, however. A US federal working group in 2016, convened by President Barack Obama, calculated it at around $40, while the Trump administration has recently put it at $1 to $7. Some academic researchers cite numbers as high as $400 or more.

Why so wide a range? It depends on how you value future damages. And there are uncertainties over how the climate will respond to emissions. But another reason is that we actually have very little insight into just how climate change will affect us over time. Yes, we know there’ll be fiercer storms and deadly wildfires, heat waves, droughts, and floods. We know the glaciers are melting rapidly and fragile ocean ecosystems are being destroyed. But what does that mean for the livelihood or life expectancy of someone in Ames, Iowa, or Bangalore, India, or Chelyabinsk, Russia?

For the first time, vast amounts of data on the economic and social effects of climate change are becoming available, and so is the computational power to make sense of it. Taking this opportunity to compute a precise social cost of carbon could help us decide how much to invest and which problems to tackle first.

“It is the single most important number in the global economy,” says Solomon Hsiang, a climate policy expert at Berkeley. “Getting it right is incredibly important. But right now, we have almost no idea what it is.”

That could soon change.

The cost of death

In the past, calculating the social cost of carbon typically meant estimating how climate change would slow worldwide economic growth. Computer models split the world into at most a dozen or so regions and then averaged the predicted effects of climate change to get the impact on global GDP over time. It was at best a crude number.

Over the last several years, economists, data scientists, and climate scientists have worked together to create far more detailed and localized maps of impacts by examining how temperatures, sea levels, and precipitation patterns have historically affected things like mortality, crop yields, violence, and labor productivity. This data can then be plugged into increasingly sophisticated climate models to see what happens as the planet continues to warm.

The wealth of high-resolution data makes a far more precise number possible—at least in theory. Hsiang is co-director of the Climate Impact Lab, a team of some 35 scientists from institutions including the University of Chicago, Berkeley, Rutgers, and the Rhodium Group, an economic research organization. Their goal is to come up with a number by looking at about 24,000 different regions and adding together the diverse effects that each will experience over the coming hundreds of years in health, human behavior, and economic activity.
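
To make the shape of that calculation concrete, here is a minimal sketch, in Python, of what “adding together the diverse effects” region by region and year by year amounts to. It is our illustration, not the lab’s model; the region names, damage figures, and discount rate are all placeholders:

# Illustrative sketch: sum the damages of one extra ton of CO2 across
# regions and years, discounting future damages to present value.
# Every number and name below is a made-up placeholder.
def social_cost_of_carbon(regional_damages, discount_rate):
    scc = 0.0
    for damages_by_year in regional_damages.values():
        for years_ahead, damage in enumerate(damages_by_year):
            scc += damage / (1 + discount_rate) ** years_ahead
    return scc

damages = {  # dollars of damage per extra ton, per region, per year
    "hot_coastal_region": [0.10, 0.12, 0.15],
    "temperate_region":   [0.05, 0.05, 0.06],
    "cold_north_region":  [-0.01, 0.00, 0.01],  # some regions may even benefit
}
print(f"Toy SCC: ${social_cost_of_carbon(damages, 0.03):.2f} per ton")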

It’s a huge technical and computational challenge, and it will take a few years to come up with a single number. But along the way, the efforts to better understand localized damages are creating a nuanced and disturbing picture of our future.

So far, the researchers have found that climate change will kill far more people than once thought. Michael Greenstone, a University of Chicago economist who co-directs the Climate Impact Lab with Hsiang, says that previous mortality estimates had looked at seven wealthy cities, most in relatively cool climates. His group looked at data gleaned from 56% of the world’s population. It found that the social cost of carbon due to increased mortality alone is $30, nearly as high as the Obama administration’s estimate for the social cost of all climate impacts. An additional 9.1 million people will die every year by 2100, the group estimates, if climate change is left unchecked (assuming a global population of 12.7 billion people).

Unfairly distributed

However, while the Climate Impact Lab’s analysis showed that 76% of the world’s population would suffer from higher mortality rates, it found that warming temperatures would actually save lives in a number of northern regions. That’s consistent with other recent research; the impacts of climate change will be remarkably uneven.

The variations are significant even within some countries. In 2017, Hsiang and his collaborators calculated climate impacts county by county in the United States. They found that every degree of warming would cut the country’s GDP by about 1.2%, but the worst-hit counties could see a drop of around 20%.

If climate change is left to run unchecked through the end of the century, the southern and southwestern US will be devastated by rising rates of mortality and crop failure. Labor productivity will slow, and energy costs (especially due to air-conditioning) will rise. In contrast, the northwestern and parts of the northeastern US will benefit.

“It is a massive restructuring of wealth,” says Hsiang. This is the most important finding of the last several years of climate economics, he adds. By examining ever smaller regions, you can see “the incredible winners and losers.” Many in the climate community have been reluctant to talk about such findings, he says. “But we have to look [the inequality] right in the eye.”

The social cost of carbon is typically calculated as a single global number. That makes sense, since the damage of a ton of carbon emitted in one place is spread throughout the world. But last year Katharine Ricke, a climate scientist at UC San Diego and the Scripps Institution of Oceanography, published the social costs of carbon for specific countries to help parse out regional differences.

India is the big loser. Not only does it have a fast-growing economy that will be slowed, but it’s already a hot country that will suffer greatly from getting even hotter. “India bears a huge share of the global social cost of carbon—more than 20%,” says Ricke. It also stands out for how little it has actually contributed to the world’s carbon emissions. “It’s a serious equity issue,” she says.

Estimating the global social cost of carbon also raises a vexing question: How do you put a value on future damages? We should invest now to help our children and grandchildren avoid suffering, but how much? This is hotly and often angrily debated among economists.

A standard tool in economics is the discount rate, used to calculate how much we should invest now for a payoff years from now. The higher the discount rate, the less you value the future benefit. William Nordhaus, who won the 2018 Nobel Prize in economics for pioneering the use of models to show the macroeconomic effects of climate change, has used a discount rate of around 4%. The relatively high rate suggests we should invest conservatively now. In sharp contrast, a landmark 2006 report by British economist Nicholas Stern used a discount rate of 1.4%, concluding that we should begin investing much more heavily to slow climate change. 
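
A short worked example shows how much the choice of rate matters. Suppose a ton of carbon emitted today causes $100 of damage a century from now:

# Present value of $100 of climate damage occurring 100 years from now,
# at the two discount rates mentioned above (Nordhaus ~4%, Stern 1.4%).
future_damage = 100.0
years = 100
for rate in (0.04, 0.014):
    present_value = future_damage / (1 + rate) ** years
    print(f"At {rate:.1%}: worth ${present_value:.2f} today")
# ~$1.98 at 4% versus ~$24.90 at 1.4% -- an order-of-magnitude difference
# in how much it is "worth" spending now to avoid that future damage.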

There’s an ethical dimension to these calculations. Wealthy countries whose prosperity has been built on fossil fuels have an obligation to help poorer countries. The climate winners can’t abandon the losers. Likewise, we owe future generations more than just financial considerations. What’s the value of a world free from the threat of catastrophic climate events—one with healthy and thriving natural ecosystems?

Outrage

Enter the Green New Deal (GND). It’s the sweeping proposal issued earlier this year by Representative Alexandria Ocasio-Cortez and other US progressives to address everything from climate change to inequality. It cites the dangers of temperature increases beyond the UN goal of 1.5 °C and makes a long list of recommendations. Energy experts immediately began to bicker over its details: Is achieving 100% renewables in the next 12 years really feasible? (Probably not.) Should it include nuclear power, which many climate activists now argue is essential for reducing emissions?

In reality, the GND has little to say about actual policies and there’s barely a hint of how it will attack its grand challenges, from providing a secure retirement for all to fostering family farms to ensuring access to nature. But that’s not the point. The GND is a cry of outrage against what it calls “the twin crises of climate change and worsening income inequality.” It’s a political attempt to make climate change part of the wider discussion about social justice. And, at least from the perspective of climate policy, it’s right in arguing that we can’t tackle global warming without considering broader social and economic issues.

The work of researchers like Ricke, Hsiang, and Greenstone supports that stance. Not only do their findings show that global warming can worsen inequality and other social ills; they provide evidence that aggressive action is worth it. Last year, researchers at Stanford calculated that limiting warming to 1.5 °C would save upwards of $20 trillion worldwide by the end of the century. Again, the impacts were mixed—the GDPs of some countries would be harmed by aggressive climate action. But the conclusion was overwhelming: more than 90% of the world’s population would benefit. Moreover, the cost of keeping temperature increases limited to 1.5 °C would be dwarfed by the long-term savings.

Nevertheless, the investments will take decades to pay for themselves. Renewables and new clean technologies may lead to a boom in manufacturing and a robust economy, but the Green New Deal is wrong to paper over the financial sacrifices we’ll need to make in the near term.

That is why climate remedies are such a hard sell. We need a global policy—but, as we’re always reminded, all politics is local. Adding 20% to the cost of that San Francisco–Chicago trip might not seem like much, but try to convince a truck driver in a poor county in Florida that raising the price of fuel is wise economic policy. A much smaller increase sparked the gilets jaunes riots in France last winter. That is the dilemma, both political and ethical, that we all face with climate change.

The new IPCC Report includes – get this, good news (Yale Climate Connections)

Yale Climate Connections

By Dana Nuccitelli August 12, 2021

As the Intergovernmental Panel on Climate Change (IPCC) released its Sixth Assessment Report, summarized nicely on these pages by Bob Henson, much of the associated media coverage carried a tone of inevitable doom.

These proclamations of unavoidable adverse outcomes center around the fact that in every scenario considered by IPCC, within the next decade average global temperatures will likely breach the aspirational goal set in the Paris climate agreement of limiting global warming to 1.5 degrees Celsius (2.7 degrees Fahrenheit) above pre-industrial temperatures. The report also details a litany of extreme weather events like heatwaves, droughts, wildfires, floods, and hurricanes that will all worsen as long as global temperatures continue to rise.

While United Nations Secretary-General António Guterres rightly called the report a “code red for humanity,” tucked into it are details illustrating that if – a big if – top-emitting countries respond to the IPCC’s alarm bells with aggressive efforts to curb carbon pollution, the worst climate outcomes remain avoidable.

The IPCC’s future climate scenarios

In the Marvel film Avengers: Infinity War, the Dr. Strange character goes forward in time to view 14,000,605 alternate futures to see all the possible outcomes of the Avengers’ coming conflict. Lacking the fictional Time Stone used in this gambit, climate scientists instead ran hundreds of simulations of several different future carbon emissions scenarios using a variety of climate models. Like Dr. Strange, climate scientists’ goal is to determine the range of possible outcomes given different actions taken by the protagonists: in this case, various measures to decarbonize the global economy.

The scenarios considered by IPCC are called Shared Socioeconomic Pathways (SSPs). The best-case climate scenario, called SSP1, involves a global shift toward sustainable management of global resources and reduced inequity. The next scenario, SSP2, is more of a business-as-usual path with slow and uneven progress toward sustainable development goals and persisting income inequality and environmental degradation. SSP3 envisions insurgent nationalism around the world with countries focusing on their short-term domestic best interests, resulting in persistent and worsening inequality and environmental degradation. Two more scenarios, SSP4 and SSP5, consider even greater inequalities and fossil fuel extraction, but seem at odds with an international community that has agreed overwhelmingly to aim for the Paris climate targets.

The latest IPCC report’s model runs simulated two SSP1 scenarios that would achieve the Paris targets of limiting global warming to 1.5 and 2°C (2.7 and 3.6°F); one SSP2 scenario in which temperatures approach 3°C (5.4°F) in the year 2100; an SSP3 scenario with about 4°C (7.2°F) global warming by the end of the century; and one SSP5 ‘burn all the fossil fuels possible’ scenario resulting in close to 5°C (9°F), again by 2100.

Projected global average surface temperature change in each of the five SSP scenarios. (Source: IPCC Sixth Assessment Report)

The report’s SSP3-7.0 pathway (the latter number represents the eventual global energy imbalance caused by the increased greenhouse effect, in watts per square meter), is considered by many experts to be a realistic worst-case scenario, with global carbon emissions continuing to rise every year throughout the 21st century. Such an outcome would represent a complete failure of international climate negotiations and policies and would likely result in catastrophic consequences, including widespread species extinctions, food and water shortages, and disastrous extreme weather events.

Scenario SSP2-4.5 is more consistent with government climate policies that are currently in place. It envisions global carbon emissions increasing another 10% over the next decade before reaching a plateau that’s maintained until carbon pollution slowly begins to decline starting in the 2050s. Global carbon emissions approach but do not reach zero by the end of the century. Even in this unambitious scenario, the very worst climate change impacts might be averted, although the resulting climate impacts would be severe.

Most encouragingly, the report’s two SSP1 scenarios illustrate that the Paris targets remain within reach. To stay below the main Paris target of 2°C (3.6°F) warming, global carbon emissions in SSP1-2.6 plateau essentially immediately and begin to decline after 2025 at a modest rate of about 2% per year for the first decade, then accelerating to around 3% per year the next decade, and continuing along a path of consistent year-to-year carbon pollution cuts before reaching zero around 2075. The IPCC concluded that once global carbon emissions reach zero, temperatures will stop rising. Toward the end of the century, emissions in SSP1-2.6 move into negative territory as the IPCC envisions that efforts to remove carbon from the atmosphere via natural and technological methods (like sequestering carbon in agricultural soils and scrubbing it from the atmosphere through direct air capture) outpace overall fossil fuel emissions.

Meeting the aspirational Paris goal of limiting global warming to 1.5°C (2.7°F) in SSP1-1.9 would be extremely challenging, given that global temperatures are expected to breach this level within about a decade. This scenario similarly envisions that global carbon emissions peak immediately and that they decline much faster than in SSP1-2.6, at a rate of about 6% per year from 2025 to 2035 and 9% per year over the following decade, reaching net zero by around the year 2055 and becoming net negative afterwards.
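
The decline rates quoted above are easier to grasp when compounded into a trajectory. The sketch below follows the SSP1-2.6 schedule as described; the starting emissions level and the post-2045 rate are our own illustrative assumptions, not the scenario’s actual numbers:

# Compounding the SSP1-2.6 decline rates described above: ~2%/yr for the
# decade after 2025, ~3%/yr the next decade, then steeper assumed cuts so
# emissions approach zero around 2075. Figures are illustrative only.
emissions = 40.0  # Gt CO2 per year, roughly today's global total (assumed)
for year in range(2026, 2081):
    if year <= 2035:
        rate = 0.02
    elif year <= 2045:
        rate = 0.03
    else:
        rate = 0.08  # assumption: much steeper cuts after 2045
    emissions *= 1 - rate
    if year % 10 == 0:
        print(f"{year}: {emissions:5.1f} Gt CO2/yr")
# Constant percentage cuts alone never hit exactly zero, which is one
# reason the scenario leans on carbon removal to go net negative later.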

Global carbon dioxide emissions (in billions of tons per year) from 2015 to 2100 in each of the five SSP scenarios. (Source: IPCC Sixth Assessment Report)

For perspective, global carbon emissions fell by about 6-7% in 2020 as a result of restrictions associated with the COVID-19 pandemic and are expected to rebound by a similar amount in 2021. As IPCC report contributor Zeke Hausfather noted, this scenario also relies on large-scale carbon sequestration technologies that currently do not exist, without which global emissions would have to reach zero a decade sooner.

More warming means more risk

The new IPCC report details that, depending on the region, climate change has already worsened extreme heat, drought, fires, floods, and hurricanes, and those will only become more damaging and destructive as temperatures continue to rise. The IPCC’s 2018 “1.5°C Report” had detailed the differences in climate consequences in a 2°C vs. 1.5°C world, as summarized at this site by Bruce Lieberman.

Consider that in the current climate of just over 1°C (2°F) warmer than pre-industrial temperatures, 40 countries this summer alone have experienced extreme flooding, including more than a year’s worth of rain falling within 24 hours in Zhengzhou, China. Many regions have also experienced extreme heat, including the deadly Pacific Northwest heatwave and dangerously hot conditions during the Olympics in Tokyo. Siberia, Greece, Italy, and the US west coast are experiencing explosive wildfires, including the “truly frightening fire behavior” of the Dixie fire, which became the largest single wildfire on record in California. The IPCC report warned of “compound events” like heat exacerbating drought, which in turn fuels more dangerous wildfires, as is happening in California.

Western North America (WNA) and the Mediterranean (MED) regions are those for which climate scientists have the greatest confidence that human-caused global warming is exacerbating drought by drying out the soil. (Source: IPCC Sixth Assessment Report)
The southwestern United States and Mediterranean are also among the regions for which climate scientists have the greatest confidence that climate change will continue to increase drought risk and severity. (Source: IPCC Sixth Assessment Report)

The IPCC report notes that the low-emissions SSP1 scenarios “would lead to substantially smaller changes” in these sorts of climate impact drivers than the higher-emissions scenarios. It also points out that, with the world currently at around 1°C of warming, the increase in extreme weather intensity relative to today will be twice as large if temperatures reach 2°C (1°C hotter than today) as if warming is limited to 1.5°C (0.5°C hotter than today), and four times as large if global warming reaches 3°C (2°C hotter than today). For example, what was an extreme once-in-50-years heat wave in the late 1800s now occurs once per decade; that would rise to almost twice per decade at 1.5°C and nearly three times per decade at 2°C of global warming.
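
Those frequency figures follow from the report’s multipliers for a once-in-50-years heat extreme, converted into events per decade. The multipliers below are approximate central estimates as we read the report, so treat the sketch as illustrative:

# Converting multipliers for a once-in-50-years heat extreme into events
# per decade. The multipliers (4.8x at 1.0C, 8.6x at 1.5C, 13.9x at 2C)
# are approximate central estimates, used here for illustration.
baseline_per_decade = 10 / 50  # once in 50 years = 0.2 events per decade
for warming, multiplier in [(1.0, 4.8), (1.5, 8.6), (2.0, 13.9)]:
    events = baseline_per_decade * multiplier
    print(f"{warming}C of warming: ~{events:.1f} events per decade")
# -> ~1.0, ~1.7, ~2.8: once, almost twice, nearly three times per decade.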

The increasing frequency and intensity of what used to be 1-in-50-year extreme heat as global temperatures rise. (Source: IPCC Sixth Assessment Report)

Climate’s fate has yet to be written

At the same time, there is no tipping point temperature at which it becomes “too late” to curb climate change and its damaging consequences. Every additional bit of global warming above current temperatures will result in increased risks of worsening extreme weather of the sorts currently being experienced around the world. Achieving the aspirational 1.5°C Paris target may be politically infeasible, but most countries (137 total) have either committed to or are in the process of setting a target for net zero emissions by 2050 (including the United States) or 2060 (including China).

That makes the SSP1 scenarios and limiting global warming to less than 2°C a distinct possibility, depending on how successful countries are at following through with decarbonization plans over the coming three decades. And with its proposed bipartisan infrastructure and budget reconciliation legislative plans – whose final enactment remains another big if – the United States could soon implement some of the bold investments and policies necessary to set the world’s second-largest carbon polluter on a track consistent with the Paris targets.

As Texas Tech climate scientist Katharine Hayhoe put it,

Again and again, assessment after assessment, the IPCC has already made it clear. Climate change puts at risk every aspect of human life as we know it … We are already starting to experience those risks today; but we know what we need to do to avoid the worst future impacts. The difference between a fossil fuel versus a clean energy future is nothing less than the future of civilization as we know it.

Back to the Avengers: They had only one chance in 14 million to save the day, and they succeeded. Time is running short, but policymakers’ odds of meeting the Paris targets remain much better than that. There are no physical constraints playing the role of Thanos in our story; only political barriers stand between humanity and a prosperous clean energy future, although those can sometimes be the most difficult types of barriers to overcome.

Also see: Key takeaways from the new IPCC report

Eight key takeaways from the IPCC report that prove we need to put in the work to fight climate change (Technology News, Firstpost)

firstpost.com


The new IPCC report is “a code red for humanity.”

Aug 13, 2021 20:25:56 IST

The new IPCC report is “a code red for humanity”, says UN Secretary-General António Guterres.

Established in 1988 by the United Nations Environment Programme (UNEP) and the World Meteorological Organisation (WMO), the Intergovernmental Panel on Climate Change (IPCC) assesses climate change science. Its new report is a warning sign for policymakers all over the world.


In this picture taken on 26 October, 2014, Peia Kararaua, 16, swims in the flooded area of Aberao village in Kiribati. Kiribati is one of the countries worst hit by the sea level rise since high tides mean many villages are inundated, making them uninhabitable. Image credit: UNICEF/Sokhin

This was the first time the approval meeting for the report was conducted online. The report’s 234 authors, from all over the world, clocked 186 hours working together to get it released.

For the first time, the report offers an interactive atlas for people to see what has already happened and what may happen in the future to where they live.

“This report tells us that recent changes in the climate are widespread, rapid and intensifying, unprecedented in thousands of years,” said IPCC Vice-Chair Ko Barrett.

UNEP Executive Director Inger Andersen said that scientists have been issuing these messages for more than three decades, but the world hasn’t listened.

Here are the most important takeaways from the report:

Humans are to blame

Human activity is the cause of climate change and this is an unequivocal fact. Virtually all the warming since pre-industrial times has been generated by the burning of fossil fuels such as coal, oil, wood, and natural gas.

Global temperatures have already risen by 1.1 degrees Celsius since the 19th century. They have reached their highest in over 100,000 years, and only a fraction of that increase has come from natural forces.

Michael Mann told the Independent the effects of climate change will be felt in all corners of the world and will worsen, especially since “the IPCC has connected the dots on climate change and the increase in severe extreme weather events… considerably more directly than previous assessments.”

We will overshoot the 1.5°C mark

According to the report’s scenarios, which range from the highly optimistic to the reckless, even if we do everything right and start reducing emissions now, we will still overshoot the 1.5°C mark by around 2030. But temperatures would then drop back to around 1.4°C by the end of the century.

Control emissions, Earth will do the rest

According to the report, if we start working to bring our emissions under control, we will be able to decrease warming, even if we overshoot the 1.5°C limit.

The changes we are living through are unprecedented; however, they are reversible to a certain extent, and it will take a lot of time for nature to heal. We can help by reducing our greenhouse gas (GHG) emissions. While we might see some benefits quickly, “it could take 20-30 years to see global temperatures stabilise,” says the IPCC.

Sea level rise

Global oceans have risen about 20 centimetres (eight inches) since 1900, and the rate of increase has nearly tripled in the last decade. Crumbling and melting ice sheets atop Antarctica and Greenland have replaced glacier melt as the main drivers.

If global warming is capped at 2°C, sea level will go up about half a metre over the 21st century. It will continue rising to nearly two metres by 2300 — twice the amount predicted by the IPCC in 2019.

Because of uncertainty over ice sheets, scientists cannot rule out a total rise of two metres by 2100 in a worst-case emissions scenario.

CO2 is at an all-time high

CO2 levels were greater in 2019 than they had been in “at least two million years.” Methane and nitrous oxide levels, the second and third major contributors of warming respectively, were higher in 2019 than at any point in “at least 800,000 years,” reported the Independent.

Control methane

The report includes more data than ever before on methane (CH4), the second most important greenhouse gas after CO2, and warns that failure to curb emissions could undermine Paris Agreement goals.

Human-induced sources are roughly divided between leaks from natural gas production, coal mining and landfills on one side, and livestock and manure handling on the other.

CH4 lingers in the atmosphere only a fraction as long as CO2, but is far more efficient at trapping heat. CH4 levels are their highest in at least 800,000 years.

Natural allies are weakened

Since about 1960, forests, soil and oceans have absorbed 56 percent of all the CO2 humanity has released into the atmosphere — even as those emissions have increased by half. Without nature’s help, Earth would already be a much hotter and less hospitable place.

But these allies in our fight against global heating — known in this role as carbon sinks — are showing signs of saturation, and the percentage of human-induced carbon they soak up is likely to decline as the century unfolds.

Suck it out

The report suggests that warming could be brought back down via “negative emissions”: cooling the planet by sucking carbon out of the atmosphere and sequestering it. The idea has circulated for years, and small-scale studies have tested it, but the technology is not yet mature. The panel said this could begin about halfway through this century but doesn’t explain how, and many scientists are skeptical about its feasibility.

Cities will bear the brunt

Experts warn that the impact of some elements of climate change, like heat, floods and sea-level rise in coastal areas, may be exacerbated in cities. Furthermore, IPCC experts warn that low-probability scenarios, like an ice sheet collapse or rapid changes in ocean circulation, cannot be ruled out.

Also read: Leaders and experts speak up after the release of the new IPCC report

Global warming begets more warming, new paleoclimate study finds (Science Daily)

Date: August 11, 2021

Source: Massachusetts Institute of Technology

Summary: Global warming begets more extreme warming, a new paleoclimate study finds. Researchers observe a ‘warming bias’ over the past 66 million years that may return if ice sheets disappear.


It is increasingly clear that the prolonged drought conditions, record-breaking heat, sustained wildfires, and frequent, more extreme storms experienced in recent years are a direct result of rising global temperatures brought on by humans’ addition of carbon dioxide to the atmosphere. And a new MIT study on extreme climate events in Earth’s ancient history suggests that today’s planet may become more volatile as it continues to warm.

The study, appearing today in Science Advances, examines the paleoclimate record of the last 66 million years, during the Cenozoic era, which began shortly after the extinction of the dinosaurs. The scientists found that during this period, fluctuations in the Earth’s climate experienced a surprising “warming bias.” In other words, there were far more warming events — periods of prolonged global warming, lasting thousands to tens of thousands of years — than cooling events. What’s more, warming events tended to be more extreme, with greater shifts in temperature, than cooling events.

The researchers say a possible explanation for this warming bias may lie in a “multiplier effect,” whereby a modest degree of warming — for instance from volcanoes releasing carbon dioxide into the atmosphere — naturally speeds up certain biological and chemical processes that enhance these fluctuations, leading, on average, to still more warming.

Interestingly, the team observed that this warming bias disappeared about 5 million years ago, around the time when ice sheets started forming in the Northern Hemisphere. It’s unclear what effect the ice has had on the Earth’s response to climate shifts. But as today’s Arctic ice recedes, the new study suggests that a multiplier effect may kick back in, and the result may be a further amplification of human-induced global warming.

“The Northern Hemisphere’s ice sheets are shrinking, and could potentially disappear as a long-term consequence of human actions,” says the study’s lead author Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “Our research suggests that this may make the Earth’s climate fundamentally more susceptible to extreme, long-term global warming events such as those seen in the geologic past.”

Arnscheidt’s study co-author is Daniel Rothman, professor of geophysics at MIT, and co-founder and co-director of MIT’s Lorenz Center.

A volatile push

For their analysis, the team consulted large databases of sediments containing deep-sea benthic foraminifera — single-celled organisms that have been around for hundreds of millions of years and whose hard shells are preserved in sediments. The composition of these shells is affected by the ocean temperatures as organisms are growing; the shells are therefore considered a reliable proxy for the Earth’s ancient temperatures.

For decades, scientists have analyzed the composition of these shells, collected from all over the world and dated to various time periods, to track how the Earth’s temperature has fluctuated over millions of years.

“When using these data to study extreme climate events, most studies have focused on individual large spikes in temperature, typically of a few degrees Celsius warming,” Arnscheidt says. “Instead, we tried to look at the overall statistics and consider all the fluctuations involved, rather than picking out the big ones.”

The team first carried out a statistical analysis of the data and observed that, over the last 66 million years, the distribution of global temperature fluctuations didn’t resemble a standard bell curve, with symmetric tails representing an equal probability of extreme warm and extreme cool fluctuations. Instead, the curve was noticeably lopsided, skewed toward more warm than cool events. The curve also exhibited a noticeably longer tail, representing warm events that were more extreme, or of higher temperature, than the most extreme cold events.

“This indicates there’s some sort of amplification relative to what you would otherwise have expected,” Arnscheidt says. “Everything’s pointing to something fundamental that’s causing this push, or bias toward warming events.”

“It’s fair to say that the Earth system becomes more volatile, in a warming sense,” Rothman adds.

A warming multiplier

The team wondered whether this warming bias might have been a result of “multiplicative noise” in the climate-carbon cycle. Scientists have long understood that higher temperatures, up to a point, tend to speed up biological and chemical processes. Because the carbon cycle, which is a key driver of long-term climate fluctuations, is itself composed of such processes, increases in temperature may lead to larger fluctuations, biasing the system towards extreme warming events.

In mathematics, there exists a set of equations that describes such general amplifying, or multiplicative, effects. The researchers applied this multiplicative theory to their analysis to see whether the equations could predict the asymmetrical distribution, including the degree of its skew and the length of its tails.

In the end, they found that the data, and the observed bias toward warming, could be explained by the multiplicative theory. In other words, it’s very likely that, over the last 66 million years, periods of modest warming were on average further enhanced by multiplier effects, such as the response of biological and chemical processes that further warmed the planet.
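
The flavor of that argument can be reproduced in a few lines. In the toy model below (our illustration, not the authors’ model), the size of each random temperature fluctuation grows with the current warm anomaly; the resulting distribution of temperatures skews toward warm extremes, qualitatively matching the bias the study describes:

# Toy model of multiplicative noise: fluctuation size grows with the
# current warm anomaly, skewing the temperature distribution warm.
# All parameters are arbitrary illustrations.
import random

random.seed(0)

def simulate(multiplicative, steps=200_000):
    t, samples = 0.0, []
    for _ in range(steps):
        scale = (1.0 + 0.5 * max(t, 0.0)) if multiplicative else 1.0
        t += random.gauss(0.0, 0.1) * scale - 0.02 * t  # noise + slow relaxation
        samples.append(t)
    return samples

def skewness(xs):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return sum((x - mean) ** 3 for x in xs) / (n * var ** 1.5)

for mode in (False, True):
    print(f"multiplicative={mode}: skewness = {skewness(simulate(mode)):+.2f}")
# Additive noise gives skewness near zero; multiplicative noise gives a
# clearly positive skew -- more, and larger, warm excursions.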

As part of the study, the researchers also looked at the correlation between past warming events and changes in Earth’s orbit. Over hundreds of thousands of years, Earth’s orbit around the sun regularly becomes more or less elliptical. But scientists have wondered why many past warming events appeared to coincide with these changes, and why these events feature outsized warming compared with what the change in Earth’s orbit could have wrought on its own.

So, Arnscheidt and Rothman incorporated the Earth’s orbital changes into the multiplicative model and their analysis of Earth’s temperature changes, and found that multiplier effects could predictably amplify, on average, the modest temperature rises due to changes in Earth’s orbit.

“Climate warms and cools in synchrony with orbital changes, but the orbital cycles themselves would predict only modest changes in climate,” Rothman says. “But if we consider a multiplicative model, then modest warming, paired with this multiplier effect, can result in extreme events that tend to occur at the same time as these orbital changes.”

“Humans are forcing the system in a new way,” Arnscheidt adds. “And this study is showing that, when we increase temperature, we’re likely going to interact with these natural, amplifying effects.”

This research was supported, in part, by MIT’s School of Science.


Story Source:

Materials provided by Massachusetts Institute of Technology. Original written by Jennifer Chu. Note: Content may be edited for style and length.


Journal Reference:

  1. Constantin W. Arnscheidt, Daniel H. Rothman. Asymmetry of extreme Cenozoic climate–carbon cycle events. Science Advances, 2021; 7 (33): eabg6864 DOI: 10.1126/sciadv.abg6864

We read the 4000-page IPCC climate report so you don’t have to (Quartz)

qz.com

Amanda Shendruk, Tim McDonnell, David Yanofsky, Michael J. Coren

Published August 10, 2021

[Check the original publication here for the text of the report with most important parts highlighted.]


The most important takeaways from the new Intergovernmental Panel on Climate Change report are easily summarized: Global warming is happening, it’s caused by human greenhouse gas emissions, and the impacts are very bad (in some cases, catastrophic). Every fraction of a degree of warming we can prevent by curbing emissions substantially reduces this damage. It’s a message that hasn’t changed much since the first IPCC report in 1990.

But to reach these conclusions (and ratchet up confidence in their findings), hundreds of scientists from universities around the globe spent years combing through the peer-reviewed literature—at least 14,000 papers—on everything from cyclones to droughts.

The final Aug. 9 report is nearly 4,000 pages long. While much of it is written in inscrutable scientific jargon, if you want to understand the scientific case for man-made global warming, look no further. We’ve reviewed the data, summarized the main points, and created an interactive graphic showing a “heat map” of scientists’ confidence in their conclusions. The terms describing statistical confidence range from very high confidence (a 9 out of 10 chance) to very low confidence (a 1 in 10 chance). Just hover over the graphic [here] and click to see what they’ve written.

Here’s your guide to the IPCC’s latest assessment.

CH 1: Framing, context, methods

The first chapter comes out swinging with a bold political charge: It concludes with “high confidence” that the plans countries so far have put forward to reduce emissions are “insufficient” to keep warming well below 2°C, the goal enshrined in the 2015 Paris Agreement. While unsurprising on its own, it is surprising for a document that had to be signed off on by the same government representatives it condemns. It then lists advancements in climate science since the last IPCC report, as well as key evidence behind the conclusion that human-caused global warming is “unequivocal.”

Highlights

👀Scientists’ ability to observe the physical climate system has continued to improve and expand.

📈Since the last IPCC report, new techniques have provided greater confidence in attributing changes in extreme events to human-caused climate change.

🔬The latest generation of climate models is better at representing natural processes, and higher-resolution models that better capture smaller-scale processes and extreme events have become available.

CH 2: Changing state of the climate system

Chapter 2 looks backward in time to compare the current rate of climate changes to those that happened in the past. That comparison clearly reveals human fingerprints on the climate system. The last time global temperatures were comparable to today was 125,000 years ago, the concentration of atmospheric carbon dioxide is higher than anytime in the last 2 million years, and greenhouse gas emissions are rising faster than anytime in the last 800,000 years.

Highlights

🥵Observed changes in the atmosphere, oceans, cryosphere, and biosphere provide unequivocal evidence of a world that has warmed. Over the past several decades, key indicators of the climate system are increasingly at levels unseen in centuries to millennia, and are changing at rates unprecedented in at least the last 2,000 years.

🧊Annual mean Arctic sea ice coverage levels are the lowest since at least 1850. Late summer levels are the lowest in the past 1,000 years.

🌊Global mean sea level (GMSL) is rising, and the rate of GMSL rise since the 20th century is faster than over any preceding century in at least the last three millennia. Since 1901, GMSL has risen by 0.20 [0.15–0.25] meters, and the rate of rise is accelerating.

CH 3: Human influence on the climate system

Chapter 3 leads with the IPCC’s strongest-ever statement on the human impact on the climate: “It is unequivocal that human influence has warmed the global climate system since pre-industrial times” (the last IPCC report said human influence was “clear”). Specifically, the report blames humanity for nearly all of the 1.1°C increase in global temperatures observed since the Industrial Revolution (natural forces played a tiny role as well), and the loss of sea ice, rising temperatures, and acidity in the ocean.

🌍Human-induced greenhouse gas forcing is the main driver of the observed changes in hot and cold extremes.

🌡️The likely range of warming in global-mean surface air temperature (GSAT) in 2010–2019 relative to 1850–1900 is 0.9°C–1.2°C. Of that, 0.8°C–1.3°C is attributable to human activity, while natural forces contributed −0.1°C–0.1°C.

😬Combining the attributable contributions from melting ice and the expansion of warmer water, it is very likely that human influence was the main driver of the observed global mean sea level rise since at least 1970.

CH 4: Future global climate: Scenario-based projections and near-term information

Chapter 4 holds two of the report’s most important conclusions: Climate change is happening faster than previously understood, and the likelihood that the global temperature increase can stay within the Paris Agreement goal of 1.5°C is extremely slim. The 2013 IPCC report projected that temperatures could exceed 1.5°C in the 2040s; here, that timeline has been advanced by a decade to the “early 2030s” in the median scenario. And even in the lowest-emission scenario, it is “more likely than not” to occur by 2040.

Highlights

🌡️By 2030, in all future warming scenarios, globally averaged surface air temperature in any individual year could exceed 1.5°C relative to 1850–1900.

🌊Under all scenarios, it is virtually certain that global mean sea level will continue to rise through the 21st century.

💨Even if enough carbon were removed from the atmosphere that global emissions become net negative, some climate change impacts, such as sea level rise, will not be reversed for at least several centuries.

CH 5: Global carbon and other biogeochemical cycles and feedbacks

Chapter 5 quantifies the level by which atmospheric CO2 and methane concentrations have increased since 1750 (47% and 156% respectively) and addresses the ability of oceans and other natural systems to soak those emissions up. The more emissions increase, the less they can be offset by natural sinks—and in a high-emissions scenario, the loss of forests from wildfires becomes so severe that land-based ecosystems become a net source of emissions, rather than a sink (this is already happening to a degree in the Amazon).

Highlights

🌲The CO2 emitted from human activities during the decade of 2010–2019 was distributed between three Earth systems: 46% accumulated in the atmosphere, 23% was taken up by the ocean, and 31% was stored by vegetation.

📉The fraction of emissions taken up by land and ocean is expected to decline as the CO2 concentration increases.

💨Global temperatures rise in a near-linear relationship to cumulative CO2 emissions. In other words, to halt global warming, net emissions must reach zero.
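
That near-linear relationship is often summarized as a single coefficient (the transient climate response to cumulative emissions, or TCRE), which makes for a one-line model. The TCRE value and cumulative totals below are rough, commonly cited figures used purely for illustration:

# Near-linear warming vs. cumulative CO2 emissions. The TCRE value and
# cumulative totals are rough illustrative figures, not the report's.
TCRE = 0.45  # degrees C per 1000 Gt of cumulative CO2 (approximate)

def warming(cumulative_gt_co2):
    return TCRE * cumulative_gt_co2 / 1000.0

print(f"~2,400 Gt emitted so far -> ~{warming(2400):.1f} C of warming")
print(f"500 Gt more -> +{warming(500):.2f} C on top of that")
# Zero net emissions means zero additional cumulative CO2, and hence
# (to first order) no further warming -- the point of the bullet above.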

CH 6: Short-lived climate forcers

Chapter 6 is all about methane, particulate matter, aerosols, hydrofluorocarbons, and other non-CO2 gases that don’t linger very long in the atmosphere (just a few hours, in some cases) but exert a tremendous influence on the climate while they do. In some cases, that influence is cooling, but their net impact has been to contribute to warming. Because they are short-lived, the future abundance and impact of these gases are highly variable in the different socioeconomic pathways considered in the report. These gases also have a huge impact on the respiratory health of people around the world.

Highlights

⛽The sectors most responsible for warming from short-lived climate forcers are those dominated by methane emissions: fossil fuel production and distribution, agriculture, and waste management.

🧊In the next two decades, it is very likely that emissions from short-lived climate forcers will cause a warming relative to 2019, in addition to the warming from long-lived greenhouse gases like CO2.

🌏Rapid decarbonization leads to air quality improvements, but on its own is not sufficient to achieve, in the near term, air quality guidelines set by the World Health Organization, especially in parts of Asia and in some other highly polluted regions.

CH 7: The Earth’s energy budget, climate feedbacks, and climate sensitivity

Climate sensitivity is a measure of how much the Earth responds to changes in greenhouse gas concentrations. For every doubling of atmospheric CO2, temperatures go up by about 3°C, this chapter concludes. That’s about the same level scientists have estimated for several decades, but over time the range of uncertainty around that estimate has narrowed. The energy budget is a calculation of how much energy is flowing into the Earth system from the sun. Put together these metrics paint a picture of the human contribution to observed warming.
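
That roughly-3°C-per-doubling figure implies a logarithmic relationship between concentration and eventual warming, delta T = S × log2(C/C0). A quick sketch, with round illustrative concentrations:

# Equilibrium warming implied by ~3C per doubling of CO2:
# delta_T = S * log2(C / C0). Concentrations below are round examples.
from math import log2

S = 3.0     # degrees C per doubling (the chapter's central estimate)
C0 = 280.0  # approximate pre-industrial CO2, parts per million

for c in (420, 560, 1120):
    print(f"{c} ppm -> ~{S * log2(c / C0):.1f} C of eventual warming")
# 560 ppm is one doubling (3.0 C); 1120 ppm is two doublings (6.0 C).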

🐻‍❄️The Arctic warms more quickly than the Antarctic due to differences in radiative feedbacks and ocean heat uptake between the poles.

🌊Because of existing greenhouse gas concentrations, energy will continue to accumulate in the Earth system until at least the end of the 21st century, even under strong emissions reduction scenarios.

☁️The net effect of changes in clouds in response to global warming is to amplify human-induced warming. Compared to the last IPCC report, major advances in the understanding of cloud processes have increased the level of confidence in the cloud feedback cycle.

CH 8: Water cycle changes

This chapter catalogs what happens to water in a warming world. Although instances of drought are expected to become more common and more severe, wet parts of the world will get wetter as the warmer atmosphere is able to carry more water. Total net precipitation will increase, yet the thirstier atmosphere will make dry places drier. And within any one location, the difference in precipitation between the driest and wettest month will likely increase. But rainstorms are complex phenomena that typically happen at a scale smaller than the resolution of most climate models, so specific local predictions about monsoon patterns remain an area of relatively high uncertainty.

Highlights

🌎Increased evapotranspiration will decrease soil moisture over the Mediterranean, southwestern North America, south Africa, southwestern South America, and southwestern Australia.

🌧️Summer monsoon precipitation is projected to increase for the South, Southeast and East Asian monsoon domains, while North American monsoon precipitation is projected to decrease. West African monsoon precipitation is projected to increase over the Central Sahel and decrease over the far western Sahel.

🌲Large-scale deforestation has likely decreased evapotranspiration and precipitation and increased runoff over the deforested regions. Urbanization has increased local precipitation and runoff intensity.

CH 9: Ocean, cryosphere, and sea level change

Most of the heat trapped by greenhouse gases is ultimately absorbed by the oceans. Warmer water expands, contributing significantly to sea level rise, and the slow, deep circulation of ocean water is a key reason why global temperatures don’t turn on a dime in relation to atmospheric CO2. Marine animals are feeling this heat, as scientists have documented that the frequency of marine heatwaves has doubled since the 1980s. Meanwhile, glaciers, polar sea ice, the Greenland ice sheet, and global permafrost are all rapidly melting. Overall sea levels have risen about 20 centimeters since 1900, and the rate of sea level rise is increasing.

Highlights

📈Global mean sea level rose faster in the 20th century than in any prior century over the last three millennia.

🌡️The heat content of the global ocean has increased since at least 1970 and will continue to increase over the 21st century. The associated warming will likely continue until at least 2300 even for low-emission scenarios because of the slow circulation of the deep ocean.

🧊The Arctic Ocean will likely become practically sea ice–free during the seasonal sea ice minimum for the first time before 2050 in all considered SSP scenarios.

CH 10: Linking global to regional climate change

Since 1950, scientists have clearly detected how greenhouse gas emissions from human activity are changing regional temperatures. Climate models can predict regional climate impacts. Where data are limited, statistical methods help identify local impacts (especially in challenging terrain such as mountains). Cities, in particular, will warm faster as a result of urbanization. Global warming extremes in urban areas will be even more pronounced, especially during heatwaves. Although global models largely agree, it is more difficult to consistently predict regional climate impacts across models.

Highlights

⛰️Some local-scale phenomena, such as sea breezes and mountain wind systems, cannot be well represented by the resolution of most climate models.

🌆The difference in observed warming trends between cities and their surroundings can partly be attributed to urbanization. Future urbanization will amplify the projected air temperature change in cities regardless of the characteristics of the background climate.

😕Statistical methods are improving to downscale global climate models to more accurately depict local or regional projections.

CH 11: Weather and climate extreme events in a changing climate

Better data collection, modeling, and analysis mean scientists are more confident than ever in their understanding of the role of rising greenhouse gas concentrations in weather and climate extremes. We are virtually certain humans are behind observed temperature extremes.

Human activity is making extreme weather and temperatures more intense and more frequent, especially rain, droughts, and tropical cyclones. While even 1.5°C of warming will make events more severe, the intensity of extreme events is expected to at least double with 2°C of global warming compared with today’s conditions, and quadruple with 3°C of warming. As global warming accelerates, historically unprecedented climatic events are likely to occur.

Highlights

🌡️It is an established fact that human-induced greenhouse gas emissions have led to an increased frequency and/or intensity of some weather and climate extremes since pre-industrial time, in particular for temperature extremes.

🌎Even relatively small incremental increases in global warming cause statistically significant changes in extremes.

🌪️Extreme events unprecedented in the observed record will occur, and will become more frequent with increasing global warming.

⛈️Relative to present-day conditions, changes in the intensity of extremes would be at least double at 2°C, and quadruple at 3°C of global warming.

CH 12: Climate change information for regional impact and for risk assessment

Climate models are getting better, more precise, and more accurate at predicting regional impacts. We know a lot more than we did in 2014, when AR5 was released. Our climate is already different compared to the early or mid-20th century, and we’re seeing big changes to mean temperatures, growing seasons, extreme heat, ocean acidification, deoxygenation, and Arctic sea ice loss. Expect more changes by mid-century: more rain in the northern hemisphere, less rain in a few regions (the Mediterranean and South Africa), as well as sea-level rise along all coasts. Overall, there is high confidence that mean and extreme temperatures will rise over land and sea. Major widespread damages are expected, but benefits are also possible in some places.

Highlights

🌏Every region of the world will experience concurrent changes in multiple climate impact drivers by mid-century.

🌱Climate change is already resulting in significant societal and environmental impacts and will induce major socio-economic damages in the future. In some cases, climate change can also lead to beneficial conditions which can be taken into account in adaptation strategies.

🌨️The impacts of climate change depend not only on physical changes in the climate itself, but also on whether humans take steps to limit their exposure and vulnerability.


What we did:

The visualization of confidence is only for the executive summary at the beginning of each chapter. If a sentence had a confidence associated with it, the confidence text was removed and a color applied instead. If a sentence did not have an associated confidence, that doesn’t mean scientists do not feel confident about the content; they may be using likelihood (or certainty) language in that instance instead. We chose to only visualize confidence, as it is used more often in the report. Highlights were drawn from the text of the report but edited and in some cases rephrased for clarity.
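
The report ships no code for this, but the substitution itself is mechanical. A minimal sketch in Python, assuming the IPCC's five standard confidence levels and a purely hypothetical color palette:

```python
import re

# Hypothetical palette for the IPCC's five confidence levels; the colors
# actually used in the visualization are a design choice, not fixed here.
CONFIDENCE_COLORS = {
    "very low": "#d73027",
    "low": "#fc8d59",
    "medium": "#fee090",
    "high": "#91bfdb",
    "very high": "#4575b4",
}

# Matches trailing parentheticals such as "(high confidence)".
PATTERN = re.compile(r"\s*\((very low|low|medium|high|very high) confidence\)")

def colorize(sentence: str):
    """Strip the confidence tag and return (clean_text, highlight_color).

    Sentences without a tag return color None; as noted above, a missing
    tag does not mean low confidence (the authors may be using likelihood
    language instead).
    """
    match = PATTERN.search(sentence)
    if not match:
        return sentence, None
    return PATTERN.sub("", sentence), CONFIDENCE_COLORS[match.group(1)]

print(colorize("Global mean sea level will continue to rise (high confidence)."))
# -> ('Global mean sea level will continue to rise.', '#91bfdb')
```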

Capitalism is in crisis. To save it, we need to rethink economic growth. (MIT Technology Review)

technologyreview.com

The failure of capitalism to solve our biggest problems is prompting many to question one of its basic precepts.

David Rotman


This story was part of our November 2020 issue

October 14, 2020

No wonder many in the US and Europe have begun questioning the underpinnings of capitalism—particularly its devotion to free markets and its faith in the power of economic growth to create prosperity and solve our problems. 

The antipathy to growth is not new; the term “degrowth” was coined in the early 1970s. But these days, worries over climate change, as well as rising inequality, are prompting its reemergence as a movement. 

Calls for “the end of growth” are still on the economic fringe, but degrowth arguments have been taken up by political movements as different as the Extinction Rebellion and the populist Five Star Movement in Italy. “And all you can talk about is money and fairy tales of eternal economic growth. How dare you!” thundered Greta Thunberg, the young Swedish climate activist, to an audience of diplomats and politicians at UN Climate Week last year.

At the core of the degrowth movement is a critique of capitalism itself. In Less Is More: How Degrowth Will Save the World, Jason Hickel writes: “Capitalism is fundamentally dependent on growth.” It is, he says, “not growth for any particular purpose, mind you, but growth for its own sake.”

That mindless growth, Hickel and his fellow degrowth believers contend, is very bad both for the planet and for our spiritual well-being. We need, Hickel writes, to develop “new theories of being” and rethink our place in the “living world.” (Hickel goes on about intelligent plants and their ability to communicate, which is both controversial botany and confusing economics.) It’s tempting to dismiss it all as being more about social engineering of our lifestyles than about actual economic reforms. 

Though Hickel, an anthropologist, offers a few suggestions (“cut advertising” and “end planned obsolescence”), there’s little about the practical steps that would make a no-growth economy work. Sorry, but talking about plant intelligence won’t solve our woes; it won’t feed hungry people or create well-paying jobs. 

Still, the degrowth movement does have a point: faced with climate change and the financial struggles of many workers, capitalism isn’t getting it done. 

Slow growth

Even some economists outside the degrowth camp, while not entirely rejecting the importance of growth, are questioning our blind devotion to it. 

One obvious factor shaking their faith is that growth has been lousy for decades. There have been exceptions to this economic sluggishness—the US during the late 1990s and early 2000s and developing countries like China as they raced to catch up. But some scholars, notably Robert Gordon, whose 2016 book The Rise and Fall of American Growth triggered much economic soul-searching, are realizing that slow growth might be the new normal, not some blip, for much of the world. 

Gordon held that growth “ended on October 16, 1973, or thereabouts,” write MIT economists Esther Duflo and Abhijit Banerjee, who won the 2019 Nobel Prize, in Good Economics for Hard Times. Referencing Gordon, they single out the day when the OPEC oil embargo began; GDP growth in the US and Europe never fully recovered. 

The pair are of course being somewhat facetious in tracing the end of growth to a particular day. Their larger point: robust growth seemingly disappeared almost overnight, and no one knows what happened.

Duflo and Banerjee offer possible explanations, only to dismiss them. They write: “The bottom line is that despite the best efforts of generations of economists, the deep mechanisms of persistent economic growth remain elusive.” Nor do we know how to revive it. They conclude: “Given that, we will argue, it may be time to abandon our profession’s obsession with growth.”

From this perspective, growth is not the villain of today’s capitalism; rather, at least as measured by GDP, it’s an aspiration that is losing its relevance. Slow growth is nothing to worry about, says Dietrich Vollrath, an economist at the University of Houston, at least not in rich countries. It’s largely the result of lower birth rates (a shrinking workforce means less output) and a shift to services to meet the demands of wealthier consumers. In any case, says Vollrath, with few ways to change it, we might as well embrace slow growth. “It is what it is,” he says.

Vollrath says when his book Fully Grown: Why a Stagnant Economy Is a Sign of Success came out last January, he “was adopted by the degrowthers.” But unlike them, he’s indifferent to whether growth ends or not; rather, he wants to shift the discussion to ways of creating more sustainable technologies and achieving other social goals, whether the changes boost growth or not. “There is now a disconnect between GDP and whether things are getting better,” he says.

Living better

Though the US is the world’s largest economy as measured by GDP, it is doing poorly on indicators such as environmental performance and access to quality education and health care, according to the Social Progress Index, released late this summer by a Washington-based think tank. In the annual ranking (done before the covid pandemic), the US came in 28th, far behind other wealthy countries, including ones with slower GDP growth rates.

“You can churn out all the GDP you want,” says Rebecca Henderson, an economist at Harvard Business School, “but if the suicide rates go up, and the depression rates go up, and the rate of children dying before they’re four goes up, it’s not the kind of society you want to build.” We need to “stop relying totally on GDP,” she says. “It should be just one metric among many.”

Part of the problem, she suggests, is “a failure to imagine that capitalism can be done differently, that it can operate without toasting the planet.”

In her view, the US needs to start measuring and valuing growth according to its impact on climate change and access to essential services like health care. “We need self-aware growth,” says Henderson. “Not growth at any cost.”

Daron Acemoglu, another MIT economist, is calling for a “new growth strategy” aimed at creating technologies needed to solve our most pressing problems. Acemoglu describes today’s growth as being driven by large corporations committed to digital technologies, automation, and AI. This concentration of innovation in a few dominant companies has led to inequality and, for many, wage stagnation. 

People in Silicon Valley, he says, often acknowledge to him that this is a problem but argue, “It’s what technology wants. It’s the path of technology.” Acemoglu disagrees; we make deliberate choices about which technologies we invent and use, he says.

Acemoglu argues that growth should be directed by market incentives and by regulation. That, he believes, is the best way to make sure we create and deploy technologies that society needs, rather than ones that simply generate massive profits for a few. 

Which technologies are those? “I don’t know exactly,” he says. “I’m not clairvoyant. It hasn’t been a priority to develop such technologies, and we’re not aware of the capabilities.”

Turning such a strategy into reality will depend on politics. And the reasoning of academic economists like Acemoglu and Henderson, one fears, is not likely to be popular politically—ignoring as it does the loud calls for the end of growth from the left and the self-confident demands for continued unfettered free markets on the right. 

But for those not willing to give up on a future of growth and the vast promise of innovation to improve lives and save the planet, expanding our technological imagination is the only real choice.

Rewriting capitalism: some must-reads

  • Reimagining Capitalism in a World on Fire, BY REBECCA HENDERSON
    The Harvard Business School economist argues that companies can play an important role in improving the world.
  • Good Economics for Hard Times, BY ABHIJIT V. BANERJEE AND ESTHER DUFLO
    The MIT economists and 2019 Nobel laureates explain the challenges of boosting growth both in rich countries and in poor ones, where they do much of their research.
  • Fully Grown: Why a Stagnant Economy Is a Sign of Success, BY DIETRICH VOLLRATH
    The University of Houston economist argues that slow growth in rich countries like the United States is just fine, but we need to make the benefits from it more inclusive.
  • Less Is More: How Degrowth Will Save the World, BY JASON HICKEL
    A leading voice in the degrowth movement provides an overview of the argument for ending growth. It’s a convincing diagnosis of the problems we’re facing; how an end to growth will solve any of them is less clear.

MIT Predicted in 1972 That Society Will Collapse This Century. New Research Shows We’re on Schedule (Motherboard)

A 1972 MIT study predicted that rapid economic growth would lead to societal collapse in the mid-21st century. A new paper shows we’re unfortunately right on schedule.

By Nafeez Ahmed – July 14, 2021, 10:00am

A remarkable new study by a director at one of the largest accounting firms in the world has found that a famous, decades-old warning from MIT about the risk of industrial civilization collapsing appears to be accurate based on new empirical data. 

As the world looks forward to a rebound in economic growth following the devastation wrought by the pandemic, the research raises urgent questions about the risks of attempting to simply return to the pre-pandemic ‘normal.’

In 1972, a team of MIT scientists got together to study the risks of civilizational collapse. Their system dynamics model published by the Club of Rome identified impending ‘limits to growth’ (LtG) that meant industrial civilization was on track to collapse sometime within the 21st century, due to overexploitation of planetary resources.

The controversial MIT analysis generated heated debate, and was widely derided at the time by pundits who misrepresented its findings and methods. But the analysis has now received stunning vindication from a study written by a senior director at professional services giant KPMG, one of the ‘Big Four’ accounting firms as measured by global revenue.

Limits to growth

The study was published in the Yale Journal of Industrial Ecology in November 2020 and is available on the KPMG website. It concludes that the current business-as-usual trajectory of global civilization is heading toward the terminal decline of economic growth within the coming decade—and at worst, could trigger societal collapse by around 2040.

The study represents the first time a top analyst working within a mainstream global corporate entity has taken the ‘limits to growth’ model seriously. Its author, Gaya Herrington, is Sustainability and Dynamic System Analysis Lead at KPMG in the United States. However, she decided to undertake the research as a personal project to understand how well the MIT model stood the test of time.

The study itself is not affiliated with or conducted on behalf of KPMG, and does not necessarily reflect the views of KPMG. Herrington performed the research as an extension of her master’s thesis at Harvard University in her capacity as an advisor to the Club of Rome. However, she is quoted explaining her project on the KPMG website as follows:

“Given the unappealing prospect of collapse, I was curious to see which scenarios were aligning most closely with empirical data today. After all, the book that featured this world model was a bestseller in the 70s, and by now we’d have several decades of empirical data which would make a comparison meaningful. But to my surprise I could not find recent attempts for this. So I decided to do it myself.”

Titled ‘Update to limits to growth: Comparing the World3 model with empirical data’, the study attempts to assess how MIT’s ‘World3’ model stacks up against new empirical data. Previous studies that attempted to do this found that the model’s worst-case scenarios accurately reflected real-world developments. However, the last study of this nature was completed in 2014. 

The risk of collapse 

Herrington’s new analysis examines data across 10 key variables, namely population, fertility rates, mortality rates, industrial output, food production, services, non-renewable resources, persistent pollution, human welfare, and ecological footprint. She found that the latest data most closely aligns with two particular scenarios, ‘BAU2’ (business-as-usual) and ‘CT’ (comprehensive technology). 
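
The paper quantifies "closest alignment" statistically; the sketch below shows one plausible way to do that, a normalized root-mean-square error per variable averaged across all 10, though the study's own fit metrics may differ:

```python
import numpy as np

def normalized_rmse(observed: np.ndarray, scenario: np.ndarray) -> float:
    """RMSE between observed data and a scenario trajectory, normalized by
    the observed range so variables in different units are comparable."""
    rmse = np.sqrt(np.mean((observed - scenario) ** 2))
    return rmse / (observed.max() - observed.min())

def rank_scenarios(observed_by_var: dict, scenarios: dict) -> list:
    """Rank scenarios (e.g., 'BAU2', 'CT', 'SW') by mean normalized RMSE
    across all observed variables; lower means a closer fit."""
    scores = {
        name: np.mean([normalized_rmse(observed_by_var[var], traj[var])
                       for var in observed_by_var])
        for name, traj in scenarios.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1])
```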

“BAU2 and CT scenarios show a halt in growth within a decade or so from now,” the study concludes. “Both scenarios thus indicate that continuing business as usual, that is, pursuing continuous growth, is not possible. Even when paired with unprecedented technological development and adoption, business as usual as modelled by LtG would inevitably lead to declines in industrial capital, agricultural output, and welfare levels within this century.”

Study author Gaya Herrington told Motherboard that in the MIT World3 models, collapse “does not mean that humanity will cease to exist,” but rather that “economic and industrial growth will stop, and then decline, which will hurt food production and standards of living… In terms of timing, the BAU2 scenario shows a steep decline to set in around 2040.”

The ‘Business-as-Usual’ scenario (Source: Herrington, 2021)

The end of growth? 

In the comprehensive technology (CT) scenario, economic decline still sets in around this date with a range of possible negative consequences, but this does not lead to societal collapse.

The ‘Comprehensive Technology’ scenario (Source: Herrington, 2021)

Unfortunately, the scenario that fits the latest empirical data least closely happens to be the most optimistic pathway, known as ‘SW’ (stabilized world), in which civilization follows a sustainable path and experiences the smallest declines in economic growth, based on a combination of technological innovation and widespread investment in public health and education.

The ‘Stabilized World’ Scenario (Source: Herrington, 2021)

Although both the business-as-usual and comprehensive technology scenarios point to the coming end of economic growth in around 10 years, only the BAU2 scenario “shows a clear collapse pattern, whereas CT suggests the possibility of future declines being relatively soft landings, at least for humanity in general.” 

Both scenarios currently “seem to align quite closely not just with observed data,” Herrington concludes in her study, indicating that the future is open.   

A window of opportunity 

While focusing on the pursuit of continued economic growth for its own sake will be futile, the study finds that technological progress and increased investments in public services could not just avoid the risk of collapse, but lead to a new stable and prosperous civilization operating safely within planetary boundaries. But we really have only the next decade to change course. 

“At this point therefore, the data most aligns with the CT and BAU2 scenarios which indicate a slowdown and eventual halt in growth within the next decade or so, but World3 leaves open whether the subsequent decline will constitute a collapse,” the study concludes. Although the ‘stabilized world’ scenario “tracks least closely, a deliberate trajectory change brought about by society turning toward another goal than growth is still possible. The LtG work implies that this window of opportunity is closing fast.”

In a presentation at the World Economic Forum in 2020 delivered in her capacity as a KPMG director, Herrington argued for ‘agrowth’—an agnostic approach to growth which focuses on other economic goals and priorities.  

“Changing our societal priorities hardly needs to be a capitulation to grim necessity,” she said. “Human activity can be regenerative and our productive capacities can be transformed. In fact, we are seeing examples of that happening right now. Expanding those efforts now creates a world full of opportunity that is also sustainable.” 

She noted how the rapid development and deployment of vaccines at unprecedented rates in response to the COVID-19 pandemic demonstrates that we are capable of responding rapidly and constructively to global challenges if we choose to act. We need exactly such a determined approach to the environmental crisis.

“The necessary changes will not be easy and pose transition challenges but a sustainable and inclusive future is still possible,” said Herrington. 

The best available data suggests that what we decide over the next 10 years will determine the long-term fate of human civilization. Although the odds are on a knife-edge, Herrington pointed to a “rapid rise” in environmental, social, and governance (ESG) priorities as a basis for optimism, signalling the change in thinking taking place in both governments and businesses. She told me that perhaps the most important implication of her research is that it’s not too late to create a truly sustainable civilization that works for all.

What AI still can’t do (MIT Technology Review)

technologyreview.com

Brian Bergstein

February 19, 2020


Machine-learning systems can be duped or confounded by situations they haven’t seen before. A self-driving car gets flummoxed by a scenario that a human driver could handle easily. An AI system laboriously trained to carry out one task (identifying cats, say) has to be taught all over again to do something else (identifying dogs). In the process, it’s liable to lose some of the expertise it had in the original task. Computer scientists call this problem “catastrophic forgetting.”

These shortcomings have something in common: they exist because AI systems don’t understand causation. They see that some events are associated with other events, but they don’t ascertain which things directly make other things happen. It’s as if you knew that the presence of clouds made rain likelier, but you didn’t know clouds caused rain.

Elias Bareinboim: AI systems are clueless when it comes to causation.

Understanding cause and effect is a big aspect of what we call common sense, and it’s an area in which AI systems today “are clueless,” says Elias Bareinboim. He should know: as the director of the new Causal Artificial Intelligence Lab at Columbia University, he’s at the forefront of efforts to fix this problem.

His idea is to infuse artificial-intelligence research with insights from the relatively new science of causality, a field shaped to a huge extent by Judea Pearl, a Turing Award–winning scholar who considers Bareinboim his protégé.

As Bareinboim and Pearl describe it, AI’s ability to spot correlations—e.g., that clouds make rain more likely—is merely the simplest level of causal reasoning. It’s good enough to have driven the boom in the AI technique known as deep learning over the past decade. Given a great deal of data about familiar situations, this method can lead to very good predictions. A computer can calculate the probability that a patient with certain symptoms has a certain disease, because it has learned just how often thousands or even millions of other people with the same symptoms had that disease.
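
At bottom, that prediction is frequency counting: estimate P(disease | symptoms) from how often the two co-occur. A toy sketch with made-up records:

```python
# Each record is (has_symptoms, has_disease); in a real system these
# would be thousands or millions of patient records.
records = [(True, True), (True, False), (True, True), (False, False),
           (True, True), (False, False), (False, True), (True, False)]

disease_given_symptoms = [d for s, d in records if s]
p = sum(disease_given_symptoms) / len(disease_given_symptoms)
print(f"P(disease | symptoms) ≈ {p:.2f}")  # 3 of 5 symptomatic records -> 0.60
```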

But there’s a growing consensus that progress in AI will stall if computers don’t get better at wrestling with causation. If machines could grasp that certain things lead to other things, they wouldn’t have to learn everything anew all the time—they could take what they had learned in one domain and apply it to another. And if machines could use common sense we’d be able to put more trust in them to take actions on their own, knowing that they aren’t likely to make dumb errors.

Today’s AI has only a limited ability to infer what will result from a given action. In reinforcement learning, a technique that has allowed machines to master games like chess and Go, a system uses extensive trial and error to discern which moves will essentially cause it to win. But this approach doesn’t work in messier settings in the real world. It doesn’t even leave a machine with a general understanding of how it might play other games.

An even higher level of causal thinking would be the ability to reason about why things happened and ask “what if” questions. A patient dies while in a clinical trial; was it the fault of the experimental medicine or something else? School test scores are falling; what policy changes would most improve them? This kind of reasoning is far beyond the current capability of artificial intelligence.

Performing miracles

The dream of endowing computers with causal reasoning drew Bareinboim from Brazil to the United States in 2008, after he completed a master’s in computer science at the Federal University of Rio de Janeiro. He jumped at an opportunity to study under Judea Pearl, a computer scientist and statistician at UCLA. Pearl, 83, is a giant—the giant—of causal inference, and his career helps illustrate why it’s hard to create AI that understands causality.

Even well-trained scientists are apt to misinterpret correlations as signs of causation—or to err in the opposite direction, hesitating to call out causation even when it’s justified. In the 1950s, for example, a few prominent statisticians muddied the waters around whether tobacco caused cancer. They argued that without an experiment randomly assigning people to be smokers or nonsmokers, no one could rule out the possibility that some unknown—stress, perhaps, or some gene—caused people both to smoke and to get lung cancer.

Eventually, the fact that smoking causes cancer was definitively established, but it needn’t have taken so long. Since then, Pearl and other statisticians have devised a mathematical approach to identifying what facts would be required to support a causal claim. Pearl’s method shows that, given the prevalence of smoking and lung cancer, an independent factor causing both would be extremely unlikely.
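
That argument predates Pearl's graphical machinery; one classical formalization from the 1950s smoking debate is Cornfield's inequality, stated here as background in generic notation rather than as the notation of Pearl's own method:

```latex
% For an unmeasured confounder U to fully explain an observed risk
% ratio RR between exposure E (smoking) and disease D (lung cancer):
\[
  \frac{P(U = 1 \mid E = 1)}{P(U = 1 \mid E = 0)} \;\geq\; \mathrm{RR}_{ED}
\]
% With smokers roughly nine times likelier to develop lung cancer, a
% hidden common cause would have to be at least nine times more
% prevalent among smokers, which is implausible for any known gene.
```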

Conversely, Pearl’s formulas also help identify when correlations can’t be used to determine causation. Bernhard Schölkopf, who researches causal AI techniques as a director at Germany’s Max Planck Institute for Intelligent Systems, points out that you can predict a country’s birth rate if you know its population of storks. That isn’t because storks deliver babies or because babies attract storks, but probably because economic development leads to more babies and more storks. Pearl has helped give statisticians and computer scientists ways of attacking such problems, Schölkopf says.
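
A few lines of simulation show how a shared driver manufactures exactly this kind of correlation; the coefficients below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical common cause: economic development drives both the stork
# population and the birth rate; there is no direct link between the two.
development = rng.normal(size=n)
storks = 2.0 * development + rng.normal(size=n)
births = 1.5 * development + rng.normal(size=n)

print(np.corrcoef(storks, births)[0, 1])   # strong correlation, ~0.75

# Conditioning on development (here by subtracting its contribution,
# using the true coefficients for simplicity) makes the link vanish.
print(np.corrcoef(storks - 2.0 * development,
                  births - 1.5 * development)[0, 1])  # ~0.0
```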

Judea Pearl: His theory of causal reasoning has transformed science.

Pearl’s work has also led to the development of causal Bayesian networks—software that sifts through large amounts of data to detect which variables appear to have the most influence on other variables. For example, GNS Healthcare, a company in Cambridge, Massachusetts, uses these techniques to advise researchers about experiments that look promising.

In one project, GNS worked with researchers who study multiple myeloma, a kind of blood cancer. The researchers wanted to know why some patients with the disease live longer than others after getting stem-cell transplants, a common form of treatment. The software churned through data with 30,000 variables and pointed to a few that seemed especially likely to be causal. Biostatisticians and experts in the disease zeroed in on one in particular: the level of a certain protein in patients’ bodies. Researchers could then run a targeted clinical trial to see whether patients with the protein did indeed benefit more from the treatment. “It’s way faster than poking here and there in the lab,” says GNS cofounder Iya Khalil.

Nonetheless, the improvements that Pearl and other scholars have achieved in causal theory haven’t yet made many inroads in deep learning, which identifies correlations without too much worry about causation. Bareinboim is working to take the next step: making computers more useful tools for human causal explorations.

One of his systems, which is still in beta, can help scientists determine whether they have sufficient data to answer a causal question. Richard McElreath, an anthropologist at the Max Planck Institute for Evolutionary Anthropology, is using the software to guide research into why humans go through menopause (we are the only apes that do).

The hypothesis is that the decline of fertility in older women benefited early human societies because women who put more effort into caring for grandchildren ultimately had more descendants. But what evidence might exist today to support the claim that children do better with grandparents around? Anthropologists can’t just compare the educational or medical outcomes of children who have lived with grandparents and those who haven’t. There are what statisticians call confounding factors: grandmothers might be likelier to live with grandchildren who need the most help. Bareinboim’s software can help McElreath discern which studies about kids who grew up with their grandparents are least riddled with confounding factors and could be valuable in answering his causal query. “It’s a huge step forward,” McElreath says.

The last mile

Bareinboim talks fast and often gestures with two hands in the air, as if he’s trying to balance two sides of a mental equation. It was halfway through the semester when I visited him at Columbia in October, but it seemed as if he had barely moved into his office—hardly anything on the walls, no books on the shelves, only a sleek Mac computer and a whiteboard so dense with equations and diagrams that it looked like a detail from a cartoon about a mad professor.

He shrugged off the provisional state of the room, saying he had been very busy giving talks about both sides of the causal revolution. Bareinboim believes work like his offers the opportunity not just to incorporate causal thinking into machines, but also to improve it in humans.

Getting people to think more carefully about causation isn’t necessarily much easier than teaching it to machines, he says. Researchers in a wide range of disciplines, from molecular biology to public policy, are sometimes content to unearth correlations that are not actually rooted in causal relationships. For instance, some studies suggest drinking alcohol will kill you early, while others indicate that moderate consumption is fine and even beneficial, and still other research has found that heavy drinkers outlive nondrinkers. This phenomenon, known as the “reproducibility crisis,” crops up not only in medicine and nutrition but also in psychology and economics. “You can see the fragility of all these inferences,” says Bareinboim. “We’re flipping results every couple of years.”

He argues that anyone asking “what if”—medical researchers setting up clinical trials, social scientists developing pilot programs, even web publishers preparing A/B tests—should start not merely by gathering data but by using Pearl’s causal logic and software like Bareinboim’s to determine whether the available data could possibly answer a causal hypothesis. Eventually, he envisions this leading to “automated scientist” software: a human could dream up a causal question to go after, and the software would combine causal inference theory with machine-learning techniques to rule out experiments that wouldn’t answer the question. That might save scientists from a huge number of costly dead ends.

Bareinboim described this vision while we were sitting in the lobby of MIT’s Sloan School of Management, after a talk he gave last fall. “We have a building here at MIT with, I don’t know, 200 people,” he said. How do those social scientists, or any scientists anywhere, decide which experiments to pursue and which data points to gather? By following their intuition: “They are trying to see where things will lead, based on their current understanding.”

That’s an inherently limited approach, he said, because human scientists designing an experiment can consider only a handful of variables in their minds at once. A computer, on the other hand, can see the interplay of hundreds or thousands of variables. Encoded with “the basic principles” of Pearl’s causal calculus and able to calculate what might happen with new sets of variables, an automated scientist could suggest exactly which experiments the human researchers should spend their time on. Maybe some public policy that has been shown to work only in Texas could be made to work in California if a few causally relevant factors were better appreciated. Scientists would no longer be “doing experiments in the darkness,” Bareinboim said.

He also doesn’t think it’s that far off: “This is the last mile before the victory.”

What if?

Finishing that mile will probably require techniques that are just beginning to be developed. For example, Yoshua Bengio, a computer scientist at the University of Montreal who shared the 2018 Turing Award for his work on deep learning, is trying to get neural networks—the software at the heart of deep learning—to do “meta-learning” and notice the causes of things.

As things stand now, if you wanted a neural network to detect when people are dancing, you’d show it many, many images of dancers. If you wanted it to identify when people are running, you’d show it many, many images of runners. The system would learn to distinguish runners from dancers by identifying features that tend to be different in the images, such as the positions of a person’s hands and arms. But Bengio points out that fundamental knowledge about the world can be gleaned by analyzing the things that are similar or “invariant” across data sets. Maybe a neural network could learn that movements of the legs physically cause both running and dancing. Maybe after seeing these examples and many others that show people only a few feet off the ground, a machine would eventually understand something about gravity and how it limits human movement. Over time, with enough meta-learning about variables that are consistent across data sets, a computer could gain causal knowledge that would be reusable in many domains.

For his part, Pearl says AI can’t be truly intelligent until it has a rich understanding of cause and effect. Although causal reasoning wouldn’t be sufficient for an artificial general intelligence, it’s necessary, he says, because it would enable the introspection that is at the core of cognition. “What if” questions “are the building blocks of science, of moral attitudes, of free will, of consciousness,” Pearl told me.

You can’t draw Pearl into predicting how long it will take for computers to get powerful causal reasoning abilities. “I am not a futurist,” he says. But in any case, he thinks the first move should be to develop machine-learning tools that combine data with available scientific knowledge: “We have a lot of knowledge that resides in the human skull which is not utilized.”

Brian Bergstein, a former editor at MIT Technology Review, is deputy opinion editor at the Boston Globe.

This story was part of our March 2020 issue, the predictions issue.

Ten million reasons to vaccinate the world (The Economist)

economist.com

Our model reveals the true course of the pandemic. Here is what to do next

May 15th 2021


THIS WEEK we publish our estimate of the true death toll from covid-19. It tells the real story of the pandemic. But it also contains an urgent warning. Unless vaccine supplies reach poorer countries, the tragic scenes now unfolding in India risk being repeated elsewhere. Millions more will die.

Using known data on 121 variables, from recorded deaths to demography, we have built a pattern of correlations that lets us fill in gaps where numbers are lacking. Our model suggests that covid-19 has already claimed 7.1m-12.7m lives. Our central estimate is that 10m people have died who would otherwise be living. This tally of “excess deaths” is over three times the official count, which nevertheless is the basis for most statistics on the disease, including fatality rates and cross-country comparisons.
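
The article does not spell out the machinery, but the general recipe is supervised learning: fit a model where excess deaths are reliably recorded, then predict where they are not. A minimal sketch with synthetic data and scikit-learn; the newspaper's actual pipeline, features, and model family surely differ:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n_reporting, n_features = 80, 121   # 121 variables, per the article

# Placeholder covariates (demography, recorded covid deaths, testing
# rates, ...) and excess-death rates for countries that report them.
X_reporting = rng.normal(size=(n_reporting, n_features))
y_reporting = rng.gamma(2.0, 50.0, size=n_reporting)

model = GradientBoostingRegressor().fit(X_reporting, y_reporting)

# Countries without reliable mortality registration: fill in the gaps.
X_unreported = rng.normal(size=(40, n_features))
print(model.predict(X_unreported)[:5])
```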

The most important insight from our work is that covid-19 has been harder on the poor than anyone knew. Official figures suggest that the pandemic has struck in waves, and that the United States and Europe have been hit hard. Although South America has been ravaged, the rest of the developing world seemed to get off lightly.

Our modelling tells another story. When you count all the bodies, you see that the pandemic has spread remorselessly from the rich, connected world to poorer, more isolated places. As it has done so, the global daily death rate has climbed steeply.

Death rates have been very high in some rich countries, but the overwhelming majority of the 6.7m or so deaths that nobody counted were in poor and middle-income ones. In Romania and Iran excess deaths are more than double the number officially put down to covid-19. In Egypt they are 13 times as big. In America the difference is 7.1%.

India, where about 20,000 are dying every day, is not an outlier. Our figures suggest that, in terms of deaths as a share of population, Peru’s pandemic has been 2.5 times worse than India’s. The disease is working its way through Nepal and Pakistan. Infectious variants spread faster and, because of the tyranny of exponential growth, overwhelm health-care systems and fill mortuaries even if the virus is no more lethal.

Ultimately the way to stop this is vaccination. As an example of collaboration and pioneering science, covid-19 vaccines rank with the Apollo space programme. Within just a year of the virus being discovered, people could be protected from severe disease and death. Hundreds of millions of them have benefited.

However, in the short run vaccines will fuel the divide between rich and poor. Soon, the only people to die from covid-19 in rich countries will be exceptionally frail or exceptionally unlucky, as well as those who have spurned the chance to be vaccinated. In poorer countries, by contrast, most people will have no choice. They will remain unprotected for many months or years.

The world cannot rest while people perish for want of a jab costing as little as $4 for a two-dose course. It is hard to think of a better use of resources than vaccination. Economists’ central estimate for the direct value of a course is $2,900—if you include factors like long covid and the effect of impaired education, the total is much bigger. The benefit from an extra 1bn doses supplied by July would be worth hundreds of billions of dollars. Less circulating virus means less mutation, and so a lower chance of a new variant that reinfects the vaccinated.

Supplies of vaccines are already growing. By the end of April, according to Airfinity, an analytics firm, vaccine-makers had produced 1.7bn doses, 700m more than at the end of March and ten times more than in January. Before the pandemic, annual global vaccine capacity was roughly 3.5bn doses. The latest estimates are that total output in 2021 will be almost 11bn. Some in the industry predict a global surplus in 2022.

And yet the world is right to strive to get more doses in more arms sooner. Hence President Joe Biden has proposed waiving intellectual-property claims on covid-19 vaccines. Many experts argue that, because some manufacturing capacity is going begging, millions more doses might become available if patent-owners shared their secrets, including in countries that today are at the back of the queue. World-trade rules allow for a waiver. When to invoke them, if not in the throes of a pandemic?

We believe that Mr Biden is wrong. A waiver may signal that his administration cares about the world, but it is at best an empty gesture and at worst a cynical one.

A waiver will do nothing to fill the urgent shortfall of doses in 2021. The head of the World Trade Organisation, the forum where it will be thrashed out, warns there may be no vote until December. Technology transfer would take six months or so to complete even if it started today. With the new mRNA vaccines made by Pfizer and Moderna, it may take longer. Even supposing the tech transfer were faster than that, experienced vaccine-makers would be unavailable for hire and makers could not obtain inputs from suppliers whose order books are already bursting. Pfizer’s vaccine requires 280 inputs from suppliers in 19 countries. No firm can recreate that in a hurry.

In any case, vaccine-makers do not appear to be hoarding their technology—otherwise output would not be increasing so fast. They have struck 214 technology-transfer agreements, an unprecedented number. They are not price-gouging: money is not the constraint on vaccination. Poor countries are not being priced out of the market: their vaccines are coming through COVAX, a global distribution scheme funded by donors.

In the longer term, the effect of a waiver is unpredictable. Perhaps it will indeed lead to technology being transferred to poor countries; more likely, though, it will cause harm by disrupting supply chains, wasting resources and, ultimately, deterring innovation. Whatever the case, if vaccines are nearing a surplus in 2022, the cavalry will arrive too late.

A needle in time

If Mr Biden really wants to make a difference, he can donate vaccine right now through COVAX. Rich countries over-ordered because they did not know which vaccines would work. Britain has ordered more than nine doses for each adult, Canada more than 13. These will be urgently needed elsewhere. It is wrong to put teenagers, who have a minuscule risk of dying from covid-19, before the elderly and health-care workers in poor countries. The rich world should not stockpile boosters to cover the population many times over on the off-chance that they may be needed. In the next six months, this could yield billions of doses of vaccine.

Countries can also improve supply chains. The Serum Institute, an Indian vaccine-maker, has struggled to get parts such as filters from America because exports were gummed up by the Defence Production Act (DPA), which puts suppliers on a war-footing. Mr Biden authorised a one-off release, but he should be focusing the DPA on supplying the world instead. And better use needs to be made of finished vaccine. In some poor countries, vaccine languishes unused because of hesitancy and chaotic organisation. It makes sense to prioritise getting one shot into every vulnerable arm, before setting about the second.

Our model is not predictive. However it does suggest that some parts of the world are particularly vulnerable—one example is South-East Asia, home to over 650m people, which has so far been spared mass fatalities for no obvious reason. Covid-19 has not yet run its course. But vaccines have created the chance to save millions of lives. The world must not squander it. ■


This article appeared in the Leaders section of the print edition under the headline “Vaccinating the world”

Understanding fruit fly behavior may be next step toward autonomous vehicles (Science Daily)

Could the way Drosophila use antennae to sense heat help us teach self-driving cars to make decisions?

Date: April 6, 2021

Source: Northwestern University

Summary: With over 70% of respondents to a AAA annual survey on autonomous driving reporting they would fear being in a fully self-driving car, makers like Tesla may be back to the drawing board before rolling out fully autonomous self-driving systems. But new research shows us we may be better off putting fruit flies behind the wheel instead of robots.


With over 70% of respondents to a AAA annual survey on autonomous driving reporting they would fear being in a fully self-driving car, makers like Tesla may be back to the drawing board before rolling out fully autonomous self-driving systems. But new research from Northwestern University shows us we may be better off putting fruit flies behind the wheel instead of robots.

Drosophila have been subjects of science as long as humans have been running experiments in labs. But given their size, it’s easy to wonder what can be learned by observing them. Research published today in the journal Nature Communications demonstrates that fruit flies use decision-making, learning and memory to perform simple functions like escaping heat. And researchers are using this understanding to challenge the way we think about self-driving cars.

“The discovery that flexible decision-making, learning and memory are used by flies during such a simple navigational task is both novel and surprising,” said Marco Gallio, the corresponding author on the study. “It may make us rethink what we need to do to program safe and flexible self-driving vehicles.”

According to Gallio, an associate professor of neurobiology in the Weinberg College of Arts and Sciences, the questions behind this study are similar to those vexing engineers building cars that move on their own. How does a fruit fly (or a car) cope with novelty? How can we build a car that is flexibly able to adapt to new conditions?

This discovery reveals brain functions in the household pest that are typically associated with more complex brains like those of mice and humans.

“Animal behavior, especially that of insects, is often considered largely fixed and hard-wired — like machines,” Gallio said. “Most people have a hard time imagining that animals as different from us as a fruit fly may possess complex brain functions, such as the ability to learn, remember or make decisions.”

To study how fruit flies tend to escape heat, the Gallio lab built a tiny plastic chamber with four floor tiles whose temperatures could be independently controlled and confined flies inside. They then used high-resolution video recordings to map how a fly reacted when it encountered a boundary between a warm tile and a cool tile. They found flies were remarkably good at treating heat boundaries as invisible barriers to avoid pain or harm.

Using real measurements, the team created a 3D model to estimate the exact temperature of each part of the fly’s tiny body throughout the experiment. During other trials, they opened a window in the fly’s head and recorded brain activity in neurons that process external temperature signals.

Miguel Simões, a postdoctoral fellow in the Gallio lab and co-first author of the study, said flies are able to determine with remarkable accuracy if the best path to thermal safety is to the left or right. Mapping the direction of escape, Simões said flies “nearly always” escape left when they approach from the right, “like a tennis ball bouncing off a wall.”

“When flies encounter heat, they have to make a rapid decision,” Simões said. “Is it safe to continue, or should it turn back? This decision is highly dependent on how dangerous the temperature is on the other side.”

Observing the simple response reminded the scientists of one of the classic concepts in early robotics.

“In his famous book, the cyberneticist Valentino Braitenberg imagined simple models made of sensors and motors that could come close to reproducing animal behavior,” said Josh Levy, an applied math graduate student and a member of the labs of Gallio and applied math professor William Kath. “The vehicles are a combination of simple wires, but the resulting behavior appears complex and even intelligent.”

Braitenberg argued that much of animal behavior could be explained by the same principles. But does that mean fly behavior is as predictable as that of one of Braitenberg’s imagined robots?

The Northwestern team built a vehicle using a computer simulation of fly behavior with the same wiring and algorithm as a Braitenberg vehicle to see how closely they could replicate animal behavior. After running model race simulations, the team ran a natural selection process of sorts, choosing the cars that did best and mutating them slightly before recombining them with other high-performing vehicles. Levy ran 500 generations of evolution in the powerful NU computing cluster, building cars they ultimately hoped would do as well as flies at escaping the virtual heat.
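
The authors' simulation code is not reproduced here, but the loop described above (score, select, mutate, recombine, repeat for 500 generations) has a simple generic shape. A sketch with a stubbed-out fitness function standing in for the virtual heat-escape trials:

```python
import random

N_VEHICLES, N_GENERATIONS, N_WEIGHTS = 100, 500, 4

def fitness(weights):
    # Placeholder: in the study each vehicle is scored by how well it
    # escapes virtual heat; here we just reward weights near a target.
    target = [1.0, -1.0, -1.0, 1.0]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def mutate(weights, sigma=0.1):
    return [w + random.gauss(0.0, sigma) for w in weights]

def recombine(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
              for _ in range(N_VEHICLES)]

for _ in range(N_GENERATIONS):
    # Keep the best half, then refill the population by recombining
    # mutated copies of the survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[: N_VEHICLES // 2]
    children = [recombine(mutate(random.choice(survivors)),
                          mutate(random.choice(survivors)))
                for _ in range(N_VEHICLES - len(survivors))]
    population = survivors + children

print(max(fitness(v) for v in population))  # best evolved vehicle's score
```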

This simulation demonstrated that “hard-wired” vehicles eventually evolved to perform nearly as well as flies. But while real flies continued to improve their performance over time, learning to adopt better strategies and become more efficient, the vehicles remained “dumb” and inflexible. The researchers also discovered that even as flies performed the simple task of escaping the heat, their behavior remained somewhat unpredictable, leaving space for individual decisions. Finally, the scientists observed that while flies missing an antenna adapted and figured out new strategies to escape heat, vehicles “damaged” in the same way were unable to cope with the new situation and turned in the direction of the missing part, eventually getting trapped in a spin like a dog chasing its tail.

Gallio said the idea that simple navigation contains such complexity provides fodder for future work in this area.

Work in the Gallio lab is supported by the NIH (awards R01NS086859 and R21EY031849), the Pew Scholars Program in the Biomedical Sciences, and a McKnight Technological Innovation in Neuroscience Award.


Story Source:

Materials provided by Northwestern University. Original written by Lila Reynolds. Note: Content may be edited for style and length.


Journal Reference:

  1. José Miguel Simões, Joshua I. Levy, Emanuela E. Zaharieva, Leah T. Vinson, Peixiong Zhao, Michael H. Alpert, William L. Kath, Alessia Para, Marco Gallio. Robustness and plasticity in Drosophila heat avoidance. Nature Communications, 2021; 12 (1) DOI: 10.1038/s41467-021-22322-w

Bill Gates and the problem with climate solutionism (MIT Technology Review)

Focusing on technological solutions to climate change looks like an attempt to sidestep the more challenging political obstacles.

By MIT Technology Review, April 6, 2021

In his new book, How to Avoid a Climate Disaster, Bill Gates takes a technological approach to understanding the climate crisis. Gates starts with the 51 billion tons of greenhouse gases created per year. He breaks this pollution down into sectors based on their impact, moving from electricity, industry, and agriculture to transportation and buildings. From start to finish, Gates proves adept at paring down the complexities of the climate challenge, giving the reader useful heuristics for telling the bigger technological problems (cement) apart from the smaller ones (aircraft).

Present at the 2015 Paris climate negotiations, Gates and dozens of wealthy individuals launched Breakthrough Energy, a venture capital fund interlinked with lobbying and research efforts. Gates and his fellow investors argued that both the federal government and the private sector are underinvesting in energy innovation. Breakthrough aims to fill this gap, investing in everything from next-generation nuclear technology to plant-based meat that tastes like beef. The fund’s first US$1 billion round had some early successes, such as Impossible Foods, a maker of plant-based burgers. The fund announced a second round of equal size in January.

A parallel effort, an international agreement called Mission Innovation, says it has persuaded its members (the European Union’s executive arm along with 24 countries including China, the US, India, and Brazil) to invest an additional US$4.6 billion a year since 2015 in clean energy research and development.

These various initiatives are the through line of Gates’s latest book, written from a techno-optimist perspective. “Everything I’ve learned about climate and technology makes me optimistic… if we act fast enough, [we can] avoid a climate catastrophe,” he writes in the opening pages.

As many have pointed out, much of the necessary technology already exists; much can be done now. While Gates doesn’t dispute this, his book focuses on the technological challenges he believes still need to be overcome to achieve deeper decarbonization. He spends less time on the political snags, writing that he thinks “more like an engineer than a political scientist.” Yet politics, in all its messiness, is the main impediment to progress on climate change. And engineers ought to understand how complex systems can have feedback loops that go awry.

Yes, Minister

Kim Stanley Robinson, by contrast, does think like a political scientist. His most recent novel, The Ministry for the Future, opens just a few years into the future, in 2025, when an immense heat wave strikes India, killing millions of people. The book’s protagonist, Mary Murphy, runs a UN agency charged with representing the interests of future generations in an attempt to unite the world’s governments behind a climate solution. Throughout the book, intergenerational equity and various forms of distributive politics stay in focus.

If you have seen the scenarios the Intergovernmental Panel on Climate Change (IPCC) develops for the future, Robinson’s book will feel familiar. His story probes the policies needed to solve the climate crisis, and he has clearly done his homework. Although it is an exercise in imagination, there are moments when the novel reads more like a graduate social science seminar than a work of escapist fiction. The climate refugees who are central to the story illustrate how the consequences of pollution hit the world’s poorest hardest. Yet the rich produce far more carbon.

Reading Gates after Robinson throws the inextricable connection between inequality and climate change into sharp relief. Gates’s efforts on climate are laudable. But when he tells us that the combined wealth of the people backing his investment fund is US$170 billion, we are left a little puzzled that they have dedicated only US$2 billion to climate solutions, less than 2% of their assets. This fact alone is an argument for taxing wealth: the climate crisis demands government action. It cannot be left to the whims of billionaires.

As billionaires go, Gates is arguably one of the good ones. He tells stories about how he uses his fortune to help the poor and the planet. The irony of writing a book about climate change while flying a private jet and owning a 6,132-square-meter mansion is not lost on the reader, nor on Gates, who calls himself an “imperfect messenger on climate change.” Even so, he is unquestionably an ally of the climate movement.

But by focusing on technological innovation, Gates plays down the role of fossil fuel interests in obstructing that progress. Oddly, climate skepticism goes unmentioned in the book. Washing his hands of political polarization, Gates never draws the connection to his fellow billionaires Charles and David Koch, who made their fortunes in petrochemicals and have played a prominent role in propagating climate denial.

For example, Gates marvels that, for the vast majority of Americans, electric heaters are actually cheaper than continuing to use fossil fuels. To him, it is a puzzle that people do not adopt these cheaper, more sustainable options. But it is no puzzle. As the journalists Rebecca Leber and Sammy Roth have reported in Mother Jones and the Los Angeles Times, the gas industry is funding advocates and marketing campaigns to oppose electrification and keep people hooked on fossil fuels.

These opposing forces come through more clearly in Robinson’s book than in Gates’s. Gates would have benefited from drawing on the work that Naomi Oreskes, Eric Conway, Geoffrey Supran, and others have done to document fossil fuel companies’ persistent efforts to sow public doubt about climate science.

One thing Gates and Robinson do have in common, however, is the view that geoengineering, monumental interventions aimed at the symptoms rather than the causes of climate change, may prove inevitable. In The Ministry for the Future, solar geoengineering, the spraying of fine particles into the atmosphere to reflect more of the sun's heat back into space, is deployed in the aftermath of the deadly heat wave that opens the story. Later, scientists travel to the poles and devise elaborate methods for pumping meltwater out from under glaciers to keep them from sliding into the sea. Despite some setbacks, they prevent several meters of sea-level rise. It is easy to imagine Gates appearing in the novel as an early funder of these efforts. As he notes in his own book, he has been investing in solar-geoengineering research for years.

The worst part

The title of Elizabeth Kolbert's new book, Under a White Sky (not yet translated into Portuguese), refers to this nascent technology, since deploying it at scale could turn the color of the sky from blue to white.

Kolbert notes that the first report on climate change landed on President Lyndon Johnson's desk in 1965. That report did not argue that we should cut carbon emissions by moving away from fossil fuels. Instead, it advocated changing the climate through solar geoengineering, though the term had not yet been coined. It is troubling that some leap straight to these risky fixes rather than addressing the root causes of climate change.

Reading Under a White Sky, we are reminded of the ways such interventions can go wrong. For example, the scientist and writer Rachel Carson championed importing non-native species as an alternative to using pesticides. The year after her book Silent Spring was published, in 1962, the US Fish and Wildlife Service brought Asian carp to America for the first time to control aquatic weeds. The approach solved one problem but created another: the spread of this invasive species threatened native ones and caused environmental damage.

As Kolbert observes, her book is about "people trying to solve problems created by people trying to solve problems." Her account covers examples including the ill-fated efforts to stop the spread of those carp, the pumping stations in New Orleans that are accelerating the city's subsidence, and attempts to selectively breed corals that can tolerate higher temperatures and ocean acidification. Kolbert has a sense of humor and a sharp eye for unintended consequences. If you like your apocalypse with a dash of comedy, she will make you laugh while Rome burns.

By contrast, although Gates is aware of the possible pitfalls of technological fixes, he still celebrates inventions such as plastic and fertilizer as vital. Tell that to the sea turtles swallowing plastic waste, or to the Gulf of Mexico ecosystem being destroyed by fertilizer-driven algal blooms.

With carbon dioxide in the atmosphere at dangerous levels, geoengineering may indeed prove necessary, but we should not be naïve about the risks. Gates's book has many good ideas and is worth reading. But for a complete picture of the crisis we face, be sure to read Robinson and Kolbert as well.

Cherry trees bloom earliest in Japan in 1,200 years (Folha de S.Paulo)

f5.folha.uol.com.br

Kazuhiro Nogi – 24.mar.2021/AFP


São Paulo

The blooming of the famous white and pink cherry trees draws thousands to Japan's streets and parks to watch a phenomenon that lasts only a few days and has been revered for more than a thousand years. But this year the early flowering has scientists worried, because it signals the impact of climate change.

According to records kept by Osaka Prefecture University, in 2021 the famous white and pink cherry trees reached full bloom in Kyoto on March 26, the earliest date in 12 centuries. The previous earliest full blooms were recorded on March 27, in the years 1612, 1409, and 1236.

The institution was able to identify how early the phenomenon came because it holds a comprehensive database of flowering records across the centuries. The records begin in the year 812 and include documents from the imperial court of Kyoto, Japan's ancient capital, as well as medieval diaries.

Yasuyuki Aono, the professor of environmental science at Osaka Prefecture University responsible for compiling the database, told Reuters that the phenomenon usually occurs in April, but that as temperatures rise, flowering starts earlier.
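
The kind of query such a long phenology record supports is simple to show. Below is a hypothetical Python sketch, with an invented toy subset of dates standing in for the real database, of how one might find the earliest full-bloom date across centuries of records.

```python
# Hypothetical sketch: querying a phenology record like Aono's for the
# earliest full-bloom date. The dates below are a toy subset, not the
# actual database.
records = {  # year -> (month, day) of full bloom in Kyoto
    1236: (3, 27), 1409: (3, 27), 1612: (3, 27),
    2020: (4, 1),  2021: (3, 26),
}

earliest_year = min(records, key=lambda y: records[y])
month, day = records[earliest_year]
print(f"Earliest full bloom: {earliest_year} ({month}/{day})")  # 2021 (3/26)
```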

“Cherry blossoms are very sensitive to temperature. Flowering and full bloom can occur earlier or later depending solely on the temperature. The temperature was low in the 1820s, but it has risen by about 3.5 degrees Celsius to this day,” he said.

This year's seasons in particular, he said, influenced the flowering dates. The winter was very cold, but spring came fast and exceptionally warm, so "the buds are completely awake after a sufficient rest."

In the capital, Tokyo, the cherry trees hit peak bloom on March 22, the second-earliest on record. "As global temperatures warm, the last spring frosts are occurring earlier and the flowering is occurring earlier," Lewis Ziska of Columbia University told CNN.

The Japan Meteorological Agency also tracks 58 "benchmark" cherry trees across the country. This year, 40 have already reached peak bloom, and 14 did so in record time. The trees normally flower for about two weeks each year. "We can say it is most likely because of the impact of global warming," said Shunji Anbe, an official in the agency's observations division.

Data released in January by the World Meteorological Organization show that global temperatures in 2020 were among the highest on record, rivaling 2016 as the hottest year ever.

Cherry blossoms have deep historical and cultural roots in Japan, heralding spring and inspiring artists and poets through the centuries. Their fragility is seen as a symbol of life, death, and rebirth.

Today, people gather under the cherry blossoms every spring for hanami (flower-viewing) parties, strolling in parks, picnicking beneath the branches, and taking plenty of selfies. This year, though, the cherry blossom season came and went in the blink of an eye.

With the end of the state of emergency imposed to contain the Covid-19 pandemic in all regions of Japan, many people crowded popular viewing spots over the weekend, though the numbers were smaller than in normal years.

How Facebook got addicted to spreading misinformation (MIT Tech Review)

technologyreview.com

Karen Hao, March 11, 2021


Joaquin Quiñonero Candela, a director of AI at Facebook, was apologizing to his audience.

It was March 23, 2018, just days after the revelation that Cambridge Analytica, a consultancy that worked on Donald Trump’s 2016 presidential election campaign, had surreptitiously siphoned the personal data of tens of millions of Americans from their Facebook accounts in an attempt to influence how they voted. It was the biggest privacy breach in Facebook’s history, and Quiñonero had been previously scheduled to speak at a conference on, among other things, “the intersection of AI, ethics, and privacy” at the company. He considered canceling, but after debating it with his communications director, he’d kept his allotted time.

As he stepped up to face the room, he began with an admission. “I’ve just had the hardest five days in my tenure at Facebook,” he remembers saying. “If there’s criticism, I’ll accept it.”

The Cambridge Analytica scandal would kick off Facebook’s largest publicity crisis ever. It compounded fears that the algorithms that determine what people see on the platform were amplifying fake news and hate speech, and that Russian hackers had weaponized them to try to sway the election in Trump’s favor. Millions began deleting the app; employees left in protest; the company’s market capitalization plunged by more than $100 billion after its July earnings call.

In the ensuing months, Mark Zuckerberg began his own apologizing. He apologized for not taking “a broad enough view” of Facebook’s responsibilities, and for his mistakes as a CEO. Internally, Sheryl Sandberg, the chief operating officer, kicked off a two-year civil rights audit to recommend ways the company could prevent the use of its platform to undermine democracy.

Finally, Mike Schroepfer, Facebook’s chief technology officer, asked Quiñonero to start a team with a directive that was a little vague: to examine the societal impact of the company’s algorithms. The group named itself the Society and AI Lab (SAIL); last year it combined with another team working on issues of data privacy to form Responsible AI.

Quiñonero was a natural pick for the job. He, as much as anybody, was the one responsible for Facebook’s position as an AI powerhouse. In his six years at Facebook, he’d created some of the first algorithms for targeting users with content precisely tailored to their interests, and then he’d diffused those algorithms across the company. Now his mandate would be to make them less harmful.

Facebook has consistently pointed to the efforts by Quiñonero and others as it seeks to repair its reputation. It regularly trots out various leaders to speak to the media about the ongoing reforms. In May of 2019, it granted a series of interviews with Schroepfer to the New York Times, which rewarded the company with a humanizing profile of a sensitive, well-intentioned executive striving to overcome the technical challenges of filtering out misinformation and hate speech from a stream of content that amounted to billions of pieces a day. These challenges are so hard that it makes Schroepfer emotional, wrote the Times: “Sometimes that brings him to tears.”

In the spring of 2020, it was apparently my turn. Ari Entin, Facebook’s AI communications director, asked in an email if I wanted to take a deeper look at the company’s AI work. After talking to several of its AI leaders, I decided to focus on Quiñonero. Entin happily obliged. As not only the leader of the Responsible AI team but also the man who had made Facebook into an AI-driven company, Quiñonero was a solid choice to use as a poster boy.

He seemed a natural choice of subject to me, too. In the years since he’d formed his team following the Cambridge Analytica scandal, concerns about the spread of lies and hate speech on Facebook had only grown. In late 2018 the company admitted that this activity had helped fuel a genocidal anti-Muslim campaign in Myanmar for several years. In 2020 Facebook started belatedly taking action against Holocaust deniers, anti-vaxxers, and the conspiracy movement QAnon. All these dangerous falsehoods were metastasizing thanks to the AI capabilities Quiñonero had helped build. The algorithms that underpin Facebook’s business weren’t created to filter out what was false or inflammatory; they were designed to make people share and engage with as much content as possible by showing them things they were most likely to be outraged or titillated by. Fixing this problem, to me, seemed like core Responsible AI territory.

I began video-calling Quiñonero regularly. I also spoke to Facebook executives, current and former employees, industry peers, and external experts. Many spoke on condition of anonymity because they’d signed nondisclosure agreements or feared retaliation. I wanted to know: What was Quiñonero’s team doing to rein in the hate and lies on its platform?

Joaquin Quiñonero Candela outside his home in the Bay Area, where he lives with his wife and three kids.

But Entin and Quiñonero had a different agenda. Each time I tried to bring up these topics, my requests to speak about them were dropped or redirected. They only wanted to discuss the Responsible AI team’s plan to tackle one specific kind of problem: AI bias, in which algorithms discriminate against particular user groups. An example would be an ad-targeting algorithm that shows certain job or housing opportunities to white people but not to minorities.

By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.

“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley who collaborates with Facebook to understand image- and video-based misinformation on the platform.

“They always do just enough to be able to put the press release out. But with a few exceptions, I don’t think it’s actually translated into better policies. They’re never really dealing with the fundamental problems.”

In March of 2012, Quiñonero visited a friend in the Bay Area. At the time, he was a manager in Microsoft Research’s UK office, leading a team using machine learning to get more visitors to click on ads displayed by the company’s search engine, Bing. His expertise was rare, and the team was less than a year old. Machine learning, a subset of AI, had yet to prove itself as a solution to large-scale industry problems. Few tech giants had invested in the technology.

Quiñonero’s friend wanted to show off his new employer, one of the hottest startups in Silicon Valley: Facebook, then eight years old and already with close to a billion monthly active users (i.e., those who have logged in at least once in the past 30 days). As Quiñonero walked around its Menlo Park headquarters, he watched a lone engineer make a major update to the website, something that would have involved significant red tape at Microsoft. It was a memorable introduction to Zuckerberg’s “Move fast and break things” ethos. Quiñonero was awestruck by the possibilities. Within a week, he had been through interviews and signed an offer to join the company.

His arrival couldn’t have been better timed. Facebook’s ads service was in the middle of a rapid expansion as the company was preparing for its May IPO. The goal was to increase revenue and take on Google, which had the lion’s share of the online advertising market. Machine learning, which could predict which ads would resonate best with which users and thus make them more effective, could be the perfect tool. Shortly after starting, Quiñonero was promoted to managing a team similar to the one he’d led at Microsoft.

Quiñonero started raising chickens in late 2019 as a way to unwind from the intensity of his job.

Unlike traditional algorithms, which are hard-coded by engineers, machine-learning algorithms “train” on input data to learn the correlations within it. The trained algorithm, known as a machine-learning model, can then automate future decisions. An algorithm trained on ad click data, for example, might learn that women click on ads for yoga leggings more often than men. The resultant model will then serve more of those ads to women. Today at an AI-based company like Facebook, engineers generate countless models with slight variations to see which one performs best on a given problem.
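
Since the ad-click example above is the crux of how these systems work, a minimal sketch may help. The data and feature names below are invented for illustration; this is a toy version of the idea, not Facebook's implementation.

```python
# Toy sketch of a machine-learning model "training" on ad-click logs to
# learn correlations, then scoring a new user. All data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a user: [age, liked_yoga_pages, is_female]; label = clicked the ad.
X = np.array([[29, 1, 1], [34, 1, 1], [52, 0, 0],
              [23, 0, 1], [41, 1, 0], [19, 0, 0]])
y = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)  # the trained "model"

# The resultant model automates future decisions: users with a higher
# predicted click probability are served more of those ads.
new_user = np.array([[27, 1, 1]])
print(model.predict_proba(new_user)[0, 1])  # estimated click probability
```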

Facebook’s massive amounts of user data gave Quiñonero a big advantage. His team could develop models that learned to infer the existence not only of broad categories like “women” and “men,” but of very fine-grained categories like “women between 25 and 34 who liked Facebook pages related to yoga,” and targeted ads to them. The finer-grained the targeting, the better the chance of a click, which would give advertisers more bang for their buck.

Within a year his team had developed these models, as well as the tools for designing and deploying new ones faster. Before, it had taken Quiñonero’s engineers six to eight weeks to build, train, and test a new model. Now it took only one.

News of the success spread quickly. The team that worked on determining which posts individual Facebook users would see on their personal news feeds wanted to apply the same techniques. Just as algorithms could be trained to predict who would click what ad, they could also be trained to predict who would like or share what post, and then give those posts more prominence. If the model determined that a person really liked dogs, for instance, friends’ posts about dogs would appear higher up on that user’s news feed.

Quiñonero’s success with the news feed—coupled with impressive new AI research being conducted outside the company—caught the attention of Zuckerberg and Schroepfer. Facebook now had just over 1 billion users, making it more than eight times larger than any other social network, but they wanted to know how to continue that growth. The executives decided to invest heavily in AI, internet connectivity, and virtual reality.

They created two AI teams. One was FAIR, a fundamental research lab that would advance the technology’s state-of-the-art capabilities. The other, Applied Machine Learning (AML), would integrate those capabilities into Facebook’s products and services. In December 2013, after months of courting and persuasion, the executives recruited Yann LeCun, one of the biggest names in the field, to lead FAIR. Three months later, Quiñonero was promoted again, this time to lead AML. (It was later renamed FAIAR, pronounced “fire.”)

In his new role, Quiñonero built a new model-development platform for anyone at Facebook to access. Called FBLearner Flow, it allowed engineers with little AI experience to train and deploy machine-learning models within days. By mid-2016, it was in use by more than a quarter of Facebook’s engineering team and had already been used to train over a million models, including models for image recognition, ad targeting, and content moderation.

Zuckerberg’s obsession with getting the whole world to use Facebook had found a powerful new weapon. Teams had previously used design tactics, like experimenting with the content and frequency of notifications, to try to hook users more effectively. Their goal, among other things, was to increase a metric called L6/7, the fraction of people who logged in to Facebook six of the previous seven days. L6/7 is just one of myriad ways in which Facebook has measured “engagement”—the propensity of people to use its platform in any way, whether it’s by posting things, commenting on them, liking or sharing them, or just looking at them. Now every user interaction once analyzed by engineers was being analyzed by algorithms. Those algorithms were creating much faster, more personalized feedback loops for tweaking and tailoring each user’s news feed to keep nudging up engagement numbers.
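
Because L6/7 has a precise definition, a worked example is easy to give. The sketch below, with toy login data and hypothetical names, shows one way to compute the metric; it is an illustration, not Facebook's code.

```python
# Computing L6/7: the fraction of users who logged in on at least six
# of the previous seven days. Login data here is invented.
from datetime import date, timedelta

logins = {  # user -> set of login dates
    "alice": {date(2021, 3, d) for d in range(1, 8)},         # 7 of 7
    "bob":   {date(2021, 3, d) for d in (1, 2, 3, 4, 5, 6)},  # 6 of 7
    "carol": {date(2021, 3, 1), date(2021, 3, 4)},            # 2 of 7
}

today = date(2021, 3, 8)
window = {today - timedelta(days=k) for k in range(1, 8)}  # previous 7 days

active = sum(1 for days in logins.values() if len(days & window) >= 6)
print(f"L6/7 = {active / len(logins):.2f}")  # 0.67
```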

Zuckerberg, who sat in the center of Building 20, the main office at the Menlo Park headquarters, placed the new FAIR and AML teams beside him. Many of the original AI hires were so close that his desk and theirs were practically touching. It was “the inner sanctum,” says a former leader in the AI org (the branch of Facebook that contains all its AI teams), who recalls the CEO shuffling people in and out of his vicinity as they gained or lost his favor. “That’s how you know what’s on his mind,” says Quiñonero. “I was always, for a couple of years, a few steps from Mark’s desk.”

With new machine-learning models coming online daily, the company created a new system to track their impact and maximize user engagement. The process is still the same today. Teams train up a new machine-learning model on FBLearner, whether to change the ranking order of posts or to better catch content that violates Facebook’s community standards (its rules on what is and isn’t allowed on the platform). Then they test the new model on a small subset of Facebook’s users to measure how it changes engagement metrics, such as the number of likes, comments, and shares, says Krishna Gade, who served as the engineering manager for news feed from 2016 to 2018.

If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored. On Twitter, Gade explained that his engineers would get notifications every few days when metrics such as likes or comments were down. Then they’d decipher what had caused the problem and whether any models needed retraining.
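
Gade's description amounts to a simple gate on engagement metrics. The sketch below captures that logic under assumed names and thresholds; the real pipeline is, of course, far more elaborate.

```python
# Hypothetical sketch of the deploy/discard gate described above: a new
# model ships only if engagement on the test subset holds up.

def should_deploy(baseline: float, candidate: float,
                  max_drop: float = 0.01) -> bool:
    """Deploy unless engagement falls by more than max_drop (here 1%)."""
    return (baseline - candidate) / baseline <= max_drop

# Average likes + comments + shares per user per day in the experiment.
baseline_engagement, test_engagement = 4.20, 4.17

if should_deploy(baseline_engagement, test_engagement):
    print("deploy, then monitor")  # engineers get alerts if metrics dip
else:
    print("discard the model")
```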

But this approach soon caused issues. The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff. Sometimes this inflames existing political tensions. The most devastating example to date is the case of Myanmar, where viral fake news and hate speech about the Rohingya Muslim minority escalated the country’s religious conflict into a full-blown genocide. Facebook admitted in 2018, after years of downplaying its role, that it had not done enough “to help prevent our platform from being used to foment division and incite offline violence.”

While Facebook may have been oblivious to these consequences in the beginning, it was studying them by 2016. In an internal presentation from that year, reviewed by the Wall Street Journal, a company researcher, Monica Lee, found that Facebook was not only hosting a large number of extremist groups but also promoting them to its users: “64% of all extremist group joins are due to our recommendation tools,” the presentation said, predominantly thanks to the models behind the “Groups You Should Join” and “Discover” features.

In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded.

Since then, other employees have corroborated these findings. A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says.

The researcher’s team also found that users with a tendency to post or engage with melancholy content—a possible sign of depression—could easily spiral into consuming increasingly negative material that risked further worsening their mental health. The team proposed tweaking the content-ranking models for these users to stop maximizing engagement alone, so they would be shown less of the depressing stuff. “The question for leadership was: Should we be optimizing for engagement if you find that somebody is in a vulnerable state of mind?” he remembers. (A Facebook spokesperson said she could not find documentation for this proposal.)

But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down.

One such project heavily pushed by company leaders involved predicting whether a user might be at risk for something several people had already done: livestreaming their own suicide on Facebook Live. The task involved building a model to analyze the comments that other users were posting on a video after it had gone live, and bringing at-risk users to the attention of trained Facebook community reviewers who could call local emergency responders to perform a wellness check. It didn’t require any changes to content-ranking models, had negligible impact on engagement, and effectively fended off negative press. It was also nearly impossible, says the researcher: “It’s more of a PR stunt. The efficacy of trying to determine if somebody is going to kill themselves in the next 30 seconds, based on the first 10 seconds of video analysis—you’re not going to be very effective.”

Facebook disputes this characterization, saying the team that worked on this effort has since successfully predicted which users were at risk and increased the number of wellness checks performed. But the company does not release data on the accuracy of its predictions or how many wellness checks turned out to be real emergencies.

That former employee, meanwhile, no longer lets his daughter use Facebook.

Quiñonero should have been perfectly placed to tackle these problems when he created the SAIL (later Responsible AI) team in April 2018. His time as the director of Applied Machine Learning had made him intimately familiar with the company’s algorithms, especially the ones used for recommending posts, ads, and other content to users.

It also seemed that Facebook was ready to take these problems seriously. Whereas previous efforts to work on them had been scattered across the company, Quiñonero was now being granted a centralized team with leeway in his mandate to work on whatever he saw fit at the intersection of AI and society.

At the time, Quiñonero was engaging in his own reeducation about how to be a responsible technologist. The field of AI research was paying growing attention to problems of AI bias and accountability in the wake of high-profile studies showing that, for example, an algorithm was scoring Black defendants as more likely to be rearrested than white defendants who’d been arrested for the same or a more serious offense. Quiñonero began studying the scientific literature on algorithmic fairness, reading books on ethical engineering and the history of technology, and speaking with civil rights experts and moral philosophers.

Over the many hours I spent with him, I could tell he took this seriously. He had joined Facebook amid the Arab Spring, a series of revolutions against oppressive Middle Eastern regimes. Experts had lauded social media for spreading the information that fueled the uprisings and giving people tools to organize. Born in Spain but raised in Morocco, where he’d seen the suppression of free speech firsthand, Quiñonero felt an intense connection to Facebook’s potential as a force for good.

Six years later, Cambridge Analytica had threatened to overturn this promise. The controversy forced him to confront his faith in the company and examine what staying would mean for his integrity. “I think what happens to most people who work at Facebook—and definitely has been my story—is that there’s no boundary between Facebook and me,” he says. “It’s extremely personal.” But he chose to stay, and to head SAIL, because he believed he could do more for the world by helping turn the company around than by leaving it behind.

“I think if you’re at a company like Facebook, especially over the last few years, you really realize the impact that your products have on people’s lives—on what they think, how they communicate, how they interact with each other,” says Quiñonero’s longtime friend Zoubin Ghahramani, who helps lead the Google Brain team. “I know Joaquin cares deeply about all aspects of this. As somebody who strives to achieve better and improve things, he sees the important role that he can have in shaping both the thinking and the policies around responsible AI.”

At first, SAIL had only five people, who came from different parts of the company but were all interested in the societal impact of algorithms. One founding member, Isabel Kloumann, a research scientist who’d come from the company’s core data science team, brought with her an initial version of a tool to measure the bias in AI models.

The team also brainstormed many other ideas for projects. The former leader in the AI org, who was present for some of the early meetings of SAIL, recalls one proposal for combating polarization. It involved using sentiment analysis, a form of machine learning that interprets opinion in bits of text, to better identify comments that expressed extreme points of view. These comments wouldn’t be deleted, but they would be hidden by default with an option to reveal them, thus limiting the number of people who saw them.
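
As a rough illustration of that proposal, the sketch below scores comments for extremity and hides (rather than deletes) the high scorers by default. The scoring function is a crude stand-in for a real sentiment-analysis model, and the threshold is invented.

```python
# Hypothetical sketch of the SAIL proposal: hide, but don't delete,
# comments that a model scores as expressing extreme views.
def extremity_score(text: str) -> float:
    """Crude stand-in for a trained sentiment/stance model (0.0-1.0)."""
    inflammatory = {"traitors", "destroy", "enemy"}
    words = text.lower().split()
    return min(1.0, sum(w in inflammatory for w in words) / 3)

HIDE_THRESHOLD = 0.6  # assumed cutoff

def render(comment: str) -> str:
    if extremity_score(comment) >= HIDE_THRESHOLD:
        return "[comment hidden - tap to reveal]"
    return comment

print(render("They are enemy traitors who destroy everything"))  # hidden
print(render("I respectfully disagree with this policy"))        # shown
```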

And there were discussions about what role SAIL could play within Facebook and how it should evolve over time. The sentiment was that the team would first produce responsible-AI guidelines to tell the product teams what they should or should not do. But the hope was that it would ultimately serve as the company’s central hub for evaluating AI projects and stopping those that didn’t follow the guidelines.

Former employees described, however, how hard it could be to get buy-in or financial support when the work didn’t directly improve Facebook’s growth. By its nature, the team was not thinking about growth, and in some cases it was proposing ideas antithetical to growth. As a result, it received few resources and languished. Many of its ideas stayed largely academic.

On August 29, 2018, that suddenly changed. In the ramp-up to the US midterm elections, President Donald Trump and other Republican leaders ratcheted up accusations that Facebook, Twitter, and Google had anti-conservative bias. They claimed that Facebook’s moderators in particular, in applying the community standards, were suppressing conservative voices more than liberal ones. This charge would later be debunked, but the hashtag #StopTheBias, fueled by a Trump tweet, was rapidly spreading on social media.

For Trump, it was the latest effort to sow distrust in the country’s mainstream information distribution channels. For Zuckerberg, it threatened to alienate Facebook’s conservative US users and make the company more vulnerable to regulation from a Republican-led government. In other words, it threatened the company’s growth.

Facebook did not grant me an interview with Zuckerberg, but previous reporting has shown how he increasingly pandered to Trump and the Republican leadership. After Trump was elected, Joel Kaplan, Facebook’s VP of global public policy and its highest-ranking Republican, advised Zuckerberg to tread carefully in the new political environment.

On September 20, 2018, three weeks after Trump’s #StopTheBias tweet, Zuckerberg held a meeting with Quiñonero for the first time since SAIL’s creation. He wanted to know everything Quiñonero had learned about AI bias and how to quash it in Facebook’s content-moderation models. By the end of the meeting, one thing was clear: AI bias was now Quiñonero’s top priority. “The leadership has been very, very pushy about making sure we scale this aggressively,” says Rachad Alao, the engineering director of Responsible AI who joined in April 2019.

It was a win for everybody in the room. Zuckerberg got a way to ward off charges of anti-conservative bias. And Quiñonero now had more money and a bigger team to make the overall Facebook experience better for users. They could build upon Kloumann’s existing tool in order to measure and correct the alleged anti-conservative bias in content-moderation models, as well as to correct other types of bias in the vast majority of models across the platform.

This could help prevent the platform from unintentionally discriminating against certain users. By then, Facebook already had thousands of models running concurrently, and almost none had been measured for bias. That would get it into legal trouble a few months later with the US Department of Housing and Urban Development (HUD), which alleged that the company’s algorithms were inferring “protected” attributes like race from users’ data and showing them ads for housing based on those attributes—an illegal form of discrimination. (The lawsuit is still pending.) Schroepfer also predicted that Congress would soon pass laws to regulate algorithmic discrimination, so Facebook needed to make headway on these efforts anyway.

(Facebook disputes the idea that it pursued its work on AI bias to protect growth or in anticipation of regulation. “We built the Responsible AI team because it was the right thing to do,” a spokesperson said.)

But narrowing SAIL’s focus to algorithmic fairness would sideline all Facebook’s other long-standing algorithmic problems. Its content-recommendation models would continue pushing posts, news, and groups to users in an effort to maximize engagement, rewarding extremist content and contributing to increasingly fractured political discourse.

Zuckerberg even admitted this. Two months after the meeting with Quiñonero, in a public note outlining Facebook’s plans for content moderation, he illustrated the harmful effects of the company’s engagement strategy with a simplified chart. It showed that the more likely a post is to violate Facebook’s community standards, the more user engagement it receives, because the algorithms that maximize engagement reward inflammatory content.

[Chart: a "natural engagement pattern" curve, with allowed content on the x-axis and engagement on the y-axis, showing an exponential rise in engagement as content nears the policy line for prohibited content.]

But then he showed another chart with the inverse relationship. Rather than rewarding content that came close to violating the community standards, Zuckerberg wrote, Facebook could choose to start “penalizing” it, giving it “less distribution and engagement” rather than more. How would this be done? With more AI. Facebook would develop better content-moderation models to detect this “borderline content” so it could be retroactively pushed lower in the news feed to snuff out its virality, he said.

[Chart: the same axes, "adjusted to discourage borderline content," with the curve inverted so that engagement falls to zero at the policy line.]
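
The shape of those two curves is easy to reproduce. The functions below are illustrative stand-ins, not Facebook's actual ranking math: `closeness` measures how near a post sits to the policy line (1.0 = right at it).

```python
# Illustrative sketch of the two charts: the "natural" curve rewards
# borderline content; the "adjusted" curve penalizes it to zero at the
# policy line. Exact functional forms are invented.
import math

def natural_engagement(closeness: float) -> float:
    """Engagement climbs steeply as content nears the policy line."""
    return math.exp(3 * closeness)

def adjusted_engagement(closeness: float) -> float:
    """Borderline content gets 'less distribution and engagement'."""
    return natural_engagement(closeness) * (1 - closeness) ** 2

for c in (0.0, 0.5, 0.9, 1.0):
    print(f"closeness={c:.1f}  natural={natural_engagement(c):6.2f}  "
          f"adjusted={adjusted_engagement(c):6.2f}")
```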

The problem is that for all Zuckerberg’s promises, this strategy is tenuous at best.

Misinformation and hate speech constantly evolve. New falsehoods spring up; new people and groups become targets. To catch things before they go viral, content-moderation models must be able to identify new unwanted content with high accuracy. But machine-learning models do not work that way. An algorithm that has learned to recognize Holocaust denial can’t immediately spot, say, Rohingya genocide denial. It must be trained on thousands, often even millions, of examples of a new type of content before learning to filter it out. Even then, users can quickly learn to outwit the model by doing things like changing the wording of a post or replacing incendiary phrases with euphemisms, making their message illegible to the AI while still obvious to a human. This is why new conspiracy theories can rapidly spiral out of control, and partly why, even after such content is banned, forms of it can persist on the platform.
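
A toy filter makes the evasion problem concrete. The example below is invented: a classifier keyed to known phrasings catches the wording it was trained on but misses a trivially reworded version that any human reader would still understand.

```python
# Hypothetical illustration of why moderation models lag new content:
# they recognize phrasings seen in training, not novel rewordings.
BANNED_PHRASES = {"the holocaust is a hoax"}  # learned from past examples

def flags(post: str) -> bool:
    return any(phrase in post.lower() for phrase in BANNED_PHRASES)

print(flags("The Holocaust is a hoax"))          # True: seen in training
print(flags("The 'H-word' is a h0ax, wake up"))  # False: slips through
```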

In his New York Times profile, Schroepfer named these limitations of the company’s content-moderation strategy. “Every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up—and are thus not caught,” wrote the Times. “It’s never going to go to zero,” Schroepfer told the publication.

Meanwhile, the algorithms that recommend this content still work to maximize engagement. This means every toxic post that escapes the content-moderation filters will continue to be pushed higher up the news feed and promoted to reach a larger audience. Indeed, a study from New York University recently found that among partisan publishers’ Facebook pages, those that regularly posted political misinformation received the most engagement in the lead-up to the 2020 US presidential election and the Capitol riots. “That just kind of got me,” says a former employee who worked on integrity issues from 2018 to 2019. “We fully acknowledged [this], and yet we’re still increasing engagement.”

But Quiñonero’s SAIL team wasn’t working on this problem. Because of Kaplan’s and Zuckerberg’s worries about alienating conservatives, the team stayed focused on bias. And even after it merged into the bigger Responsible AI team, it was never mandated to work on content-recommendation systems that might limit the spread of misinformation. Nor has any other team, as I confirmed after Entin and another spokesperson gave me a full list of all Facebook’s other initiatives on integrity issues—the company’s umbrella term for problems including misinformation, hate speech, and polarization.

A Facebook spokesperson said, “The work isn’t done by one specific team because that’s not how the company operates.” It is instead distributed among the teams that have the specific expertise to tackle how content ranking affects misinformation for their part of the platform, she said. But Schroepfer told me precisely the opposite in an earlier interview. I had asked him why he had created a centralized Responsible AI team instead of directing existing teams to make progress on the issue. He said it was “best practice” at the company.

“[If] it’s an important area, we need to move fast on it, it’s not well-defined, [we create] a dedicated team and get the right leadership,” he said. “As an area grows and matures, you’ll see the product teams take on more work, but the central team is still needed because you need to stay up with state-of-the-art work.”

When I described the Responsible AI team’s work to other experts on AI ethics and human rights, they noted the incongruity between the problems it was tackling and those, like misinformation, for which Facebook is most notorious. “This seems to be so oddly removed from Facebook as a product—the things Facebook builds and the questions about impact on the world that Facebook faces,” said Rumman Chowdhury, whose startup, Parity, advises firms on the responsible use of AI, and was acquired by Twitter after our interview. I had shown Chowdhury the Quiñonero team’s documentation detailing its work. “I find it surprising that we’re going to talk about inclusivity, fairness, equity, and not talk about the very real issues happening today,” she said.

“It seems like the ‘responsible AI’ framing is completely subjective to what a company decides it wants to care about. It’s like, ‘We’ll make up the terms and then we’ll follow them,’” says Ellery Roberts Biddle, the editorial director of Ranking Digital Rights, a nonprofit that studies the impact of tech companies on human rights. “I don’t even understand what they mean when they talk about fairness. Do they think it’s fair to recommend that people join extremist groups, like the ones that stormed the Capitol? If everyone gets the recommendation, does that mean it was fair?”

“We’re at a place where there’s one genocide [Myanmar] that the UN has, with a lot of evidence, been able to specifically point to Facebook and to the way that the platform promotes content,” Biddle adds. “How much higher can the stakes get?”

Over the last two years, Quiñonero’s team has built out Kloumann’s original tool, called Fairness Flow. It allows engineers to measure the accuracy of machine-learning models for different user groups. They can compare a face-detection model’s accuracy across different ages, genders, and skin tones, or a speech-recognition algorithm’s accuracy across different languages, dialects, and accents.

Fairness Flow also comes with a set of guidelines to help engineers understand what it means to train a “fair” model. One of the thornier problems with making algorithms fair is that there are different definitions of fairness, which can be mutually incompatible. Fairness Flow lists four definitions that engineers can use according to which suits their purpose best, such as whether a speech-recognition model recognizes all accents with equal accuracy or with a minimum threshold of accuracy.
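
A minimal sketch, with invented accuracy numbers and thresholds, can make two of the definitions named above concrete. This is an illustration in the spirit of Fairness Flow, not its actual code.

```python
# Comparing a model's accuracy across user groups under two fairness
# definitions: equal accuracy vs. a minimum accuracy threshold.
accuracy_by_group = {  # e.g., speech recognition accuracy per accent
    "accent_a": 0.94,
    "accent_b": 0.91,
    "accent_c": 0.86,
}

def equal_accuracy(acc: dict, tolerance: float = 0.03) -> bool:
    """Definition 1: all groups land within `tolerance` of one another."""
    return max(acc.values()) - min(acc.values()) <= tolerance

def minimum_threshold(acc: dict, floor: float = 0.85) -> bool:
    """Definition 2: every group clears a minimum accuracy floor."""
    return all(a >= floor for a in acc.values())

print(equal_accuracy(accuracy_by_group))     # False: 0.94 - 0.86 > 0.03
print(minimum_threshold(accuracy_by_group))  # True: all groups >= 0.85
```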

But testing algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.

This last problem came to the fore when the company had to deal with allegations of anti-conservative bias.

In 2014, Kaplan was promoted from US policy head to global vice president for policy, and he began playing a more heavy-handed role in content moderation and decisions about how to rank posts in users’ news feeds. After Republicans started voicing claims of anti-conservative bias in 2016, his team began manually reviewing the impact of misinformation-detection models on users to ensure—among other things—that they didn’t disproportionately penalize conservatives.

All Facebook users have some 200 “traits” attached to their profile. These include various dimensions submitted by users or estimated by machine-learning models, such as race, political and religious leanings, socioeconomic class, and level of education. Kaplan’s team began using the traits to assemble custom user segments that reflected largely conservative interests: users who engaged with conservative content, groups, and pages, for example. Then they’d run special analyses to see how content-moderation decisions would affect posts from those segments, according to a former researcher whose work was subject to those reviews.

The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.

But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.
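
Toy numbers show why the two readings of "fairness" pull in opposite directions. All figures below are invented for illustration.

```python
# Suppose a detector catches 90% of misinformation for everyone, and
# (hypothetically) one group posts twice as much of it as the other.
posts = {"conservative": 1000, "liberal": 1000}
misinfo_rate = {"conservative": 0.10, "liberal": 0.05}
detection_rate = 0.90

flagged = {g: posts[g] * misinfo_rate[g] * detection_rate for g in posts}
print(flagged)  # {'conservative': 90.0, 'liberal': 45.0}

# Responsible AI's guideline: this IS fair -- the flags track the actual
# amount of misinformation in each group.
# The opposite reading: unfair, because one group is affected more;
# equalizing the counts would leave half the misinformation untouched.
```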

This happened countless other times—and not just for content moderation. In 2020, the Washington Post reported that Kaplan’s team had undermined efforts to mitigate election interference and polarization within Facebook, saying they could contribute to anti-conservative bias. In 2018, it used the same argument to shelve a project to edit Facebook’s recommendation models even though researchers believed it would reduce divisiveness on the platform, according to the Wall Street Journal. His claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook’s data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.

And ahead of the 2020 election, Facebook policy executives used this excuse, according to the New York Times, to veto or weaken several proposals that would have reduced the spread of hateful and damaging content.

Facebook disputed the Wall Street Journal’s reporting in a follow-up blog post, and challenged the New York Times’s characterization in an interview with the publication. A spokesperson for Kaplan’s team also denied to me that this was a pattern of behavior, saying the cases reported by the Post, the Journal, and the Times were “all individual instances that we believe are then mischaracterized.” He declined to comment about the retraining of misinformation models on the record.

Many of these incidents happened before Fairness Flow was adopted. But they show how Facebook’s pursuit of fairness in the service of growth had already come at a steep cost to progress on the platform’s other challenges. And if engineers used the definition of fairness that Kaplan’s team had adopted, Fairness Flow could simply systematize behavior that rewarded misinformation instead of helping to combat it.

Often “the whole fairness thing” came into play only as a convenient way to maintain the status quo, the former researcher says: “It seems to fly in the face of the things that Mark was saying publicly in terms of being fair and equitable.”

The last time I spoke with Quiñonero was a month after the US Capitol riots. I wanted to know how the storming of Congress had affected his thinking and the direction of his work.

In the video call, it was as it always was: Quiñonero dialing in from his home office in one window and Entin, his PR handler, in another. I asked Quiñonero what role he felt Facebook had played in the riots and whether it changed the task he saw for Responsible AI. After a long pause, he sidestepped the question, launching into a description of recent work he’d done to promote greater diversity and inclusion among the AI teams.

I asked him the question again. His Facebook Portal camera, which uses computer-vision algorithms to track the speaker, began to slowly zoom in on his face as he grew still. “I don’t know that I have an easy answer to that question, Karen,” he said. “It’s an extremely difficult question to ask me.”

Entin, who’d been rapidly pacing with a stoic poker face, grabbed a red stress ball.

I asked Quiñonero why his team hadn’t previously looked at ways to edit Facebook’s content-ranking models to tamp down misinformation and extremism. He told me it was the job of other teams (though none, as I confirmed, have been mandated to work on that task). “It’s not feasible for the Responsible AI team to study all those things ourselves,” he said. When I asked whether he would consider having his team tackle those issues in the future, he vaguely admitted, “I would agree with you that that is going to be the scope of these types of conversations.”

Near the end of our hour-long interview, he began to emphasize that AI was often unfairly painted as “the culprit.” Regardless of whether Facebook used AI or not, he said, people would still spew lies and hate speech, and that content would still spread across the platform.

I pressed him one more time. Certainly he couldn’t believe that algorithms had done absolutely nothing to change the nature of these issues, I said.

“I don’t know,” he said with a halting stutter. Then he repeated, with more conviction: “That’s my honest answer. Honest to God. I don’t know.”

Corrections: We amended a line that suggested that Joel Kaplan, Facebook’s vice president of global policy, had used Fairness Flow. He has not. But members of his team have used the notion of fairness to request the retraining of misinformation models in ways that directly contradict Responsible AI’s guidelines. We also clarified when Rachad Alao, the engineering director of Responsible AI, joined the company.

NOAA Acknowledges the New Reality of Hurricane Season (Gizmodo)

earther.gizmodo.com

Molly Taft, March 2, 2021


This combination of satellite images provided by the National Hurricane Center shows 30 hurricanes that occurred during the 2020 Atlantic hurricane season.

We’re one step closer to officially moving up hurricane season. The National Hurricane Center announced Tuesday that it would formally start issuing its hurricane season tropical weather outlooks on May 15 this year, bumping them up from the traditional start of hurricane season on June 1. The move comes after a recent spate of early season storms has raked the Atlantic.

Atlantic hurricane season runs from June 1 to November 30. That’s when conditions are most conducive to storm formation owing to warm air and water temperatures. (The Pacific Ocean has its own hurricane season, which covers the same timeframe, but since its waters are colder, fewer hurricanes tend to form there than in the Atlantic.)

Storms have begun forming in the Atlantic earlier as ocean and air temperatures have increased due to climate change. Last year, Hurricane Arthur roared to life off the East Coast on May 16. That storm made 2020 the sixth hurricane season in a row with a storm that formed earlier than the June 1 official start date. While the National Oceanic and Atmospheric Administration won’t be moving up the start of the season just yet, the earlier outlooks address that recent history.

“In the last decade, there have been 10 storms formed in the weeks before the traditional start of the season, which is a big jump,” said Sean Sublette, a meteorologist at Climate Central, who pointed out that the 1960s through 2010s saw between one and three storms each decade before the June 1 start date on average.

It might be tempting to ascribe this earlier season entirely to climate change warming the Atlantic. But technology also has a role to play, with more observations along the coast as well as satellites that can spot storms far out to sea.

“I would caution that we can’t just go, ‘hah, the planet’s warming, we’ve had to move the entire season!’” Sublette said. “I don’t think there’s solid ground for attribution of how much of one there is over the other. Weather folks can sit around and debate that for awhile.”

Earlier storms don’t necessarily mean more harmful ones, either. In fact, hurricanes earlier in the season tend to be weaker than the monsters that form in August and September when hurricane season is at its peak. But regardless of their strength, these earlier storms have generated discussion inside the NHC on whether to move up the official start date for the season, when the agency usually puts out two reports per day on hurricane activity. Tuesday’s step is not an official announcement of this decision, but an acknowledgement of the increased attention on early hurricanes.

“I would say that [Tuesday’s announcement] is the National Hurricane Center being proactive,” Sublette said. “Like hey, we know that the last few years it’s been a little busier in May than we’ve seen in the past five decades, and we know there is an awareness now, so we’re going to start issuing these reports early.”

While the jury is still out on whether climate change is pushing the season earlier, research has shown that the strongest hurricanes are becoming more common, and that climate change is likely playing a role. A study published last year found the odds of a storm becoming a major hurricane—those Category 3 or stronger—have increased 49% in the basin since satellite monitoring began in earnest four decades ago. And when storms make landfall, sea level rise allows them to do more damage. So whether or not climate change is pushing Atlantic hurricane season earlier, the risks are increasing. Now, at least, we’ll have better warnings before early storms do hit.

The Coronavirus Is Plotting a Comeback. Here’s Our Chance to Stop It for Good. (New York Times)

nytimes.com

Apoorva Mandavilli


Lincoln Park in Chicago. Scientists are hopeful, as vaccinations continue and despite the emergence of variants, that we’re past the worst of the pandemic. Credit: Lyndon French for The New York Times
Many scientists are expecting another rise in infections. But this time the surge will be blunted by vaccines and, hopefully, widespread caution. By summer, Americans may be looking at a return to normal life.

Published Feb. 25, 2021; updated Feb. 26, 2021, 12:07 a.m. ET

Across the United States, and the world, the coronavirus seems to be loosening its stranglehold. The deadly curve of cases, hospitalizations and deaths has yo-yoed before, but never has it plunged so steeply and so fast.

Is this it, then? Is this the beginning of the end? After a year of being pummeled by grim statistics and scolded for wanting human contact, many Americans feel a long-promised deliverance is at hand.

Americans will win against the virus and regain many aspects of their pre-pandemic lives, most scientists now believe. Of the 21 interviewed for this article, all were optimistic that the worst of the pandemic is past. This summer, they said, life may begin to seem normal again.

But — of course, there’s always a but — researchers are also worried that Americans, so close to the finish line, may once again underestimate the virus.

So far, the two vaccines authorized in the United States are spectacularly effective, and after a slow start, the vaccination rollout is picking up momentum. A third vaccine is likely to be authorized shortly, adding to the nation’s supply.

But it will be many weeks before vaccinations make a dent in the pandemic. And now the virus is shape-shifting faster than expected, evolving into variants that may partly sidestep the immune system.

The latest variant was discovered in New York City only this week, and another worrisome version is spreading at a rapid pace through California. Scientists say a contagious variant first discovered in Britain will become the dominant form of the virus in the United States by the end of March.

The road back to normalcy is potholed with unknowns: how well vaccines prevent further spread of the virus; whether emerging variants remain susceptible enough to the vaccines; and how quickly the world is immunized, so as to halt further evolution of the virus.

But the greatest ambiguity is human behavior. Can Americans desperate for normalcy keep wearing masks and distancing themselves from family and friends? How much longer can communities keep businesses, offices and schools closed?

Covid-19 deaths will most likely never rise quite as precipitously as in the past, and the worst may be behind us. But if Americans let down their guard too soon — many states are already lifting restrictions — and if the variants spread in the United States as they have elsewhere, another spike in cases may well arrive in the coming weeks.

Scientists call it the fourth wave. The new variants mean “we’re essentially facing a pandemic within a pandemic,” said Adam Kucharski, an epidemiologist at the London School of Hygiene and Tropical Medicine.

A patient received comfort in the I.C.U. of Marian Regional Medical Center in Santa Maria, Calif., last month. 
Credit: Daniel Dreifuss for The New York Times

The United States has now recorded 500,000 deaths amid the pandemic, a terrible milestone. As of Wednesday morning, at least 28.3 million people have been infected.

But the rate of new infections has tumbled by 35 percent over the past two weeks, according to a database maintained by The New York Times. Hospitalizations are down 31 percent, and deaths have fallen by 16 percent.

Yet the numbers are still at the horrific highs of November, scientists noted. At least 3,210 people died of Covid-19 on Wednesday alone. And there is no guarantee that these rates will continue to decrease.

“Very, very high case numbers are not a good thing, even if the trend is downward,” said Marc Lipsitch, an epidemiologist at the Harvard T.H. Chan School of Public Health in Boston. “Taking the first hint of a downward trend as a reason to reopen is how you get to even higher numbers.”

In late November, for example, Gov. Gina Raimondo of Rhode Island limited social gatherings and some commercial activities in the state. Eight days later, cases began to decline. The trend reversed eight days after the state’s pause lifted on Dec. 20.

The virus’s latest retreat in Rhode Island and most other states, experts said, results from a combination of factors: growing numbers of people with immunity to the virus, either from having been infected or from vaccination; changes in behavior in response to the surges of a few weeks ago; and a dash of seasonality — the effect of temperature and humidity on the survival of the virus.

Parts of the country that experienced huge surges in infection, like Montana and Iowa, may be closer to herd immunity than other regions. But patchwork immunity alone cannot explain the declines throughout much of the world.

The vaccines were first rolled out to residents of nursing homes and to the elderly, who are at highest risk of severe illness and death. That may explain some of the current decline in hospitalizations and deaths.

A volunteer in the Johnson & Johnson vaccine trial received a shot in the Desmond Tutu H.I.V. Foundation Youth Center in Masiphumelele, South Africa, in December.
Credit: Joao Silva/The New York Times

But young people drive the spread of the virus, and most of them have not yet been inoculated. And the bulk of the world’s vaccine supply has been bought up by wealthy nations, which have amassed one billion more doses than needed to immunize their populations.

Vaccination cannot explain why cases are dropping even in countries where not a single soul has been immunized, like Honduras, Kazakhstan or Libya. The biggest contributor to the sharp decline in infections is something more mundane, scientists say: behavioral change.

Leaders in the United States and elsewhere stepped up community restrictions after the holiday peaks. But individual choices have also been important, said Lindsay Wiley, an expert in public health law and ethics at American University in Washington.

“People voluntarily change their behavior as they see their local hospital get hit hard, as they hear about outbreaks in their area,” she said. “If that’s the reason that things are improving, then that’s something that can reverse pretty quickly, too.”

The downward curve of infections with the original coronavirus disguises an exponential rise in infections with B.1.1.7, the variant first identified in Britain, according to many researchers.

“We really are seeing two epidemic curves,” said Ashleigh Tuite, an infectious disease modeler at the University of Toronto.

The B.1.1.7 variant is thought to be more contagious and more deadly, and it is expected to become the predominant form of the virus in the United States by late March. The number of cases with the variant in the United States has risen from 76 in 12 states as of Jan. 13 to more than 1,800 in 45 states now. Actual infections may be much higher because of inadequate surveillance efforts in the United States.
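Tuite’s “two epidemic curves” point is easy to see numerically: the sum of a large, declining epidemic and a small, exponentially growing one keeps falling for a while before turning upward. A toy sketch with invented growth rates, not fitted to any real case data:

```python
import numpy as np

# Toy numbers, not fitted to real data: a large declining epidemic plus a
# small, faster-growing variant epidemic underneath it.
days = np.arange(90)
original = 60_000 * np.exp(-0.04 * days)  # original virus, shrinking ~4%/day
variant = 500 * np.exp(0.05 * days)       # B.1.1.7-like variant, growing ~5%/day
total = original + variant

# The combined curve keeps falling until the variant's growth overtakes the
# original's decline, then turns upward again.
print(f"Total cases bottom out around day {days[np.argmin(total)]}.")
```

Until that turning point, the headline numbers look like steady improvement even though the variant has been growing exponentially the whole time.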

Buoyed by the shrinking rates over all, however, governors are lifting restrictions across the United States and are under enormous pressure to reopen completely. Should that occur, B.1.1.7 and the other variants are likely to explode.

“Everybody is tired, and everybody wants things to open up again,” Dr. Tuite said. “Bending to political pressure right now, when things are really headed in the right direction, is going to end up costing us in the long term.”

A fourth wave doesn’t have to be inevitable, scientists say, but the new variants will pose a significant challenge to averting that wave.
Credit: Lyndon French for The New York Times

Looking ahead to late March or April, the majority of scientists interviewed by The Times predicted a fourth wave of infections. But they stressed that it is not an inevitable surge, if government officials and individuals maintain precautions for a few more weeks.

A minority of experts were more sanguine, saying they expected powerful vaccines and an expanding rollout to stop the virus. And a few took the middle road.

“We’re at that crossroads, where it could go well or it could go badly,” said Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases.

The vaccines have proved to be more effective than anyone could have hoped, so far preventing serious illness and death in nearly all recipients. At present, about 1.4 million Americans are vaccinated each day. More than 45 million Americans have received at least one dose.

A team of researchers at Fred Hutchinson Cancer Research Center in Seattle tried to calculate the number of vaccinations required per day to avoid a fourth wave. In a model completed before the variants surfaced, the scientists estimated that vaccinating just one million Americans a day would limit the magnitude of the fourth wave.

“But the new variants completely changed that,” said Dr. Joshua T. Schiffer, an infectious disease specialist who led the study. “It’s just very challenging scientifically — the ground is shifting very, very quickly.”
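The Fred Hutchinson model itself is not described in detail here, but the role a daily vaccination rate plays in this kind of calculation can be sketched with a bare-bones SIR model plus a vaccination term. Every parameter below is invented for illustration:

```python
# Bare-bones SIR model with a daily vaccination term; all parameters are
# invented for illustration, not taken from the Fred Hutchinson study.
N = 330e6                  # US population
beta, gamma = 0.25, 0.125  # transmission / recovery rates per day (R0 = 2)

def peak_infections(shots_per_day, days=365):
    S, I = 0.6 * N, 200e3  # assumed susceptible pool and active infections
    peak = I
    for _ in range(days):
        new_infections = beta * S * I / N
        vaccinated = min(shots_per_day, S)  # only susceptibles gain protection
        S = max(S - new_infections - vaccinated, 0.0)
        I += new_infections - gamma * I
        peak = max(peak, I)
    return peak

for rate in (0.5e6, 1e6, 2e6):
    print(f"{rate / 1e6:.1f}M shots/day -> peak ~{peak_infections(rate):,.0f} active cases")
```

The qualitative lesson matches the study’s: a faster rollout drains the susceptible pool sooner and caps the wave lower, while a more transmissible variant (a higher beta) raises the vaccination rate needed to get the same effect.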

Natalie Dean, a biostatistician at the University of Florida, described herself as “a little more optimistic” than many other researchers. “We would be silly to undersell the vaccines,” she said, noting that they are effective against the fast-spreading B.1.1.7 variant.

But Dr. Dean worried about the forms of the virus detected in South Africa and Brazil that seem less vulnerable to the vaccines made by Pfizer and Moderna. (On Wednesday, Johnson & Johnson reported that its vaccine was relatively effective against the variant found in South Africa.)

Coronavirus test samples in a lab for genomic sequencing at Duke University in Durham, N.C., earlier this month.
Credit: Pete Kiehart for The New York Times

About 50 infections with those two variants have been identified in the United States, but that could change. Because of the variants, scientists do not know how many people who were infected and had recovered are now vulnerable to reinfection.

South Africa and Brazil have reported reinfections with the new variants among people who had recovered from infections with the original version of the virus.

“That makes it a lot harder to say, ‘If we were to get to this level of vaccinations, we’d probably be OK,’” said Sarah Cobey, an evolutionary biologist at the University of Chicago.

Yet the biggest unknown is human behavior, experts said. The sharp drop in cases now may lead to complacency about masks and distancing, and to a wholesale lifting of restrictions on indoor dining, sporting events and more. Or … not.

“The single biggest lesson I’ve learned during the pandemic is that epidemiological modeling struggles with prediction, because so much of it depends on human behavioral factors,” said Carl Bergstrom, a biologist at the University of Washington in Seattle.

Taking into account the counterbalancing rises in both vaccinations and variants, along with the high likelihood that people will stop taking precautions, a fourth wave is highly likely this spring, the majority of experts told The Times.

Kristian Andersen, a virologist at the Scripps Research Institute in San Diego, said he was confident that the number of cases will continue to decline, then plateau in about a month. After mid-March, the curve in new cases will swing upward again.

In early to mid-April, “we’re going to start seeing hospitalizations go up,” he said. “It’s just a question of how much.”

Hospitalizations and deaths will fall to levels low enough to reopen the country — though mask-wearing may remain necessary as a significant portion of people, including children, won’t be immunized.
Credit: Kendrick Brinson for The New York Times

Now the good news.

Despite the uncertainties, the experts predict that the last surge will subside in the United States sometime in the early summer. If the Biden administration can keep its promise to immunize every American adult by the end of the summer, the variants should be no match for the vaccines.

Combine vaccination with natural immunity and the human tendency to head outdoors as weather warms, and “it may not be exactly herd immunity, but maybe it’s sufficient to prevent any large outbreaks,” said Youyang Gu, an independent data scientist, who created some of the most prescient models of the pandemic.

Infections will continue to drop. More important, hospitalizations and deaths will fall to negligible levels — enough, hopefully, to reopen the country.

“Sometimes people lose vision of the fact that vaccines prevent hospitalization and death, which is really actually what most people care about,” said Stefan Baral, an epidemiologist at the Johns Hopkins Bloomberg School of Public Health.

Even as the virus begins its swoon, people may still need to wear masks in public places and maintain social distance, because a significant percent of the population — including children — will not be immunized.

“Assuming that we keep a close eye on things in the summer and don’t go crazy, I think that we could look forward to a summer that is looking more normal, but hopefully in a way that is more carefully monitored than last summer,” said Emma Hodcroft, a molecular epidemiologist at the University of Bern in Switzerland.

Imagine: Groups of vaccinated people will be able to get together for barbecues and play dates, without fear of infecting one another. Beaches, parks and playgrounds will be full of mask-free people. Indoor dining will return, along with movie theaters, bowling alleys and shopping malls — although they may still require masks.

The virus will still be circulating, but the extent will depend in part on how well vaccines prevent not just illness and death, but also transmission. The data on whether vaccines stop the spread of the disease are encouraging, but immunization is unlikely to block transmission entirely.

Self-swab testing for Covid at Duke University in February.
Credit: Pete Kiehart for The New York Times

“It’s not zero and it’s not 100 — exactly where that number is will be important,” said Shweta Bansal, an infectious disease modeler at Georgetown University. “It needs to be pretty darn high for us to be able to get away with vaccinating anything below 100 percent of the population, so that’s definitely something we’re watching.”
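Bansal’s point can be made concrete with the textbook herd-immunity approximation (not a calculation from the article): the coverage required scales as (1 − 1/R0) divided by the vaccine’s effectiveness against transmission, so modest transmission-blocking can push the requirement past 100 percent:

```python
# Textbook herd-immunity arithmetic, not a figure from the article:
# R_effective = R0 * (1 - coverage * VE) drops below 1 once
# coverage > (1 - 1/R0) / VE.
def required_coverage(r0: float, ve_transmission: float) -> float:
    return (1 - 1 / r0) / ve_transmission

for r0 in (2.5, 4.0):          # plausible values; variants push R0 upward
    for ve in (0.6, 0.9):      # how well a vaccine blocks transmission
        c = required_coverage(r0, ve)
        note = " (impossible: over 100%)" if c > 1 else ""
        print(f"R0={r0}, VE={ve:.0%} -> need {c:.0%} coverage{note}")
```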

Over the long term — say, a year from now, when all the adults and children in the United States who want a vaccine have received them — will this virus finally be behind us?

Every expert interviewed by The Times said no. Even after the vast majority of the American population has been immunized, the virus will continue to pop up in clusters, taking advantage of pockets of vulnerability. Years from now, the coronavirus may be an annoyance, circulating at low levels, causing modest colds.

Many scientists said their greatest worry post-pandemic was that new variants may turn out to be significantly less susceptible to the vaccines. Billions of people worldwide will remain unprotected, and each infection gives the virus new opportunities to mutate.

“We won’t have useless vaccines. We might have slightly less good vaccines than we have at the moment,” said Andrew Read, an evolutionary microbiologist at Penn State University. “That’s not the end of the world, because we have really good vaccines right now.”

For now, every one of us can help by continuing to be careful for just a few more months, until the curve permanently flattens.

“Just hang in there a little bit longer,” Dr. Tuite said. “There’s a lot of optimism and hope, but I think we need to be prepared for the fact that the next several months are likely to continue to be difficult.”


Climate crisis: world is at its hottest for at least 12,000 years – study (The Guardian)

theguardian.com

Damian Carrington, Environment editor @dpcarrington

Wed 27 Jan 2021 16.00 GMT

The world’s continuously warming climate is also revealed in contemporary ice melt at glaciers, such as this one in the Kenai mountains, Alaska (seen in September 2019). Photograph: Joe Raedle/Getty Images

The planet is hotter now than it has been for at least 12,000 years, a period spanning the entire development of human civilisation, according to research.

Analysis of ocean surface temperatures shows human-driven climate change has put the world in “uncharted territory”, the scientists say. The planet may even be at its warmest for 125,000 years, although data that far back is less certain.

The research, published in the journal Nature, reached these conclusions by solving a longstanding puzzle known as the “Holocene temperature conundrum”. Climate models have indicated continuous warming since the last ice age ended 12,000 years ago and the Holocene period began. But temperature estimates derived from fossil shells showed a peak of warming 6,000 years ago and then a cooling, until the industrial revolution sent carbon emissions soaring.

This conflict undermined confidence in the climate models and the shell data. But it was found that the shell data reflected only hotter summers and missed colder winters, and so was giving misleadingly high annual temperatures.

“We demonstrate that global average annual temperature has been rising over the last 12,000 years, contrary to previous results,” said Samantha Bova, at Rutgers University–New Brunswick in the US, who led the research. “This means that the modern, human-caused global warming period is accelerating a long-term increase in global temperatures, making today completely uncharted territory. It changes the baseline and emphasises just how critical it is to take our situation seriously.”

The world may be hotter now than any time since about 125,000 years ago, which was the last warm period between ice ages. However, scientists cannot be certain as there is less data relating to that time.

One study, published in 2017, suggested that global temperatures were last as high as today 115,000 years ago, but that was based on less data.

The new research examined temperature measurements derived from the chemistry of tiny shells and algal compounds found in cores of ocean sediments, and solved the conundrum by taking account of two factors.

First, the shells and organic materials had been assumed to represent the entire year but in fact were most likely to have formed during summer when the organisms bloomed. Second, there are well-known predictable natural cycles in the heating of the Earth caused by eccentricities in the orbit of the planet. Changes in these cycles can lead to summers becoming hotter and winters colder while average annual temperatures change only a little.

Combining these insights showed that the apparent cooling after the warm peak 6,000 years ago, revealed by shell data, was misleading. The shells were in fact only recording a decline in summer temperatures, but the average annual temperatures were still rising slowly, as indicated by the models.
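A toy calculation, with invented numbers rather than the paper’s data, shows how a summer-only proxy can register cooling while the annual mean warms:

```python
import numpy as np

# Invented numbers: the annual mean warms slowly over the Holocene while
# orbital cycles shrink the summer-winter contrast, so a proxy that only
# records summers shows a spurious cooling trend.
years_bp = np.linspace(12_000, 0, 500)                    # years before present
annual_mean = 14.0 + 0.5 * (12_000 - years_bp) / 12_000   # slow +0.5 C warming
seasonal_amp = 6.0 + 2.0 * years_bp / 12_000              # summers once hotter

summer_proxy = annual_mean + seasonal_amp / 2  # what summer-blooming shells record
print(f"annual mean change:  {annual_mean[-1] - annual_mean[0]:+.2f} C")
print(f"summer proxy change: {summer_proxy[-1] - summer_proxy[0]:+.2f} C")
```

With these made-up values the annual mean warms by 0.5 C while the summer proxy cools by 0.5 C over the same 12,000 years, which is the shape of the conundrum the study resolved.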

“Now they actually match incredibly well and it gives us a lot of confidence that our climate models are doing a really good job,” said Bova.

The study looked only at ocean temperature records, but Bova said: “The temperature of the sea surface has a really controlling impact on the climate of the Earth. If we know that, it is the best indicator of what global climate is doing.”

She led a research voyage off the coast of Chile in 2020 to take more ocean sediment cores and add to the available data.

Jennifer Hertzberg, of Texas A&M University in the US, said: “By solving a conundrum that has puzzled climate scientists for years, Bova and colleagues’ study is a major step forward. Understanding past climate change is crucial for putting modern global warming in context.”

Lijing Cheng, at the International Centre for Climate and Environment Sciences in Beijing, China, recently led a study that showed that in 2020 the world’s oceans reached their hottest level yet in instrumental records dating back to the 1940s. More than 90% of global heating is taken up by the seas.

Cheng said the new research was useful and intriguing. It provided a method to correct temperature data from shells and could also enable scientists to work out how much heat the ocean absorbed before the industrial revolution, a factor little understood.

The level of carbon dioxide today is at its highest for about 4m years and is rising at the fastest rate for 66m years. Further rises in temperature and sea level are inevitable until greenhouse gas emissions are cut to net zero.

Calculations show it will be impossible to control a superintelligent Artificial Intelligence (Engenharia é:)

engenhariae.com.br

Ademilson Ramos, January 23, 2021


Photo by Alex Knight on Unsplash

The idea of artificial intelligence overthrowing humanity has been discussed for many decades, and scientists have just delivered their verdict on whether we would be able to control a high-level computer superintelligence. The answer? Almost definitely not.

The problem is that controlling a superintelligence far beyond human comprehension would require a simulation of that superintelligence which we could analyze. But if we are unable to comprehend it, it is impossible to create such a simulation.

Rules such as “cause no harm to humans” cannot be set if we do not understand the kinds of scenarios an AI is going to come up with, the researchers suggest. Once a computer system is working at a level beyond the scope of our programmers, we can no longer set limits.

“A superintelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’,” the researchers write.

“This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources to achieve objectives that are potentially incomprehensible to humans, let alone controllable.”

Part of the team’s reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (so that it halts), or simply loop forever trying to find one.

As Turing proved through some clever mathematics, while we can know the answer for some specific programs, it is logically impossible to find a way that allows us to know it for every potential program that could ever be written. That brings us back to AI, which in a superintelligent state could feasibly hold every possible computer program in its memory at once.

Any program written to stop the AI from harming humans and destroying the world, for example, may reach a conclusion (and halt) or it may not. It is mathematically impossible for us to be absolutely sure either way, which means the AI cannot be contained.
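The undecidability step in that argument is Turing’s classic diagonalization. A minimal Python sketch of it, purely illustrative: the `halts` oracle below is hypothetical, and the whole point of the proof is that no such function can exist.

```python
# Classic diagonalization behind Turing's 1936 result. The halts() oracle
# is hypothetical: no total, always-correct version of it can exist.

def halts(program, input_data) -> bool:
    """Pretend oracle: True iff program(input_data) eventually halts."""
    raise NotImplementedError("no such oracle can exist")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:        # oracle says "halts" -> loop forever
            pass
    return "done"          # oracle says "loops" -> halt immediately

# contrary(contrary) contradicts any answer halts() could give: if the
# oracle says it halts, it loops; if it says it loops, it halts. Hence
# halts() is impossible -- and the paper reduces a guaranteed AI
# "containment check" to exactly this kind of impossible decision.
```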

“In effect, this makes the containment algorithm unusable,” says computer scientist Iyad Rahwan of the Max Planck Institute for Human Development in Germany.

The alternative to teaching the AI some ethics and telling it not to destroy the world (something no algorithm can be absolutely certain of doing, the researchers say) is to limit the capabilities of the superintelligence. It could be cut off from parts of the internet or from certain networks, for example.

The new study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence; the argument is that if we are not going to use it to solve problems beyond the scope of humans, then why create it at all?

If we are going to push ahead with artificial intelligence, we may not even know when a superintelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we are taking.

“A superintelligent machine that controls the world sounds like science fiction,” says computer scientist Manuel Cebrian of the Max Planck Institute for Human Development. “But there are already machines that perform certain important tasks independently without their programmers fully understanding how they learned them.”

“The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

The research was published in the Journal of Artificial Intelligence Research.

Developing Algorithms That Might One Day Be Used Against You (Gizmodo)

gizmodo.com

Ryan F. Mandelbaum, Jan 24, 2021


Brian Nord is an astrophysicist and machine learning researcher. Photo: Mark Lopez/Argonne National Laboratory

Machine learning algorithms serve us the news we read, the ads we see, and in some cases even drive our cars. But there’s an insidious layer to these algorithms: They rely on data collected by and about humans, and they spit our worst biases right back out at us. For example, job candidate screening algorithms may automatically reject names that sound like they belong to nonwhite people, while facial recognition software is often much worse at recognizing women or nonwhite faces than it is at recognizing white male faces. An increasing number of scientists and institutions are waking up to these issues, and speaking out about the potential for AI to cause harm.

Brian Nord is one such researcher weighing his own work against the potential to cause harm with AI algorithms. Nord is a cosmologist at Fermilab and the University of Chicago, where he uses artificial intelligence to study the cosmos, and he’s been researching a concept for a “self-driving telescope” that can write and test hypotheses with the help of a machine learning algorithm. At the same time, he’s struggling with the idea that the algorithms he’s writing may one day be biased against him—and even used against him—and is working to build a coalition of physicists and computer scientists to fight for more oversight in AI algorithm development.

This interview has been edited and condensed for clarity.

Gizmodo: How did you become a physicist interested in AI and its pitfalls?

Brian Nord: My Ph.D. is in cosmology, and when I moved to Fermilab in 2012, I moved into the subfield of strong gravitational lensing. [Editor’s note: Gravitational lenses are places in the night sky where light from distant objects has been bent by the gravitational field of heavy objects in the foreground, making the background objects appear warped and larger.] I spent a few years doing strong lensing science in the traditional way, where we would visually search through terabytes of images, through thousands of candidates of these strong gravitational lenses, because they’re so weird, and no one had figured out a more conventional algorithm to identify them. Around 2015, I got kind of sad at the prospect of only finding these things with my eyes, so I started looking around and found deep learning.

Here we are a few years later—myself and a few other people popularized this idea of using deep learning—and now it’s the standard way to find these objects. People are unlikely to go back to using methods that aren’t deep learning to do galaxy recognition. We got to this point where we saw that deep learning is the thing, and really quickly saw the potential impact of it across astronomy and the sciences. It’s hitting every science now. That is a testament to the promise and peril of this technology, with such a relatively simple tool. Once you have the pieces put together right, you can do a lot of different things easily, without necessarily thinking through the implications.

Gizmodo: So what is deep learning? Why is it good and why is it bad?

BN: Traditional mathematical models (like the F=ma of Newton’s laws) are built by humans to describe patterns in data: We use our current understanding of nature, also known as intuition, to choose the pieces, the shape of these models. This means that they are often limited by what we know or can imagine about a dataset. These models are also typically smaller and are less generally applicable for many problems.

On the other hand, artificial intelligence models can be very large, with many, many degrees of freedom, so they can be made very general and able to describe lots of different data sets. Also, very importantly, they are primarily sculpted by the data that they are exposed to—AI models are shaped by the data with which they are trained. Humans decide what goes into the training set, which is then limited again by what we know or can imagine about that data. It’s not a big jump to see that if you don’t have the right training data, you can fall off the cliff really quickly.
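Nord’s point about models being sculpted by their training data can be seen in a few lines. The sketch below uses synthetic data, all of it invented, in which two equally qualified groups received biased historical decisions; a standard classifier dutifully learns the group penalty:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic, invented data: two groups with identical "true" qualification
# scores, but historical decisions that penalized group 1.
n = 5_000
group = rng.integers(0, 2, n)              # group label, 0 or 1
score = rng.normal(0.0, 1.0, n)            # the legitimately relevant feature
noise = rng.normal(0.0, 0.5, n)
historical_accept = (score - 0.8 * group + noise) > 0  # biased past labels

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, historical_accept)

# The classifier reproduces the penalty baked into the labels: the weight
# on `group` comes out strongly negative even though the groups are equal.
print("learned weights [score, group]:", model.coef_[0].round(2))
```

Nothing in the code is malicious; the bias arrives entirely through the labels, which is exactly the failure mode Nord describes for admissions and sentencing algorithms below.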

The promise and peril are highly related. In the case of AI, the promise is in the ability to describe data that humans don’t yet know how to describe with our ‘intuitive’ models. But, perilously, the data sets used to train them incorporate our own biases. When it comes to AI recognizing galaxies, we’re risking biased measurements of the universe. When it comes to AI recognizing human faces, when our data sets are biased against Black and Brown faces for example, we risk discrimination that prevents people from using services, that intensifies surveillance apparatus, that jeopardizes human freedoms. It’s critical that we weigh and address these consequences before we imperil people’s lives with our research.

Gizmodo: When did the light bulb go off in your head that AI could be harmful?

BN: I gotta say that it was with the Machine Bias article from ProPublica in 2016, where they discuss recidivism and sentencing procedure in courts. At the time of that article, there was a closed-source algorithm used to make recommendations for sentencing, and judges were allowed to use it. There was no public oversight of this algorithm, which ProPublica found was biased against Black people; people could use algorithms like this willy nilly without accountability. I realized that as a Black man, I had spent the last few years getting excited about neural networks, then saw it quite clearly that these applications that could harm me were already out there, already being used, and were already starting to become embedded in our social structure through the criminal justice system. Then I started paying attention more and more. I realized countries across the world were using surveillance technology, incorporating machine learning algorithms, for widespread oppressive uses.

Gizmodo: How did you react? What did you do?

BN: I didn’t want to reinvent the wheel; I wanted to build a coalition. I started looking into groups like Fairness, Accountability and Transparency in Machine Learning, plus Black in AI, which is focused on building communities of Black researchers in the AI field, but which also has a unique awareness of the problem because we are the people who are affected. I started paying attention to the news and saw that Meredith Whittaker had started a think tank to combat these things, and Joy Buolamwini had helped found the Algorithmic Justice League. I brushed up on what computer scientists were doing and started to look at what physicists were doing, because that’s my principal community.

It became clear to folks like me and Savannah Thais that physicists needed to realize that they have a stake in this game. We get government funding, and we tend to take a fundamental approach to research. If we bring that approach to AI, then we have the potential to affect the foundations of how these algorithms work and impact a broader set of applications. I asked myself and my colleagues what our responsibility in developing these algorithms was and in having some say in how they’re being used down the line.

Gizmodo: How is it going so far?

BN: Currently, we’re going to write a white paper for SNOWMASS, this high-energy physics event. The SNOWMASS process determines the vision that guides the community for about a decade. I started to identify individuals to work with, fellow physicists, and experts who care about the issues, and develop a set of arguments for why physicists from institutions, individuals, and funding agencies should care deeply about these algorithms they’re building and implementing so quickly. It’s a piece that’s asking people to think about how much they are considering the ethical implications of what they’re doing.

We’ve already held a workshop at the University of Chicago where we’ve begun discussing these issues, and at Fermilab we’ve had some initial discussions. But we don’t yet have the critical mass across the field to develop policy. We can’t do it ourselves as physicists; we don’t have backgrounds in social science or technology studies. The right way to do this is to bring physicists together from Fermilab and other institutions with social scientists and ethicists and science and technology studies folks and professionals, and build something from there. The key is going to be through partnership with these other disciplines.

Gizmodo: Why haven’t we reached that critical mass yet?

BN: I think we need to show people, as Angela Davis has said, that our struggle is also their struggle. That’s why I’m talking about coalition building. The thing that affects us also affects them. One way to do this is to clearly lay out the potential harm beyond just race and ethnicity. Recently, there was this discussion of a paper that used neural networks to try to speed up the selection of candidates for Ph.D. programs. They trained the algorithm on historical data. So let me be clear, they said here’s a neural network, here’s data on applicants who were denied and accepted to universities. Those applicants were chosen by faculty and people with biases. It should be obvious to anyone developing that algorithm that you’re going to bake in the biases in that context. I hope people will see these things as problems and help build our coalition.

Gizmodo: What is your vision for a future of ethical AI?

BN: What if there were an agency or agencies for algorithmic accountability? I could see these existing at the local level, the national level, and the institutional level. We can’t predict all of the future uses of technology, but we need to be asking questions at the beginning of the processes, not as an afterthought. An agency would help ask these questions and still allow the science to get done, but without endangering people’s lives. Alongside agencies, we need policies at various levels that make a clear decision about how safe the algorithms have to be before they are used on humans or other living things. If I had my druthers, these agencies and policies would be built by an incredibly diverse group of people. We’ve seen instances where a homogeneous group develops an app or technology and didn’t see the things that another group who’s not there would have seen. We need people across the spectrum of experience to participate in designing policies for ethical AI.

Gizmodo: What are your biggest fears about all of this?

BN: My biggest fear is that people who already have access to technology resources will continue to use them to subjugate people who are already oppressed; Pratyusha Kalluri has also advanced this idea of power dynamics. That’s what we’re seeing across the globe. Sure, there are cities that are trying to ban facial recognition, but unless we have a broader coalition, unless we have more cities and institutions willing to take on this thing directly, we’re not going to be able to keep this tool from exacerbating the white supremacy, racism, and misogyny that already exist inside structures today. If we don’t push policy that puts the lives of marginalized people first, then they’re going to continue being oppressed, and it’s going to accelerate.

Gizmodo: How has thinking about AI ethics affected your own research?

BN: I have to question whether I want to do AI work and how I’m going to do it; whether or not it’s the right thing to do to build a certain algorithm. That’s something I have to keep asking myself… Before, it was like, how fast can I discover new things and build technology that can help the world learn something? Now there’s a significant piece of nuance to that. Even the best things for humanity could be used in some of the worst ways. It’s a fundamental rethinking of the order of operations when it comes to my research.

I don’t think it’s weird to think about safety first. We have OSHA and safety groups at institutions who write down lists of things you have to check off before you’re allowed to take out a ladder, for example. Why are we not doing the same thing in AI? A part of the answer is obvious: Not all of us are people who experience the negative effects of these algorithms. But as one of the few Black people at the institutions I work in, I’m aware of it, I’m worried about it, and the scientific community needs to appreciate that my safety matters too, and that my safety concerns don’t end when I walk out of work.

Gizmodo: Anything else?

BN: I’d like to re-emphasize that when you look at some of the research that has come out, like vetting candidates for graduate school, or when you look at the biases of the algorithms used in criminal justice, these are problems being repeated over and over again, with the same biases. It doesn’t take a lot of investigation to see that bias enters these algorithms very quickly. The people developing them should really know better. Maybe there needs to be more educational requirements for algorithm developers to think about these issues before they have the opportunity to unleash them on the world.

This conversation needs to be raised to the level where individuals and institutions consider these issues a priority. Once you’re there, you need people to see that this is an opportunity for leadership. If we can get a grassroots community to help an institution to take the lead on this, it incentivizes a lot of people to start to take action.

And finally, people who have expertise in these areas need to be allowed to speak their minds. We can’t allow our institutions to quiet us so we can’t talk about the issues we’re bringing up. The fact that I have experience as a Black man doing science in America, and the fact that I do AI—that should be appreciated by institutions. It gives them an opportunity to have a unique perspective and take a unique leadership position. I would be worried if individuals felt like they couldn’t speak their mind. If we can’t get these issues out into the sunlight, how will we be able to build out of the darkness?

Ryan F. Mandelbaum – Former Gizmodo physics writer and founder of Birdmodo, now a science communicator specializing in quantum computing and birds