In an interview with Folha, Mokdad says the trend in cases and deaths in the country is upward and that the situation could be even worse if the government and the population do not take the crisis seriously and adopt a two-week lockdown.
"Infections and deaths will grow and, most frighteningly, the health system will be completely overwhelmed." If it carries out a full 14-day lockdown, Mokdad explains, Brazil will manage to control the spread of the virus and will be able to reopen economic activities strategically, and perhaps even more quickly.
A public health specialist, he says he is criticized for a model that varies considerably, but, in the case of the pandemic, he prefers that his projections adjust over time. "If Brazilians stay home for two weeks, my numbers will go down. Not because I did something wrong, but because Brazilians did something right."
What is the state of the pandemic in Brazil? Unfortunately, what we see in Brazil is an upward trend in cases, which will result in growing deaths in the country. There are several reasons for this. First, the country did not lock down early to stop the spread of the virus. The government and the Brazilian population did not take it seriously and did not promptly do the right things to stop transmission.
Second, there is a great deal of inequality in Brazil, and Covid-19 amplifies it. Here, it is necessary to protect not only health workers but also essential service workers, poor people whose jobs force them to leave home. They are not protected, and they are dying. The third and most important concern is the overload of the health system. If the country does not act, there will be more cases in the winter and no time to prepare. It is dangerous and risky. Put all of this together, and Brazil still faces serious difficulties with Covid-19.
In two weeks, the IHME raised its projection of deaths in Brazil from 88,000 to more than 125,000 by August. What happened? We added more states [from 11 to 19] to our projection, that is one thing. But we are also seeing more outbreaks and cases in Brazil than we expected. The country is testing more and finding more cases, but even when we adjust for testing, the trend is upward.
In Brazil there is also a faulty assumption when we talk about mobility. The [population mobility] data are based on Facebook and Google, that is, on smartphones, that is, on wealthier people. We realized that movement did not stop in the favelas, for example, in places where poorer people need to go out to work. If people refuse to take this seriously, unfortunately we will see more cases and deaths.
What measures need to be taken? Close schools and universities, prevent large gatherings and meetings, and close non-essential businesses, churches, temples, and places of worship. In essential places, such as markets and pharmacies, rules must be established limiting the number of people inside and ensuring that they keep their distance from one another.
The last and most important thing is to ask those who need to leave home, and we know there are people who must, to wear a mask and keep 2 meters away from other people. For the health system, it is to increase treatment capacity and to detect the arrival of an outbreak early, with contact tracing and isolation of cases, which is a challenge in Brazil, where ten people often live in the same house.
If Brazil does not adopt these measures, what is the worst-case scenario for the country? Infections and deaths will grow and, the most frightening part, the health system will be completely overwhelmed. That will do more damage to the economy than a two-week lockdown would. If the population stays home and takes it seriously for two weeks, we will see the spread of the virus slow, and we can reopen in phases. The economic restart must be strategic, sector by sector.
Is it possible to avoid the peak of 1,500 daily deaths in July and the 125,000 deaths by August if the country stops now? Yes. Brazil is in a very difficult situation and may remain so for a long time, but there is still hope. If the government and the population stop for two weeks, we can halt the circulation of the virus and reopen commerce. If you look at American states, such as New York, after a lockdown, deaths and cases decline. The lockdown saved many lives in the US. We projected 125,000 deaths in Brazil by August 4, but that does not mean it will happen; we can stop it. Every Brazilian needs to do their part.
President Jair Bolsonaro opposes social distancing measures, compares Covid-19 to a "little flu," and champions a drug with unproven efficacy against the disease. How might this stance affect Brazil's situation? Here in the US we also have a political situation of that kind, unfortunately. I am not a politician; I look at the numbers and give advice based on what I conclude from them. The data say Brazil needs coordinated action; otherwise, we will suffer many losses.
But one thing must be clear: Covid-19 is not the flu. It causes more mortality than the flu; the flu does not cause strokes and does not attack the lungs the way Covid-19 does. There is no drug for Covid-19, period. There is no vaccine. Covid-19 and the flu cannot be compared. Doing so sends the wrong message. Telling the population that it is fine to go out and see who catches the disease is unacceptable; it is a failure of leadership.
How do you win the trust of governments and the public with projections that vary so much, and with so many people working on data about this topic? Many people are making projections, but, for the first time in the history of science, we all agree. The numbers may differ, but the most important message is the same: this is a lethal virus and we have to take it seriously. My numbers change because people change. If Brazilians stay home for two weeks, my numbers will go down. Not because I did something wrong, but because Brazilians did something right. We have learned that the model changes as new data arrive.
Have you been accused of being alarmist, or of producing fake news when your numbers change? "Accused" is too strong, but some people say my numbers are higher or lower than they should be, and I do not even respond to that, because it is not a scientific debate, it is a political one. In the scientific debate, everyone is on board with the same message.
What is it like to work with that in view, with numbers so sensitive and powerful? We are not sleeping much these days; it is a lot of work. It is very hard to say that 125,000 people will die in Brazil by August. That is not a number; those are families and friends. It is very hard.
BRASILIA (Reuters) – As Brazil’s daily COVID-19 death rate climbs to the highest in the world, a University of Washington study is warning that the country’s total death toll could climb five-fold to 125,000 by early August, adding to fears that Brazil has become a new hot spot in the pandemic.
The forecast from the University of Washington’s Institute for Health Metrics and Evaluation (IHME), released as Brazil’s daily death toll climbed past that of the United States on Monday, came with a call for lockdowns that Brazil’s president has resisted.
“Brazil must follow the lead of Wuhan, China, as well as Italy, Spain, and New York by enforcing mandates and measures to gain control of a fast-moving epidemic and reduce transmission of the coronavirus,” wrote IHME Director Dr. Christopher Murray.
Without such measures, the institute’s model shows, Brazil’s daily death toll could keep climbing until mid-July, driving shortages of critical hospital resources in Brazil, he said in a statement accompanying the findings.
On Monday, Brazil’s coronavirus deaths reported in the last 24 hours exceeded fatalities in the United States for the first time, according to the health ministry: Brazil registered 807 deaths, against 620 in the United States.
The U.S. government on Monday moved up enforcement of restrictions on travel from Brazil to the United States to midnight on Tuesday, as the South American country reported the world’s highest death toll for that day.
Washington’s ban applies to foreigners traveling to the United States if they had been in Brazil in the last two weeks. Two days earlier, Brazil overtook Russia as the world’s No. 2 coronavirus hot spot in number of confirmed cases, after the United States.
Murray said the IHME forecast captures the effects of social distancing mandates, mobility trends and testing capacity, so projections could shift along with policy changes.
The model will be updated regularly as new data is released on cases, hospitalizations, deaths, testing and mobility.
Reporting by Anthony Boadle; Editing by Brad Haynes and Steve Orlofsky
Summary: At the beginning of a new wave of an epidemic, extreme care should be used when extrapolating data to determine whether lockdowns are necessary, experts say.
As the infectious virus causing the COVID-19 disease began its devastating spread around the globe, an international team of scientists was alarmed by the lack of uniform approaches by various countries’ epidemiologists to respond to it.
Germany, for example, didn’t institute a full lockdown, unlike France and the U.K., and the decision in the U.S. by New York to go into a lockdown came only after the pandemic had reached an advanced stage. Data modeling to predict the numbers of likely infections varied widely by region, from very large to very small numbers, and revealed a high degree of uncertainty.
Davide Faranda, a scientist at the French National Centre for Scientific Research (CNRS), and colleagues in the U.K., Mexico, Denmark, and Japan decided to explore the origins of these uncertainties. This work is deeply personal to Faranda, whose grandfather died of COVID-19; Faranda has dedicated the work to him.
In the journal Chaos, from AIP Publishing, the group describes why modeling and extrapolating the evolution of COVID-19 outbreaks in near real time is an enormous scientific challenge that requires a deep understanding of the nonlinearities underlying the dynamics of epidemics.
“Our physical model is based on assuming that the total population can be divided into four groups: those who are susceptible to catching the virus, those who have contracted the virus but don’t show any symptoms, those who are infected and, finally, those who recovered or died from the virus,” Faranda said.
To determine how people move from one group to another, it’s necessary to know the infection rate, incubation time and recovery time. Actual infection data can be used to extrapolate the behavior of the epidemic with statistical models.
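As a rough illustration, the four-compartment description above can be written as a standard SEIR-type model and integrated numerically. The parameter values below are illustrative placeholders, not figures from the paper:

```python
# SEIR-type sketch of the four compartments described above:
# S (susceptible), E (contracted the virus, no symptoms yet),
# I (infected), R (recovered or died). Parameters are illustrative.

def seir(S0, E0, I0, R_init, beta, sigma, gamma, days, dt=0.1):
    """Integrate the model with a simple Euler scheme.

    beta  - infection rate (effective contacts per person per day)
    sigma - 1 / incubation time (days)
    gamma - 1 / recovery time (days)
    """
    N = S0 + E0 + I0 + R_init
    S, E, I, R = float(S0), float(E0), float(I0), float(R_init)
    for _ in range(int(days / dt)):
        new_inf = beta * S * I / N          # new infections per day
        dS = -new_inf
        dE = new_inf - sigma * E            # incubating cases
        dI = sigma * E - gamma * I          # symptomatic cases
        dR = gamma * I                      # recovered or dead
        S += dS * dt; E += dE * dt; I += dI * dt; R += dR * dt
    return S, E, I, R

# One infected person in a population of 1 million; beta/gamma = 2.5
S, E, I, R = seir(1_000_000 - 1, 0, 1, 0,
                  beta=0.5, sigma=1/5, gamma=1/5, days=300)
print(f"Share ever infected after 300 days: {(E + I + R) / 1_000_000:.0%}")
```

Fitting beta, sigma, and gamma to actual infection data is what allows the extrapolation the text describes, and it is exactly those fitted values that carry the uncertainty discussed below.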
“Because of the uncertainties in both the parameters involved in the models — infection rate, incubation period and recovery time — and the incompleteness of infection data within different countries, extrapolations could lead to an incredibly large range of uncertain results,” Faranda said. “For example, just assuming an underestimation of the last data point in the infection counts of 20% can lead to a change in total infection estimates from a few thousand to a few million individuals.”
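Faranda's point about undercounting can be sketched with a naive exponential extrapolation; the case numbers and time horizon below are invented for illustration and are not the paper's data:

```python
# Illustration (not the paper's actual calculation) of how a 20%
# undercount in the latest case figure compounds through a naive
# exponential extrapolation.
import math

def growth_rate(c0, c1, days):
    """Per-day exponential growth rate inferred from two counts."""
    return math.log(c1 / c0) / days

def extrapolate(c1, r, days):
    """Project the count forward assuming the same growth rate."""
    return c1 * math.exp(r * days)

c0 = 1_000                   # hypothetical cases 10 days ago
reported = 3_000             # hypothetical cases reported today
corrected = reported * 1.2   # same data, assuming a 20% undercount

for label, c1 in [("reported", reported), ("+20% corrected", corrected)]:
    r = growth_rate(c0, c1, 10)
    proj = extrapolate(c1, r, 60)
    print(f"{label:>15}: growth {r:.3f}/day -> {proj:,.0f} cases in 60 days")
```

A 20% shift in a single data point changes the inferred growth rate, and that small change compounds over the projection window, which is the ultrasensitivity the authors describe.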
The group has also shown that this uncertainty is due to a lack of data quality and also to the intrinsic nature of the dynamics, because it is ultrasensitive to the parameters — especially during the initial growing phase. This means that everyone should be very careful extrapolating key quantities to decide whether to implement lockdown measures when a new wave of the virus begins.
“The total final infection counts as well as the duration of the epidemic are sensitive to the data you put in,” he said.
The team’s model handles uncertainty in a natural way, so they plan to show how modeling of the post-confinement phase can be sensitive to the measures taken.
“Preliminary results show that implementing lockdown measures when infections are in a full exponential growth phase poses serious limitations for their success,” said Faranda.
Davide Faranda, Isaac Pérez Castillo, Oliver Hulme, Aglaé Jezequel, Jeroen S. W. Lamb, Yuzuru Sato, Erica L. Thompson. Asymptotic estimates of SARS-CoV-2 infection counts and their sensitivity to stochastic perturbation. Chaos: An Interdisciplinary Journal of Nonlinear Science, 2020; 30 (5): 051107 DOI: 10.1063/5.0008834
Covid-19 isn’t going away soon. Two recent studies mapped out the possible shapes of its trajectory.
By Siobhan Roberts – May 8, 2020
By now we know — contrary to false predictions — that the novel coronavirus will be with us for a rather long time.
“Exactly how long remains to be seen,” said Marc Lipsitch, an infectious disease epidemiologist at Harvard’s T.H. Chan School of Public Health. “It’s going to be a matter of managing it over months to a couple of years. It’s not a matter of getting past the peak, as some people seem to believe.”
A single round of social distancing — closing schools and workplaces, limiting the sizes of gatherings, lockdowns of varying intensities and durations — will not be sufficient in the long term.
In the interest of managing our expectations and governing ourselves accordingly, it might be helpful, for our pandemic state of mind, to envision this predicament — existentially, at least — as a soliton wave: a wave that just keeps rolling and rolling, carrying on under its own power for a great distance.
The Scottish engineer and naval architect John Scott Russell first spotted a soliton in 1834 as it traveled along the Union Canal. He followed on horseback and, as he wrote in his “Report on Waves,” overtook it rolling along at about eight miles an hour, at thirty feet long and a foot or so in height. “Its height gradually diminished, and after a chase of one or two miles I lost it in the windings of the channel.”
The pandemic wave, similarly, will be with us for the foreseeable future before it diminishes. But, depending on one’s geographic location and the policies in place, it will exhibit variegated dimensions and dynamics traveling through time and space.
“There is an analogy between weather forecasting and disease modeling,” Dr. Lipsitch said. Both, he noted, are simple mathematical descriptions of how a system works: drawing upon physics and chemistry in the case of meteorology; and on behavior, virology and epidemiology in the case of infectious-disease modeling. Of course, he said, “we can’t change the weather.” But we can change the course of the pandemic — with our behavior, by balancing and coordinating psychological, sociological, economic and political factors.
Dr. Lipsitch is a co-author of two recent analyses — one from the Center for Infectious Disease Research and Policy at the University of Minnesota, the other from the Chan School published in Science — that describe a variety of shapes the pandemic wave might take in the coming months.
The Minnesota study describes three possibilities:
Scenario No. 1 depicts an initial wave of cases — the current one — followed by a consistently bumpy ride of “peaks and valleys” that will gradually diminish over a year or two.
Scenario No. 2 supposes that the current wave will be followed by a larger “fall peak,” or perhaps a winter peak, with subsequent smaller waves thereafter, similar to what transpired during the 1918-1919 flu pandemic.
Scenario No. 3 shows an intense spring peak followed by a “slow burn” with less-pronounced ups and downs.
The authors conclude that whichever reality materializes (assuming ongoing mitigation measures, as we await a vaccine), “we must be prepared for at least another 18 to 24 months of significant Covid-19 activity, with hot spots popping up periodically in diverse geographic areas.”
In the Science paper, the Harvard team — infectious-disease epidemiologist Yonatan Grad, his postdoctoral fellow Stephen Kissler, Dr. Lipsitch, his doctoral student Christine Tedijanto and their colleague Edward Goldstein — took a closer look at various scenarios by simulating the transmission dynamics using the latest Covid-19 data and data from related viruses.
The authors conveyed the results in a series of graphs — composed by Dr. Kissler and Ms. Tedijanto — that project a similarly wavy future characterized by peaks and valleys.
One figure from the paper, reinterpreted below, depicts possible scenarios (the details would differ geographically) and shows the red trajectory of Covid-19 infections in response to “intermittent social distancing” regimes represented by the blue bands.
Social distancing is turned “on” when the number of Covid-19 cases reaches a certain prevalence in the population — for instance, 35 cases per 10,000, although the thresholds would be set locally, monitored with widespread testing. It is turned “off” when cases drop to a lower threshold, perhaps 5 cases per 10,000. Because critical cases that require hospitalization lag behind the general prevalence, this strategy aims to prevent the health care system from being overwhelmed.
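A minimal sketch of that on/off trigger, using a discrete SIR model: the 35 and 5 per 10,000 thresholds are from the article, while the transmission and recovery rates below are hypothetical.

```python
# Sketch of "intermittent social distancing": distancing switches on
# when prevalence passes 35 per 10,000 and off when it falls below
# 5 per 10,000. Transmission rates are illustrative assumptions.

def simulate(days=720, N=10_000, beta_open=0.25, beta_closed=0.05, gamma=0.08):
    S, I, R = N - 1.0, 1.0, 0.0
    distancing = False
    periods = 0                       # number of distancing periods triggered
    for _ in range(days):
        if not distancing and I >= 35:      # "on" threshold: 35 per 10,000
            distancing, periods = True, periods + 1
        elif distancing and I <= 5:         # "off" threshold: 5 per 10,000
            distancing = False
        beta = beta_closed if distancing else beta_open
        new_inf = beta * S * I / N
        rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - rec, R + rec
    return R / N, periods

immune_share, rounds = simulate()
print(f"{rounds} distancing periods; {immune_share:.0%} ever infected in 2 years")
```

The hysteresis between the two thresholds is what produces the alternating peaks and valleys in the paper's figure, and it also shows why immunity accumulates so slowly under this strategy.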
The green graph represents the corresponding, if very gradual, increase in population immunity.
“The ‘herd immunity threshold’ in the model is 55 percent of the population, or the level of immunity that would be needed for the disease to stop spreading in the population without other measures,” Dr. Kissler said.
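That 55 percent figure is consistent with the standard herd-immunity relation HIT = 1 - 1/R0; the basic reproduction number of about 2.2 used below is inferred from that relation, not stated in the article.

```python
# Standard herd-immunity threshold: transmission declines once the
# susceptible share of the population drops below 1/R0.

def herd_immunity_threshold(R0):
    return 1 - 1 / R0

# A 55% threshold corresponds to a basic reproduction number near 2.2:
print(f"{herd_immunity_threshold(2.2):.0%}")  # 55%
```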
Another iteration shows the effects of seasonality — a slower spread of the virus during warmer months. Theoretically, seasonal effects allow for larger intervals between periods of social distancing.
This year, however, the seasonal effects will likely be minimal, since a large proportion of the population will still be susceptible to the virus come summer. And there are other unknowns, since the underlying mechanisms of seasonality — such as temperature, humidity and school schedules — have been studied for some respiratory infections, like influenza, but not for coronaviruses. So, alas, we cannot depend on seasonality alone to stave off another outbreak over the coming summer months.
Yet another scenario takes into account not only seasonality but also a doubling of the critical-care capacity in hospitals. This, in turn, allows for social distancing to kick in at a higher threshold — say, at a prevalence of 70 cases per 10,000 — and for even longer breaks between social distancing periods:
What is clear overall is that a one-time social distancing effort will not be sufficient to control the epidemic in the long term, and that it will take a long time to reach herd immunity.
“This is because when we are successful in doing social distancing — so that we don’t overwhelm the health care system — fewer people get the infection, which is exactly the goal,” said Ms. Tedijanto. “But if infection leads to immunity, successful social distancing also means that more people remain susceptible to the disease. As a result, once we lift the social distancing measures, the virus will quite possibly spread again as easily as it did before the lockdowns.”
So, lacking a vaccine, our pandemic state of mind may persist well into 2021 or 2022 — which surprised even the experts.
“We anticipated a prolonged period of social distancing would be necessary, but didn’t initially realize that it could be this long,” Dr. Kissler said.
Claudio Maierovitch Pessanha Henriques – May 6, 2020
Since the beginning of the epidemic caused by the novel coronavirus (Covid-19), the big question has been "when does it end?" The media and social networks regularly circulate all sorts of projections of the famous disease curve in various countries and worldwide, some of them recent, suggesting that new cases will stop appearing early in the second half of this year.
Such models assume there is a story, a natural curve of the disease, that begins, rises, peaks, and starts to fall. Let us examine the logic of that reasoning. Many acute transmissible diseases, when they reach a new population, spread quickly, at a speed that depends on their so-called basic reproduction number, or R0 ("R zero," which estimates how many people each carrier of an infectious agent transmits it to).
When a large number of people have fallen ill, or been infected even without symptoms, contacts between carriers and people who have not had the disease start to become rare. In a scenario where survivors of the infection become immune to that agent, their share of the population grows and transmission becomes rarer and rarer. The curve that had been rising flattens and begins to fall, possibly even reaching zero, at which point the agent stops circulating.
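The turning point described above can be expressed with the effective reproduction number: R0 multiplied by the susceptible share of the population. The curve stops rising once that product falls below 1. The R0 value below is illustrative, not a measured figure.

```python
# Effective reproduction number: R0 scaled by the share of the
# population still susceptible. The epidemic grows while it exceeds 1.

def r_effective(R0, susceptible_share):
    return R0 * susceptible_share

R0 = 2.5  # illustrative value
for share in (1.0, 0.8, 0.6, 0.3):
    trend = "rising" if r_effective(R0, share) > 1 else "falling"
    print(f"susceptible {share:.0%}: R_eff = {r_effective(R0, share):.2f} ({trend})")
```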
In large populations, it is very rare for a disease to be completely eliminated this way, which is why incidence rises again from time to time. When the number of people who were never infected, plus newborns and non-immune people arriving from elsewhere, grows large enough, the curve climbs again.
That, in simplified form, is how science understands the periodic occurrence of epidemics of acute infectious diseases. History offers numerous examples: smallpox, measles, influenza, rubella, polio, and mumps, among many others. Depending on the characteristics of the disease and the society, these are cycles marked by suffering, sequelae, and death. In such cases it really is possible to estimate how long epidemics will last and, sometimes, even to predict the next ones.
Public health has several tools to intervene in many of these cases, suited to different transmission mechanisms: sanitation, hygiene measures, isolation, vector control, condom use, elimination of sources of contamination, vaccines, and treatments capable of eliminating the microorganisms. Vaccination, considered the most effective specific health measure, simulates what happens naturally, increasing the number of immune people in the population until the disease stops circulating, without anyone having to fall ill.
For Covid-19, estimates suggest that roughly 70% of the population would need to be infected for the disease to stop circulating intensely. This is called collective immunity (the unpleasant term "herd immunity" is also used). As for the current spread of the Sars-CoV-2 coronavirus, the World Health Organization (WHO) calculates that by mid-April only 2% to 3% of the world's population had been infected. Estimates for Brazil run slightly below that average.
In plain terms, for the disease to peak naturally in the country and begin to decline, we would have to wait for 140 million people to become infected. The most conservative (lowest) fatality rate found in the Covid-19 literature is 0.36%, roughly one-twentieth of what the official case and death counts suggest. That means that by the time Brazil reaches the peak, we will count 500,000 deaths if the health system stays within its limits, and a far larger number if it does not.
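The arithmetic above can be checked directly; the 200 million population figure below is an assumption consistent with the article's 140 million (70 percent), not a number stated in the text.

```python
# The article's back-of-the-envelope calculation, spelled out.
# Population is assumed (~200 million, consistent with 140 million
# being 70%); the 0.36% fatality rate is the article's figure.

population = 200_000_000
herd_share = 0.70       # share infected at the natural peak
ifr = 0.0036            # lowest infection-fatality rate cited

infected = population * herd_share
deaths = infected * ifr
print(f"{infected:,.0f} infected -> {deaths:,.0f} deaths")  # 140,000,000 infected -> 504,000 deaths
```

The 504,000 result matches the article's "500,000 deaths" rounding, and only holds if the health system is not overwhelmed.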
Reaching the peak is synonymous with catastrophe; it is not an acceptable bet, especially when hospital capacity is already exhausted in several cities, such as Manaus, Rio de Janeiro, and Fortaleza, with others heading the same way.
The only acceptable course is to avoid the peak, and the only way to do that is with strict physical distancing measures. The quota of contact between people should be reserved for essential activities, among them health care, security, the supply chains for fuel, food, cleaning products, and medical materials and equipment, plus cleaning, maintenance, and a few other sectors. Some creativity can widen that range a little, provided public transport and streets remain empty enough to maintain the minimum distance between people.
Monitoring of case and death counts, which reflects transmission with a lag of two to three weeks, should be improved and used together with studies based on laboratory testing to calibrate the strictness of isolation measures.
If we manage to avoid the greater tragedy, we will live with a long period of restricted activity, more than a year, and will have to learn to organize life and the economy in other ways, as well as going through some periods of lockdown, about two weeks each, whenever the curve points back toward the peak.
Today the situation is serious and tends to become critical. Brazil is the country with the highest transmission rate of the disease; it is time to stay home and, if going out is unavoidable, to make the mask an inseparable part of one's clothing and rigorously maintain every recommended precaution.
The world has never faced a hunger emergency like this, experts say. The pandemic could double the number of people facing acute hunger to 265 million by the end of this year.
Published April 22, 2020; Updated April 23, 2020, 6:39 a.m. ET
NAIROBI, Kenya — In the largest slum in Kenya’s capital, people desperate to eat set off a stampede during a recent giveaway of flour and cooking oil, leaving scores injured and two people dead.
In India, thousands of workers are lining up twice a day for bread and fried vegetables to keep hunger at bay.
And across Colombia, poor households are hanging red clothing and flags from their windows and balconies as a sign that they are hungry.
“We don’t have any money, and now we need to survive,” said Pauline Karushi, who lost her job at a jewelry business in Nairobi, and lives in two rooms with her child and four other relatives. “That means not eating much.”
The coronavirus pandemic has brought hunger to millions of people around the world. National lockdowns and social distancing measures are drying up work and incomes, and are likely to disrupt agricultural production and supply routes — leaving millions to worry how they will get enough to eat.
The coronavirus has sometimes been called an equalizer because it has sickened both rich and poor, but when it comes to food, the commonality ends. It is poor people, including large segments of poorer nations, who are now going hungry and facing the prospect of starving.
“The coronavirus has been anything but a great equalizer,” said Asha Jaffar, a volunteer who brought food to families in the Nairobi slum of Kibera after the fatal stampede. “It’s been the great revealer, pulling the curtain back on the class divide and exposing how deeply unequal this country is.”
Already, 135 million people had been facing acute food shortages, but now with the pandemic, 130 million more could go hungry in 2020, said Arif Husain, chief economist at the World Food Program, a United Nations agency. Altogether, an estimated 265 million people could be pushed to the brink of starvation by year’s end.
“We’ve never seen anything like this before,” Mr. Husain said. “It wasn’t a pretty picture to begin with, but this makes it truly unprecedented and uncharted territory.”
The world has experienced severe hunger crises before, but those were regional and caused by one factor or another — extreme weather, economic downturns, wars or political instability.
This hunger crisis, experts say, is global and caused by a multitude of factors linked to the coronavirus pandemic and the ensuing interruption of the economic order: the sudden loss in income for countless millions who were already living hand-to-mouth; the collapse in oil prices; widespread shortages of hard currency from tourism drying up; overseas workers not having earnings to send home; and ongoing problems like climate change, violence, population dislocations and humanitarian disasters.
Already, from Honduras to South Africa to India, protests and looting have broken out amid frustrations from lockdowns and worries about hunger. With classes shut down, over 368 million children have lost the nutritious meals and snacks they normally receive in school.
There is no shortage of food globally, or mass starvation from the pandemic — yet. But logistical problems in planting, harvesting and transporting food will leave poor countries exposed in the coming months, especially those reliant on imports, said Johan Swinnen, director general of the International Food Policy Research Institute in Washington.
While the system of food distribution and retailing in rich nations is organized and automated, he said, systems in developing countries are “labor intensive,” making “these supply chains much more vulnerable to Covid-19 and social distancing regulations.”
Yet even if there is no major surge in food prices, the food security situation for poor people is likely to deteriorate significantly worldwide. This is especially true for economies like Sudan and Zimbabwe that were struggling before the outbreak, or those like Iran that have increasingly used oil revenues to finance critical goods like food and medicine.
In the sprawling Petare slum on the outskirts of the capital, Caracas, a nationwide lockdown has left Freddy Bastardo and five others in his household without jobs. Their government-supplied rations, which had arrived only once every two months before the crisis, have long run out.
“We are already thinking of selling things that we don’t use in the house to be able to eat,” said Mr. Bastardo, 25, a security guard. “I have neighbors who don’t have food, and I’m worried that if protests start, we wouldn’t be able to get out of here.”
In India, as wages have dried up, an estimated half a million people have left cities to walk home, setting off the nation’s “largest mass migration since independence,” said Amitabh Behar, the chief executive of Oxfam India.
On a recent evening, hundreds of migrant workers, who have been stuck in New Delhi after a lockdown was imposed in March with little warning, sat under the shade of a bridge waiting for food to arrive. The Delhi government has set up soup kitchens, yet workers like Nihal Singh go hungry as the throngs at these centers have increased in recent days.
“Instead of coronavirus, the hunger will kill us,” said Mr. Singh, who was hoping to eat his first meal in a day. Migrants waiting in food lines have fought each other over a plate of rice and lentils. Mr. Singh said he was ashamed to beg for food but had no other option.
“The lockdown has trampled on our dignity,” he said.
Refugees and people living in conflict zones are likely to be hit the hardest.
The curfews and restrictions on movement are already devastating the meager incomes of displaced people in Uganda and Ethiopia, the delivery of seeds and farming tools in South Sudan and the distribution of food aid in the Central African Republic. Containment measures in Niger, which hosts almost 60,000 refugees fleeing conflict in Mali, have led to surges in food prices, according to the International Rescue Committee.
The effects of the restrictions “may cause more suffering than the disease itself,” said Kurt Tjossem, regional vice president for East Africa at the International Rescue Committee.
Ahmad Bayoush, a construction worker who had been displaced to Idlib Province in northern Syria, said he and many others had signed up to receive food from aid groups, but that it had yet to arrive.
“I am expecting real hunger if it continues like this in the north,” he said.
The pandemic is also slowing efforts to deal with the historic locust plague that has been ravaging the East and Horn of Africa. The outbreak is the worst the region has seen in decades and comes on the heels of a year marked by extreme droughts and floods. But the arrival of billions of new swarms could further deepen food insecurity, said Cyril Ferrand, head of the Food and Agriculture Organization’s resilience team in eastern Africa.
Travel bans and airport closures, Mr. Ferrand said, are interrupting the supply of pesticides that could help limit the locust population and save pastureland and crops.
As many go hungry, there is concern in a number of countries that food shortages will lead to social discord. In Colombia, residents of the coastal state of La Guajira have begun blocking roads to call attention to their need for food. In South Africa, rioters have broken into neighborhood food kiosks and faced off with the police.
And even charitable food giveaways can expose people to the virus when throngs appear, as happened in Nairobi’s shantytown of Kibera earlier this month.
“People called each other and came rushing,” said Valentine Akinyi, who works at the district government office where the food was distributed. “People have lost jobs. It showed you how hungry they are.”
Yet communities across the world are also taking matters into their own hands. Some are raising money through crowdfunding platforms, while others have begun programs to buy meals for needy families.
On a recent afternoon, Ms. Jaffar and a group of volunteers made their way through Kibera, bringing items like sugar, flour, rice and sanitary pads to dozens of families. A native of the area herself, Ms. Jaffar said she started the food drive after hearing so many stories from families who said they and their children were going to sleep hungry.
The food drive has so far reached 500 families. But with all the calls for assistance she’s getting, she said, “that’s a drop in the ocean.”
Reporting was contributed by Anatoly Kurmanaev and Isayen Herrera from Caracas, Venezuela; Paulina Villegas from Mexico City; Julie Turkewitz from Bogotá, Colombia; Ben Hubbard and Hwaida Saad from Beirut, Lebanon; Sameer Yasir from New Delhi; and Hannah Beech from Bangkok.
Nassim Nicholas Taleb is “irritated,” he told Bloomberg Television on March 31st, whenever the coronavirus pandemic is referred to as a “black swan,” the term he coined for an unpredictable, rare, catastrophic event, in his best-selling 2007 book of that title. “The Black Swan” was meant to explain why, in a networked world, we need to change business practices and social norms—not, as he recently told me, to provide “a cliché for any bad thing that surprises us.” Besides, the pandemic was wholly predictable—he, like Bill Gates, Laurie Garrett, and others, had predicted it—a white swan if ever there was one. “We issued our warning that, effectively, you should kill it in the egg,” Taleb told Bloomberg. Governments “did not want to spend pennies in January; now they are going to spend trillions.”
The warning that he referred to appeared in a January 26th paper that he co-authored with Joseph Norman and Yaneer Bar-Yam, when the virus was still mainly confined to China. The paper cautions that, owing to “increased connectivity,” the spread will be “nonlinear”—two key contributors to Taleb’s anxiety. For statisticians, “nonlinearity” describes events very much like a pandemic: an output disproportionate to known inputs (the structure and growth of pathogens, say), owing to both unknown and unknowable inputs (their incubation periods in humans, or random mutations), or eccentric interaction among various inputs (wet markets and airplane travel), or exponential growth (from networked human contact), or all three.
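The disproportion that statisticians mean by "nonlinearity" can be seen in a toy calculation. The sketch below is an illustration of exponential growth under networked contact, not the model in the Norman, Bar-Yam and Taleb paper; the numbers are invented.

```python
# Toy illustration of nonlinear epidemic growth: a modest change in the
# average number of onward transmissions per case (R) produces a vastly
# disproportionate change in cumulative cases after a few generations.
# Hypothetical numbers, not the paper's model.

def cumulative_cases(r, generations, seed=1):
    """Total cases after `generations` rounds of spread at reproduction number r."""
    total, current = seed, seed
    for _ in range(generations):
        current *= r
        total += current
    return total

low = cumulative_cases(r=1.5, generations=10)
high = cumulative_cases(r=3.0, generations=10)
print(f"R=1.5 -> {low:,.0f} cases; R=3.0 -> {high:,.0f} cases")
```

Doubling the input roughly five-hundred-folds the output here: the response is disproportionate to the change in the known input, which is the shape of problem the paper warns about.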
“These are ruin problems,” the paper states, exposure to which “leads to a certain eventual extinction.” The authors call for “drastically pruning contact networks,” and other measures that we now associate with sheltering in place and social distancing. “Decision-makers must act swiftly,” the authors conclude, “and avoid the fallacy that to have an appropriate respect for uncertainty in the face of possible irreversible catastrophe amounts to ‘paranoia.’ ” (“Had we used masks then”—in late January—“we could have saved ourselves the stimulus,” Taleb told me.)
Yet, for anyone who knows his work, Taleb’s irritation may seem a little forced. His profession, he says, is “probability.” But his vocation is showing how the unpredictable is increasingly probable. If he was right about the spread of this pandemic, it’s because he has been so alert to the dangers of connectivity and nonlinearity more generally, to pandemics and other chance calamities for which COVID-19 is a storm signal. “I keep getting asked for a list of the next four black swans,” Taleb told me, and that misses his point entirely. In a way, focussing on his January warning distracts us from his main aim, which is building political structures so that societies will be better able to cope with mounting, random events.
Indeed, if Taleb is chronically irritated, it is by those economists, officials, journalists, and executives—the “naïve empiricists”—who think that our tomorrows are likely to be pretty much like our yesterdays. He explained in a conversation that these are the people who, consulting bell curves, focus on their bulging centers, and disregard potentially fatal “fat tails”—events that seem “statistically remote” but “contribute most to outcomes,” by precipitating chain reactions, say. (Last week, Dr. Phil told Fox’s Laura Ingraham that we should open up the country again, noting, wrongly, that “three hundred and sixty thousand people die each year from swimming pools—but we don’t shut the country down for that.” In response, Taleb tweeted, “Drowning in swimming pools is extremely contagious and multiplicative.”) Naïve empiricists plant us, he argued in “The Black Swan,” in “Mediocristan.” We actually live in “Extremistan.”
Taleb, who is sixty-one, came by this impatience honestly. As a young man, he lived through Lebanon’s civil war, which was precipitated by Palestinian militias escaping a Jordanian crackdown, in 1971, and led to bloody clashes between Maronite Christians and Sunni Muslims, drawing in Shiites, Druze, and the Syrians as well. The conflict lasted fifteen years and left some ninety thousand people dead. “These events were unexplainable, but intelligent people thought they were capable of providing convincing explanations for them—after the fact,” Taleb writes in “The Black Swan.” “The more intelligent the person, the better sounding the explanation.” But how could anyone have anticipated “that people who seemed a model of tolerance could become the purest of barbarians overnight?” Given the prior cruelties of the twentieth century, the question may sound ingenuous, but Taleb experienced sudden violence firsthand. He grew fascinated, and outraged, by extrapolations from an illusory normal—the evil of banality. “I later saw the exact same illusion of understanding in business success and the financial markets,” he writes.
“Later” began in 1983, when, after university in Paris, and a Wharton M.B.A., Taleb became an options trader—“my core identity,” he says. Over the next twelve years, he conducted two hundred thousand trades, and examined seventy thousand risk-management reports. Along the way, he developed an investment strategy that entailed exposure to regular, small losses, while positioning him to benefit from irregular, massive gains—something like a venture capitalist. He explored, especially, scenarios for derivatives: asset bundles where fat tails—price volatilities, say—can either enrich or impoverish traders, and do so exponentially as the scale of the price movement increases.
These were the years, moreover, when, following Japan, large U.S. manufacturing companies were converting to “just-in-time” production, which involved integrating and synchronizing supply-chains, and forgoing stockpiles of necessary components in favor of acquiring them on an as-needed basis, often relying on single, authorized suppliers. The idea was that lowering inventory would reduce costs. But Taleb, extrapolating from trading risks, believed that “managing without buffers was irresponsible,” because “fat-tail events” can never be completely avoided. As the Harvard Business Review reported this month, Chinese suppliers shut down by the pandemic have stymied the production capabilities of a majority of the companies that depend on them.
The coming of global information networks deepened Taleb’s concern. He reserved a special impatience for economists who saw these networks as stabilizing—who thought that the average thought or action, derived from an ever-widening group, would produce an increasingly tolerable standard—and who believed that crowds had wisdom, and bigger crowds more wisdom. Thus networked, institutional buyers and sellers were supposed to produce more rational markets, a supposition that seemed to justify the deregulation of derivatives, in 2000, which helped accelerate the crash of 2008.
As Taleb told me, “The great danger has always been too much connectivity.” Proliferating global networks, both physical and virtual, inevitably incorporate more fat-tail risks into a more interdependent and “fragile” system: not only risks such as pathogens but also computer viruses, or the hacking of information networks, or reckless budgetary management by financial institutions or state governments, or spectacular acts of terror. Any negative event along these lines can create a rolling, widening collapse—a true black swan—in the same way that the failure of a single transformer can collapse an electricity grid.
COVID-19 has initiated ordinary citizens into the esoteric “mayhem” that Taleb’s writings portend. Who knows what will change for countries when the pandemic ends? What we do know, Taleb says, is what cannot remain the same. He is “too much a cosmopolitan” to want global networks undone, even if they could be. But he does want the institutional equivalent of “circuit breakers, fail-safe protocols, and backup systems,” many of which he summarizes in his fourth, and favorite, book, “Antifragile,” published in 2012. For countries, he envisions political and economic principles that amount to an analogue of his investment strategy: government officials and corporate executives accepting what may seem like too-small gains from their investment dollars, while protecting themselves from catastrophic loss.
Anyone who has read the Federalist Papers can see what he’s getting at. The “separation of powers” is hardly the most efficient form of government; getting something done entails a complex, time-consuming process of building consensus among distributed centers of authority. But James Madison understood that tyranny—however distant it was from the minds of likely Presidents in his own generation—is so calamitous to a republic, and so incipient in the human condition, that it must be structurally mitigated. For Taleb, an antifragile country would encourage the distribution of power among smaller, more local, experimental, and self-sufficient entities—in short, build a system that could survive random stresses, rather than break under any particular one. (His word for this beneficial distribution is “fractal.”)
We should discourage the concentration of power in big corporations, “including a severe restriction of lobbying,” Taleb told me. “When one per cent of the people have fifty per cent of the income, that is a fat tail.” Companies shouldn’t be able to make money from monopoly power, “from rent-seeking”—using that power not to build something but to extract an ever-larger part of the surplus. There should be an expansion of the powers of state and even county governments, where there is “bottom-up” control and accountability. This could incubate new businesses and foster new education methods that emphasize “action learning and apprenticeship” over purely academic certification. He thinks that “we should have a national Entrepreneurship Day.”
But Taleb doesn’t believe that the government should abandon citizens buffeted by events they can’t possibly anticipate or control. (He dedicated his book “Skin in the Game,” published in 2018, to Ron Paul and Ralph Nader.) “The state,” he told me, “should not smooth out your life, like a Lebanese mother, but should be there for intervention in negative times, like a rich Lebanese uncle.” Right now, for example, the government should, indeed, be sending out checks to unemployed and gig workers. (“You don’t bail out companies, you bail out individuals.”) He would also consider a guaranteed basic income, much as Andrew Yang, whom he admires, has advocated. Crucially, the government should be an insurer of health care, though Taleb prefers not a centrally run Medicare-for-all system but one such as Canada’s, which is controlled by the provinces. And, like responsible supply-chain managers, the federal government should create buffers against public-health disasters: “If it can spend trillions stockpiling nuclear weapons, it ought to spend tens of billions stockpiling ventilators and testing kits.”
At the same time, Taleb adamantly opposes the state taking on staggering debt. He thinks, rather, that the rich should be taxed as disproportionately as necessary, “though as locally as possible.” The key is “to build on the good days,” when the economy is growing, and reduce the debt, which he calls “intergenerational dispossession.” The government should then encourage an eclectic array of management norms: drawing up political borders, even down to the level of towns, which can, in an epidemiological emergency, be closed; having banks and corporations hold larger cash reserves, so that they can be more independent of market volatility; and making sure that manufacturing, transportation, information, and health-care systems have redundant storage and processing components. (“That’s why nature gave us two kidneys.”) Taleb is especially keen to inhibit “moral hazard,” such as that of bankers who get rich by betting, and losing, other people’s money. “In the Hammurabi Code, if a house falls in and kills you, the architect is put to death,” he told me. Correspondingly, any company or bank that gets a bailout should expect its executives to be fired, and its shareholders diluted. “If the state helps you, then taxpayers own you.”
Some of Taleb’s principles seem little more than thought experiments, or fit uneasily with others. How does one tax more locally, or close a town border? If taxpayers own corporate equities, does this mean that companies might be nationalized, broken up, or severely regulated? But asking Taleb to describe antifragility to its end is a little like asking Thomas Hobbes to nail down sovereignty. The more important challenge is to grasp the peril for which political solutions must be designed or improvised; society cannot endure with complacent conceptions of how things work. “It would seem most efficient to drive home at two hundred miles an hour,” he put it to me. “But odds are you’d never get there.”
WUHAN, China (Reuters) – Dressed in a hazmat suit, two masks and a face shield, Du Mingjun knocked on the mahogany door of a flat in a suburban district of Wuhan on a recent morning.
FILE PHOTO: Medical personnel in protective suits wave hands to a patient who is discharged from the Leishenshan Hospital after recovering from the novel coronavirus, in Wuhan, the epicentre of the novel coronavirus outbreak, in Hubei province, China March 1, 2020. China Daily via REUTERS
A man wearing a single mask opened the door a crack and, after Du introduced herself as a psychological counsellor, burst into tears.
“I really can’t take it anymore,” he said. Diagnosed with the novel coronavirus in early February, the man, who appeared to be in his 50s, had been treated at two hospitals before being transferred to a quarantine centre set up in a cluster of apartment blocks in an industrial part of Wuhan.
Why, he asked, did tests say he still had the virus more than two months after he first contracted it?
The answer to that question is a mystery baffling doctors on the frontline of China’s battle against COVID-19, even as the country has successfully slowed the spread of the coronavirus.
Chinese doctors in Wuhan, where the virus first emerged in December, say a growing number of cases in which people recover from the virus, but continue to test positive without showing symptoms, is one of their biggest challenges as the country moves into a new phase of its containment battle.
Those patients all tested negative for the virus at some point after recovering, but then tested positive again, some up to 70 days later, the doctors said. Many retested positive 50 to 60 days after their initial diagnosis.
The prospect of people remaining positive for the virus, and therefore potentially infectious, is of international concern, as many countries seek to end lockdowns and resume economic activity as the spread of the virus slows. Currently, the globally recommended isolation period after exposure is 14 days.
So far, there have been no confirmations of newly positive patients infecting others, according to Chinese health officials.
China has not published precise figures for how many patients fall into this category. But disclosures by Chinese hospitals to Reuters, as well as in other media reports, indicate there are at least dozens of such cases.
In South Korea, about 1,000 people have been testing positive for four weeks or more. In Italy, the first European country ravaged by the pandemic, health officials noticed that coronavirus patients could test positive for the virus for about a month.
As there is limited knowledge available on how infectious these patients are, doctors in Wuhan are keeping them isolated for longer.
Zhang Dingyu, president of Jinyintan Hospital, where the most serious coronavirus cases were treated, said health officials recognised the isolations may be excessive, especially if patients proved not to be infectious. But, for now, it was better to do so to protect the public, he said.
He described the issue as one of the most pressing facing the hospital and said counsellors like Du are being brought in to help ease the emotional strain.
“When patients have this pressure, it also weighs on society,” he said.
DOZENS OF CASES
The plight of Wuhan’s long-term patients underlines how much remains unknown about COVID-19 and why it appears to affect different people in numerous ways, Chinese doctors say. So far global infections have hit 2.5 million with over 171,000 deaths.
As of April 21, 93% of 82,788 people with the virus in China had recovered and been discharged, official figures show.
Yuan Yufeng, a vice president at Zhongnan Hospital in Wuhan, told Reuters he was aware of a case in which the patient had positive retests after first being diagnosed with the virus about 70 days earlier.
“We did not see anything like this during SARS,” he said, referring to the 2003 Severe Acute Respiratory Syndrome outbreak that infected 8,098 people globally, mostly in China.
Patients in China are discharged after two negative nucleic acid tests, taken at least 24 hours apart, and if they no longer show symptoms. Some doctors want this requirement to be raised to three tests or more.
China’s National Health Commission directed Reuters to comments made at a briefing Tuesday when asked for comment about how this category of patients was being handled.
Wang Guiqiang, director of the infectious disease department of Peking University First Hospital, said at the briefing that the majority of such patients were not showing symptoms and very few had seen their conditions worsen.
“The new coronavirus is a new type of virus,” said Guo Yanhong, a National Health Commission official. “For this disease, the unknowns are still greater than the knowns.”
REMNANTS AND REACTIVATION
Experts and doctors struggle to explain why the virus behaves so differently in these people.
Some suggest that patients retesting as positive after previously testing negative were somehow reinfected with the virus. This would undermine hopes that people catching COVID-19 would produce antibodies that would prevent them from getting sick again from the virus.
Zhao Yan, a doctor of emergency medicine at Wuhan’s Zhongnan Hospital, said he was sceptical about the possibility of reinfection based on cases at his facility, although he did not have hard evidence.
“They’re closely monitored in the hospital and are aware of the risks, so they stay in quarantine. So I’m sure they were not reinfected.”
Jeong Eun-kyeong, director of the Korea Centers for Disease Control and Prevention, has said the virus may have been “reactivated” in 91 South Korean patients who tested positive after having been thought to be cleared of it.
Other South Korean and Chinese experts have said that remnants of the virus could have stayed in patients’ systems but not be infectious or dangerous to the host or others.
Few details have been disclosed about these patients, such as if they have underlying health conditions.
Paul Hunter, a professor at the University of East Anglia’s Norwich School of Medicine, said an unusually slow shedding of other viruses such as norovirus or influenza had been previously seen in patients with weakened immune systems.
In 2015, South Korean authorities disclosed that they had a Middle East Respiratory Syndrome patient stricken with lymphoma who showed signs of the virus for 116 days. They said his impaired immune system kept his body from ridding itself of the virus. The lymphoma eventually caused his death.
FILE PHOTO: A volunteer walks inside a convention center that was used as a makeshift hospital to treat patients with the coronavirus disease (COVID-19), in Wuhan, Hubei province, China April 9, 2020. REUTERS/Aly Song
Yuan said that even if patients develop antibodies, it did not guarantee they would become virus-free.
He said that some patients had high levels of antibodies, and still tested positive to nucleic acid tests.
“It means that the two sides are still fighting,” he said.
As could be seen in Wuhan, the virus can also inflict a heavy mental toll on those caught in a seemingly endless cycle of positive tests.
Du, who set up a therapy hotline when Wuhan’s outbreak first began, allowed Reuters in early April to join her on a visit to the suburban quarantine centre on the condition that none of the patients be identified.
One man rattled off the names of three Wuhan hospitals he had stayed at before being moved to a flat in the centre. He had taken over 10 tests since the third week of February, he said, on occasions testing negative but mostly positive.
“I feel fine and have no symptoms, but they check and it’s positive, check and it’s positive,” he said. “What is with this virus?”
Patients need to stay at the centre for at least 28 days and obtain two negative results before being allowed to leave. Patients are isolated in individual rooms they said were paid for by the government.
The most concerning case facing Du during the visit was the man behind the mahogany door; he had told medical workers the night before that he wanted to kill himself.
“I wasn’t thinking clearly,” he told Du, explaining how he had already taken numerous CT scans and nucleic acid tests, some of which tested negative, at different hospitals. He worried that he had been reinfected as he cycled through various hospitals.
His grandson missed him, he said, after he had been away for so long, and he worried that his condition meant he would never be able to see the boy again.
He broke into another round of sobs. “Why is this happening to me?”
Reporting by Brenda Goh; Additional reporting by Jack Kim in Seoul, Elvira Pollina in Milan, Belen Carreno in Madrid, and Shanghai newsroom; Editing by Philip McClellan
As parts of the United States settle in for what may be the worst weeks of their local covid-19 outbreaks, a familiar refrain is sure to emerge.
Some people will complain that the death count attributed to the coronavirus is being exaggerated. Others, including researchers, have argued that covid-19 related deaths are actually being undercounted, as people die at home without being tested. Still others will point to the final death count and say that because it’s lower than X (whether that number be flu deaths, car accident deaths, or some other moving goalpost), the efforts and sacrifices made for social distancing weren’t worth it—ignoring, of course, that social distancing was the reason the toll wasn’t much higher. Figuring out how deadly covid-19 truly is will take far more time to untangle than anyone would want, and no one’s likely to be fully satisfied with the answers we get.
As of April 10, there have been around 1.6 million reported cases worldwide of covid-19, the disease caused by the novel coronavirus. There have also been over 96,000 reported deaths, with over 16,000 documented in the U.S. But these numbers are largely acknowledged as a very rough, possibly even misleading estimate of the problem, given the wide gaps in testing capacity across different countries and even within a country.
On the political right, many have taken to fostering conspiracy theories about these deaths. You don’t have to go far on social media to see people accusing doctors and health officials of fudging the numbers higher to make President Trump look bad or to (somehow) profit off the tragedy. Other conservative voices like the disgraced sex pest Bill O’Reilly are less paranoid but similarly dismissive, arguing that many of those who died “were on their last legs anyway.”
It’s true that older people and those with underlying health conditions are at greater risk of serious complications and death from covid-19. But the same can be said for almost every other leading cause of death, whether it’s cancer, heart attack, or diabetes. And just as living is hardly a simple affair, so too is dying. Sometimes you can point to a single factor that kills a person, but often it’s a mix of ailments, with a viral infection like covid-19 being the final shove.
The key point here is that epidemiologists and others who try to estimate how many people die from any given cause per year know the above very well. The flu, for instance, doesn’t usually kill in isolation either—it too disproportionately kills the elderly and otherwise already sick. Yet many of the same people who are now trying to downplay covid-19 deaths also argued that its early death toll wasn’t coming anywhere close to the typical seasonal flu’s annual tally (an argument meant to push back against the idea of doing anything too serious to mitigate the spread of the coronavirus).
That said, we’re much better at estimating how many deaths in the U.S. are flu-related because the influenza virus is a known entity. We have a decent sense of how many people are infected with the flu every year, how many people go to the doctor or are hospitalized, and how many people it helps kill, thanks to a well-established nationwide surveillance system. But that isn’t true for covid-19.
There’s steady evidence indicating that covid-19 cases nearly everywhere in the world are being undercounted. That’s partly because testing remains so haphazard and has inherent limitations. The most common type of covid-19 test right now, for instance, can only confirm an active infection, not whether you had a previous case (newer antibody tests can address that problem but have their own flaws). It’s also because the virus infects a still-unknown percentage of people without making them feel sick at all.
Many more people have had or will catch the coronavirus than any current tracking will ever indicate. These hidden cases are almost certainly less deadly on average than the known cases that wind up in hospitals, so it’s likely that the current documented fatality rate of covid-19 (over 5 percent worldwide) is an overestimate. But that doesn’t mean more people aren’t dying from covid-19 than are being reported.
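The arithmetic behind that overestimate is simple: undetected infections inflate the numerator-to-denominator ratio. The figures below reuse the worldwide counts cited earlier; the one-in-five detection rate is a purely hypothetical assumption for illustration.

```python
# Why the documented fatality rate overstates the true one when many
# infections are never confirmed. The detection rate is hypothetical.

def fatality_rate(deaths, cases):
    return deaths / cases

confirmed_cases = 1_600_000   # worldwide reported cases, as cited above
deaths = 96_000               # worldwide reported deaths, as cited above

documented = fatality_rate(deaths, confirmed_cases)

# Suppose, hypothetically, only one in five infections is ever confirmed
# by a test; the true denominator is then five times larger.
true_infections = confirmed_cases * 5
adjusted = fatality_rate(deaths, true_infections)

print(f"documented: {documented:.1%}, adjusted: {adjusted:.1%}")
# -> documented: 6.0%, adjusted: 1.2%
```

The deaths stay the same; only the denominator changes. That is why antibody surveys, which estimate how many people were ever infected, matter so much for pinning down the true fatality rate.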
In areas of China and Italy hit hard by the coronavirus, news reports have suggested a wide gulf between the official number of covid-19-related deaths in a town and what residents are seeing for themselves. In the U.S., there are still regions where testing is limited and people who may have died from covid-19 in their homes are never tested, including New York City. And there’s the simple harsh reality that we’re probably still in the very beginning of this pandemic.
Even if outbreaks start to peter out in the U.S. and elsewhere, there’s the risk that loosening our restrictions on distancing will fuel new ones. And even if the summer heat in the U.S. makes it harder for the virus to spread here, as some experts hope, a second wave in the fall and winter could certainly happen, much as it did for the last pandemic (a strain of flu) in 2009.
All of these variables will affect the final death toll from covid-19, as will how countries continue to respond to the crisis. Ironically, the steps we take to prevent new cases and deaths may be the very thing that makes people doubt they were necessary.
In late March, the White House and U.S. public health officials announced that they projected 100,000 to 200,000 deaths in the country by the pandemic’s end, provided everything was done to slow its spread. On Thursday, Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, said that newer modeling data has suggested the U.S. death toll may end up closer to 60,000, so long as we keep mitigating the outbreak. Almost immediately, some people chose to take it as evidence that mitigation efforts aren’t necessary and that the initial warnings about the virus were overblown—ignoring, again, that the reason for the downward revision in projected deaths is the success of social distancing.
There are still a lot of things we don’t know about the coronavirus, and many of the things we think we know are going to keep changing. But here’s something to remember.
By the end of the 2009 H1N1 flu pandemic, the World Health Organization reported that about 19,000 people were confirmed to have died from the virus. By 2013, several studies estimated that the true death toll was at least 10 times higher and even higher still when you took into account other causes of death indirectly worsened by the flu, like heart attacks. Knowing how deadly covid-19 will be could very well take that long to nail down too.
Geneva, 1 April 2020 – The World Meteorological Organization (WMO) is concerned about the impact of the COVID-19 pandemic on the quantity and quality of weather observations and forecasts, as well as atmospheric and climate monitoring.
WMO’s Global Observing System serves as a backbone for all weather and climate services and products provided by the 193 WMO Member states and territories to their citizens. It provides observations on the state of the atmosphere and ocean surface from land-, marine- and space-based instruments. This data is used for the preparation of weather analyses, forecasts, advisories and warnings.
“National Meteorological and Hydrological Services continue to perform their essential 24/7 functions despite the severe challenges posed by the Coronavirus pandemic,” said WMO Secretary-General Petteri Taalas. “We salute their dedication to protecting lives and property but we are mindful of the increasing constraints on capacity and resources,” he said.
“The impacts of climate change and the growing number of weather-related disasters continue. The COVID-19 pandemic poses an additional challenge, and may exacerbate multi-hazard risks at a single country level. Therefore it is essential that governments pay attention to their national early warning and weather observing capacities despite the COVID-19 crisis,” said Mr Taalas.
Large parts of the observing system, for instance its satellite components and many ground-based observing networks, are either partly or fully automated. They are therefore expected to continue functioning without significant degradation for several weeks, in some cases even longer. But if the pandemic lasts more than a few weeks, deferred repair, maintenance and supply work, and the inability to redeploy instruments, will become an increasing concern.
Some parts of the observing system are already affected. Most notably the significant decrease in air traffic has had a clear impact. In-flight measurements of ambient temperature and wind speed and direction are a very important source of information for both weather prediction and climate monitoring.
Meteorological data from aircraft
Commercial airliners contribute to the Aircraft Meteorological Data Relay programme (AMDAR), which uses onboard sensors, computers and communications systems to collect, process, format and transmit meteorological observations to ground stations via satellite or radio links.
In some parts of the world, in particular over Europe, the decrease in the number of measurements over the last couple of weeks has been dramatic (see chart below provided by EUMETNET). The countries affiliated with EUMETNET, a collaboration between the 31 national weather services in Europe, are currently discussing ways to boost the short-term capabilities of other parts of their observing networks in order to partly mitigate this loss of aircraft observations.
The AMDAR observing system has traditionally produced over 700 000 high-quality observations per day of air temperature and wind speed and direction, together with the required positional and temporal information, and with an increasing number of humidity and turbulence measurements being made.
In most developed countries, surface-based weather observations are now almost fully automated.
However, in many developing countries, the transition to automated observations is still in progress, and the meteorological community still relies on observations taken manually by weather observers and transmitted into the international networks for use in global weather and climate models.
WMO has seen a significant decrease in the availability of this type of manual observations over the last two weeks. Some of this may well be attributable to the current coronavirus situation, but it is not yet clear whether other factors may play a role as well. WMO is currently investigating this.
“At the present time, the adverse impact of the loss of observations on the quality of weather forecast products is still expected to be relatively modest. However, as the decrease in availability of aircraft weather observations continues and expands, we may expect a gradual decrease in reliability of the forecasts,” said Lars Peter Riishojgaard, Director, Earth System Branch in WMO’s Infrastructure Department.
“The same is true if the decrease in surface-based weather observations continues, in particular if the COVID-19 outbreak starts to more widely impact the ability of observers to do their job in large parts of the developing world. WMO will continue to monitor the situation, and the organization is working with its Members to mitigate the impact as much as possible,” he said.
(Map provided by WMO; countries shown in darker colors provided fewer observations over the last week than averaged for the month of January 2020 (pre-COVID-19); countries shown in black are currently not sending any data at all).
Currently, 16 meteorological and 50 research satellites, over 10 000 manned and automatic surface weather stations, 1 000 upper-air stations, 7 000 ships, 100 moored and 1 000 drifting buoys, hundreds of weather radars and 3 000 specially equipped commercial aircraft measure key parameters of the atmosphere, land and ocean surface every day.
For further information contact: Clare Nullis, media officer. Email email@example.com, Cell +41 79 709 13 97
Summary of the article: Strong coronavirus measures today should only last a few weeks, there shouldn’t be a big peak of infections afterwards, and it can all be done for a reasonable cost to society, saving millions of lives along the way. If we don’t take these measures, tens of millions will be infected, many will die, along with anybody else that requires intensive care, because the healthcare system will have collapsed.
Within a week, countries around the world have gone from “This coronavirus thing is not a big deal” to declaring states of emergency. Yet many countries are still not doing much. Why?
Every country is asking the same question: How should we respond? The answer is not obvious to them.
Some countries, like France, Spain or the Philippines, have since ordered heavy lockdowns. Others, like the US, UK, or Switzerland, have dragged their feet, hesitantly venturing into social distancing measures.
Here’s what we’re going to cover today, again with lots of charts, data and models with plenty of sources:
What’s the current situation?
What options do we have?
What’s the one thing that matters now: Time
What does a good coronavirus strategy look like?
How should we think about the economic and social impacts?
When you’re done reading the article, this is what you’ll take away:
Our healthcare system is already collapsing. Countries have two options: either they fight it hard now, or they will suffer a massive epidemic. If they choose the epidemic, hundreds of thousands will die. In some countries, millions. And that might not even eliminate further waves of infections. If we fight hard now, we will curb the deaths. We will relieve our healthcare system. We will prepare better. We will learn. The world has never learned as fast about anything, ever. And we need it, because we know so little about this virus. All of this will achieve something critical: Buy Us Time.
If we choose to fight hard, the fight will be sudden, then gradual. We will be locked in for weeks, not months. Then, we will get more and more freedoms back. It might not be back to normal immediately. But it will be close, and eventually back to normal. And we can do all that while considering the rest of the economy too.
Ok, let’s do this.
1. What’s the situation?
Last week, I showed this curve:
It showed coronavirus cases across the world outside of China. We could only discern Italy, Iran and South Korea, so I had to zoom in on the bottom right corner to see the emerging countries. My entire point was that they would soon be joining those three.
Let’s see what has happened since.
As predicted, the number of cases has exploded in dozens of countries. Here, I was forced to show only countries with over 1,000 cases. A few things to note:
Spain, Germany, France and the US all have more cases than Italy when it ordered the lockdown
An additional 16 countries have more cases today than Hubei when it went under lockdown: Japan, Malaysia, Canada, Portugal, Australia, Czechia, Brazil and Qatar have more than Hubei but below 1,000 cases. Switzerland, Sweden, Norway, Austria, Belgium, Netherlands and Denmark all have above 1,000 cases.
Do you notice something weird about this list of countries? Outside of China and Iran, which have suffered massive, undeniable outbreaks, and Brazil and Malaysia, every single country in this list is among the wealthiest in the world.
Do you think this virus targets rich countries? Or is it more likely that rich countries are better able to identify the virus?
It’s unlikely that poorer countries aren’t touched. Warm and humid weather probably helps, but doesn’t prevent an outbreak by itself — otherwise Singapore, Malaysia or Brazil wouldn’t be suffering outbreaks.
The most likely interpretations are that the coronavirus either took longer to reach these countries because they’re less connected, or that it’s already there but these countries haven’t been able to invest enough in testing to know.
Either way, if this is true, it means that most countries won’t escape the coronavirus. It’s a matter of time before they see outbreaks and need to take measures.
What measures can different countries take?
2. What Are Our Options?
Since the article last week, the conversation has changed and many countries have taken measures. Here are some of the most illustrative examples:
Measures in Spain and France
In one extreme, we have Spain and France. This is the timeline of measures for Spain:
On Thursday, 3/12, the President dismissed suggestions that the Spanish authorities had been underestimating the health threat. On Friday, they declared the State of Emergency. On Saturday, measures were taken:
People can’t leave home except for key reasons: groceries, work, pharmacy, hospital, bank or insurance company (extreme justification)
Specific ban on taking kids out for a walk or seeing friends or family (except to take care of people who need help, but with hygiene and physical distance measures)
All bars and restaurants closed. Only take-home acceptable.
All entertainment closed: sports, movies, museums, municipal celebrations…
Weddings can’t have guests. Funerals can’t have more than a handful of people.
Mass transit remains open
On Monday, land borders were shut.
Some people see this as a great list of measures. Others throw their hands up in the air and cry in despair. This article will try to reconcile that difference.
France’s timeline of measures is similar, except they took more time to apply them, and they are more aggressive now. For example, rent, taxes and utilities are suspended for small businesses.
Measures in the US and UK
The US and UK, like countries such as Switzerland, have dragged their feet in implementing measures. Here’s the timeline for the US:
Wednesday 3/11: travel ban.
Friday: National Emergency declared. No social distancing measures.
Monday: the government urges the public to avoid restaurants, bars, and gatherings of more than 10 people. No social distancing measure is actually enforceable; they are just suggestions.
Lots of states and cities are taking the initiative and mandating much stricter measures.
The UK has seen a similar set of measures: lots of recommendations, but very few mandates.
These two groups of countries illustrate the two extreme approaches to fight the coronavirus: mitigation and suppression. Let’s understand what they mean.
Option 1: Do Nothing
Before we do that, let’s see what doing nothing would entail for a country like the US:
If we do nothing: Everybody gets infected, the healthcare system gets overwhelmed, the mortality explodes, and ~10 million people die (blue bars). For the back-of-the-envelope numbers: if ~75% of Americans get infected and 4% die, that’s 10 million deaths, or around 25 times the number of US deaths in World War II.
You might wonder: “That sounds like a lot. I’ve heard much less than that!”
So what’s the catch? With all these numbers, it’s easy to get confused. But only two numbers matter: what share of people will catch the virus and fall sick, and what share of them will die. If only 25% are sick (because the others have the virus but don’t have symptoms, so aren’t counted as cases), and the fatality rate is 0.6% instead of 4%, you end up with 500k deaths in the US.
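Both bounds are simple to reproduce with back-of-the-envelope arithmetic; the US population figure below (~330 million) is our assumption, not a number from the article:

```python
# Back-of-the-envelope bounds from the article's two scenarios.
# The US population figure is an assumption, not stated in the text.
US_POPULATION = 330_000_000

def estimated_deaths(sick_share, fatality_rate):
    # Deaths = population * share who fall sick * share of the sick who die.
    return US_POPULATION * sick_share * fatality_rate

worst_case = estimated_deaths(0.75, 0.04)   # ~75% infected, 4% fatality
best_case = estimated_deaths(0.25, 0.006)   # 25% sick, 0.6% fatality

print(f"{worst_case:,.0f} vs {best_case:,.0f}")  # 9,900,000 vs 495,000
```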
If we don’t do anything, the number of deaths from the coronavirus will probably land between these two numbers. The chasm between these extremes is mostly driven by the fatality rate, so understanding it better is crucial. What really causes the coronavirus deaths?
How Should We Think about the Fatality Rate?
This is the same graph as before, but now looking at hospitalized people instead of infected and dead:
The light blue area is the number of people who would need to go to the hospital, and the darker blue represents those who need to go to the intensive care unit (ICU). You can see that number would peak at above 3 million.
Now compare that to the number of ICU beds we have in the US (50k today; we could double that by repurposing other space). That’s the red dotted line.
No, that’s not an error.
That red dotted line is the capacity we have of ICU beds. Everyone above that line would be in critical condition but wouldn’t be able to access the care they need, and would likely die.
This is why people died in droves in Hubei and are now dying in droves in Italy and Iran. The Hubei fatality rate ended up better than it could have been because they built 2 hospitals nearly overnight. Italy and Iran can’t do the same; few, if any, other countries can. We’ll see what ends up happening there.
So why is the fatality rate close to 4%?
If 5% of your cases require intensive care and you can’t provide it, most of those people die. As simple as that.
These numbers only count people dying from the coronavirus. But what happens if your entire healthcare system collapses under coronavirus patients? People die from other ailments too.
What happens if you have a heart attack but the ambulance takes 50 minutes to come instead of 8 (too many coronavirus cases) and once you arrive, there’s no ICU and no doctor available? You die.
There are 4 million admissions to the ICU in the US every year, and 500k (~13%) of them die. Without ICU beds, that share would likely rise much closer to 80%. Even if only 50% died, a year-long epidemic would take you from 500k ICU deaths a year to 2 million, adding 1.5 million deaths from collateral damage alone.
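A quick sanity check of that collateral-damage arithmetic, using only the figures in the paragraph above:

```python
# Collateral damage from ICU collapse, using the article's own figures.
ICU_ADMISSIONS_PER_YEAR = 4_000_000
BASELINE_ICU_DEATHS = 500_000   # ~13% of admissions die today
COLLAPSED_FATALITY = 0.50       # the article's conservative "even if only 50% died"

collapsed_deaths = ICU_ADMISSIONS_PER_YEAR * COLLAPSED_FATALITY
extra_deaths = collapsed_deaths - BASELINE_ICU_DEATHS

print(f"{extra_deaths:,.0f}")  # 1,500,000 extra deaths, before counting covid-19 itself
```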
If the coronavirus is left to spread, the US healthcare system will collapse, and the deaths will be in the millions, maybe more than 10 million.
The same thinking is true for most countries. The number of ICU beds and ventilators and healthcare workers are usually similar to the US or lower in most countries. Unbridled coronavirus means healthcare system collapse, and that means mass death.
Unbridled coronavirus means healthcare systems collapse, and that means mass death.
By now, I hope it’s pretty clear we should act. The two options that we have are mitigation and suppression. Both of them propose to “flatten the curve”, but they go about it very differently.
Option 2: Mitigation Strategy
Mitigation goes like this: “It’s impossible to prevent the coronavirus now, so let’s just have it run its course, while trying to reduce the peak of infections. Let’s just flatten the curve a little bit to make it more manageable for the healthcare system.”
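The “flatten the curve” idea can be sketched with a minimal SIR epidemic model. This is a textbook toy model, not the one used in the Imperial College paper, and the transmission rates below are illustrative, not fitted to covid-19:

```python
# Toy SIR model: lowering the transmission rate "flattens the curve".
# beta (transmission) and gamma (recovery) values are illustrative only.

def peak_infected(beta, gamma=0.1, days=400, n=1_000_000, i0=100):
    """Run a daily-step SIR simulation and return the peak number infected."""
    s, i = n - i0, i0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        peak = max(peak, i)
    return peak

# Tougher distancing (lower beta) means a lower peak of simultaneous
# infections, which is exactly what "flattening the curve" asks for.
do_nothing = peak_infected(beta=0.4)
mitigated = peak_infected(beta=0.25)
print(do_nothing > mitigated)  # True
```

The catch, as the rest of this section argues, is that even the flattened peak can still sit far above healthcare capacity.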
This chart appears in a very important paper published over the weekend by Imperial College London. Apparently, it pushed the UK and US governments to change course.
It’s a very similar graph to the previous one. Not the same, but conceptually equivalent. Here, the “Do Nothing” situation is the black curve. Each of the other curves shows what would happen if we implemented tougher and tougher social distancing measures. The blue one shows the toughest social distancing measures: isolating infected people, quarantining people who might be infected, and secluding old people. This blue line is broadly the current UK coronavirus strategy, although for now they’re just suggesting it, not mandating it.
Here, again, the red line is the capacity for ICUs, this time in the UK. Again, that line is very close to the bottom. All that area of the curve on top of that red line represents coronavirus patients who would mostly die because of the lack of ICU resources.
Not only that, but by flattening the curve, the ICUs will collapse for months, increasing collateral damage.
You should be shocked. When you hear “We’re going to do some mitigation”, what they’re really saying is: “We will knowingly overwhelm the healthcare system, driving the fatality rate up by a factor of at least 10.”
You would imagine this is bad enough. But we’re not done yet. Because one of the key assumptions of this strategy is what’s called “Herd Immunity”.
Herd Immunity and Virus Mutation
The idea is that all the people who are infected and then recover are now immune to the virus. This is at the core of this strategy: “Look, I know it’s going to be hard for some time, but once we’re done and a few million people die, the rest of us will be immune to it, so this virus will stop spreading and we’ll say goodbye to the coronavirus. Better do it at once and be done with it, because our alternative is to do social distancing for up to a year and risk having this peak happen later anyways.”
Except this assumes one thing: that the virus doesn’t change too much. If it doesn’t change much, then lots of people do get immunity, and at some point the epidemic dies down.
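The share of the population that needs to be immune for the epidemic to die down is the classic herd-immunity threshold, 1 - 1/R0. This is a textbook epidemiology formula, not one quoted in the article, and the R0 value below is illustrative:

```python
# Herd-immunity threshold: the immune share above which each infection
# causes, on average, fewer than one new infection, so the epidemic fades.
# R0 = 2.5 is an illustrative value, not a figure from this article.

def herd_immunity_threshold(r0):
    return 1 - 1 / r0

print(f"{herd_immunity_threshold(2.5):.0%}")  # 60%
```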
How likely is this virus to mutate? It seems it already has.
This graph represents the different mutations of the virus. You can see that the initial strains started in purple in China and then spread. Each time you see a branching on the left graph, that is a mutation leading to a slightly different variant of the virus.
This should not be surprising: RNA-based viruses like the coronavirus or the flu tend to mutate around 100 times faster than DNA-based ones—although the coronavirus mutates more slowly than influenza viruses.
Not only that, but the best way for this virus to mutate is to have millions of opportunities to do so, which is exactly what a mitigation strategy would provide: hundreds of millions of people infected.
That’s why you have to get a flu shot every year. Because there are so many flu strains, with new ones always evolving, the flu shot can never protect against all strains.
Put another way: the mitigation strategy not only assumes millions of deaths for a country like the US or the UK. It also gambles that the virus won’t mutate too much, even though we know it does. And it would give the virus the opportunity to mutate. So once we’re done with a few million deaths, we could be ready for a few million more, every year. This coronavirus could become a recurring fact of life, like the flu, but many times deadlier.
The best way for this virus to mutate is to have millions of opportunities to do so, which is exactly what a mitigation strategy would provide.
So if neither doing nothing nor mitigation will work, what’s the alternative? It’s called suppression.
Option 3: Suppression Strategy
The Mitigation Strategy doesn’t try to contain the epidemic, just flatten the curve a bit. Meanwhile, the Suppression Strategy tries to apply heavy measures to quickly get the epidemic under control. Specifically:
Go hard right now. Order heavy social distancing. Get this thing under control.
Then, release the measures, so that people can gradually get back their freedoms and something approaching normal social and economic life can resume.
What does that look like?
Under a suppression strategy, after the first wave is done, the death toll is in the thousands, and not in the millions.
Why? Because not only do we cut the exponential growth of cases, we also cut the fatality rate, since the healthcare system is not completely overwhelmed. Here, I used a fatality rate of 0.9%, around what we’re seeing in South Korea today, the country that has been most effective at applying the Suppression Strategy.
Said like this, it sounds like a no-brainer. Everybody should follow the Suppression Strategy.
So why do some governments hesitate?
They fear three things:
This first lockdown will last for months, which seems unacceptable for many people.
A months-long lockdown would destroy the economy.
It wouldn’t even solve the problem, because we would be just postponing the epidemic: later on, once we release the social distancing measures, people will still get infected in the millions and die.
Here is how the Imperial College team modeled suppression. The green and yellow lines are different Suppression scenarios. You can see that it doesn’t look good: we still get huge peaks, so why bother?
We’ll get to these questions in a moment, but there’s something more important before.
This is completely missing the point.
Presented like this, side by side, the two options of Mitigation and Suppression don’t look very appealing. Either a lot of people die soon and we don’t hurt the economy today, or we hurt the economy today, only to postpone the deaths.
This ignores the value of time.
3. The Value of Time
In our previous post, we explained the value of time in saving lives. Every day, every hour we waited to take measures, this exponential threat continued spreading. We saw how acting a single day earlier could reduce the total cases by 40%, and the death toll by even more.
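Under pure exponential growth, the value of one day falls out directly: intervening a day earlier divides the eventual case count by the daily growth factor. Working backwards, the 40% figure corresponds to cases growing by roughly two-thirds per day; that growth factor is our inference, not something stated in the text:

```python
# Share of eventual cases avoided by acting one day earlier,
# assuming exponential growth with a constant daily growth factor g:
# acting a day sooner shrinks the total by a factor of 1/g.

def one_day_reduction(daily_growth_factor):
    return 1 - 1 / daily_growth_factor

print(f"{one_day_reduction(5 / 3):.0%}")  # 40%, if cases grow ~67% per day
print(f"{one_day_reduction(1.4):.0%}")    # 29%, if cases grow 40% per day
```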
But time is even more valuable than that.
We’re about to face the biggest wave of pressure on the healthcare system ever seen in history. We are completely unprepared, facing an enemy we don’t know. That is not a good position for war.
What if you were about to face your worst enemy, of which you knew very little, and you had two options: Either you run towards it, or you escape to buy yourself a bit of time to prepare. Which one would you choose?
This is what we need to do today. The world has awakened. Every single day we delay the coronavirus, we can get better prepared. The next sections detail what that time would buy us:
Lower the Number of Cases
With effective suppression, the number of true cases would plummet overnight, as we saw in Hubei last week.
As of today, there are 0 new daily coronavirus cases in the entire Hubei region, home to 60 million people.
The diagnosed cases would keep going up for a couple of weeks, but then they would start going down. With fewer cases, the fatality rate starts dropping too. And the collateral damage is also reduced: fewer people would die from non-coronavirus causes, because the healthcare system would no longer be overwhelmed.
Suppression would get us:
Fewer total cases of Coronavirus
Immediate relief for the healthcare system and the humans who run it
Reduction in fatality rate
Reduction in collateral damage
Ability for infected, isolated and quarantined healthcare workers to recover and get back to work. In Italy, healthcare workers account for 8% of all infections.
Understand the True Problem: Testing and Tracing
Right now, the UK and the US have no idea about their true cases. We don’t know how many there are. We just know the official number is not right, and the true one is in the tens of thousands of cases. This has happened because we’re not testing, and we’re not tracing.
With a few more weeks, we could get our testing situation in order, and start testing everybody. With that information, we would finally know the true extent of the problem, where we need to be more aggressive, and what communities are safe to be released from a lockdown.
We could also set up a tracing operation like the ones in China and other East Asian countries, where they can identify all the people that each sick person met and put them in quarantine. This would give us a ton of intelligence for easing our social distancing measures later on: if we know where the virus is, we can target those places only. This is not rocket science: it’s the basics of how East Asian countries have been able to control this outbreak without the kind of draconian social distancing that is increasingly essential in other countries.
The measures from this section (testing and tracing) single-handedly curbed the growth of the coronavirus in South Korea and got the epidemic under control, without a strong imposition of social distancing measures.
Build Up Capacity
The US (and presumably the UK) is about to go to war without armor.
We have masks for just two weeks, little personal protective equipment (PPE), not enough ventilators, not enough ICU beds, not enough ECMOs (blood oxygenation machines)… This is why the fatality rate would be so high under a mitigation strategy.
But if we buy ourselves some time, we can turn this around:
We have more time to buy equipment we will need for a future wave
We can quickly ramp up our production of masks, PPE, ventilators, ECMOs, and any other critical device needed to reduce the fatality rate.
Put another way: we don’t need years to get our armor, we need weeks. Let’s do everything we can to get our production humming now. Countries are mobilized. People are being inventive, for example using 3D printing for ventilator parts. We can do it. We just need more time. Would you wait a few weeks to get yourself some armor before facing a mortal enemy?
This is not the only capacity we need. We will need health workers as soon as possible. Where will we get them? We need to train people to assist nurses, and we need to get medical workers out of retirement. Many countries have already started, but this takes time. We can do this in a few weeks, but not if everything collapses.
Lower Public Contagiousness
The public is scared. The coronavirus is new. There’s so much we don’t know how to do yet! People haven’t learned to stop hand-shaking. They still hug. They don’t open doors with their elbow. They don’t wash their hands after touching a door knob. They don’t disinfect tables before sitting.
Once we have enough masks, we can use them outside of the healthcare system too. Right now, it’s better to keep them for healthcare workers. But if they weren’t scarce, people should wear them in their daily lives, making it less likely that they infect other people when sick, and with proper training also reducing the likelihood that the wearers get infected. (In the meantime, wearing something is better than nothing.)
All of these are pretty cheap ways to reduce the transmission rate. The less this virus propagates, the fewer measures we’ll need in the future to contain it. But we need time to educate people on all these measures and equip them.
Understand the Virus
We know very very little about the virus. But every week, hundreds of new papers are coming.
The world is finally united against a common enemy. Researchers around the globe are mobilizing to understand this virus better.
How does the virus spread? How can contagion be slowed down? What is the share of asymptomatic carriers? Are they contagious? How much? What are good treatments? How long does it survive? On what surfaces? How do different social distancing measures impact the transmission rate? What’s their cost? What are tracing best practices? How reliable are our tests?
Clear answers to these questions will help make our response as targeted as possible while minimizing collateral economic and social damage. And they will come in weeks, not years.
Not only that, but what if we found a treatment in the next few weeks? Any day we buy gets us closer to that. Right now, there are already several candidates, such as Favipiravir, Chloroquine, or Chloroquine combined with Azithromycin. What if it turned out that in two months we discovered a treatment for the coronavirus? How stupid would we look if we already had millions of deaths following a mitigation strategy?
Understand the Cost-Benefits
All of the factors above can help us save millions of lives. That should be enough. Unfortunately, politicians can't only think about the lives of the infected. They must think about the whole population, and heavy social distancing measures have an impact on others.
Right now we have no idea how different social distancing measures reduce transmission. We also have no clue what their economic and social costs are.
Isn’t it a bit difficult to decide what measures we need for the long term if we don’t know their cost or benefit?
A few weeks would give us enough time to start studying them, understand them, prioritize them, and decide which ones to follow.
Fewer cases, more understanding of the problem, building up assets, understanding the virus, understanding the cost-benefit of different measures, educating the public… These are some core tools to fight the virus, and we just need a few weeks to develop many of them. Wouldn’t it be dumb to commit to a strategy that throws us instead, unprepared, into the jaws of our enemy?
4. The Hammer and the Dance
Now we know that the Mitigation Strategy is probably a terrible choice, and that the Suppression Strategy has a massive short-term advantage.
But people have rightful concerns about this strategy:
How long will it actually last?
How expensive will it be?
Will there be a second peak as big as if we didn’t do anything?
Here, we’re going to look at what a true Suppression Strategy would look like. We can call it the Hammer and the Dance.
First, you act quickly and aggressively. For all the reasons we mentioned above, given the value of time, we want to quench this thing as soon as possible.
One of the most important questions is: How long will this last?
The fear that everybody has is that we will be locked inside our homes for months at a time, with the ensuing economic disaster and mental breakdowns. This idea was unfortunately entertained in the famous Imperial College paper:
Do you remember this chart? The light blue area that goes from end of March to end of August is the period that the paper recommends as the Hammer, the initial suppression that includes heavy social distancing.
If you're a politician and you see that one option is to let hundreds of thousands or millions of people die with a mitigation strategy, and the other is to stop the economy for five months before going through the same peak of cases and deaths anyway, neither sounds like a compelling option.
But this doesn't need to be so. This paper, which is driving policy today, has been brutally criticized for core flaws: it ignores contact tracing (at the core of policies in South Korea, China, and Singapore, among others), travel restrictions (critical in China), and the impact of big crowds…
The time needed for the Hammer is weeks, not months.
This graph shows the new cases in the entire Hubei region (60 million people) every day since 1/23. Within 2 weeks, the country was starting to get back to work. Within ~5 weeks it was completely under control. And within 7 weeks, new diagnoses were just a trickle. Let's remember this was the worst-hit region in China.
Remember again that these are the orange bars. The grey bars, the true cases, had plummeted much earlier (see Chart 9).
The measures they took were pretty similar to the ones taken in Italy, Spain or France: isolations, quarantines, people having to stay at home unless there was an emergency or they had to buy food, contact tracing, testing, more hospital beds, travel bans…
Details matter, however.
China's measures were stronger. For example, only one person per household was allowed to leave home, once every three days, to buy food. Their enforcement was also severe. It is likely that this severity stopped the epidemic faster.
In Italy, France and Spain, measures were not as drastic, and their enforcement is not as strict. People still walk the streets, many without masks. This is likely to result in a slower Hammer: more time needed to fully control the epidemic.
Some people interpret this as “Democracies will never be able to replicate this reduction in cases”. That’s wrong.
For several weeks, South Korea had the worst epidemic outside of China. Now, it’s largely under control. And they did it without asking people to stay home. They achieved it mostly with very aggressive testing, contact tracing, and enforced quarantines and isolations.
The following table gives a good sense of what measures different countries have followed, and how that has impacted them (this is a work-in-progress. Feedback welcome.)
This shows how countries who were prepared, with stronger epidemiological authority, education on hygiene and social distancing, and early detection and isolation, didn’t have to pay with heavier measures afterwards.
Conversely, countries like Italy, Spain or France weren’t doing these well, and had to then apply the Hammer with the hard measures at the bottom to catch up.
The lack of measures in the US and UK is in stark contrast, especially in the US. These countries are still not doing what allowed Singapore, South Korea or Taiwan to control the virus, despite their outbreaks growing exponentially. But it’s a matter of time. Either they have a massive epidemic, or they realize late their mistake, and have to overcompensate with a heavier Hammer. There is no escape from this.
But it’s doable. If an outbreak like South Korea’s can be controlled in weeks and without mandated social distancing, Western countries, which are already applying a heavy Hammer with strict social distancing measures, can definitely control the outbreak within weeks. It’s a matter of discipline, execution, and how much the population abides by the rules.
Once the Hammer is in place and the outbreak is controlled, the second phase begins: the Dance.
If you hammer the coronavirus, within a few weeks you’ve controlled it and you’re in much better shape to address it. Now comes the longer-term effort to keep this virus contained until there’s a vaccine.
This is probably the single biggest, most important mistake people make when thinking about this stage: they think it will keep them home for months. This is not the case at all. In fact, it is likely that our lives will go back to close to normal.
In this video, the South Korea Foreign Minister explains how her country did it. It was pretty simple: efficient testing, efficient tracing, travel bans, efficient isolating and efficient quarantining.
Want to guess their measures? The same ones as in South Korea. In their case, they complemented with economic help to those in quarantine and travel bans and delays.
Is it too late for these countries and others? No. By applying the Hammer, they're getting a new chance, a new shot at doing this right. The longer they wait, the heavier and longer the Hammer needs to be, but it can still control the epidemic.
But what if all these measures aren’t enough?
The Dance of R
I call the months-long period between the Hammer and a vaccine or effective treatment the Dance because it won’t be a period during which measures are always the same harsh ones. Some regions will see outbreaks again, others won’t for long periods of time. Depending on how cases evolve, we will need to tighten up social distancing measures or we will be able to release them. That is the dance of R: a dance of measures between getting our lives back on track and spreading the disease, one of economy vs. healthcare.
How does this dance work?
It all revolves around R. If you remember, it's the transmission rate. Early on in a standard, unprepared country, it's somewhere between 2 and 3: during the few weeks that somebody is infected, they infect between 2 and 3 other people on average.
If R is above 1, infections grow exponentially into an epidemic. If it’s below 1, they die down.
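The exponential effect of R can be sketched in a few lines. This is a toy, hypothetical calculation; the R values and case counts are illustrative, not projections:

```python
# Toy generation-by-generation model: each infected person causes R new
# infections on average. Numbers are illustrative only, not a forecast.

def project_cases(initial_cases, r, generations):
    """Expected new cases per generation under a constant transmission rate R."""
    cases = [float(initial_cases)]
    for _ in range(generations):
        cases.append(cases[-1] * r)  # each generation multiplies by R
    return cases

growing = project_cases(100, r=2.5, generations=10)    # R above 1: epidemic
shrinking = project_cases(100, r=0.7, generations=10)  # R below 1: dies down

print(f"R=2.5 after 10 generations: {growing[-1]:,.0f} new cases")
print(f"R=0.7 after 10 generations: {shrinking[-1]:.1f} new cases")
```

Starting from the same 100 cases, ten generations at R=2.5 yield roughly 950,000 new cases, while R=0.7 leaves fewer than three. That asymmetry around 1 is the entire logic of what follows.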
During the Hammer, the goal is to get R as close to zero as possible, as fast as possible, to quench the epidemic. In Wuhan, it is calculated that R was initially 3.9, and after the lockdown and centralized quarantine, it went down to 0.32.
But once you move into the Dance, you don't need to do that anymore. You just need R to stay below 1. That matters because many social distancing measures have real, heavy costs on people: they might lose their jobs, their businesses, their healthy habits…
You can remain below R=1 with a few simple measures.
This is an approximation of how different types of patients respond to the virus, as well as their contagiousness. Nobody knows the true shape of this curve, but we've gathered data from different papers to approximate what it looks like.
Every day after they contract the virus, people have some contagion potential. Together, all these days of contagion add up to 2.5 contagions on average.
It is believed that there are some contagions already happening during the “no symptoms” phase. After that, as symptoms grow, usually people go to the doctor, get diagnosed, and their contagiousness diminishes.
For example, early on you have the virus but no symptoms, so you behave as normal. When you speak with people, you spread the virus. When you touch your nose and then open a door knob, the next people to open that door and touch their noses get infected.
The more the virus is growing inside you, the more infectious you are. Then, once you start having symptoms, you might slowly stop going to work, stay in bed, wear a mask, or start going to the doctor. The bigger the symptoms, the more you distance yourself socially, reducing the spread of the virus.
Once you’re hospitalized, even if you are very contagious you don’t tend to spread the virus as much since you’re isolated.
This is where you can see the massive impact of policies like those of Singapore or South Korea:
If people are massively tested, they can be identified even before they have symptoms. Quarantined, they can’t spread anything.
If people are trained to identify their symptoms earlier, they reduce the number of days in blue, and hence their overall contagiousness.
If people are isolated as soon as they have symptoms, the contagions from the orange phase disappear.
If people are educated about personal distance, mask-wearing, washing hands or disinfecting spaces, they spread less virus throughout the entire period.
Only when all these fail do we need heavier social distancing measures.
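One way to see why these interventions matter so much is to treat the contagiousness curve as a daily budget of contagion potential that sums to ~2.5, and check what early isolation does to that sum. The daily profile below is invented for illustration (as noted above, nobody knows the true shape of the curve), and the residual-spread fraction is an assumption:

```python
# Hypothetical daily infectiousness profile, loosely mirroring the curve
# described above: some pre-symptomatic spread, a peak around symptom
# onset, then a decline. It sums to ~2.5 contagions. Numbers are made up.
daily_contagions = [0.1, 0.2, 0.3, 0.4, 0.4, 0.35, 0.3, 0.2, 0.15, 0.1]

def effective_r(profile, isolation_day=None, residual=0.1):
    """Sum the daily contagion potential; from isolation_day onward,
    only a small residual fraction of each day's potential remains."""
    total = 0.0
    for day, c in enumerate(profile):
        if isolation_day is not None and day >= isolation_day:
            c *= residual  # isolated: most onward transmission removed
        total += c
    return total

print(f"No intervention:  R ~ {effective_r(daily_contagions):.2f}")
print(f"Isolate on day 4: R ~ {effective_r(daily_contagions, isolation_day=4):.2f}")
print(f"Isolate on day 2: R ~ {effective_r(daily_contagions, isolation_day=2):.2f}")
```

With these made-up numbers, isolating at symptom onset (day 4) already cuts R from 2.5 to about 1.15, and catching cases two days earlier through testing pushes it well below 1: the quantitative version of the bullet points above.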
The ROI of Social Distancing
If with all these measures we’re still way above R=1, we need to reduce the average number of people that each person meets.
There are some very cheap ways to do that, like banning events with more than a certain number of people (eg, 50, 500), or asking people to work from home when they can.
Others are much, much more expensive economically, socially and ethically, such as closing schools and universities, asking everybody to stay home, or closing businesses.
This chart is made up because it doesn’t exist today. Nobody has done enough research about this or put together all these measures in a way that can compare them.
It’s unfortunate, because it’s the single most important chart that politicians would need to make decisions. It illustrates what is really going through their minds.
During the Hammer period, politicians want to lower R as much as possible, through measures that remain tolerable for the population. In Hubei, they went all the way to 0.32. We might not need that: maybe just to 0.5 or 0.6.
But during the Dance of the R period, they want to hover as close to 1 as possible, while staying below it over the long term. That prevents a new outbreak while eliminating the most drastic measures.
What this means is that, whether leaders realize it or not, what they’re doing is:
List all the measures they can take to reduce R
Get a sense of the benefit of applying them: the reduction in R
Get a sense of their cost: the economic, social, and ethical cost.
Stack-rank the initiatives based on their cost-benefit
Pick the ones that give the biggest R reduction up till 1, for the lowest cost.
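That five-step process is essentially a cost-effectiveness ranking. Here is a minimal sketch of it, with entirely invented numbers for both the R reductions and the costs; estimating the real values is exactly the open problem described above:

```python
# Illustrative stack-ranking of measures by cost-effectiveness.
# The R reductions and relative cost scores are invented for this sketch.
measures = [
    # (name, R reduction, relative cost)
    ("ban events > 50 people",  0.20,  1),
    ("work from home if able",  0.25,  2),
    ("mask-wearing campaign",   0.15,  1),
    ("close schools",           0.30, 10),
    ("full stay-at-home order", 0.90, 50),
]

def pick_measures(measures, r_current, r_target=0.99):
    """Greedily apply measures with the best R reduction per unit cost
    until the projected R drops below the target (just under 1)."""
    ranked = sorted(measures, key=lambda m: m[1] / m[2], reverse=True)
    chosen, r = [], r_current
    for name, dr, cost in ranked:
        if r < r_target:
            break  # already below 1: stop adding costly measures
        chosen.append(name)
        r -= dr
    return chosen, r

chosen, r = pick_measures(measures, r_current=1.4)
print(f"Chosen: {chosen}, projected R ~ {r:.2f}")
```

With these made-up numbers, the three cheapest measures alone bring R from 1.4 down to about 0.8, without ever reaching for the most expensive ones. That is the Dance in miniature: buy just enough R reduction, at the lowest cost.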
Initially, their confidence in these numbers will be low. But that is still how they are thinking, and how they should be thinking about it.
What they need to do is formalize the process: Understand that this is a numbers game in which we need to learn as fast as possible where we are on R, the impact of every measure on reducing R, and their social and economic costs.
Only then will they be able to make a rational decision on what measures they should take.
Conclusion: Buy Us Time
The coronavirus is still spreading nearly everywhere. 152 countries have cases. We are against the clock. But we don’t need to be: there’s a clear way we can be thinking about this.
Some countries, especially those that haven't been hit heavily yet by the coronavirus, might be wondering: Is this going to happen to me? The answer is: it probably already has. You just haven't noticed. When it really hits, your healthcare system will be in even worse shape than those of wealthy countries with strong healthcare systems. Better safe than sorry: you should consider taking action now.
For the countries where the coronavirus is already here, the options are clear.
On one side, countries can go the mitigation route: create a massive epidemic, overwhelm the healthcare system, drive the death of millions of people, and release new mutations of this virus in the wild.
On the other, countries can fight. They can lock down for a few weeks to buy us time, create an educated action plan, and control this virus until we have a vaccine.
Governments around the world, including those of the US, the UK and Switzerland, have so far chosen the mitigation path.
That means they’re giving up without a fight. They see other countries having successfully fought this, but they say: “We can’t do that!”
What if Churchill had said the same thing? “Nazis are already everywhere in Europe. We can’t fight them. Let’s just give up.” This is what many governments around the world are doing today. They’re not giving you a chance to fight this. You have to demand it.
Share the Word
Unfortunately, millions of lives are still at stake. Share this article—or any similar one—if you think it can change people’s opinion. Leaders need to understand this to avert a catastrophe. The moment to act is now.
This article has been the result of a herculean effort by a group of normal citizens working around the clock to find all the relevant research available to structure it into one piece, in case it can help others process all the information that is out there about the coronavirus.
Special thanks to Dr. Carl Juneau (epidemiologist and translator of the French version), Dr. Brandon Fainstad, Pierre Djian, Jorge Peñalva, John Hsu, Genevieve Gee, Elena Baillie, Chris Martinez, Yasemin Denari, Christine Gibson, Matt Bell, Dan Walsh, Jessica Thompson, Karim Ravji, Annie Hazlehurst, and Aishwarya Khanduja. This has been a team effort.
Thank you also to Berin Szoka, Shishir Mehrotra, QVentus, Illumina, Josephine Gavignet, Mike Kidd, and Nils Barth for your advice. Thank you to my company, Course Hero, for giving me the time and freedom to focus on this.
Indeed, science was turned on its head after a discovery in 1772 near Vilui, Siberia, of an intact frozen woolly rhinoceros, which was followed by the more famous discovery of a frozen mammoth in 1787. You may be shocked, but these discoveries of frozen animals with grass still in their stomachs set in motion these two schools of thought since the evidence implied you could be eating lunch and suddenly find yourself frozen, only to be discovered by posterity.
The discovery of the woolly rhinoceros in 1772, and then of frozen mammoths, sparked the imagination that things were not linear after all. These major discoveries contributed to the “Age of Enlightenment,” when there was a burst of knowledge erupting in every field of inquiry. Such finds of frozen mammoths in Siberia continue to this day, challenging theories on both sides of this debate to explain such catastrophic events. These frozen animals in Siberia suggest strange events are possible, not unlike the casts of the victims who were buried alive by the volcanic eruption of 79 AD at Pompeii in Roman Italy. Animals can be grazing and then freeze abruptly. That climate change took place long before man invented the combustion engine.
Even in the field of geology, great debates began over whether the earth periodically burst into catastrophic convulsions and was indeed cyclical, not linear. This view of sequential destructive upheavals at irregular intervals or cycles emerged during the 1700s. This school of thought was perhaps best expressed by a forgotten contributor to the knowledge of mankind, George Hoggart Toulmin, in his rare 1785 book, “The Eternity of the World”:
“… convulsions and revolutions violent beyond our experience or conception, yet unequal to the destruction of the globe, or the whole of the human species, have both existed and will again exist … [terminating] … an astonishing succession of ages.”
In 1832, Professor A. Bernhardi argued that the North Polar ice cap had once extended into the plains of Germany. To support this theory, he pointed to the existence of huge boulders, which have become known as “erratics,” and which he suggested were pushed by the advancing ice. This was a shocking theory, for it was certainly a nonlinear view of natural history. Bernhardi was thinking out of the box. However, in natural science people listen to and review theories, unlike in social science, where theories are ignored if they challenge what people want to believe. In 1834, Johann von Charpentier (1786-1855) pointed to deep grooves cut into Alpine rock, concluding, as did Karl Schimper, that they had been caused by an advancing Ice Age.
This body of knowledge has been completely ignored by the global warming/climate change religious cult. They know nothing about nature or cycles, and they are completely ignorant of history, or even of the discovery of these ancient creatures that froze with food still in their mouths. They cannot explain these events, nor the vast amount of knowledge written by people who actually did research instead of trying to cloak an agenda in pretend science.
Glaciologists have their own word, jökulhlaup (from Icelandic), to describe the spectacular outbursts when water builds up behind a glacier and then breaks loose. An example was the 1922 jökulhlaup in Iceland, when some seven cubic kilometers of water, melted by a volcano under a glacier, rushed out in a few days. Still grander, almost unimaginable, were the floods that swept across Washington state toward the end of the last ice age, when a vast lake dammed behind a glacier broke loose. Catastrophic geologic events are not generally part of the uniformitarian geologist's thinking. Rather, the normal view tends to be linear, including events that are local or regional in size.
One example of a regional event would be the 15,000 square miles of the Channeled Scablands in eastern Washington. Initially, this spectacular erosion was thought to be the product of slow gradual processes. In 1923, J. Harlen Bretz presented a paper to the Geological Society of America suggesting the Scablands were eroded catastrophically. During the 1940s, after decades of arguing, geologists admitted that high ridges in the Scablands were the equivalent of the little ripples one sees in mud on a streambed, magnified ten thousand times. Finally, by the 1950s, glaciologists were accustomed to thinking about catastrophic regional floods. The Scablands are now accepted to have been catastrophically eroded by the “Spokane Flood.” This Spokane flood was the result of the breaching of an ice dam which had created glacial Lake Missoula. Now the United States Geological Survey estimates the flood released 500 cubic miles of water, which drained in as little as 48 hours. That rush of water gouged out millions of tons of solid rock.
When Mount St. Helens erupted in 1980, this too produced a catastrophic process whereby two hundred million cubic yards of material was deposited by volcanic flows at the base of the mountain in just a matter of hours. Then, less than two years later, there was another minor eruption, but this resulted in creating a mudflow, which carved channels through the recently deposited material. These channels, which are 1/40th the size of the Grand Canyon, exposed flat segments between the catastrophically deposited layers. This is what we see between the layers exposed in the walls of the Grand Canyon. What is clear, is that these events were relatively minor compared to a global flood. For example, the eruption of Mount St. Helens contained only 0.27 cubic miles of material compared to other eruptions, which have been as much as 950 cubic miles. That is over 2,000 times the size of Mount St. Helens!
With respect to the Grand Canyon, the specific geologic processes and timing of its formation have always sparked lively debates among geologists. The general scientific consensus, updated at a 2010 conference, maintains that the Colorado River began carving the Grand Canyon 5 million to 6 million years ago. This thinking is still linear and by no means catastrophic: the Grand Canyon is believed to have been gradually eroded. However, there is an example of cyclical behavior in nature which demonstrates that water can very rapidly erode even solid rock. It took place in the Grand Canyon region on June 28th, 1983, when an overflow of Lake Powell required the use of the Glen Canyon Dam's 40-foot-diameter spillway tunnels for the first time. As the volume of water increased, the entire dam started to vibrate and large boulders spewed from one of the spillways. The spillway was immediately shut down, and an inspection revealed that catastrophic erosion had cut through the three-foot-thick reinforced concrete walls and eroded a hole 40 feet wide, 32 feet deep, and 150 feet long in the sandstone beneath the dam. Nobody thought such rapid catastrophic erosion was even possible.
Some have speculated that the end of the Ice Age released a flood of water that had been contained by an ice dam. As with the Scablands, it is possible that a sudden catastrophic release of water originally carved the Grand Canyon. Both the formation of the Scablands and the way Mount St. Helens unfolded may support catastrophic formation rather than nice, slow, linear processes.
Then there is the Biblical Account of the Great Flood and Noah. Noah is also considered to be a Prophet of Islam. Darren Aronofsky’s film Noah was based on the biblical story of Genesis. Some Christians were angry because the film strayed from biblical Scripture. The Muslim-majority countries banned the film Noah from screening in theaters because Noah was a prophet of God in the Koran. They considered it to be blasphemous to make a film about a prophet. Many countries banned the film entirely.
The story of Noah predates the Bible. There exists a legend of the Great Flood rooted in the ancient civilizations of Mesopotamia. The Sumerian Epic of Gilgamesh dates back nearly 5,000 years and is believed to be perhaps the oldest written tale on Earth. Here too, we find an account of the great sage Utnapishtim, who is warned of an imminent flood to be unleashed by wrathful gods. He builds a vast circular boat, reinforced with tar and pitch, and carries aboard his relatives, grain, and animals. After enduring days of storms, Utnapishtim, like Noah in Genesis, releases a bird in search of dry land. Since there is evidence that there were survivors in different parts of the world, it is only logical that there should be more than one such account.
Archaeologists generally agree that there was a historical deluge between 5,000 and 7,000 years ago which hit lands ranging from the Black Sea to what many call the cradle of civilization, which was the floodplain between the Tigris and Euphrates rivers. The translation of ancient cuneiform tablets in the 19th century confirmed the Mesopotamian Great Flood myth as an antecedent of the Noah story in the Bible.
The problem that remained was the question of just how “great” the Great Flood was. Was it regional or worldwide? The stories of the Great Flood in Western culture clearly date back before the Bible. The region implicated has long been considered to be the Black Sea. It has been suggested that the water broke through the land near Istanbul and flooded a fertile valley on the other side, much as we just saw in the Scablands. Robert Ballard, one of the world's best-known underwater archaeologists, who found the Titanic, set out to test that theory by searching for an underwater civilization. He discovered that some four hundred feet below the surface there was an ancient shoreline, proving that a catastrophic event did happen in the Black Sea. By carbon dating shells found along the underwater shoreline, Ballard dated this catastrophic event to around 5,000 BC. This roughly matches the time when Noah's flood could have occurred.
Given that it is impossible for enough water to submerge the entire Earth for 40 days and 40 nights and then simply vanish, we are probably looking at a Great Flood that was, at the very least, regional. However, there are tales of the Great Flood which spring from many other sources. Various ancient cultures have their own legends of a Great Flood and salvation. According to Vedic lore, a fish tells the mythic Indian king Manu of a Great Flood that will wipe out humanity. In turn, Manu also builds a ship to withstand the epic rains and is later led to a mountaintop by the same fish.
We also find an Aztec story that tells of a devout couple hiding in the hollow of a vast tree with two ears of corn as divine storms drown the wicked of the land. Creation myths from Egypt to Scandinavia also involve tidal floods of all sorts of substances purging and remaking the earth. The fact that we have Great Flood stories from India is not really a surprise since there was contact between the Middle East and India throughout recorded history. However, the Aztec story lacks the ship, but it still contains punishing the wicked and here there was certainly no direct contact, although there is evidence of cocaine use in Egypt implying there was some trade route probably through island hopping in the Pacific to the shores of India and off to Egypt. Obviously, we cannot rule out that this story of the Great Flood even made it to South America.
Then again, there is the story of Atlantis, the island that sank beneath the sea. The Atlantic Ocean covers approximately one-fifth of Earth's surface and is second in size only to the Pacific Ocean. The ocean's name, derived from Greek mythology, means the “Sea of Atlas.” The origins of names are often very interesting clues as well. For example, New Jersey is the English translation of the Latin Nova Caesarea, which appeared even on the colonial coins of the 18th century. Hence, the state of New Jersey is named after the Island of Jersey, which in turn was named in honor of Julius Caesar. So we actually have an American state named after the man who changed the world on par with Alexander the Great, after whom Alexandria, Virginia, is named, the location of the famous cemetery for veterans where John F. Kennedy is buried.
So here the Atlantic Ocean is named after Atlas and the story of Atlantis. The original story of Atlantis comes to us from two Socratic dialogues called Timaeus and Critias, both written about 360 BC by the Greek philosopher Plato. According to the dialogues, Socrates asked three men to meet him: Timaeus of Locri, Hermocrates of Syracuse, and Critias of Athens. Socrates asked the men to tell him stories about how ancient Athens interacted with other states. Critias was the first to tell the story. Critias explained how his grandfather had met with the Athenian lawgiver Solon, who had been to Egypt where priests told the Egyptian story about Atlantis. According to the Egyptians, Solon was told that there was a mighty power based on an island in the Atlantic Ocean. This empire was called Atlantis and it ruled over several other islands and parts of the continents of Africa and Europe.
Atlantis was arranged in concentric rings of alternating water and land. The soil was rich and the engineers were technically advanced. The architecture was said to be extravagant with baths, harbor installations, and barracks. The central plain outside the city was constructed with canals and an elaborate irrigation system. Atlantis was ruled by kings but also had a civil administration. Its military was well organized. Their religious rituals were similar to that of Athens with bull-baiting, sacrifice, and prayer.
Plato told us about the metals found in Atlantis, namely gold, silver, copper, tin, and the mysterious Orichalcum. Plato said that the city walls were plated with Orichalcum (brass). This was a rare alloy back then, found both in Crete and in the Andes in South America. An ancient shipwreck discovered off the coast of Sicily in 2015 contained 39 ingots of Orichalcum, and many claimed this proved the story of Atlantis. Orichalcum was believed to have been a gold/copper alloy that was cheaper than gold but twice the value of copper. Of course, Orichalcum was really a copper-tin or copper-zinc brass. In Virgil's Aeneid, the breastplate of Turnus is described as “stiff with gold and white orichalc.”
The monetary reform of Augustus in 23 BC reintroduced bronze coinage, which had vanished after 84 BC. Here we see the introduction of Orichalcum for the Roman sestertius and the dupondius. The Roman As was struck in nearly pure copper. Therefore, about 300 years after Plato, we do see Orichalcum being introduced as part of the monetary system of Rome. It is clear that Orichalcum was rare at the time Plato wrote. Consequently, this is similar to the stories of America claiming there was so much gold that the streets were paved with it.
As the story is told, Atlantis was located in the Atlantic Ocean. Bronze-age anchors have been discovered at the Gates of Hercules (Strait of Gibraltar), and many people proclaimed this proved Atlantis was real. However, what these proponents fail to take into account is the Minoans. The Minoans were perhaps the first international economy. They traded far and wide, even with Britain, seeking tin to make bronze; hence the Bronze Age. Theirs was a Bronze Age civilization that arose on the island of Crete and flourished from approximately the 27th century BC to the 15th century BC, nearly 1,200 years. Their trading range and colonization extended to Spain, Egypt, Israel (Canaan), Syria (Levantine), Greece, Rhodes, and of course Turkey (Anatolia). Many other cultures referred to them as the people from the islands in the middle of the sea. However, the Minoans had no mineral deposits. They lacked gold as well as silver, or even the ability to mine copper on a large scale. They appear to have had copper mines in colonized cities in Anatolia (Turkey). What has survived are examples of copper ingots that served as MONEY in trade. Keep in mind that gold at this point was too rare to truly serve as MONEY. It is found largely as jewelry in the tombs of royal dignitaries.
The Bronze Age emerged at different times globally, appearing in Greece and China around 3,000 BC, but it came late to Britain, reaching there about 1,900 BC. It is known that copper emerged as a valuable tool in Anatolia (Turkey) as early as 6,500 BC, where it began to replace stone in the creation of tools. It was the development of copper casting that also appears to have aided the urbanization of man in Mesopotamia. By 3,000 BC, copper was in wide use throughout the Middle East and started to move up into Europe. Copper in its pure state appears first; tin was eventually added, creating actual bronze, whereby a bronze sword would break a copper sword. It was this addition of tin that really propelled the transition from copper to bronze, and the tin was coming from England, where vast deposits existed at Cornwall. We know that the Minoans traveled into the Atlantic for trade. Anchors are not conclusive evidence of Atlantis.
As the legend unfolds, Atlantis waged an unprovoked imperialistic war on the rest of Asia and Europe. When Atlantis attacked, Athens showed its excellence as the leader of the Greeks, the much smaller city-state being the only power to stand against Atlantis. Alone, Athens triumphed over the invading Atlantean forces, defeating the enemy, preventing the free from being enslaved, and freeing those who had been enslaved. This part may certainly be embellished and remains doubtful at best. However, following this battle, there were violent earthquakes and floods; Atlantis sank into the sea, and all the Athenian warriors were swallowed up by the earth. This appears to be almost certainly a fiction based on some ancient political realities. Still, the explosive disappearance of an island, some have argued, is a reference to the eruption of Minoan Santorini. The story of Atlantis closely parallels Plato’s notions in The Republic, examining the deteriorating cycle of life in a state.
There have been theories that Atlantis was the Azores, and still others argue it was actually South America, which would explain to some extent the cocaine mummies in Egypt. Yet despite all these theories, when there is an ancient story, despite embellishment, there is often a grain of truth hidden deep within. In this case, Atlantis may not have submerged completely; it could have partially submerged in an earthquake, at least, with some people surviving. Survivors could have made it either to the Americas or to Africa/Europe. What is clear is that a sudden event could have sent a tsunami into the Mediterranean, which then broke the land mass at Istanbul and flooded the valley below, transforming that region into the Black Sea and becoming the story of Noah.
We also have evidence which has surfaced that the Earth was struck by a comet around 12,800 years ago. Scientific American has reported that sediments from six sites across North America (Murray Springs, Ariz.; Bull Creek, Okla.; Gainey, Mich.; Topper, S.C.; Lake Hind, Manitoba; and Chobot, Alberta) have yielded tiny diamonds, which only occur in sediment exposed to extreme temperatures and pressures. The evidence implies that the Earth moved into an ice age, killing off large mammals and setting the course for Global Cooling over the next 1,300 years. This may indeed explain the catastrophic freezing of woolly mammoths in Siberia. Such an event could also have been responsible for the legend of Atlantis, with the survivors migrating and taking their stories with them.
There is also evidence surfacing from stone carvings at one of the oldest sites recorded, located in Anatolia (Turkey). Using a computer programme to show where the constellations would have appeared above Turkey thousands of years ago, researchers were able to pinpoint the comet strike to 10,950BC, the exact time of the Younger Dryas. The Younger Dryas, dated using ice core data from Greenland, was a return to glacial conditions and Global Cooling that temporarily reversed the gradual climatic warming which followed the Last Glacial Maximum, whose ice had begun to recede around 20,000BC.
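The dating arithmetic behind such sky-reconstructions rests on the precession of the equinoxes. Here is a minimal sketch, assuming a constant modern precession rate of about 50.3 arcseconds per year; real planetarium software integrates a far fuller precession-nutation model:

```python
# How far the equinoxes have precessed since 10,950 BC, assuming a
# constant rate of ~50.3 arcseconds/year (a deliberate simplification).

PRECESSION_ARCSEC_PER_YEAR = 50.3

def precession_degrees(year_from, year_to):
    """Total equinoctial precession, in degrees, between two calendar
    years (negative values denote BC)."""
    return (year_to - year_from) * PRECESSION_ARCSEC_PER_YEAR / 3600.0

# From 10,950 BC to the present day:
print(f"{precession_degrees(-10950, 2018):.0f} degrees")  # 181 degrees
```

At roughly 181 degrees, the sky has swung about half of the ~25,800-year precessional cycle since the date the carvings are said to record, which is why the constellations above Anatolia looked so different then.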
Now, there is a very big asteroid which passed by the Earth on September 16th, 2013. What is most disturbing is the fact that its cycle is 19 years, so it will return in 2032. Astronomers have not been able to swear it will not hit the Earth on the next pass in 2032. It was discovered by Ukrainian astronomers with just 10 days to go back in 2013, and the 2013 pass came at a distance of only 4.2 million miles (6.7 million kilometers). If anything alters its orbit, it will come closer and closer. It just so happens to line up on a cyclical basis, which suggests we should begin to look at how to deflect asteroids, and soon.
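The cycle described above is simple enough to tabulate. A back-of-the-envelope sketch, using the 19-year period and 4.2-million-mile miss distance given in the text, and a mean Earth-Moon distance of about 238,855 miles for scale:

```python
# Flyby schedule implied by a fixed 19-year cycle, plus the 2013 miss
# distance expressed in lunar distances for a sense of scale.

MILES_PER_LUNAR_DISTANCE = 238_855  # mean Earth-Moon distance

def future_passes(first_year=2013, period=19, count=3):
    """Years of the next `count` passes after the first observed one."""
    return [first_year + period * i for i in range(1, count + 1)]

def miss_in_lunar_distances(miles):
    return miles / MILES_PER_LUNAR_DISTANCE

print(future_passes())                               # [2032, 2051, 2070]
print(round(miss_in_lunar_distances(4_200_000), 1))  # 17.6
```

Even the 2013 pass, "close" by astronomical standards, was nearly eighteen times farther away than the Moon; the concern is what happens if anything perturbs the orbit between now and 2032.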
It appears that catastrophic cooling may be linked to the Earth being struck by meteors, asteroids, or comets. We are clearly headed into a period of Global Cooling, and this will get worse as we head into 2032. The question becomes: is our model also reflecting that it is once again time for an Earth change caused by an asteroid encounter? Such events are not DOOMSDAY and the end of the world; they appear to be regional. However, a comet striking North America could well have been what froze the animals in Siberia.
If there is a tiny element of truth in the story of Atlantis, one thing it certainly proves is clear: there are ALWAYS survivors. Based upon a review of the history of civilization as well as climate, what resonates profoundly is that events follow a cyclical model of catastrophic occurrences rather than a linear, steady, slow progression of evolution.
Researchers describe a breakthrough in making accurate predictions of weather weeks ahead
February 20, 2018
Colorado State University
The famously intense tropical rainstorms along Earth’s equator occur thousands of miles from the United States. But atmospheric scientists know that, like ripples in a pond, tropical weather creates powerful waves in the atmosphere that travel all the way to North America and have major impacts on weather in the U.S.
These far-flung, interconnected weather processes are crucial to making better, longer-term weather predictions than are currently possible. Colorado State University atmospheric scientists, led by professors Libby Barnes and Eric Maloney, are hard at work to address these longer-term forecasting challenges.
In a new paper in npj Climate and Atmospheric Science, the CSU researchers describe a breakthrough in making accurate predictions of weather weeks ahead. They’ve created an empirical model fed by careful analysis of 37 years of historical weather data. Their model centers on the relationship between two well-known global weather patterns: the Madden-Julian Oscillation and the quasi-biennial oscillation.
According to the study, led by former graduate researcher Bryan Mundhenk, the model, using both these phenomena, allows skillful prediction of the behavior of major rain storms, called atmospheric rivers, three and up to five weeks in advance.
“It’s impressive, considering that current state-of-the-art numerical weather models, such as NOAA’s Global Forecast System, or the European Centre for Medium-Range Weather Forecasts’ operational model, are only skillful up to one to two weeks in advance,” says paper co-author Cory Baggett, a postdoctoral researcher in the Barnes and Maloney labs.
The researchers’ chief aim is improving forecast capabilities within the tricky no-man’s land of “subseasonal to seasonal” timescales: roughly three weeks to three months out. Predictive capabilities that far in advance could save lives and livelihoods, from sounding alarms for floods and mudslides to preparing farmers for long dry seasons. Barnes also leads a federal NOAA task force for improving subseasonal to seasonal forecasting, with the goal of sharpening predictions for hurricanes, heat waves, the polar vortex and more.
Atmospheric rivers aren’t actual waterways, but “rivers in the sky,” according to researchers. They’re intense plumes of water vapor that cause extreme precipitation, plumes so large they resemble rivers in satellite pictures. These “rivers” are responsible for more than half the rainfall in the western U.S.
The Madden-Julian Oscillation is a cluster of rainstorms that moves east along the Equator over 30 to 60 days. The location of the oscillation determines where atmospheric waves will form, and their eventual impact on, say, California. In previous work, the researchers have uncovered key stages of the Madden-Julian Oscillation that affect far-off weather, including atmospheric rivers.
Sitting above the Madden-Julian Oscillation is a very predictable wind pattern called the quasi-biennial oscillation. Over two- to three-year periods, the winds shift east, west and back east again, and almost never deviate. This pattern directly affects the Madden-Julian Oscillation, and thus indirectly affects weather all the way to California and beyond.
The CSU researchers created a model that can accurately predict atmospheric river activity in the western U.S. three weeks from now. Its inputs include the current state of the Madden-Julian Oscillation and the quasi-biennial oscillation. Using information on how atmospheric rivers have previously behaved in response to these oscillations, they found that the quasi-biennial oscillation matters — a lot.
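The empirical (statistical, rather than physics-based) approach the article describes can be illustrated with a toy conditional-frequency model. This is not the authors' code, and the historical records below are invented; it only shows the shape of the idea: look up how atmospheric rivers previously behaved under the current oscillation state.

```python
from collections import defaultdict

# Toy empirical forecast: tally historical atmospheric-river (AR)
# outcomes by oscillation state, then forecast by lookup.
# All records below are invented for illustration.

history = [
    # (MJO phase 1-8, QBO wind direction, AR activity 3 weeks later)
    (3, "east", 1), (3, "east", 1), (3, "east", 0),
    (3, "west", 0), (3, "west", 0), (3, "west", 1),
    (6, "east", 0), (6, "east", 0),
]

def fit(records):
    counts = defaultdict(lambda: [0, 0])  # state -> [AR events, total]
    for mjo, qbo, ar in records:
        counts[(mjo, qbo)][0] += ar
        counts[(mjo, qbo)][1] += 1
    return counts

def forecast(counts, mjo, qbo):
    """Historical frequency of AR activity given the current state,
    or None if that state was never observed."""
    events, total = counts[(mjo, qbo)]
    return events / total if total else None

model = fit(history)
print(forecast(model, 3, "east"))  # 2 of 3 matching historical cases
```

The real model conditions on far richer state information drawn from 37 years of data, but the forecasting step is the same in spirit: a lookup keyed on the current Madden-Julian Oscillation and quasi-biennial oscillation state.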
Armed with their model, the researchers want to identify and understand deficiencies in state-of-the-art numerical weather models that prevent them from predicting weather on these subseasonal time scales.
“It would be worthwhile to develop a good understanding of the physical relationship between the Madden-Julian Oscillation and the quasi-biennial oscillation, and see what can be done to improve models’ simulation of this relationship,” Mundhenk said.
Another logical extension of their work would be to test how well their model can forecast actual rainfall and wind or other severe weather, such as tornadoes and hail.
Bryan D. Mundhenk, Elizabeth A. Barnes, Eric D. Maloney, Cory F. Baggett. Skillful empirical subseasonal prediction of landfalling atmospheric river activity using the Madden–Julian oscillation and quasi-biennial oscillation. npj Climate and Atmospheric Science, 2018; 1 (1) DOI: 10.1038/s41612-017-0008-2
ITHACA, N.Y. – Languages have an intriguing paradox. Languages with lots of speakers, such as English and Mandarin, have large vocabularies with relatively simple grammar. Yet the opposite is also true: Languages with fewer speakers have fewer words but complex grammars.
Why does the size of a population of speakers have opposite effects on vocabulary and grammar?
Through computer simulations, a Cornell University cognitive scientist and his colleagues have shown that ease of learning may explain the paradox. Their work suggests that language, and other aspects of culture, may become simpler as our world becomes more interconnected.
Their study was published in the Proceedings of the Royal Society B: Biological Sciences.
“We were able to show that whether something is easy to learn – like words – or hard to learn – like complex grammar – can explain these opposing tendencies,” said co-author Morten Christiansen, professor of psychology at Cornell University and co-director of the Cognitive Science Program.
The researchers hypothesized that words are easier to learn than aspects of morphology or grammar. “You only need a few exposures to a word to learn it, so it’s easier for words to propagate,” he said.
But learning a new grammatical innovation requires a lengthier learning process. And that’s going to happen more readily in a smaller speech community, because each person is likely to interact with a large proportion of the community, he said. “If you have to have multiple exposures to, say, a complex syntactic rule, in smaller communities it’s easier for it to spread and be maintained in the population.”
Conversely, in a large community, like a big city, one person will talk to only a small proportion of the population. This means that only a few people might be exposed to that complex grammar rule, making it harder for it to survive, he said.
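The exposure mechanism Christiansen describes can be sketched as a toy simulation (my own simplification, not the study's actual model): an innovation starts with a handful of innovators, and a learner adopts it only after hearing it a required number of times.

```python
import random

# Toy sketch of the exposure mechanism. A learner meets `contacts`
# random community members; a fixed handful of innovators use the
# innovation. Easy items (words, needed=1) and hard items (grammar,
# needed=3) fare very differently as the community grows, because
# the innovators become a shrinking fraction of anyone's contacts.

def adoption_rate(pop, innovators=5, contacts=20, needed=1,
                  trials=5000, seed=42):
    rng = random.Random(seed)
    adopted = 0
    for _ in range(trials):
        hits = sum(1 for _ in range(contacts)
                   if rng.randrange(pop) < innovators)
        if hits >= needed:
            adopted += 1
    return adopted / trials

for pop in (50, 5000):
    word = adoption_rate(pop, needed=1)     # easy to learn
    grammar = adoption_rate(pop, needed=3)  # needs repeated exposure
    print(pop, round(word, 3), round(grammar, 3))
```

With these invented numbers, the one-exposure item still spreads occasionally even in the large community, while the three-exposure item effectively vanishes there, reproducing the qualitative asymmetry between vocabulary and grammar.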
This mechanism can explain why all sorts of complex cultural conventions emerge in small communities. For example, bebop developed in the intimate jazz world of 1940s New York City, and the Lindy Hop came out of the close-knit community of 1930s Harlem.
The simulations suggest that language, and possibly other aspects of culture, may become simpler as our world becomes increasingly interconnected, Christiansen said. “This doesn’t necessarily mean that all culture will become overly simple. But perhaps the mainstream parts will become simpler over time.”
Not all hope is lost for those who want to maintain complex cultural traditions, he said: “People can self-organize into smaller communities to counteract that drive toward simplification.”
His co-authors on the study, “Simpler Grammar, Larger Vocabulary: How Population Size Affects Language,” are Florencia Reali of Universidad de los Andes, Colombia, and Nick Chater of University of Warwick, England.
For Cathy O’Neil, behind the apparent impartiality of algorithms lurk murky criteria that deepen injustice.
They are everywhere. In the forms we fill out for job openings. In the risk assessments we undergo in contracts with banks and insurers. In the services we request through our smartphones. In the personalized ads and news that flood our social networks. And they are deepening the gulf of social inequality and putting democracies at risk.
It is decidedly not with enthusiasm that the American Cathy O’Neil views the algorithm revolution: systems capable of organizing the ever more staggering amount of information available on the internet, known as Big Data.
A mathematician trained at Harvard and the Massachusetts Institute of Technology (MIT), two of the most prestigious universities in the world, she abandoned a successful career in finance and in the tech-startup scene in 2012 to study the subject in depth.
Four years later, she published the book Weapons of Math Destruction, a play on the expression “weapons of mass destruction,” and became one of the most respected voices in the country on the side effects of the Big Data economy.
The book is packed with examples of current mathematical models that rank the potential of human beings as students, workers, criminals, voters, and consumers. According to the author, behind these systems’ apparent impartiality hide murky criteria that aggravate injustice.
That is the case with car insurance in the United States. Drivers who had never gotten a single ticket, but who had poor credit because they lived in poor neighborhoods, paid considerably more than drivers with good credit who had already been convicted of drunk driving. “For the insurer, it’s a win-win. A good driver with a poor credit rating represents low risk and a very high return,” she explains.
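The pricing logic in that insurance example can be caricatured in a few lines. Every weight and dollar figure below is invented for illustration; it simply shows how a model that leans heavily on credit score can out-price a clean driving record:

```python
# Caricature of a premium model that weights credit score far more
# heavily than driving record. All figures are invented.

def annual_premium(base=1000, credit_score=700, dui_convictions=0):
    credit_surcharge = max(0, 700 - credit_score) * 2.0  # $2 per point
    dui_surcharge = dui_convictions * 300                # mild penalty
    return base + credit_surcharge + dui_surcharge

clean_driver_poor_credit = annual_premium(credit_score=450)
dui_driver_good_credit = annual_premium(credit_score=780,
                                        dui_convictions=1)
print(clean_driver_poor_credit)  # 1500.0
print(dui_driver_good_credit)    # 1300.0
```

Under these made-up weights, the driver with a spotless record but poor credit pays more than the convicted drunk driver with good credit, which is exactly the pattern O’Neil criticizes.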
The main excerpts from the interview follow:
BBC Brasil – For centuries, researchers have analyzed data to understand behavioral patterns and predict events. What is new about Big Data?
Cathy O’Neil – What sets Big Data apart is the sheer quantity of data available. There is a gigantic mountain of correlated data that can be mined to produce so-called “incidental information.” It is incidental in the sense that a given piece of information is not provided directly; it is indirect. That is why people who analyze Twitter data can work out which politician I would vote for. Or find out that I am gay just from the posts I like on Facebook, even if I never say that I am gay.
“This idea that robots will replace human labor is very fatalistic. We need to react and show that this is a political battle,” says the author.
The point is that this process is cumulative. Now that it is possible to infer a person’s sexual orientation from their behavior on social networks, that will not be “unlearned.” So one of the things that worries me most is that these technologies will only get better over time. Even if the information were to be restricted (which I do not think will happen), that accumulated knowledge will not be lost.
BBC Brasil – The central warning of your book is that algorithms are not neutral, objective tools. On the contrary: they are biased by the worldviews of their programmers and, in general, reinforce prejudice and harm the poorest. Is the dream that the internet could make the world a better place over?
O’Neil – It is true that the internet has made the world a better place in some contexts. But if we weigh the pros against the cons, is the balance positive? It is hard to say. It depends on who answers. Clearly there are many problems. But many of the examples in my book, it is worth stressing, have nothing to do with the internet. Arrests made by the police, or the personality assessments applied to teachers, are not strictly about the internet. There is no way to avoid them, even for people who stay off the internet. Yet all of this was fueled by Big Data technology.
For example: personality tests in job applications. People used to apply for a job by going to the particular store that needed an employee. Today everyone applies online. That is what gives rise to personality tests. So many people apply for each opening that some filter becomes necessary.
BBC Brasil – What is the future of work under algorithms?
O’Neil – Personality tests and résumé-screening programs are some examples of how algorithms are affecting the world of work. That is without mentioning the algorithms that watch people while they work, as happens with teachers and truck drivers. Surveillance is advancing. If things keep going the way they are, it will turn us into robots.
Reproduction of a Facebook ad used to influence the US elections: “personalized, customized ads should not be allowed,” the author argues.
But I do not want to treat it as inevitable that algorithms will turn people into robots, or that robots will replace human labor. I refuse to accept that. It is something we can decide will not happen. It is a political decision. This idea that robots will replace human labor is very fatalistic. We need to react and show that this is a political battle. The problem is that we are so intimidated by the advance of these technologies that we feel there is no way to fight back.
BBC Brasil – And what about technology companies such as Uber? Some scholars use the term “gig economy” to describe the organization of work by companies that run on algorithms.
O’Neil – That is a great example of how we handed power to these gig-economy companies, as if it were an inevitable process. They are certainly doing very well at circumventing labor laws, but that does not mean they should be allowed to act that way. These companies should pay better wages and guarantee better working conditions.
Meanwhile, the movements that represent workers have not yet come to grips with the changes under way. But this is not essentially an algorithmic question. What we should be asking is: how are these people being treated? And if they are not being treated well, we should create laws to guarantee that they are.
I am not saying that algorithms have nothing to do with it; they do. They are a device these companies use to claim that they cannot be considered these workers’ “bosses.” Uber, for example, says its drivers are self-employed and that the algorithm is the boss. That is a great example of how we still have not worked out what “accountability” means in the world of algorithms. It is a question I have been working on for some time: which people will be held responsible for the mistakes of algorithms?
BBC Brasil – In the book you argue that it is possible to create algorithms for good, and that the main challenge is guaranteeing transparency. Yet the secret of many companies’ success is precisely keeping the workings of their algorithms secret. How can that contradiction be resolved?
O’Neil – I do not think transparency is necessary for an algorithm to be good. What I need to know is whether it works well. I need indicators that it works well, but that does not mean I need to see the algorithm’s source code. The indicators can be of another kind; it is more a matter of auditing than of opening up the code.
The best way to resolve this is to have algorithms audited by third parties. It is not advisable to trust the very companies that created them. It would have to be a third party, with legitimacy, to determine whether they are operating fairly, against some defined criteria of fairness, and proceeding within the law.
For Cathy O’Neil, political polarization and fake news will only stop if “we shut down Facebook.”
BBC Brasil – You recently wrote an article for the New York Times arguing that the academic community should take a greater part in this discussion. Could universities be the third party you are talking about?
O’Neil – Yes, absolutely. I argue that universities should be the place to think about how to build trustworthiness, and about what information to require in order to determine whether algorithms are working.
BBC Brasil – When Edward Snowden revealed that the American government was spying on people’s lives through the internet, many people were not surprised. Do people seem willing to give up their privacy in the name of the efficiency of digital life?
O’Neil – I think we are only now realizing the true costs of that trade. Ten years late, we are realizing that free internet services are not free at all, because we hand over our personal data. Some argue that there is a consensual exchange of data for services, but no one makes that exchange in a truly conscious way; we do it without paying much attention. Moreover, it is never clear to us what we are actually losing.
But it is not the fact that the NSA (National Security Agency) spies on us that is teaching us the costs of this trade. It has more to do with the jobs we get or fail to get. Or with the insurance benefits and credit cards we are granted or denied. I would like all of this to be much clearer.
At the individual level, even today, ten years on, people do not realize what is happening. But as a society we are beginning to understand that we were cheated in this exchange. And it will take time to work out how to change the terms of the deal.
“Uber, for example, says its drivers are self-employed and that the algorithm is the boss. That is a great example of how we still have not worked out what ‘accountability’ means in the world of algorithms,” says O’Neil.
BBC Brasil – The final chapter of your book discusses Donald Trump’s election victory and assesses how opinion polls and social networks influenced the race for the White House. Next year, Brazil’s elections look set to be the most turbulent in three decades. What advice would you give Brazilians?
O’Neil – My God, that is a very hard one! It is happening all over the world. And I do not know whether it will stop, short of shutting down Facebook (which, by the way, I suggest we do). Seriously, though: political campaigning on the internet should be allowed, but personalized, customized ads should not; everyone should receive the same ads. I know this is not yet a realistic proposal, but I think we should think big, because the problem is big. And I cannot think of another way to solve it.
Of course, this would be one element of a larger set of measures, because nothing will stop foolish people from believing what they want to believe, and from posting about it. In other words, it is not always a problem of the algorithm. Sometimes it is a problem of the people themselves. The fake news phenomenon is an example. Algorithms make the situation worse by personalizing propaganda and amplifying its reach, but even if Facebook’s algorithm did not exist and political ads were banned from the internet, there would still be fools spreading fake news that would end up going viral on social networks. And I do not know what to do about that, other than shutting down the social networks.
I have three children; they are 17, 15, and 9. They do not use social media because they think it is silly, and they do not believe anything they see on it. In fact, they no longer believe anything at all, which is not good either. But the upside is that they are learning to check information on their own. So they are much more conscious consumers than those of my generation. I am 45, and my generation is the worst. The things I saw people my age sharing after Trump’s election were ridiculous. People posting ideas about how to put Hillary Clinton in the presidency even though they knew Trump had won. It was ridiculous. The hope is to have a generation of smarter people.
Since the 2008 financial crisis, colleges and universities have faced increased pressure to identify essential disciplines, and cut the rest. In 2009, Washington State University announced it would eliminate the department of theatre and dance, the department of community and rural sociology, and the German major – the same year that the University of Louisiana at Lafayette ended its philosophy major. In 2012, Emory University in Atlanta did away with the visual arts department and its journalism programme. The cutbacks aren’t restricted to the humanities: in 2011, the state of Texas announced it would eliminate nearly half of its public undergraduate physics programmes. Even when there’s no downsizing, faculty salaries have been frozen and departmental budgets have shrunk.
But despite the funding crunch, it’s a bull market for academic economists. According to a 2015 sociological study in the Journal of Economic Perspectives, the median salary of economics teachers in 2012 increased to $103,000 – nearly $30,000 more than sociologists. For the top 10 per cent of economists, that figure jumps to $160,000, higher than the next most lucrative academic discipline – engineering. These figures, stress the study’s authors, do not include other sources of income such as consulting fees for banks and hedge funds, which, as many learned from the documentary Inside Job (2010), are often substantial. (Ben Bernanke, a former academic economist and ex-chairman of the Federal Reserve, earns $200,000-$400,000 for a single appearance.)
Unlike engineers and chemists, economists cannot point to concrete objects – cell phones, plastic – to justify the high valuation of their discipline. Nor, in the case of financial economics and macroeconomics, can they point to the predictive power of their theories. Hedge funds employ cutting-edge economists who command princely fees, but routinely underperform index funds. Eight years ago, Warren Buffett made a 10-year, $1 million bet that a portfolio of hedge funds would lose to the S&P 500, and it looks like he’s going to collect. In 1998, a fund that boasted two Nobel Laureates as advisors collapsed, nearly causing a global financial crisis.
The failure of the field to predict the 2008 crisis has also been well-documented. In 2003, for example, only five years before the Great Recession, the Nobel Laureate Robert E Lucas Jr told the American Economic Association that ‘macroeconomics […] has succeeded: its central problem of depression prevention has been solved’. Short-term predictions fare little better. In April 2014, for instance, a survey of 67 economists yielded 100 per cent consensus: interest rates would rise over the next six months. Instead, they fell. A lot.
Nonetheless, surveys indicate that economists see their discipline as ‘the most scientific of the social sciences’. What is the basis of this collective faith, shared by universities, presidents and billionaires? Shouldn’t successful and powerful people be the first to spot the exaggerated worth of a discipline, and the least likely to pay for it?
In the hypothetical worlds of rational markets, where much of economic theory is set, perhaps. But real-world history tells a different story, of mathematical models masquerading as science and a public eager to buy them, mistaking elegant equations for empirical accuracy.
As an extreme example, take the extraordinary success of Evangeline Adams, a turn-of-the-20th-century astrologer whose clients included the president of Prudential Insurance, two presidents of the New York Stock Exchange, the steel magnate Charles M Schwab, and the banker J P Morgan. To understand why titans of finance would consult Adams about the market, it is essential to recall that astrology used to be a technical discipline, requiring reams of astronomical data and mastery of specialised mathematical formulas. ‘An astrologer’ is, in fact, the Oxford English Dictionary’s second definition of ‘mathematician’. For centuries, mapping stars was the job of mathematicians, a job motivated and funded by the widespread belief that star-maps were good guides to earthly affairs. The best astrology required the best astronomy, and the best astronomy was done by mathematicians – exactly the kind of person whose authority might appeal to bankers and financiers.
In fact, when Adams was arrested in 1914 for violating a New York law against astrology, it was mathematics that eventually exonerated her. During the trial, her lawyer Clark L Jordan emphasised mathematics in order to distinguish his client’s practice from superstition, calling astrology ‘a mathematical or exact science’. Adams herself demonstrated this ‘scientific’ method by reading the astrological chart of the judge’s son. The judge was impressed: the plaintiff, he observed, went through a ‘mathematical process to get at her conclusions… I am satisfied that the element of fraud… is absent here.’
Romer compares debates among economists to those between 16th-century advocates of heliocentrism and geocentrism
The enchanting force of mathematics blinded the judge – and Adams’s prestigious clients – to the fact that astrology relies upon a highly unscientific premise, that the position of stars predicts personality traits and human affairs such as the economy. It is this enchanting force that explains the enduring popularity of financial astrology, even today. The historian Caley Horan at the Massachusetts Institute of Technology described to me how computing technology made financial astrology explode in the 1970s and ’80s. ‘Within the world of finance, there’s always a superstitious, quasi-spiritual trend to find meaning in markets,’ said Horan. ‘Technical analysts at big banks, they’re trying to find patterns in past market behaviour, so it’s not a leap for them to go to astrology.’ In 2000, USA Today quoted Robin Griffiths, the chief technical analyst at HSBC, the world’s third largest bank, saying that ‘most astrology stuff doesn’t check out, but some of it does’.
Ultimately, the problem isn’t with worshipping models of the stars, but rather with uncritical worship of the language used to model them, and nowhere is this more prevalent than in economics. The economist Paul Romer at New York University has recently begun calling attention to an issue he dubs ‘mathiness’ – first in the paper ‘Mathiness in the Theory of Economic Growth’ (2015) and then in a series of blog posts. Romer believes that macroeconomics, plagued by mathiness, is failing to progress as a true science should, and compares debates among economists to those between 16th-century advocates of heliocentrism and geocentrism. Mathematics, he acknowledges, can help economists to clarify their thinking and reasoning. But the ubiquity of mathematical theory in economics also has serious downsides: it creates a high barrier to entry for those who want to participate in the professional dialogue, and makes checking someone’s work excessively laborious. Worst of all, it imbues economic theory with unearned empirical authority.
‘I’ve come to the position that there should be a stronger bias against the use of math,’ Romer explained to me. ‘If somebody came and said: “Look, I have this Earth-changing insight about economics, but the only way I can express it is by making use of the quirks of the Latin language”, we’d say go to hell, unless they could convince us it was really essential. The burden of proof is on them.’
Right now, however, there is widespread bias in favour of using mathematics. The success of math-heavy disciplines such as physics and chemistry has endowed mathematical formulas with decisive authoritative force. Lord Kelvin, the 19th-century mathematical physicist, expressed this quantitative obsession:
When you can measure what you are speaking about and express it in numbers you know something about it; but when you cannot measure it… in numbers, your knowledge is of a meagre and unsatisfactory kind.
The trouble with Kelvin’s statement is that measurement and mathematics do not guarantee the status of science – they guarantee only the semblance of science. When the presumptions or conclusions of a scientific theory are absurd or simply false, the theory ought to be questioned and, eventually, rejected. The discipline of economics, however, is presently so blinkered by the talismanic authority of mathematics that theories go overvalued and unchecked.
Romer is not the first to elaborate the mathiness critique. In 1886, an article in Science accused economics of misusing the language of the physical sciences to conceal ‘emptiness behind a breastwork of mathematical formulas’. More recently, Deirdre N McCloskey’s The Rhetoric of Economics (1998) and Robert H Nelson’s Economics as Religion (2001) both argued that mathematics in economic theory serves, in McCloskey’s words, primarily to deliver the message ‘Look at how very scientific I am.’
After the Great Recession, the failure of economic science to protect our economy was once again impossible to ignore. In 2009, the Nobel Laureate Paul Krugman tried to explain it in The New York Times with a version of the mathiness diagnosis. ‘As I see it,’ he wrote, ‘the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.’ Krugman named economists’ ‘desire… to show off their mathematical prowess’ as the ‘central cause of the profession’s failure’.
The mathiness critique isn’t limited to macroeconomics. In 2014, the Stanford financial economist Paul Pfleiderer published the paper ‘Chameleons: The Misuse of Theoretical Models in Finance and Economics’, which helped to inspire Romer’s understanding of mathiness. Pfleiderer called attention to the prevalence of ‘chameleons’ – economic models ‘with dubious connections to the real world’ that substitute ‘mathematical elegance’ for empirical accuracy. Like Romer, Pfleiderer wants economists to be transparent about this sleight of hand. ‘Modelling,’ he told me, ‘is now elevated to the point where things have validity just because you can come up with a model.’
The notion that an entire culture – not just a few eccentric financiers – could be bewitched by empty, extravagant theories might seem absurd. How could all those people, all that math, be mistaken? This was my own feeling as I began investigating mathiness and the shaky foundations of modern economic science. Yet, as a scholar of Chinese religion, I had seen this kind of mistake before, in ancient Chinese attitudes towards the astral sciences. Back then, governments invested incredible amounts of money in mathematical models of the stars. To evaluate those models, government officials had to rely on a small cadre of experts who actually understood the mathematics – experts riven by ideological differences, who couldn’t even agree on how to test their models. And, of course, despite collective faith that these models would improve the fate of the Chinese people, they did not.
Astral Science in Early Imperial China, a forthcoming book by the historian Daniel P Morgan, shows that in ancient China, as in the Western world, the most valuable type of mathematics was devoted to the realm of divinity – to the sky, in their case (and to the market, in ours). Just as astrology and mathematics were once synonymous in the West, the Chinese spoke of li, the science of calendrics, which early dictionaries also glossed as ‘calculation’, ‘numbers’ and ‘order’. Li models, like macroeconomic theories, were considered essential to good governance. In the classic Book of Documents, the legendary sage king Yao transfers the throne to his successor with mention of a single duty: ‘Yao said: “Oh thou, Shun! The li numbers of heaven rest in thy person.”’
China’s oldest mathematical text invokes astronomy and divine kingship in its very title – The Arithmetical Classic of the Gnomon of the Zhou. The title’s inclusion of ‘Zhou’ recalls the mythic Eden of the Western Zhou dynasty (1045–771 BCE), implying that paradise on Earth can be realised through proper calculation. The book’s introduction to the Pythagorean theorem asserts that ‘the methods used by Yu the Great in governing the world were derived from these numbers’. It was an unquestioned article of faith: the mathematical patterns that govern the stars also govern the world. Faith in a divine, invisible hand, made visible by mathematics. No wonder that a newly discovered text fragment from 200 BCE extolls the virtues of mathematics over the humanities. In it, a student asks his teacher whether he should spend more time learning speech or numbers. His teacher replies: ‘If my good sir cannot fathom both at once, then abandon speech and fathom numbers, [for] numbers can speak, [but] speech cannot number.’
Modern governments, universities and businesses underwrite the production of economic theory with huge amounts of capital. The same was true for li production in ancient China. The emperor – the ‘Son of Heaven’ – spent astronomical sums refining mathematical models of the stars. Take the armillary sphere, such as the two-metre cage of graduated bronze rings in Nanjing, made to represent the celestial sphere and used to visualise data in three-dimensions. As Morgan emphasises, the sphere was literally made of money. Bronze being the basis of the currency, governments were smelting cash by the metric ton to pour it into li. A divine, mathematical world-engine, built of cash, sanctifying the powers that be.
The enormous investment in li depended on a huge assumption: that good government, successful rituals and agricultural productivity all depended upon the accuracy of li. But there were, in fact, no practical advantages to the continued refinement of li models. The calendar rounded off decimal points such that the difference between two models, hotly contested in theory, didn’t matter to the final product. The work of selecting auspicious days for imperial ceremonies thus benefited only in appearance from mathematical rigour. And of course the comets, plagues and earthquakes that these ceremonies promised to avert kept on coming. Farmers, for their part, went about business as usual. Occasional governmental efforts to scientifically micromanage farm life in different climes using li ended in famine and mass migration.
Like many economic models today, li models were less important to practical affairs than their creators (and consumers) thought them to be. And, like today, only a few people could understand them. In 101 BCE, Emperor Wudi tasked high-level bureaucrats – including the Great Director of the Stars – with creating a new li that would glorify the beginning of his path to immortality. The bureaucrats refused the task because ‘they couldn’t do the math’, and recommended the emperor outsource it to experts.
The debates of these ancient li experts bear a striking resemblance to those of present-day economists. In 223 CE, a petition was submitted to the emperor asking him to approve tests of a new li model developed by the assistant director of the astronomical office, a man named Han Yi.
At the time of the petition, Han Yi’s model, and its competitor, the so-called Supernal Icon, had already been subjected to three years of ‘reference’, ‘comparison’ and ‘exchange’. Still, no one could agree which one was better. Nor, for that matter, was there any agreement on how they should be tested.
In the end, a live trial involving the prediction of eclipses and heliacal risings was used to settle the debate. With the benefit of hindsight, we can see that this trial was seriously flawed. The heliacal rising (first visibility) of planets depends on non-mathematical factors such as eyesight and atmospheric conditions. That’s not to mention the scoring of the trial, which was modelled on archery competitions. Archers scored points for proximity to the bullseye, with no consideration for overall accuracy. The equivalent in economic theory might be to grant a model high points for success in predicting short-term markets, while failing to deduct for missing the Great Recession.
None of this is to say that li models were useless or inherently unscientific. For the most part, li experts were genuine mathematical virtuosos who valued the integrity of their discipline. Despite being based on an inaccurate assumption – that the Earth was at the centre of the cosmos – their models really did work to predict celestial motions. Imperfect though the live trial might have been, it indicates that superior predictive power was a theory’s most important virtue. All of this is consistent with real science, and Chinese astronomy progressed as a science, until it reached the limits imposed by its assumptions.
However, there was no science to the belief that accurate li would improve the outcome of rituals, agriculture or government policy. No science to the Hall of Light, a temple for the emperor built on the model of a magic square. There, by numeric ritual gesture, the Son of Heaven was thought to channel the invisible order of heaven for the prosperity of man. This was quasi-theology, the belief that heavenly patterns – mathematical patterns – could be used to model every event in the natural world, in politics, even the body. Macro- and microcosm were scaled reflections of one another, yin and yang in a unifying, salvific mathematical vision. The expensive gadgets, the personnel, the bureaucracy, the debates, the competition – all of this testified to the divinely authoritative power of mathematics. The result, then as now, was overvaluation of mathematical models based on unscientific exaggerations of their utility.
In ancient China it would have been unfair to blame li experts for the pseudoscientific exploitation of their theories. These men had no way to evaluate the scientific merits of assumptions and theories – ‘science’, in a formalised, post-Enlightenment sense, didn’t really exist. But today it is possible to distinguish, albeit roughly, science from pseudoscience, astronomy from astrology. Hypothetical theories, whether those of economists or conspiracists, aren’t inherently pseudoscientific. Conspiracy theories can be diverting – even instructive – flights of fancy. They become pseudoscience only when promoted from fiction to fact without sufficient evidence.
Romer believes that fellow economists know the truth about their discipline, but don’t want to admit it. ‘If you get people to lower their shield, they’ll tell you it’s a big game they’re playing,’ he told me. ‘They’ll say: “Paul, you may be right, but this makes us look really bad, and it’s going to make it hard for us to recruit young people.”’
Demanding more honesty seems reasonable, but it presumes that economists understand the tenuous relationship between mathematical models and scientific legitimacy. In fact, many assume the connection is obvious – just as in ancient China, the connection between li and the world was taken for granted. When reflecting in 1999 on what makes economics more scientific than the other social sciences, the Harvard economist Richard B Freeman explained that economics ‘attracts stronger students than [political science or sociology], and our courses are more mathematically demanding’. In Lives of the Laureates (2004), Robert E Lucas Jr writes rhapsodically about the importance of mathematics: ‘Economic theory is mathematical analysis. Everything else is just pictures and talk.’ Lucas’s veneration of mathematics leads him to adopt a method that can only be described as a subversion of empirical science:
The construction of theoretical models is our way to bring order to the way we think about the world, but the process necessarily involves ignoring some evidence or alternative theories – setting them aside. That can be hard to do – facts are facts – and sometimes my unconscious mind carries out the abstraction for me: I simply fail to see some of the data or some alternative theory.
Even for those who agree with Romer, conflict of interest still poses a problem. Why would skeptical astronomers question the emperor’s faith in their models? In a phone conversation, Daniel Hausman, a philosopher of economics at the University of Wisconsin, put it bluntly: ‘If you reject the power of theory, you demote economists from their thrones. They don’t want to become like sociologists.’
George F DeMartino, an economist and an ethicist at the University of Denver, frames the issue in economic terms. ‘The interest of the profession is in pursuing its analysis in a language that’s inaccessible to laypeople and even some economists,’ he explained to me. ‘What we’ve done is monopolise this kind of expertise, and we of all people know how that gives us power.’
Every economist I interviewed agreed that conflicts of interest were highly problematic for the scientific integrity of their field – but only tenured ones were willing to go on the record. ‘In economics and finance, if I’m trying to decide whether I’m going to write something favourable or unfavourable to bankers, well, if it’s favourable that might get me a dinner in Manhattan with movers and shakers,’ Pfleiderer said to me. ‘I’ve written articles that wouldn’t curry favour with bankers but I did that when I had tenure.’
Then there’s the additional problem of sunk-cost bias. If you’ve invested in an armillary sphere, it’s painful to admit that it doesn’t perform as advertised. When confronted with their profession’s lack of predictive accuracy, some economists find it difficult to admit the truth. Easier, instead, to double down, like the economist John H Cochrane at the University of Chicago. The problem isn’t too much mathematics, he writes in response to Krugman’s 2009 post-Great-Recession mea culpa for the field, but rather ‘that we don’t have enough math’. Astrology doesn’t work, sure, but only because the armillary sphere isn’t big enough and the equations aren’t good enough.
If overhauling economics depended solely on economists, then mathiness, conflict of interest and sunk-cost bias could easily prove insurmountable. Fortunately, non-experts also participate in the market for economic theory. If people remain enchanted by PhDs and Nobel Prizes awarded for the production of complicated mathematical theories, those theories will remain valuable. If they become disenchanted, the value will drop.
Economists who rationalise their discipline’s value can be convincing, especially with prestige and mathiness on their side. But there’s no reason to keep believing them. The pejorative verb ‘rationalise’ itself warns of mathiness, reminding us that we often deceive each other by making prior convictions, biases and ideological positions look ‘rational’, a word that confuses truth with mathematical reasoning. To be rational is, simply, to think in ratios, like the ratios that govern the geometry of the stars. Yet when mathematical theory is the ultimate arbiter of truth, it becomes difficult to see the difference between science and pseudoscience. The result is people like the judge in Evangeline Adams’s trial, or the Son of Heaven in ancient China, who trust the mathematical exactitude of theories without considering their performance – that is, who confuse math with science, rationality with reality.
There is no longer any excuse for making the same mistake with economic theory. For more than a century, the public has been warned, and the way forward is clear. It’s time to stop wasting our money and recognise the high priests for what they really are: gifted social scientists who excel at producing mathematical explanations of economies, but who fail, like astrologers before them, at prophecy.
Geneticists tell us that somewhere between 1 and 5 percent of the genome of modern Europeans and Asians consists of DNA inherited from Neanderthals, our prehistoric cousins.
At Vanderbilt University, John Anthony Capra, an evolutionary genomics professor, has been combining high-powered computation and a medical records databank to learn what a Neanderthal heritage — even a fractional one — might mean for people today.
We spoke for two hours when Dr. Capra, 35, recently passed through New York City. An edited and condensed version of the conversation follows.
Q. Let’s begin with an indiscreet question. How did contemporary people come to have Neanderthal DNA on their genomes?
A. We hypothesize that roughly 50,000 years ago, when the ancestors of modern humans migrated out of Africa and into Eurasia, they encountered Neanderthals. Matings must have occurred then. And later.
One reason we deduce this is because the descendants of those who remained in Africa — present day Africans — don’t have Neanderthal DNA.
What does that mean for people who have it?
At my lab, we’ve been doing genetic testing on the blood samples of 28,000 patients at Vanderbilt and eight other medical centers across the country. Computers help us pinpoint where on the human genome this Neanderthal DNA is, and we run that against information from the patients’ anonymized medical records. We’re looking for associations.
What we’ve been finding is that Neanderthal DNA has a subtle influence on risk for disease. It affects our immune system and how we respond to different immune challenges. It affects our skin. You’re slightly more prone to a condition where you can get scaly lesions after extreme sun exposure. There’s an increased risk for blood clots and tobacco addiction.
To our surprise, it appears that some Neanderthal DNA can increase the risk for depression; however, there are other Neanderthal bits that decrease the risk. Roughly 1 to 2 percent of one’s risk for depression is determined by Neanderthal DNA. It all depends on where on the genome it’s located.
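The kind of scan Capra describes — testing, trait by trait, whether carriers of an archaic variant show a condition more often than non-carriers — can be sketched with a simple 2×2 chi-square test. Everything below (the cohort size is borrowed from the article, but the carrier frequency, risk figures and trait are invented) is simulated purely for illustration; it is not the Vanderbilt data or pipeline.

```python
import random

random.seed(42)

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Simulated cohort: ~30% carry the archaic variant, and carriers get a
# modestly higher risk for the (hypothetical) trait.
cohort = []
for _ in range(28000):
    carrier = random.random() < 0.30
    affected = random.random() < (0.12 if carrier else 0.10)
    cohort.append((carrier, affected))

a = sum(1 for k, t in cohort if k and t)        # carriers with the trait
b = sum(1 for k, t in cohort if k and not t)    # carriers without it
c_ = sum(1 for k, t in cohort if not k and t)   # non-carriers with it
d = sum(1 for k, t in cohort if not k and not t)

chi2 = chi_square_2x2(a, b, c_, d)
print(f"chi-square = {chi2:.1f}")  # above 3.84 hints at association (p < 0.05, 1 df)
```

Real phenome-wide scans use regression models that adjust for ancestry and other covariates, and must correct for the thousands of traits tested at once; the 2×2 test is only the core idea.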
Was there ever an upside to having Neanderthal DNA?
It probably helped our ancestors survive in prehistoric Europe. When humans migrated into Eurasia, they encountered unfamiliar hazards and pathogens. By mating with Neanderthals, they gave their offspring needed defenses and immunities.
That trait for blood clotting helped wounds close up quickly. In the modern world, however, this trait means greater risk for stroke and pregnancy complications. What helped us then doesn’t necessarily now.
Did you say earlier that Neanderthal DNA increases susceptibility to nicotine addiction?
Yes. Neanderthal DNA can mean you’re more likely to get hooked on nicotine, even though there were no tobacco plants in archaic Europe.
We think this might be because there’s a bit of Neanderthal DNA right next to a human neurotransmitter-receptor gene implicated in a generalized risk for addiction. In this case and probably others, we think the Neanderthal bits on the genome may serve as switches that turn human genes on or off.
Aside from the Neanderthals, do we know if our ancestors mated with other hominids?
We think they did. Sometimes when we’re examining genomes, we can see the genetic afterimages of hominids who haven’t even been identified yet.
A few years ago, the Swedish geneticist Svante Paabo received an unusual fossilized bone fragment from Siberia. He extracted the DNA, sequenced it and realized it was neither human nor Neanderthal. What Paabo found was a previously unknown hominid he named Denisovan, after the cave where it had been discovered. It turned out that Denisovan DNA can be found on the genomes of modern Southeast Asians and New Guineans.
Have you long been interested in genetics?
Growing up, I was very interested in history, but I also loved computers. I ended up majoring in computer science at college and going to graduate school in it; however, during my first year in graduate school, I realized I wasn’t very motivated by the problems that computer scientists worked on.
Fortunately, around that time — the early 2000s — it was becoming clear that people with computational skills could have a big impact in biology and genetics. The human genome had just been mapped. What an accomplishment! We now had the code to what makes you, you, and me, me. I wanted to be part of that kind of work.
So I switched over to biology. And it was there that I heard about a new field where you used computation and genetics research to look back in time — evolutionary genomics.
There may be no written records from prehistory, but genomes are a living record. If we can find ways to read them, we can discover things we couldn’t know any other way.
Not long ago, the two top editors of The New England Journal of Medicine published an editorial questioning “data sharing,” a common practice in which scientists reuse, for their own studies, raw data that other researchers have collected. They labeled some of the recycling researchers “data parasites.” How did you feel when you read that?
I was upset. The data sets we used were not originally collected to specifically study Neanderthal DNA in modern humans. Thousands of patients at Vanderbilt consented to have their blood and their medical records deposited in a “biobank” to find genetic diseases.
Three years ago, when I set up my lab at Vanderbilt, I saw the potential of the biobank for studying both genetic diseases and human evolution. I wrote special computer programs so that we could mine existing data for these purposes.
That’s not being a “parasite.” That’s moving knowledge forward. I suspect that most of the patients who contributed their information are pleased to see it used in a wider way.
What has been the response to your Neanderthal research since you published it last year in the journal Science?
Some of it’s very touching. People are interested in learning about where they came from. Some of it is a little silly. “I have a lot of hair on my legs — is that from Neanderthals?”
But I received racist inquiries, too. I got calls from all over the world from people who thought that since Africans didn’t interbreed with Neanderthals, this somehow justified their ideas of white superiority.
It was illogical. Actually, Neanderthal DNA is mostly bad for us — though that didn’t bother them.
As you do your studies, do you ever wonder about what the lives of the Neanderthals were like?
It’s hard not to. Genetics has taught us a tremendous amount about that, and there’s a lot of evidence that they were much more human than apelike.
They’ve gotten a bad rap. We tend to think of them as dumb and brutish. There’s no reason to believe that. Maybe those of us of European heritage should be thinking, “Let’s improve their standing in the popular imagination. They’re our ancestors, too.”
Some of your favourite science words are making a comeback.
2 DEC 2016
Researchers analysing several centuries of literature have spotted a strange trend in our language patterns: the words we use tend to fall in and out of favour in a cycle that lasts around 14 years.
Scientists ran computer scripts to track patterns stretching back to the year 1700 through the Google Ngram Viewer database, which monitors language use across more than 4.5 million digitised books. In doing so, they identified a strange oscillation across 5,630 common nouns.
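A toy version of this kind of analysis — purely illustrative, not the authors’ actual method — builds a yearly frequency series, removes the long-term trend with a least-squares line, and then looks for the lag that maximises the autocorrelation of the residual. The series below is synthetic, with a 14-year oscillation deliberately built in, so the procedure should recover that period:

```python
import math

# Synthetic yearly "word frequency": a slow drift plus a 14-year oscillation.
years = range(1700, 2009)
series = [0.001 * (y - 1700) + math.sin(2 * math.pi * (y - 1700) / 14)
          for y in years]

# Detrend with an ordinary least-squares line.
n = len(series)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(series) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series))
         / sum((x - x_mean) ** 2 for x in xs))
resid = [y - (y_mean + slope * (x - x_mean)) for x, y in zip(xs, series)]

def autocorr(z, lag):
    """Sample autocorrelation of the series z at the given lag."""
    m = sum(z) / len(z)
    den = sum((v - m) ** 2 for v in z)
    return sum((z[i] - m) * (z[i + lag] - m)
               for i in range(len(z) - lag)) / den

# The lag with the strongest autocorrelation is the dominant cycle length.
best_lag = max(range(2, 40), key=lambda k: autocorr(resid, k))
print("dominant cycle:", best_lag, "years")
```

On real Ngram counts the signal is far noisier, and the published study used more careful spectral methods, but the autocorrelation peak is the intuition behind spotting a ~14-year rhythm.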
The team says the discovery not only shows how writers and the population at large use words to express themselves – it also affects the topics we choose to discuss.
“It’s very difficult to imagine a random phenomenon that will give you this pattern,” Marcelo Montemurro from the University of Manchester in the UK told Sophia Chen at New Scientist.
“Assuming these patterns reflect some cultural dynamics, I hope this develops into better understanding of why we change the topics we discuss,” he added. “We might learn why writers get tired of the same thing and choose something new.”
The 14-year pattern of words coming into and out of widespread use was surprisingly consistent, although the researchers found that in recent years the cycles have begun to get longer by a year or two. The cycles are also more pronounced when it comes to certain words.
What’s interesting is how related words seem to rise and fall together in usage. For example, royalty-related words like “king”, “queen”, and “prince” appear to be on the crest of a usage wave, which means they could soon fall out of favour.
By contrast, a number of scientific terms, including “astronomer”, “mathematician”, and “eclipse” could soon be on the rebound, having dropped in usage recently.
According to the analysis, the same phenomenon happens with verbs as well, though not to the same extent as with nouns, and the academics found similar 14-year patterns in French, German, Italian, Russian, and Spanish, so this isn’t exclusive to English.
The study suggests that words get a certain momentum, causing more and more people to use them, before reaching a saturation point, where writers start looking for alternatives.
Montemurro and fellow researcher Damián Zanette from the National Council for Scientific and Technical Research in Argentina aren’t sure what’s causing this, although they’re willing to make some guesses.
“We expect that this behaviour is related to changes in the cultural environment that, in turn, stir the thematic focus of the writers represented in the Google database,” the researchers write in their paper.
“It’s fascinating to look for cultural factors that might affect this, but we also expect certain periodicities from random fluctuations,” biological scientist Mark Pagel, from the University of Reading in the UK, who wasn’t involved in the research, told New Scientist.
“Now and then, a word like ‘apple’ is going to be written more, and its popularity will go up,” he added. “But then it’ll fall back to a long-term average.”
It’s clear that language is constantly evolving over time, but a resource like the Google Ngram Viewer gives scientists unprecedented access to word use and language trends across the centuries, at least as far as the written word goes.
One size does not always fit all, especially when it comes to global climate models, according to Penn State climate researchers, who caution users of climate model projections to take into account the increased uncertainties when assessing local climate scenarios.
“The impacts of climate change rightfully concern policy makers and stakeholders who need to make decisions about how to cope with a changing climate,” said Fuqing Zhang, professor of meteorology and director, Center for Advanced Data Assimilation and Predictability Techniques, Penn State. “They often rely upon climate model projections at regional and local scales in their decision making.”
Zhang and Michael Mann, Distinguished Professor of Atmospheric Science and director of the Earth System Science Center, were concerned that the direct use of climate model output at local or even regional scales could produce inaccurate information. They focused on two key climate variables, temperature and precipitation.
They found that projections of temperature changes with global climate models became increasingly uncertain at scales below roughly 600 horizontal miles, a distance equivalent to the combined widths of Pennsylvania, Ohio and Indiana. While climate models might provide useful information about the overall warming expected for, say, the Midwest, predicting the difference between the warming of Indianapolis and Pittsburgh might prove futile.
Regional changes in precipitation were even more challenging to predict, with estimates becoming highly uncertain at scales below roughly 1200 miles, equivalent to the combined width of all the states from the Atlantic Ocean through New Jersey across Nebraska. The difference between changing rainfall totals in Philadelphia and Omaha due to global warming, for example, would be difficult to assess. The researchers report the results of their study in the August issue of Advances in Atmospheric Sciences.
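Why spatial averaging matters can be shown with a toy ensemble — not the paper’s analysis, and with entirely made-up numbers. Suppose every “model” agrees on a large-scale warming of 2 °C but disagrees cell by cell through independent noise; the disagreement between models shrinks as projections are averaged over wider regions:

```python
import random
import statistics

random.seed(1)
n_models, n_cells = 10, 240

# Each toy model: the same 2 C warming everywhere, plus per-cell noise (sd = 1 C).
models = [[2.0 + random.gauss(0, 1.0) for _ in range(n_cells)]
          for _ in range(n_models)]

def spread_at_scale(block):
    """Average across-model standard deviation of mean warming over
    blocks of `block` adjacent grid cells."""
    spreads = []
    for start in range(0, n_cells, block):
        means = [statistics.mean(m[start:start + block]) for m in models]
        spreads.append(statistics.stdev(means))
    return statistics.mean(spreads)

for block in (1, 10, 60, 240):
    print(f"averaging over {block:3d} cells: "
          f"inter-model spread = {spread_at_scale(block):.2f} C")
```

The spread falls roughly as the square root of the averaging area — which is the statistical intuition behind trusting continental-scale projections more than city-scale ones.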
“Policy makers and stakeholders use information from these models to inform their decisions,” said Mann. “It is crucial they understand the limitation in the information the model projections can provide at local scales.”
Climate models provide useful predictions of the overall warming of the globe and the largest-scale shifts in patterns of rainfall and drought, but are considerably harder pressed to predict, for example, whether New York City will become wetter or drier, or to deal with the effects of mountain ranges like the Rocky Mountains on regional weather patterns.
“Climate models can meaningfully project the overall global increase in warmth, rises in sea level and very large-scale changes in rainfall patterns,” said Zhang. “But they are uncertain about the potential significant ramifications on society in any specific location.”
The researchers believe that further research may lead to a reduction in the uncertainties. They caution users of climate model projections to take into account the increased uncertainties in assessing local climate scenarios.
“Uncertainty is hardly a reason for inaction,” said Mann. “Moreover, uncertainty can cut both ways, and we must be cognizant of the possibility that impacts in many regions could be considerably greater and more costly than climate model projections suggest.”
Physicists update predator-prey model for more clues on how bacteria evade attack from killer cells
April 29, 2016
Studying the way that solitary hunters such as tigers, bears or sea turtles chase down their prey turns out to be very useful in understanding the interaction between individual white blood cells and colonies of bacteria. Reporting their results in the Journal of Physics A: Mathematical and Theoretical, researchers in Europe have created a numerical model that explores this behaviour in more detail.
Using mathematical expressions, the group can examine the dynamics of a single predator hunting a herd of prey. The routine splits the hunter’s motion into a diffusive part and a ballistic part, which represent the search for prey and then the direct chase that follows.
“We would expect this to be a fairly good approximation for many animals,” explained Ralf Metzler, who led the work and is based at the University of Potsdam in Germany.
To further improve its analysis, the group, which includes scientists from the National Institute of Chemistry in Slovenia and Sorbonne University in France, has incorporated volume effects into the latest version of its model. The addition means that prey can now inadvertently get in each other’s way and endanger their survival by blocking potential escape routes.
Thanks to this update, the team can study not just animal behaviour, but also gain greater insight into the way that killer cells such as macrophages (large white blood cells patrolling the body) attack colonies of bacteria.
One of the key parameters determining the life expectancy of the prey is the so-called ‘sighting range’ — the distance at which the prey is able to spot the predator. Examining this in more detail, the researchers found that the hunter profits more from the poor eyesight of the prey than from the strength of its own vision.
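The two-phase hunt described above can be sketched in a few lines of code. This is a minimal toy simulation, not the authors' actual model: the step sizes, sighting ranges, arena size and catch radius below are invented for illustration.

```python
import math
import random

def simulate_hunt(n_prey=20, predator_sight=5.0, prey_sight=3.0,
                  arena=50.0, catch_radius=0.5, steps=5000, seed=1):
    """Toy 2D hunt: the predator diffuses (random walk) until a prey
    enters its sighting range, then switches to a ballistic chase;
    prey flee only when the predator enters THEIR sighting range.
    Returns the number of prey caught within `steps`."""
    rng = random.Random(seed)
    prey = [(rng.uniform(0, arena), rng.uniform(0, arena))
            for _ in range(n_prey)]
    px, py = arena / 2, arena / 2
    caught = 0
    for _ in range(steps):
        if not prey:
            break
        # Locate the nearest prey.
        dists = [math.hypot(x - px, y - py) for x, y in prey]
        i = min(range(len(prey)), key=dists.__getitem__)
        d = dists[i]
        if d < catch_radius:        # capture
            prey.pop(i)
            caught += 1
            continue
        if d < predator_sight:      # ballistic chase toward the target
            tx, ty = prey[i]
            px += 0.3 * (tx - px) / d
            py += 0.3 * (ty - py) / d
        else:                       # diffusive search for prey
            ang = rng.uniform(0, 2 * math.pi)
            px += 0.3 * math.cos(ang)
            py += 0.3 * math.sin(ang)
        # Prey that spot the predator flee; myopic prey keep grazing.
        moved = []
        for x, y in prey:
            dx, dy = x - px, y - py
            r = math.hypot(dx, dy)
            if 0 < r < prey_sight:
                x += 0.25 * dx / r
                y += 0.25 * dy / r
            moved.append((min(max(x, 0.0), arena),
                          min(max(y, 0.0), arena)))
        prey = moved
    return caught

# Varying prey_sight probes the paper's finding that the hunter profits
# more from the prey's poor eyesight than from its own sharp vision.
myopic = simulate_hunt(prey_sight=0.6)
sharp_eyed = simulate_hunt(prey_sight=6.0)
```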
Long tradition with a new dimension
The analysis of predator-prey systems has a long tradition in statistical physics and today offers many opportunities for cooperative research, particularly in fields such as biology, biochemistry and movement ecology.
“With the ever more detailed experimental study of systems ranging from molecular processes in living biological cells to the motion patterns of animal herds and humans, the need for cross-fertilisation between the life sciences and the quantitative mathematical approaches of the physical sciences has reached a new dimension,” Metzler comments.
To help support this cross-fertilisation, he heads up a new section of the Journal of Physics A: Mathematical and Theoretical that is dedicated to biological modelling and examines the use of numerical techniques to study problems in the interdisciplinary field connecting biology, biochemistry and physics.
Maria Schwarzl, Aljaz Godec, Gleb Oshanin, Ralf Metzler. A single predator charging a herd of prey: effects of self volume and predator–prey decision-making. Journal of Physics A: Mathematical and Theoretical, 2016; 49 (22): 225601 DOI: 10.1088/1751-8113/49/22/225601
A computational system developed by researchers at USP and Unicamp establishes water-supply rationing rules for drought periods
Researchers at the Polytechnic School of the University of São Paulo (Poli-USP) and the School of Civil Engineering, Architecture and Urbanism of the University of Campinas (FEC-Unicamp) have developed new mathematical and computational models designed to optimize the management and operation of complex water-supply and electric-power systems such as those found in Brazil.
"The idea is for the mathematical and computational models we developed to help the managers of water and electricity supply systems make decisions with enormous social and economic impacts, such as declaring rationing," Paulo Sérgio Franco Barbosa, a professor at FEC-Unicamp and the project's coordinator, told Agência Fapesp.
According to Barbosa, many of the technologies used today in Brazil's water and energy sectors to manage supply, demand and the risk of shortages during extreme climate events such as severe drought were developed in the 1970s, when Brazilian cities were smaller and the country did not have water and hydropower systems as complex as today's.
For these reasons, he says, these management systems have flaws such as ignoring the connections between different basins and failing to allow for climate events more extreme than any in the historical record when planning the operation of a reservoir and water-distribution system.
"There was a failure in sizing the supply capacity of the Cantareira reservoir system, for example, because nobody imagined a drought worse than the one that hit the basin in 1953, considered the driest year in the reservoir's history before 2014," Barbosa said.
To improve on today's risk-management systems, the researchers developed new mathematical and computational models that simulate the operation of a water-supply or energy system in an integrated way and under different scenarios of rising water supply and demand.
"Using statistical and computational techniques, the models we developed can run better simulations and better protect a water-supply or electric-power system against climate risks," Barbosa said.
One of the models, developed in collaboration with colleagues at the University of California, Los Angeles, in the United States, is Sisagua, an optimization and simulation platform for modeling water-supply systems.
The platform integrates and represents every supply source in the reservoir and distribution system of a large city such as São Paulo, including reservoirs, canals, pipelines, and treatment and pumping stations.
"Sisagua makes it possible to plan operations, study supply capacity and evaluate alternatives for expanding or reducing delivery in a water-supply system in an integrated way," Barbosa noted.
One of the model's distinguishing features, according to the researcher, is that it establishes rationing rules for a large reservoir and distribution system during droughts, such as the one São Paulo endured in 2014, so as to minimize the harm rationing causes to the population and the economy.
When one of the system's reservoirs falls below normal levels and approaches its minimum operating volume, the model flags a first stage of rationing, cutting the supply of stored water by 10%, for example.
If the reservoir's supply crisis persists, the model suggests alternatives for softening the rationing, spreading the cuts more evenly over the shortage period and across the system's other reservoirs.
"Sisagua has computational intelligence that indicates where and when to cut delivery in a water-supply system so as to minimize the damage to the system and to a city's population and economy," Barbosa said.
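The staged logic just described can be illustrated with a short sketch. The thresholds and 10% increments below are hypothetical placeholders, not Sisagua's calibrated rules:

```python
def rationing_stage(volume, v_min, v_max, stages=(0.70, 0.50, 0.35)):
    """Illustrative staged-rationing rule: each threshold (expressed
    as a fraction of usable storage) that the reservoir falls below
    triggers an additional 10% cut in delivered supply.  Returns the
    fraction of normal supply to deliver."""
    usable = (volume - v_min) / (v_max - v_min)  # 1.0 = full, 0.0 = dead storage
    cuts = sum(1 for t in stages if usable < t)
    return 1.0 - 0.10 * cuts

# A reservoir at 40% of usable storage is below the first two
# thresholds, so delivery drops by two stages, to 80% of normal.
supply = rationing_stage(volume=40.0, v_min=0.0, v_max=100.0)
```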
The researchers used Sisagua to simulate the operation and management of the water-distribution system of greater São Paulo, which serves about 18 million people and is considered one of the largest in the world, with an average flow of 67 cubic meters per second (m³/s).
São Paulo's distribution network comprises eight supply subsystems, the largest being Cantareira, which delivers water to 5.3 million people at an average flow of 33 m³/s.
To assess Cantareira's supply capacity under a scenario of water scarcity combined with rising demand, the researchers used Sisagua to run a ten-year planning simulation of the subsystem.
For this they used inflow data for Cantareira from 1950 to 1960, provided by Companhia de Saneamento Básico do Estado de São Paulo (Sabesp).
"This period was chosen as the basis for the Sisagua projections because it recorded severe droughts, with inflows significantly below average for four consecutive years, between 1952 and 1956," Barbosa explained.
Using the inflow data from this historical series, the model analyzed scenarios with Cantareira demand varying between 30 and 40 m³/s.
Among the model's findings: Cantareira can meet demand of up to 34 m³/s under a scarcity scenario like that of 1950-1960 with a negligible risk of shortage. Above that value, scarcity, and with it the risk of rationing, grows exponentially.
For Cantareira to meet demand of 38 m³/s during a shortage, the model indicated that rationing would have to begin 40 months (3 years and 4 months) before the basin reached its critical point, below normal volume and near the minimum operating limit.
That way, 85% to 90% of the reservoir's water demand could be met during the drought until it recovered its ideal volume, avoiding rationing more severe than would occur if full delivery were maintained.
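The value of rationing early, as in the 40-month lead time above, can be seen in a toy monthly mass balance for a single reservoir. All numbers here are illustrative, not Sisagua's calibrated parameters:

```python
def plan_supply(inflows, demand, capacity, ration_fraction=0.85,
                trigger=0.4):
    """Toy monthly mass balance for one reservoir.  Supply is cut to
    `ration_fraction` of demand whenever storage falls below
    `trigger` * capacity.  Returns (months_rationed, months_failed),
    where a failure is a month in which even the rationed target
    could not be delivered."""
    storage = capacity
    rationed = failed = 0
    for q_in in inflows:
        cut = storage < trigger * capacity
        target = demand * (ration_fraction if cut else 1.0)
        delivered = min(target, storage + q_in)
        if delivered < target:
            failed += 1
        if cut:
            rationed += 1
        storage = min(storage + q_in - delivered, capacity)
    return rationed, failed

# A four-year dry spell with inflow well below demand: storage drains
# slowly, rationing starts late in the run, and no month fails outright.
drought = [20.0] * 48                     # monthly inflow, m3/s-months
r, f = plan_supply(drought, demand=34.0, capacity=1000.0)
```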
"The earlier rationing begins in a water-supply system, the better the burden is spread over time," Barbosa said. "The population can cope better with a 15% water cut over two years, for example, than with a 40% cut over just two months."
In another study, the researchers used Sisagua to assess whether the Cantareira, Guarapiranga, Alto Tietê and Alto Cotia subsystems could meet current water demand under a scarcity scenario.
Here, too, they used inflow data for the four subsystems from 1950 to 1960.
The analyses indicated that the Cotia subsystem hit a critical rationing threshold several times during the simulated ten-year period.
The Alto Tietê subsystem, by contrast, frequently held water above its target volume.
Based on these findings, the researchers propose new interconnections for transfers among the four supply subsystems.
Part of Cotia's demand could be supplied by the Guarapiranga and Cantareira subsystems, which in turn could receive water from Alto Tietê, the Sisagua projections indicated.
"Transferring water between subsystems would provide greater flexibility and result in better distribution, efficiency and reliability for greater São Paulo's water-supply system," Barbosa said.
According to the researcher, the Sisagua projections also pointed to the need for investment in new water sources for the São Paulo metropolitan region.
The main basins supplying São Paulo, he says, suffer from problems such as urban concentration.
The Alto Tietê basin, for example, occupies just 2.7% of São Paulo state's territory yet concentrates nearly 50% of the state's population, five times the population density of countries such as Japan, South Korea and the Netherlands.
The Piracicaba, Paraíba do Sul, Sorocaba and Baixada Santista basins, which cover 20% of the state's area, hold 73% of its population, with a population density higher than that of countries such as Japan, the Netherlands and the United Kingdom, the researchers note.
"It will be inevitable to consider other water sources for greater São Paulo, such as the Juquiá system in the interior of the state, which has water of excellent quality and in large volumes," Barbosa said.
"Because of the distance, the project will be expensive and has been postponed. But it can no longer be put off," he said.
Besides São Paulo, Sisagua has also been used to model the water-supply systems of Los Angeles, in the United States, and Taiwan.
The origin of imposed monogamy remains a mystery. At some point in human history, as the advent of agriculture and animal husbandry began to transform societies, the idea of what was acceptable in relations between men and women began to change. Throughout history, most societies have permitted polygamy. Research on hunter-gatherers suggests that in prehistoric societies it was common for a relatively small group of men to monopolize the tribe's women in order to increase their offspring.
Yet something happened that led many of the groups that came to dominate to adopt a system of sexual organization as far removed from human inclinations as monogamy. As several passages of the Bible attest, the recommended way to resolve such conflicts was often to stone adulterers to death.
A group of researchers from the University of Waterloo (Canada) and the Max Planck Institute for Evolutionary Anthropology (Germany), who published a paper on the subject this Tuesday in the journal Nature Communications, believes that sexually transmitted diseases played a key role. According to their hypothesis, tested with computer models, once agriculture allowed populations of more than 300 people to live together, our relationship with bacteria such as gonorrhea and syphilis changed.
Syphilis and gonorrhea undermined fertility in a society without antibiotics or condoms
In the small bands of the Pleistocene, outbreaks caused by these microbes were short-lived and had limited impact on the population. But when a society holds more people, outbreaks become endemic and the toll on those who practice polygamy grows. In a society without latex condoms or antibiotics, bacterial infections take a heavy toll on fertility.
This biological condition would have given an advantage to people who mated monogamously and would also have made punishments like those described in the Bible more acceptable for individuals who broke the norm. Eventually, in the growing agrarian societies of early human history, the interplay between monogamy and the norms imposed to sustain it would have conferred an advantage, in the form of higher fertility, on the societies that practiced them.
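The core mechanism, outbreaks that fade out in small bands but become endemic in larger settlements, can be sketched with a toy stochastic epidemic. This is a deliberately simplified susceptible-infected-susceptible (SIS) process, not the model the Waterloo and Max Planck team actually published, and all parameter values are invented:

```python
import random

def sti_persistence(n, beta=0.08, recovery=0.05, steps=2000, seed=3):
    """Toy stochastic SIS epidemic (NOT the published model): each
    step, every susceptible is infected with probability beta * I / n,
    and every infected individual recovers with probability
    `recovery`.  Returns the fraction of steps during which the
    infection was present in the group."""
    rng = random.Random(seed)
    infected, present = 1, 0
    for _ in range(steps):
        if infected:
            present += 1
        p_inf = beta * infected / n
        new_inf = sum(rng.random() < p_inf for _ in range(n - infected))
        recovered = sum(rng.random() < recovery for _ in range(infected))
        infected += new_inf - recovered
    return present / steps

# Compare how long the infection lingers at forager-band scale versus
# early-settlement scale (stochastic fade-out is likelier when n is small).
band = sti_persistence(n=30)
settlement = sti_persistence(n=300)
```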
The study's authors believe that such approaches, which test premises about the interaction between social and natural dynamics, can help explain not only the emergence of socially imposed monogamy but also other social norms governing physical contact between human beings.
"Our social norms did not develop in isolation from what was happening in our natural environment," Chris Bauch, a professor of applied mathematics at the University of Waterloo and one of the study's authors, said in a statement. "On the contrary, social norms cannot be understood without understanding their origin in our natural environment," he added. "They were shaped by our natural environment," he concluded.
Sprinklers in an agricultural field in California. Credit: Max Whittaker for The New York Times
As a nation, we have become disciples of data. We interview 60,000 families a month to determine the unemployment rate; we monitor how much energy we use every seven days; Amazon ranks the sales of every book it sells every hour.
Then there is water.
Water may be the most important item in our lives, our economy and our landscape about which we know the least. We not only don’t tabulate our water use every hour or every day, we don’t do it every month, or even every year.
The official analysis of water use in the United States is done every five years. It takes a tiny team of people four years to collect, tabulate and release the data. In November 2014, the United States Geological Survey issued its most current comprehensive analysis of United States water use — for the year 2010.
The 2010 report runs 64 pages of small type, reporting water use in each state by quality and quantity, by source, and by whether it’s used on farms, in factories or in homes.
It doesn’t take four years to get five years of data. All we get every five years is one year of data.
The data system is ridiculously primitive. It was an embarrassment even two decades ago. The vast gaps — we start out missing 80 percent of the picture — mean that from one side of the continent to the other, we’re making decisions blindly.
In just the past 27 months, there has been a string of high-profile water crises — poisoned water in Flint, Mich.; polluted water in Toledo, Ohio, and Charleston, W. Va.; the continued drying of the Colorado River basin — that have undermined confidence in our ability to manage water.
In the time it took to compile the 2010 report, Texas endured a four-year drought. California settled into what has become a five-year drought. The most authoritative water-use data from across the West couldn’t be less helpful: It’s from the year before the droughts began.
In the last year of the Obama presidency, the administration has decided to grab hold of this country’s water problems, water policy and water innovation. Next Tuesday, the White House is hosting a Water Summit, where it promises to unveil new ideas to galvanize the sleepy world of water.
The question White House officials are asking is simple: What could the federal government do that wouldn’t cost much but that would change how we think about water?
The best and simplest answer: Fix water data.
More than any other single step, modernizing water data would unleash an era of water innovation unlike anything in a century.
We have a brilliant model for what water data could be: the Energy Information Administration, which has every imaginable data point about energy use — solar, wind, biodiesel, the state of the heating oil market during the winter we’re living through right now — all available, free, to anyone. It’s not just authoritative, it’s indispensable. Congress created the agency in the wake of the 1970s energy crisis, when it became clear we didn’t have the information about energy use necessary to make good public policy.
That’s exactly the state of water — we’ve got crises percolating all over, but lack the data necessary to make smart policy decisions.
Congress and President Obama should pass updated legislation creating inside the United States Geological Survey a vigorous water data agency with the explicit charge to gather and quickly release water data of every kind — what utilities provide, what fracking companies and strawberry growers use, what comes from rivers and reservoirs, the state of aquifers.
Good information does three things.
First, it creates the demand for more good information. Once you know what you can know, you want to know more.
Second, good data changes behavior. The real-time miles-per-gallon gauges in our cars are a great example. Who doesn’t want to edge the M.P.G. number a little higher? Any company, community or family that starts measuring how much water it uses immediately sees ways to use less.
Finally, data ignites innovation. Who imagined that when most everyone started carrying a smartphone, we’d have instant, nationwide traffic data? The phones make the traffic data possible, and they also deliver it to us.
The truth is, we don’t have any idea what detailed water use data for the United States will reveal. But we can be certain it will create an era of water transformation. If we had monthly data on three big water users — power plants, farmers and water utilities — we’d instantly see which communities use water well, and which ones don’t.
We’d see whether tomato farmers in California or Florida do a better job. We’d have the information to make smart decisions about conservation, about innovation and about investing in new kinds of water systems.
Water’s biggest problem, in this country and around the world, is its invisibility. You don’t tackle problems that are out of sight. We need a new relationship with water, and that has to start with understanding it.