Tag archive: Statistics

Book Review: Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition by Wendy Hui Kyong Chun (LSE)

blogs.lse.ac.uk

Professor David Beer – November 22nd, 2021


In Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition, Wendy Hui Kyong Chun explores how technological developments around data are amplifying and automating discrimination and prejudice. Through conceptual innovation and historical details, this book offers engaging and revealing insights into how data exacerbates discrimination in powerful ways, writes David Beer.

Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. Wendy Hui Kyong Chun (mathematical illustrations by Alex Barnett). MIT Press. 2021.

Going back a couple of decades, there was a fair amount of discussion of ‘the digital divide’. Uneven access to networked computers meant that a line was drawn between those who were able to switch on and those who were not. At the time there was a pressing concern about the disadvantages of a lack of access. With the massive escalation of connectivity since, the notion of a digital divide still has some relevance, but it has become a fairly blunt tool for understanding today’s extensively mediated social constellations. The divides now are not so much a product of access; they are instead a consequence of what happens to the data produced through that access.

With the escalation of data and the establishment of all sorts of analytic and algorithmic processes, the problem of uneven, unjust and harmful treatment is now the focal point for an animated and urgent debate. Wendy Hui Kyong Chun’s vibrant new book Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition makes a telling intervention. At its centre is the idea that these technological developments around data ‘are amplifying and automating – rather than acknowledging and repairing – the mistakes of a discriminatory past’ (2). Essentially this is the codification and automation of prejudice. Any ideas about the liberating aspects of technology are deflated. Rooted in a longer history of statistics and biometrics, existing ruptures are being torn open by the differential targeting that big data brings.

This is not just about bits of data. Chun suggests that ‘we need […] to understand how machine learning and other algorithms have been embedded with human prejudice and discrimination, not simply at the level of data, but also at the levels of procedure, prediction, and logic’ (16). It is not, then, just about prejudice being in the data itself; it is also how segregation and discrimination are embedded in the way this data is used. Given the scale of these issues, Chun narrows things down further by focusing on four ‘foundational concepts’, with correlation, homophily, authenticity and recognition providing the focal points for interrogating the discriminations of data.


It is the concept of correlation that does much of the gluing work within the study. The centrality of correlation is a subtext in Chun’s own overview of the book, which suggests that ‘Discriminating Data reveals how correlation and eugenic understandings of nature seek to close off the future by operationalizing probabilities; how homophily naturalizes segregation; and how authenticity and recognition foster deviation in order to create agitated clusters of comforting rage’ (27). As well as developing these lines of argument, the use of the concept of correlation also allows Chun to think in deeply historical terms about the trajectory and politics of association and patterning.

For Chun the role of correlation is both complex and performative. It is argued, for instance, that correlations ‘do not simply predict certain actions; they also form them’. This is an established position in the field of critical data studies, with data prescribing and producing the outcomes they are used to anticipate. However, Chun manages to reanimate this position through an exploration of how correlation fits into a wider set of discriminatory data practices. The other performative issue here is the way that people are made up and grouped through the use of data. Correlations, Chun writes, ‘that lump people into categories based on their being “like” one another amplify the effects of historical inequalities’ (58). Inequalities are reinforced as categories become more obdurate, with data lending them a sense of apparent stability and a veneer of objectivity. Hence the pointed claim that ‘correlation contains within it the seeds of manipulation, segregation and misrepresentation’ (59).

Given this use of data to categorise, it is easy to see why Discriminating Data makes a conceptual link between correlation and homophily – with homophily, as Chun puts it, being the ‘principle that similarity breeds connection’, which can therefore lead to swarming and clustering. The acts of grouping within these data structures mean, for Chun, that ‘homophily not only eases conflict; it also naturalizes discrimination’ (103). Using data correlations to group informs a type of homophily that not only misrepresents and segregates; it also makes these divides seem natural and therefore fixed.

Chun anticipates that there may be some remaining remnants of faith in the seeming democratic properties of these platforms, arguing that ‘homophily reveals and creates boundaries within theoretically flat and diffuse social networks; it distinguishes and discriminates between supposedly equal nodes; it is a tool for discovering bias and inequality and for perpetuating them in the name of “comfort,” predictability, and common sense’ (85). As individuals are moved into categories or groups assumed to be like them, based upon the correlations within their data, so discrimination can readily occur. One of the key observations made by Chun is that data homophily can feel comfortable, especially when encased in predictions, yet this can distract from the actual damages of the underpinning discriminations they contain. Instead, these data ‘proxies can serve to buttress – and justify – discrimination’ (121). For Chun there is a ‘proxy politics’ unfolding in which data not only exacerbates but can also be used to lend legitimacy to discriminatory acts.

As with correlation and homophily, Chun, in a particularly novel twist, also explores how authenticity is itself becoming automated within these data structures. In stark terms, it is argued that ‘authenticity has become so central to our times because it has become algorithmic’ (144). Chun is able to show how a wider cultural push towards notions of the authentic, embodied in things like reality TV, becomes a part of data systems. A broader cultural trend is translated into something renderable in data. Chun explains that the ‘term “algorithmic authenticity” reveals the ways in which users are validated and authenticated by network algorithms’ (144). A system of validation occurs in these spaces, where actions and practices are algorithmically judged and authenticated. Algorithmic authenticity ‘trains them to be transparent’ (241). It pushes a form of openness upon us in which an ‘operationalized authenticity’ develops, especially within social media.

This emphasis upon the authentic draws people into certain types of interaction with these systems. It shows, as Chun compellingly puts it, ‘how users have become characters in a drama called “big data”’ (145). The notion of a drama is, of course, not to diminish what is happening but to try to get at its vibrant and role-based nature. It also adds a strong sense of how performance plays out in relation to the broader ideas of data judgment that the book is exploring.

These roles are not something that Chun wants us to accept, arguing instead that ‘if we think through our roles as performers and characters in the drama called “big data,” we do not have to accept the current terms of our deployment’ (170). Examining the artifice of the drama is a means of transformation and challenge. Exposing the drama is to expose the roles and scripts that are in place, enabling them to be questioned and possibly undone. This is not fatalistic or absent of agency; rather, Chun’s point is that ‘we are characters, rather than marionettes’ (248).

There are some powerful cross-currents working through the discussions of the book’s four foundational concepts. The suggestion that big data brings a reversal of hegemony is a particularly telling argument. Chun explains that: ‘Power can now operate through reverse hegemony: if hegemony once meant the creation of a majority by various minorities accepting a dominant worldview […], now hegemonic majorities can emerge when angry minorities, clustered around a shared stigma, are strung together through their mutual opposition to so-called mainstream culture’ (34). This line of argument is echoed in similar terms in the book’s conclusion, clarifying further that ‘this is hegemony in reverse: if hegemony once entailed creating a majority by various minorities accepting – and identifying with – a dominant worldview, majorities now emerge by consolidating angry minorities – each attached to a particular stigma – through their opposition to “mainstream” culture’ (243). In this formulation it would seem that big data may not only be disciplinary but may also somehow gain power by upending any semblance of a dominant ideology. Data doesn’t lead to shared ideas but to the splitting of the sharing of ideas into group-based networks. It does seem plausible that the practices of targeting and patterning through data are unlikely to facilitate hegemony. Yet, it is not just that data affords power beyond hegemony but that it actually seeks to reverse it.

The reader may be caught slightly off-guard by this position. Chun generally seems to picture power as emerging and solidifying through a genealogy of the technologies that have formed into contemporary data infrastructures. In this account power seems to be associated with established structures and operates through correlations, calls for authenticity and the means of recognition. These positions on power – with infrastructures on one side and reverse hegemony on the other – are not necessarily incompatible, yet the discussion of reverse hegemony perhaps stands a little outside of that other vision of power. I was left wondering if this reverse hegemony is a consequence of these more processional operations of power or, maybe, it is a kind of facilitator of them.

Chun’s book looks to bring out the deep divisions that data-informed discrimination has already created and will continue to create. The conceptual innovation and the historical details, particularly on statistics and eugenics, lend the book a deep sense of context that feeds into a range of genuinely engaging and revealing insights and ideas. Through its careful examination of the way that data exacerbates discrimination in very powerful ways, this is perhaps the most telling book yet on the topic. The digital divide may no longer be a particularly useful term but, as Chun’s book makes clear, the role data performs in animating discrimination means that the technological facilitation of divisions has never been more pertinent.

We read the 4000-page IPCC climate report so you don’t have to (Quartz)

qz.com

Amanda Shendruk, Tim McDonnell, David Yanofsky, Michael J. Coren

Published August 10, 2021

[See the original publication for the text of the report with the most important parts highlighted.]


The most important takeaways from the new Intergovernmental Panel on Climate Change report are easily summarized: Global warming is happening, it’s caused by human greenhouse gas emissions, and the impacts are very bad (in some cases, catastrophic). Every fraction of a degree of warming we can prevent by curbing emissions substantially reduces this damage. It’s a message that hasn’t changed much since the first IPCC report in 1990.

But to reach these conclusions (and ratchet up confidence in their findings), hundreds of scientists from universities around the globe spent years combing through the peer-reviewed literature—at least 14,000 papers—on everything from cyclones to droughts.

The final Aug. 9 report is nearly 4,000 pages long. While much of it is written in inscrutable scientific jargon, if you want to understand the scientific case for man-made global warming, look no further. We’ve reviewed the data, summarized the main points, and created an interactive graphic showing a “heat map” of scientists’ confidence in their conclusions. The terms describing statistical confidence range from very high confidence (a 9 out of 10 chance) to very low confidence (a 1 in 10 chance). Hover over the graphic in the original publication and click to see what they’ve written.
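For readers who want a feel for how such a “heat map” can be built, here is a minimal sketch of scoring sentences by the IPCC’s calibrated confidence language, assuming the 1–9 scale implied above; the phrase list and scores are illustrative choices, not Quartz’s actual code.

```python
# A minimal sketch (not Quartz's actual code) of scoring sentences by the
# IPCC's calibrated confidence language, using the 1-9 scale implied above
# (very low ≈ 1 in 10, very high ≈ 9 in 10). Phrase list and scores are
# illustrative assumptions.
CONFIDENCE_SCORES = {
    "very low confidence": 1,
    "low confidence": 3,
    "medium confidence": 5,
    "high confidence": 7,
    "very high confidence": 9,
}

def score_sentence(sentence: str):
    """Return a 1-9 confidence score if the sentence uses confidence language."""
    text = sentence.lower()
    # Check longer phrases first so "very high confidence" is not read as "high confidence".
    for phrase in sorted(CONFIDENCE_SCORES, key=len, reverse=True):
        if phrase in text:
            return CONFIDENCE_SCORES[phrase]
    return None  # the sentence may use likelihood language instead

print(score_sentence("Global surface temperature has increased (very high confidence)."))  # 9
```

Sentences without confidence language would simply be left unscored, which matches the “What we did” note at the end of this piece.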

Here’s your guide to the IPCC’s latest assessment.

CH 1: Framing, context, methods

The first chapter comes out swinging with a bold political charge: It concludes with “high confidence” that the plans countries so far have put forward to reduce emissions are “insufficient” to keep warming well below 2°C, the goal enshrined in the 2015 Paris Agreement. While unsurprising on its own, it is surprising for a document that had to be signed off on by the same government representatives it condemns. It then lists advancements in climate science since the last IPCC report, as well as key evidence behind the conclusion that human-caused global warming is “unequivocal.”

Highlights

👀Scientists’ ability to observe the physical climate system has continued to improve and expand.

📈Since the last IPCC report, new techniques have provided greater confidence in attributing changes in extreme events to human-caused climate change.

🔬The latest generation of climate models is better at representing natural processes, and higher-resolution models that better capture smaller-scale processes and extreme events have become available.

CH 2: Changing state of the climate system

Chapter 2 looks backward in time to compare the current rate of climate changes to those that happened in the past. That comparison clearly reveals human fingerprints on the climate system. The last time global temperatures were comparable to today was 125,000 years ago, the concentration of atmospheric carbon dioxide is higher than at any time in the last 2 million years, and greenhouse gas emissions are rising faster than at any time in the last 800,000 years.

Highlights

🥵Observed changes in the atmosphere, oceans, cryosphere, and biosphere provide unequivocal evidence of a world that has warmed. Over the past several decades, key indicators of the climate system are increasingly at levels unseen in centuries to millennia, and are changing at rates unprecedented in at least the last 2,000 years.

🧊Annual mean Arctic sea ice coverage levels are the lowest since at least 1850. Late summer levels are the lowest in the past 1,000 years.

🌊Global mean sea level (GMSL) is rising, and the rate of GMSL rise since the 20th century is faster than over any preceding century in at least the last three millennia. Since 1901, GMSL has risen by 0.20 [0.15–0.25] meters, and the rate of rise is accelerating.

CH 3: Human influence on the climate system

Chapter 3 leads with the IPCC’s strongest-ever statement on the human impact on the climate: “It is unequivocal that human influence has warmed the global climate system since pre-industrial times” (the last IPCC report said human influence was “clear”). Specifically, the report blames humanity for nearly all of the 1.1°C increase in global temperatures observed since the Industrial Revolution (natural forces played a tiny role as well), and for the loss of sea ice and the rising temperature and acidity of the ocean.

🌍Human-induced greenhouse gas forcing is the main driver of the observed changes in hot and cold extremes.

🌡️The likely range of warming in global-mean surface air temperature (GSAT) in 2010–2019 relative to 1850–1900 is 0.9°C–1.2°C. Of that, 0.8°C–1.3°C is attributable to human activity, while natural forces contributed −0.1°C–0.1°C.

😬Combining the attributable contributions from melting ice and the expansion of warmer water, it is very likely that human influence was the main driver of the observed global mean sea level rise since at least 1970.

CH 4: Future global climate: Scenario-based projections and near-term information

Chapter 4 holds two of the report’s most important conclusions: Climate change is happening faster than previously understood, and the likelihood that the global temperature increase can stay within the Paris Agreement goal of 1.5°C is extremely slim. The 2013 IPCC report projected that temperatures could exceed 1.5°C in the 2040s; here, that timeline has been advanced by a decade to the “early 2030s” in the median scenario. And even in the lowest-emission scenario, it is “more likely than not” to occur by 2040.

Highlights

🌡️By 2030, in all future warming scenarios, globally averaged surface air temperature in any individual year could exceed 1.5°C relative to 1850–1900.

🌊Under all scenarios, it is virtually certain that global mean sea level will continue to rise through the 21st century.

💨Even if enough carbon were removed from the atmosphere that global emissions become net negative, some climate change impacts, such as sea level rise, will not be reversed for at least several centuries.

CH 5: Global carbon and other biogeochemical cycles and feedbacks

Chapter 5 quantifies the level by which atmospheric CO2 and methane concentrations have increased since 1750 (47% and 156% respectively) and addresses the ability of oceans and other natural systems to soak those emissions up. The more emissions increase, the less they can be offset by natural sinks—and in a high-emissions scenario, the loss of forests from wildfires becomes so severe that land-based ecosystems become a net source of emissions, rather than a sink (this is already happening to a degree in the Amazon).

Highlights

🌲The CO2 emitted from human activities during the decade of 2010–2019 was distributed between three Earth systems: 46% accumulated in the atmosphere, 23% was taken up by the ocean, and 31% was stored by vegetation.

📉The fraction of emissions taken up by land and ocean is expected to decline as the CO2 concentration increases.

💨Global temperatures rise in a near-linear relationship to cumulative CO2 emissions. In other words, to halt global warming, net emissions must reach zero.
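As a rough illustration of that near-linear relationship (often summarised as the transient climate response to cumulative emissions, or TCRE), here is a back-of-the-envelope sketch; the coefficient of about 0.45°C per 1,000 GtCO2 and the emissions rate of roughly 40 GtCO2 per year are assumed round numbers, not figures quoted in this summary.

```python
# Back-of-the-envelope use of the near-linear relationship between cumulative
# CO2 emissions and warming. TCRE ≈ 0.45 °C per 1000 GtCO2 and current
# emissions ≈ 40 GtCO2/yr are assumed round numbers for illustration only.
TCRE_PER_1000_GTCO2 = 0.45

def extra_warming(cumulative_gtco2: float) -> float:
    """Approximate additional warming implied by cumulative emissions."""
    return TCRE_PER_1000_GTCO2 * cumulative_gtco2 / 1000.0

# Another 25 years at roughly 40 GtCO2 per year:
print(f"{extra_warming(40 * 25):.2f} °C")  # ≈ 0.45 °C of additional warming
# Warming only stops accumulating once cumulative emissions stop growing,
# i.e. when net emissions reach zero.
```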

CH 6: Short-lived climate forcers

Chapter 6 is all about methane, particulate matter, aerosols, hydrofluorocarbons, and other non-CO2 gases that don’t linger very long in the atmosphere (just a few hours, in some cases) but exert a tremendous influence on the climate while they do. In some cases, that influence might be cooling, but their net impact has been to contribute to warming. Because they are short-lived, the future abundance and impact of these gases are highly variable in the different socioeconomic pathways considered in the report. These gases have a huge impact on the respiratory health of people around the world.

Highlights

⛽The sectors most responsible for warming from short-lived climate forcers are those dominated by methane emissions: fossil fuel production and distribution, agriculture, and waste management.

🧊In the next two decades, it is very likely that emissions from short-lived climate forcers will cause a warming relative to 2019, in addition to the warming from long-lived greenhouse gases like CO2.

🌏Rapid decarbonization leads to air quality improvements, but on its own is not sufficient to achieve, in the near term, air quality guidelines set by the World Health Organization, especially in parts of Asia and in some other highly polluted regions.

CH 7: The Earth’s energy budget, climate feedbacks, and climate sensitivity

Climate sensitivity is a measure of how much the Earth responds to changes in greenhouse gas concentrations. For every doubling of atmospheric CO2, temperatures go up by about 3°C, this chapter concludes. That’s about the same level scientists have estimated for several decades, but over time the range of uncertainty around that estimate has narrowed. The energy budget is a calculation of how much energy is flowing into the Earth system from the sun. Put together, these metrics paint a picture of the human contribution to observed warming.
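To see how “about 3°C per doubling” turns into a warming estimate, here is a minimal sketch using the standard logarithmic relationship between CO2 concentration and warming; the 280 ppm pre-industrial baseline and the example concentrations are textbook assumptions, not figures from the chapter summary above.

```python
import math

# A minimal sketch of the "3 °C per doubling of CO2" rule of thumb. The
# logarithmic relationship and the 280 ppm pre-industrial baseline are
# textbook assumptions, not numbers quoted in the summary above.
SENSITIVITY_PER_DOUBLING = 3.0   # °C per doubling of CO2 (from the text)
PREINDUSTRIAL_CO2_PPM = 280.0    # assumed baseline concentration

def equilibrium_warming(co2_ppm: float) -> float:
    """Long-run warming implied by a given CO2 concentration."""
    return SENSITIVITY_PER_DOUBLING * math.log2(co2_ppm / PREINDUSTRIAL_CO2_PPM)

print(f"{equilibrium_warming(420):.1f} °C")   # ≈ 1.8 °C at an assumed ~420 ppm
print(f"{equilibrium_warming(560):.1f} °C")   # exactly one doubling: 3.0 °C
```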

🐻‍❄️The Arctic warms more quickly than the Antarctic due to differences in radiative feedbacks and ocean heat uptake between the poles.

🌊Because of existing greenhouse gas concentrations, energy will continue to accumulate in the Earth system until at least the end of the 21st century, even under strong emissions reduction scenarios.

☁️The net effect of changes in clouds in response to global warming is to amplify human-induced warming. Compared to the last IPCC report, major advances in the understanding of cloud processes have increased the level of confidence in the cloud feedback cycle.

CH 8: Water cycle changes

This chapter catalogs what happens to water in a warming world. Although instances of drought are expected to become more common and more severe, wet parts of the world will get wetter as the warmer atmosphere is able to carry more water. Total net precipitation will increase, yet the thirstier atmosphere will make dry places drier. And within any one location, the difference in precipitation between the driest and wettest month will likely increase. But rainstorms are complex phenomena and typically happen at a scale that is smaller than the resolution of most climate models, so specific local predictions about monsoon patterns remain an area of relatively high uncertainty.

Highlights

🌎Increased evapotranspiration will decrease soil moisture over the Mediterranean, southwestern North America, southern Africa, southwestern South America, and southwestern Australia.

🌧️Summer monsoon precipitation is projected to increase for the South, Southeast and East Asian monsoon domains, while North American monsoon precipitation is projected to decrease. West African monsoon precipitation is projected to increase over the Central Sahel and decrease over the far western Sahel.

🌲Large-scale deforestation has likely decreased evapotranspiration and precipitation and increased runoff over the deforested regions. Urbanization has increased local precipitation and runoff intensity.

CH 9: Ocean, cryosphere, and sea level change

Most of the heat trapped by greenhouse gases is ultimately absorbed by the oceans. Warmer water expands, contributing significantly to sea level rise, and the slow, deep circulation of ocean water is a key reason why global temperatures don’t turn on a dime in relation to atmospheric CO2. Marine animals are feeling this heat, as scientists have documented that the frequency of marine heatwaves has doubled since the 1980s. Meanwhile, glaciers, polar sea ice, the Greenland ice sheet, and global permafrost are all rapidly melting. Overall sea levels have risen about 20 centimeters since 1900, and the rate of sea level rise is increasing.

Highlights

📈Global mean sea level rose faster in the 20th century than in any prior century over the last three millennia.

🌡️The heat content of the global ocean has increased since at least 1970 and will continue to increase over the 21st century. The associated warming will likely continue until at least 2300 even for low-emission scenarios because of the slow circulation of the deep ocean.

🧊The Arctic Ocean will likely become practically sea ice–free during the seasonal sea ice minimum for the first time before 2050 in all considered SSP scenarios.

CH 10: Linking global to regional climate change

Since 1950, scientists have clearly detected how greenhouse gas emissions from human activity are changing regional temperatures. Climate models can predict regional climate impacts. Where data are limited, statistical methods help identify local impacts (especially in challenging terrain such as mountains). Cities, in particular, will warm faster as a result of urbanization. Global warming extremes in urban areas will be even more pronounced, especially during heatwaves. Although global models largely agree, it is more difficult to consistently predict regional climate impacts across models.

Highlights

⛰️Some local-scale phenomena such as sea breezes and mountain wind systems cannot be well represented by the resolution of most climate models.

🌆The difference in observed warming trends between cities and their surroundings can partly be attributed to urbanization. Future urbanization will amplify the projected air temperature change in cities regardless of the characteristics of the background climate.

😕Statistical methods are improving to downscale global climate models to more accurately depict local or regional projections.

CH 11: Weather and climate extreme events in a changing climate

Better data collection and modeling mean scientists are more confident than ever in understanding the role of rising greenhouse gas concentrations in weather and climate extremes. We are virtually certain humans are behind observed temperature extremes.

Human activity is making extreme weather and temperatures more intense and frequent, especially rain, droughts, and tropical cyclones. While even 1.5°C of warming will make events more severe, the intensity of extreme events is expected to at least double with 2°C of global warming compared with today’s conditions, and quadruple with 3°C of warming. As global warming accelerates, historically unprecedented climatic events are likely to occur.

Highlights

🌡️It is an established fact that human-induced greenhouse gas emissions have led to an increased frequency and/or intensity of some weather and climate extremes since pre-industrial time, in particular for temperature extremes.

🌎Even relatively small incremental increases in global warming cause statistically significant changes in extremes.

🌪️The occurrence of extreme events unprecedented in the observed record will increase with increasing global warming.

⛈️Relative to present-day conditions, changes in the intensity of extremes would be at least double at 2°C, and quadruple at 3°C of global warming.

CH 12: Climate change information for regional impact and for risk assessment

Climate models are getting better, more precise, and more accurate at predicting regional impacts. We know a lot more than we did in 2014 (the release of AR5). Our climate is already different compared to the early or mid-20th century and we’re seeing big changes to mean temperatures, growing season, extreme heat, ocean acidification and deoxygenation, and Arctic sea ice loss. Expect more changes by mid-century: more rain in the northern hemisphere, less rain in a few regions (the Mediterranean and South Africa), as well as sea-level rise along all coasts. Overall, there is high confidence that mean and extreme temperatures will rise over land and sea. Major widespread damages are expected, but benefits are also possible in some places.

Highlights

🌏Every region of the world will experience concurrent changes in multiple climate impact drivers by mid-century.

🌱Climate change is already resulting in significant societal and environmental impacts and will induce major socio-economic damages in the future. In some cases, climate change can also lead to beneficial conditions which can be taken into account in adaptation strategies.

🌨️The impacts of climate change depend not only on physical changes in the climate itself, but also on whether humans take steps to limit their exposure and vulnerability.


What we did:

The visualization of confidence is only for the executive summary at the beginning of each chapter. If a sentence had a confidence associated with it, the confidence text was removed and a color applied instead. If a sentence did not have an associated confidence, that doesn’t mean scientists do not feel confident about the content; they may be using likelihood (or certainty) language in that instance instead. We chose to only visualize confidence, as it is used more often in the report. Highlights were drawn from the text of the report but edited and in some cases rephrased for clarity.

People with extremist views less able to do complex mental tasks, research suggests (The Guardian)

theguardian.com

Natalie Grover, 22 Feb 2021


Cambridge University team say their findings could be used to spot people at risk from radicalisation

Our brains hold clues for the ideologies we choose to live by, according to research, which has suggested that people who espouse extremist attitudes tend to perform poorly on complex mental tasks.

Researchers from the University of Cambridge sought to evaluate whether cognitive disposition – differences in how information is perceived and processed – sculpt ideological world-views such as political, nationalistic and dogmatic beliefs, beyond the impact of traditional demographic factors like age, race and gender.

The study, built on previous research, included more than 330 US-based participants aged 22 to 63 who were exposed to a battery of tests – 37 neuropsychological tasks and 22 personality surveys – over the course of two weeks.

The tasks were engineered to be neutral, not emotional or political – they involved, for instance, memorising visual shapes. The researchers then used computational modelling to extract information from that data about the participants’ perception and learning, and their ability to engage in complex and strategic mental processing.

Overall, the researchers found that ideological attitudes mirrored cognitive decision-making, according to the study published in the journal Philosophical Transactions of the Royal Society B.

A key finding was that people with extremist attitudes tended to think about the world in black and white terms, and struggled with complex tasks that required intricate mental steps, said lead author Dr Leor Zmigrod at Cambridge’s department of psychology.

“Individuals or brains that struggle to process and plan complex action sequences may be more drawn to extreme ideologies, or authoritarian ideologies that simplify the world,” she said.

She said another feature of people with tendencies towards extremism appeared to be that they were not good at regulating their emotions, meaning they were impulsive and tended to seek out emotionally evocative experiences. “And so that kind of helps us understand what kind of individual might be willing to go in and commit violence against innocent others.”

Participants who are prone to dogmatism – stuck in their ways and relatively resistant to credible evidence – actually have a problem with processing evidence even at a perceptual level, the authors found.

“For example, when they’re asked to determine whether dots [as part of a neuropsychological task] are moving to the left or to the right, they just took longer to process that information and come to a decision,” Zmigrod said.

In some cognitive tasks, participants were asked to respond as quickly and as accurately as possible. People who leant towards the politically conservative tended to go for the slow and steady strategy, while political liberals took a slightly more fast and furious, less precise approach.

“It’s fascinating, because conservatism is almost a synonym for caution,” she said. “We’re seeing that – at the very basic neuropsychological level – individuals who are politically conservative … simply treat every stimuli that they encounter with caution.”

The “psychological signature” for extremism across the board was a blend of conservative and dogmatic psychologies, the researchers said.

The study, which looked at 16 different ideological orientations, could have profound implications for identifying and supporting people most vulnerable to radicalisation across the political and religious spectrum.

“What we found is that demographics don’t explain a whole lot; they only explain roughly 8% of the variance,” said Zmigrod. “Whereas, actually, when we incorporate these cognitive and personality assessments as well, suddenly, our capacity to explain the variance of these ideological world-views jumps to 30% or 40%.”
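That jump from roughly 8% to 30–40% is the kind of comparison produced by fitting nested regression models and comparing their R². The sketch below illustrates the idea on simulated data; the variable groupings, effect sizes and model choice are illustrative assumptions, not the study’s actual analysis.

```python
# A sketch of comparing variance explained (R^2) by demographics alone versus
# demographics plus cognitive/personality measures. The data are simulated and
# all effect sizes are illustrative assumptions, not the study's analysis.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 330
demographics = rng.normal(size=(n, 3))   # stand-ins for age, gender, education
cognition = rng.normal(size=(n, 5))      # stand-ins for task-derived scores
attitude = (0.3 * demographics[:, 0]
            + cognition @ np.array([0.8, -0.6, 0.5, 0.4, -0.3])
            + rng.normal(scale=1.5, size=n))

r2_demo = LinearRegression().fit(demographics, attitude).score(demographics, attitude)
X_full = np.hstack([demographics, cognition])
r2_full = LinearRegression().fit(X_full, attitude).score(X_full, attitude)
print(f"Demographics only: R^2 = {r2_demo:.2f}")
print(f"With cognitive and personality measures: R^2 = {r2_full:.2f}")
```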

The covid-19 pandemic is worse than official figures show (The Economist)

economist.com

But some things are improving, and it will not go on for ever

Sep 26th 2020

AS THE AUTUMNAL equinox passed, Europe was battening down the hatches for a gruelling winter. Intensive-care wards and hospital beds were filling up in Madrid and Marseille—a city which, a few months ago, thought it had more or less eliminated covid-19. Governments were implementing new restrictions, sometimes, as in England, going back on changes made just a few months ago. The al-fresco life of summer was returning indoors. Talk of a second wave was everywhere.

Across the Atlantic the United States saw its official covid-19 death toll—higher than that of all western Europe put together—break the 200,000 barrier. India, which has seen more than half a million new cases a week for four weeks running, will soon take America’s unenviable laurels as the country with the largest official case count.

The world looks set to see its millionth officially recorded death from covid-19 before the beginning of October. That is more than the World Health Organisation (WHO) recorded as having died from malaria (620,000), suicide (794,000) or HIV/AIDS (954,000) over the whole of 2017, the most recent year for which figures are available.

Those deaths represent just over 3% of the recorded covid-19 cases, which now number over 32m. That tally is itself an underestimate of the number who have actually been infected by SARS-CoV-2, the virus which causes covid-19. Many of the infected do not get sick. Many who do are never seen by any health system.

A better, if still imperfect, sense of how many infections have taken place since the outbreak began at the end of last year can be gleaned from “serosurveys” which scientists and public-health officials have undertaken around the world. These look for antibodies against SARS-CoV-2 in blood samples which may have been taken for other purposes. Their presence reveals past exposure to the virus.

Various things make these surveys inaccurate. They can pick up antibodies against other viruses, inflating their totals—an effect which can differ from place to place, as there are more similar-looking viruses circulating in some regions than in others. They can mislead in the other direction, too. Some tests miss low levels of antibody. Some people (often young ones) fight off the virus without ever producing antibodies and will thus not be recorded as having been infected. As a result, estimates based on serosurveys have to be taken with more than a grain of salt.

But in many countries it would take a small sea’s worth of the stuff to bring the serosurvey figures into line with the official number of cases. The fact that serosurvey data are spotty—there is very little, for example, openly available from China—means it is not possible to calculate the global infection rate directly from the data at hand. But by constructing an empirical relationship between death rates, case rates, average income—a reasonable proxy for intensity of testing—and seropositivity it is possible to impute rates for countries where data are not available and thus estimate a global total.

The graphic on this page shows such an estimate based on 279 serosurveys in 19 countries. It suggests that infections were already running at over 1m a day by the end of January—when the world at large was only just beginning to hear of the virus’s existence. In May the worldwide rate appears to have been more than 5m a day. The uncertainties in the estimate are large, and become greater as you draw close to the present, but all told it finds that somewhere between 500m and 730m people worldwide have been infected—from 6.4% to 9.3% of the world’s population. The WHO has not yet released serosurvey-based estimates of its own, though such work is under way; but it has set an upper bound at 10% of the global population.

As the upper part of the following data panel shows, serosurvey results which can be directly compared with the diagnosed totals are often a great deal bigger. In Germany, where cases have been low and testing thorough, the seropositivity rate was 4.5 times the diagnosed rate in August. In Minnesota a survey carried out in July found a multiplier of seven. A survey completed on August 23rd found a 6.02% seropositivity rate in England, implying a multiplier of 12. A national serosurvey of India conducted from the middle of May to early June found that 0.73% were infected, suggesting a national total of 10m. The number of registered cases at that time was 226,713, giving a multiplier of 44. Such results suggest that a global multiplier of 20 or so is quite possible.
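Those country multipliers can be combined with an income proxy in the way the imputation described above suggests. Here is a rough sketch using the multipliers quoted in the text; the GDP-per-head figures, the world-average income and the log-log model are assumptions for illustration, not The Economist’s actual calculation.

```python
# A sketch of the income-to-multiplier relationship, using the multipliers
# quoted above. GDP-per-head figures and the log-log model are assumptions
# for illustration, not The Economist's actual specification or data.
import numpy as np

gdp_per_head = np.array([46_000, 63_000, 42_000, 2_100])  # Germany, US, UK, India (assumed)
multiplier = np.array([4.5, 7.0, 12.0, 44.0])             # from the serosurvey comparisons above

slope, intercept = np.polyfit(np.log(gdp_per_head), np.log(multiplier), 1)

world_income = 11_000  # assumed rough world-average GDP per head
predicted = np.exp(intercept + slope * np.log(world_income))
print(round(predicted))  # ≈ 17, in the same ballpark as the "multiplier of 20 or so"
```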

If the disease is far more widespread than it appears, is it proportionately less deadly than official statistics, mainly gathered in rich countries, have made it look? Almost certainly. On the basis of British figures David Spiegelhalter, who studies the public understanding of risk at Cambridge University, has calculated that the risk of death from covid increases by about 13% for every year of age, which means a 65-year-old is 100 times more likely to die than a 25-year-old. And 65-year-olds are not evenly distributed around the world. Last year 20.5% of the EU’s population was over 65, as opposed to just 3% of sub-Saharan Africa’s.
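The 100-fold gap is simply the 13%-per-year gradient compounded over the forty years separating a 25-year-old from a 65-year-old, as this one-line check shows.

```python
# Worked arithmetic for the age gradient quoted above: ~13% extra risk per
# year of age, compounded over 40 years.
relative_risk_per_year = 1.13
years = 65 - 25
print(round(relative_risk_per_year ** years))  # ≈ 133, i.e. roughly 100 times the risk
```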

But it is also likely that the number of deaths, like the number of cases, is being seriously undercounted, because many people will have died of the disease without having had a positive test for the virus. One way to get around this is by comparing the number of deaths this year with that which would be predicted on the basis of years past. This “excess mortality” method relies on the idea that, though official statistics may often be silent or misleading as to the cause of death, they are rarely wrong about a death actually having taken place.

The excessive force of destiny

The Economist has gathered all-cause mortality data from countries which report them weekly or monthly, a group which includes most of western Europe, some of Latin America, and a few other large countries, including the United States, Russia and South Africa (see lower part of data panel). Between March and August these countries recorded 580,000 covid-19 deaths but 900,000 excess deaths; the true toll of their share of the pandemic appears to have been 55% greater than the official one. This analysis suggests that America’s official figures underestimate the death toll by 30% or more (America’s Centres for Disease Control and Prevention have provided a similar estimate). This means that the real number of deaths to date is probably a lot closer to 300,000 than 200,000. That is about 10% of the 2.8m Americans who die each year—or, put another way, half the number who succumb to cancer. And there is plenty of 2020 still to go.
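The arithmetic behind that 55% figure is straightforward once a baseline is in hand; in the sketch below the baseline and all-cause totals are hypothetical placeholders chosen only so that the excess matches the 900,000 quoted above.

```python
# Excess-mortality arithmetic for March-August, using the figures quoted above.
# The baseline and all-cause totals are hypothetical placeholders chosen so the
# excess equals the 900,000 reported; only the covid and excess figures come
# from the text.
expected_deaths = 5_000_000        # hypothetical baseline from previous years
observed_all_cause = 5_900_000     # hypothetical observed all-cause deaths
reported_covid_deaths = 580_000    # from the text

excess_deaths = observed_all_cause - expected_deaths       # 900,000
undercount = excess_deaths / reported_covid_deaths - 1
print(f"True toll ≈ {undercount:.0%} above the official count")  # ≈ 55%
```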

Add to all this excess mortality unreported deaths from countries where record keeping is not good enough to allow such assessments and the true death toll for the pandemic may be as high as 2m.

What can be done to slow its further rise? The response to the virus’s original vertiginous ascent was an avalanche of lockdowns; at its greatest extent, around April 10th, at least 3.5bn people were being ordered to stay at home either by national governments or regional ones. The idea was to stop the spread of the disease before health-care systems collapsed beneath its weight, and in this the lockdowns were largely successful. But in themselves they were never a solution. They severely slowed the spread of the disease while they were in place, but they could not stay in place for ever.

Stopping people interacting with each other at all, as lockdowns and limits on the size of gatherings do, is the first of three ways to lower a disease’s reproduction number, R—the number of new cases caused by each existing case. The second is reducing the likelihood that interactions lead to infection; it requires mandated levels of social distancing, hygiene measures and barriers to transmission such as face masks and visors. The third is reducing the time during which an infectious person can interact with people under any conditions. This is achieved by finding people who may recently have been infected and getting them to isolate themselves.
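Those three levers map onto a textbook decomposition of R as the product of contact rate, transmission probability per contact, and the length of the infectious period; the numbers in the sketch below are illustrative only, not estimates for covid-19.

```python
# R as the product of the three levers described above. All numbers are
# illustrative assumptions, not estimates for covid-19.
def reproduction_number(contacts_per_day, p_transmit_per_contact, infectious_days):
    return contacts_per_day * p_transmit_per_contact * infectious_days

baseline     = reproduction_number(10, 0.05, 5)    # no measures: R = 2.5
distancing   = reproduction_number(5, 0.05, 5)     # halve contacts: R = 1.25
plus_masks   = reproduction_number(5, 0.025, 5)    # also halve transmission risk: R = 0.625
plus_tracing = reproduction_number(5, 0.025, 2.5)  # also isolate cases sooner: R = 0.3125
print(baseline, distancing, plus_masks, plus_tracing)
```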

Ensuring that infectious people do not have time to do much infecting requires a fast and thorough test-and-trace system. Some countries, including Canada, China, Germany, Italy, Japan, Singapore and Taiwan, have successfully combined big testing programmes which provide rapid results with a well developed capacity for contact tracing and effective subsequent action. Others have foundered.

Networks and herds

Israel provides a ready example. An early and well-enforced lockdown had the expected effect of reducing new infections. But the time thus bought for developing a test-and-trace system was not well used, and the country’s emergence from lockdown was ill-thought-through. This was in part because the small circle around prime minister Binyamin Netanyahu into which power has been concentrated includes no one with relevant expertise; the health ministry is weak and politicised.

Things have been made worse by the fact that social distancing and barrier methods are being resisted by some parts of society. Synagogues and Torah seminaries in the ultra-Orthodox community and large tribal weddings in the Arab-Israeli community have been major centres of infection. While unhappy countries, like Tolstoy’s unhappy families, all differ, the elements of Israel’s dysfunction have clear parallels elsewhere.

Getting to grips with “superspreader” events is crucial to keeping R low. Close gatherings in confined spaces allow people to be infected dozens at a time. In March almost 100 were infected at a biotech conference in Boston. Many of them spread the virus on: genetic analysis subsequently concluded that 20,000 cases could be traced to that conference.

Nipping such blooms in the bud requires lots of contact tracing. Taiwan’s system logs 15-20 contacts for each person with a positive test. Contact tracers in England register four to five close contacts per positive test; those in France and Spain get just three. It also requires that people be willing to get tested in the first place. In England only 10-30% of people with covid-like symptoms ask for a test through the National Health Service. One of the reasons is that a positive test means self-isolation. Few want to undergo such restrictions, and few are good at abiding by them. In early May a survey in England found that only a fifth of those with covid symptoms had self-isolated as fully as required. The government is now seeking to penalise such breaches with fines of up to £10,000 ($12,800). That will reduce the incentive to get tested in the first place yet further.

As much of Europe comes to terms with the fact that its initial lockdowns have not put an end to its problems, there is increased interest in the Swedish experience. Unlike most of Europe, Sweden never instigated a lockdown, preferring to rely on social distancing. This resulted in a very high death rate compared with that seen in its Nordic neighbours; 58.1 per 100,000, where the rate in Denmark is 11.1, in Finland 6.19 and in Norway 4.93. It is not clear that this high death rate bought Sweden any immediate economic advantage. Its GDP dropped in the second quarter in much the same way as GDPs did elsewhere.

It is possible that by accepting so many deaths upfront Sweden may see fewer of them in the future, for two reasons. One is the phenomenon known, in a rather macabre piece of jargon, as “harvesting”. Those most likely to succumb do so early on, reducing the number of deaths seen later. The other possibility is that Sweden will benefit from a level of herd immunity: once the number of presumably immune survivors in the population grows high enough, the spread of the disease slows down because encounters between the infected and the susceptible become rare. Avoiding lockdown may conceivably have helped with this.

On the other hand, one of the advantages of lockdowns was that they provided time not just for the development of test-and-trace systems but also for doctors to get better at curing the sick. In places with good health systems, getting covid-19 is less risky today than it was six months ago. ISARIC, which researches infectious diseases, has analysed the outcomes for 68,000 patients hospitalised with covid-19; their survival rate increased from 66% in March to 84% in August. The greatest relative gains have been made among the most elderly patients. Survival rates among British people 60 and over who needed intensive care have risen from 39% to 58%.

This is largely a matter of improved case management. Putting patients on oxygen earlier helps. So does reticence about using mechanical ventilators and a greater awareness of the disease’s effects beyond the lungs, such as its tendency to provoke clotting disorders.

Nouvelle vague

As for treatments, two already widely available steroids, dexamethasone and hydrocortisone, increase survival by reducing inflammation. Avigan, a Japanese flu drug, has been found to hasten recovery. Remdesivir, a drug designed to fight other viruses, and convalescent plasma, which provides patients with antibodies from people who have already recovered from the disease, seem to offer marginal benefits.

Many consider antibodies tailor-made for the job by biotech companies a better bet; over the past few years they have provided a breakthrough in the treatment of Ebola. The American government has paid $450m for supplies of a promising two-antibody treatment being developed by Regeneron. That will be enough for between 70,000 and 300,000 doses, depending on what stage of the disease the patients who receive it have reached. Regeneron is now working with Roche, another drug company, to crank up production worldwide. But antibodies will remain expensive, and the need to administer them intravenously limits their utility.

It is tempting to look to better treatment for the reason why, although diagnosed cases in Europe have been climbing steeply into what is being seen as a second wave, the number of deaths has not followed: indeed it has, as yet, barely moved. The main reason, though, is simpler. During the first wave little testing was being done, and so many infections were being missed. Now lots of testing is being done, and vastly more infections are being picked up. Correct for this distortion and you see that the first wave was far larger than what is being seen today, which makes today’s lower death rate much less surprising (see data panel).

The coming winter is nevertheless worrying. Exponential growth can bring change quickly when R gets significantly above one. There is abundant evidence of what Katrine Bach Habersaat of the WHO calls “pandemic fatigue” eating away at earlier behavioural change, as well as increasing resentment of other public-health measures. YouGov, a pollster, has been tracking opinion on such matters in countries around the world. It has seen support for quarantining people who have had contact with someone infected fall a bit in Asia and rather more in the West, where it is down from 78% to 63%. In America it has fallen to 55%.

It is true that infection rates are currently climbing mostly among the young. But the young do not live in bubbles. Recent figures from Bouches-du-Rhône, the French department which includes Marseille, show clearly how a spike of cases in the young becomes, in a few weeks, an increase in cases at all ages.

As the fear of such spikes increases, though, so does the hope that they will not be recurring all that much longer. Pfizer, which has a promising vaccine candidate in efficacy trials, has previously said that it will seek regulatory review of preliminary results in October, though new standards at the Food and Drug Administration may not allow it to do so in America quite that soon. Three other candidates, from AstraZeneca, Moderna and J&J, are nipping at Pfizer’s heels. The J&J vaccine is a newcomer; it entered efficacy trials only on September 23rd. But whereas the other vaccines need a booster a month after the first jab, the J&J vaccine is administered just once, which will make the trial quicker; it could have preliminary results in November.

None of the companies will have all the trial data they are planning for until the first quarter of next year. But in emergencies regulators can authorise a vaccine’s use based on interim analysis if it meets a minimum standard (in this case, protection of half those who are vaccinated). Authorisation for use under such conditions would still make such a vaccine more credible than those already in use in China and Russia, neither of which was tested for efficacy at all. But there have been fears that American regulators may, in the run up to the presidential election, set the bar too low. Making an only-just-good-enough vaccine available might see social-distancing collapse and infections increase; alternatively, a perfectly decent vaccine approved in a politically toxic way might not be taken up as widely as it should be.

In either case, though, the practical availability of a vaccine will lag behind any sort of approval. In the long run, billions of doses could be needed. A global coalition of countries known as Covax wants to distribute 2bn by the end of 2021—which will only be enough for 1bn people if the vaccine in question, like Pfizer’s or AstraZeneca’s, needs to be administered twice. The world’s largest manufacturer of vaccines, the Serum Institute in India, recently warned that there will not be enough supplies for universal inoculation until 2024 at the earliest.

Even if everything goes swimmingly, it is hard to see distribution extending beyond a small number of front-line health and care workers this year. But the earlier vaccines are pushed out, the better. The data panel on this page looks at the results of vaccinating earlier versus later in a hypothetical population not that unlike Britain’s. Vaccination at a slower rate which starts earlier sees fewer eventual infections than a much more ambitious campaign started later. At the same time increases in R—which might come about if social distancing and similar measures fall away as vaccination becomes real—make all scenarios worse.
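The trade-off that data panel describes can be reproduced with even a crude compartmental model. The sketch below is exactly that: a minimal discrete-time SIR-style simulation in which every parameter (population, R, infectious period, campaign start dates and rates) is an illustrative assumption, not the model behind The Economist’s panel.

```python
# A minimal SIR-style sketch of "start earlier but slower" versus "start later
# but faster" vaccination. Every parameter is an illustrative assumption, not
# the model behind the data panel described above.
def total_infections(pop=67e6, r=1.3, infectious_days=7, i0=50_000,
                     vacc_start_day=0, vacc_per_day=0.0, days=365):
    beta, gamma = r / infectious_days, 1.0 / infectious_days
    s, i, cumulative = pop - i0, float(i0), float(i0)
    for day in range(days):
        new_infections = beta * i * max(s, 0.0) / pop
        new_vaccinations = (min(vacc_per_day, max(s - new_infections, 0.0))
                            if day >= vacc_start_day else 0.0)
        s -= new_infections + new_vaccinations
        i += new_infections - gamma * i
        cumulative += new_infections
    return cumulative

early_but_slow = total_infections(vacc_start_day=30, vacc_per_day=150_000)
late_but_fast = total_infections(vacc_start_day=120, vacc_per_day=400_000)
print(f"early but slow: {early_but_slow / 1e6:.1f}m total infections")
print(f"late but fast:  {late_but_fast / 1e6:.1f}m total infections")
# With these assumed numbers the earlier, slower campaign ends with fewer infections.
```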

By next winter the covid situation in developed countries should be improved. What level of immunity the vaccines will provide, and for how long, remains to be seen. But few expect none of them to work at all.

Access to the safety thus promised will be unequal, both within countries and between them. Some will see loved ones who might have been vaccinated die because they were not. Minimising such losses will require getting more people vaccinated more quickly than has ever been attempted before. It is a prodigious organisational challenge—and one which, judging by this year’s experience, some governments will handle considerably better than others. ■

This article appeared in the Briefing section of the print edition under the headline “Grim tallies”

An ant-inspired approach to mathematical sampling (Science Daily)

Date: June 19, 2020

Source: University of Bristol

Summary: Researchers have observed the exploratory behavior of ants to inform the development of a more efficient mathematical sampling technique.

In a paper published by the Royal Society, a team of Bristol researchers observed the exploratory behaviour of ants to inform the development of a more efficient mathematical sampling technique.

Animals like ants have the challenge of exploring their environment to look for food and potential places to live. With a large group of individuals, like an ant colony, a large amount of time would be wasted if the ants repeatedly explored the same empty areas.

The interdisciplinary team from the University of Bristol’s Faculties of Engineering and Life Sciences predicted that the study species — the ‘rock ant’ — uses some form of chemical communication to avoid exploring the same space multiple times.

Lead author, Dr Edmund Hunt, said:

“This would be a reversal of the Hansel and Gretel story — instead of following each other’s trails, they would avoid them in order to explore collectively.

“To test this theory, we conducted an experiment where we let ants explore an empty arena one by one. In the first condition, we cleaned the arena between each ant so they could not leave behind any trace of their path. In the second condition, we did not clean between ants. The ants in the second condition (no cleaning) made a better exploration of the arena — they covered more space.”

In mathematics, a probability distribution describes how likely each of a set of different possible outcomes is: for example, the chance that an ant will find food at a certain place. In many science and engineering problems, these distributions are highly complex, and they do not have a neat mathematical description. Instead, one must sample from the distribution to obtain a good approximation, while avoiding sampling too much from its unimportant (low-probability) parts.

The team wanted to find out if adopting an ant-inspired approach would hasten this sampling process.

“We predicted that we could simulate the approach adopted by the ants in the mathematical sampling problem, by leaving behind a ‘negative trail’ of where has already been sampled. We found that our ant-inspired sampling method was more efficient (faster) than a standard method which does not leave a memory of where has already been sampled,” said Dr Hunt.
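To make the parallel concrete, here is a caricature of such a sampler: a random-walk Metropolis-style explorer whose acceptance rule is biased away from regions it has already visited, via a “negative trail” of past samples. It is a sketch of the general idea only; the target density, penalty strength and kernel width are assumptions, and it is not the algorithm published in the paper.

```python
# A caricature of "negative trail" sampling: a random-walk Metropolis-style
# explorer whose acceptance rule is penalised near previously visited points.
# The target, penalty strength and kernel width are illustrative assumptions;
# this is not the algorithm from the paper, and the repulsion biases the
# sampler toward exploration rather than exact draws from the target.
import math
import random
from collections import deque

def log_target(x):                         # unnormalised 1-D Gaussian target
    return -0.5 * x * x

def trail_penalty(x, trail, width=0.3, strength=0.5):
    """Soft repulsion from recently visited points (the 'negative trail')."""
    return sum(strength * math.exp(-((x - t) / width) ** 2) for t in trail)

def explore(n_steps=2000, step=0.8, memory=300):
    x, trail, samples = 0.0, deque(maxlen=memory), []
    for _ in range(n_steps):
        proposal = x + random.gauss(0.0, step)
        score_now = log_target(x) - trail_penalty(x, trail)
        score_new = log_target(proposal) - trail_penalty(proposal, trail)
        if math.log(random.random()) < score_new - score_now:
            x = proposal                   # accept moves toward probable, unexplored regions
        trail.append(x)
        samples.append(x)
    return samples

visited = explore()
print(len({round(s, 1) for s in visited}), "distinct 0.1-wide bins visited")
```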

These findings contribute toward an interesting parallel between the exploration problem confronted by the ants, and the mathematical sampling problem of acquiring information. This parallel can inform our fundamental understanding of what the ants have evolved to do: acquire information more efficiently.

“Our ant-inspired sampling method may be useful in many domains, such as computational biology, for speeding up the analysis of complex problems. By describing the ants’ collective behaviour in informational terms, it also allows us to quantify how helpful are different aspects of their behaviour to their success. For example, how much better do they perform when their pheromones are not cleaned away. This could allow us to make predictions about which behavioural mechanisms are most likely to be favoured by natural selection.”


Story Source:

Materials provided by University of Bristol. Note: Content may be edited for style and length.


Journal Reference:

  1. Edmund R. Hunt, Nigel R. Franks, Roland J. Baddeley. The Bayesian superorganism: externalized memories facilitate distributed sampling. Journal of The Royal Society Interface, 2020; 17 (167): 20190848 DOI: 10.1098/rsif.2019.0848

Data fog: Why some countries’ coronavirus numbers do not add up (Al Jazeera)

Reported numbers of confirmed cases have become fodder for the political gristmill. Here is what non-politicians think.

By Laura Winter – 17 Jun 2020

Students at a university in Germany evaluate data from COVID-19 patients [Reuters]

Have you heard the axiom “In war, truth is the first casualty”?

As healthcare providers around the world wage war against the COVID-19 pandemic, national governments have taken to brawling with researchers, the media and each other over the veracity of the data used to monitor and track the disease’s march across the globe.

Allegations of deliberate data tampering carry profound public health implications. If a country knowingly misleads the World Health Organization (WHO) about the emergence of an epidemic or conceals the severity of an outbreak within its borders, precious time is lost. Time that could be spent mobilising resources around the globe to contain the spread of the disease. Time to prepare health systems for a coming tsunami of infections. Time to save more lives.

No one country has claimed that their science or data is perfect: French and US authorities confirmed they had their first coronavirus cases weeks earlier than previously thought.

Still, coronavirus – and the data used to benchmark it – has become grist for the political mill. But if we tune out the voices of politicians and pundits, and listen to those of good governance experts, data scientists and epidemiological specialists, what does the most basic but consequential data – the number of confirmed cases per country – tell us about how various governments around the globe are crunching coronavirus numbers and spinning corona-narratives?

What the good governance advocates say

Similar to how meteorologists track storms, data scientists use models to express how epidemics progress, and to predict where the next hurricane of new infections will batter health systems.

This data is fed by researchers into computer modelling programmes that national authorities and the WHO use to advise countries and aid organisations on where to send medical professionals and equipment, and when to take actions such as issuing lockdown orders.

The WHO also harnesses this data to produce a daily report that news organisations use to provide context around policy decisions related to the pandemic. But, unlike a hurricane, which cannot be hidden, epidemic data can be fudged and manipulated.

“The WHO infection numbers are based on reporting from its member states. The WHO cannot verify these numbers,” said Michael Meyer-Resende, Democracy Reporting International’s executive director.

To date, more than 8 million people have been diagnosed as confirmed cases of COVID-19. Of that number, more than 443,000 have died from the virus, according to Johns Hopkins University.

Those numbers are commonly quoted, but what is often not explained is that they both ultimately hinge on two factors: how many people are being tested, and the accuracy of the tests being administered. These numbers we “fetishise”, said Meyer-Resende, “depend on testing, on honesty of governments and on size of the population”.

“Many authoritarian governments are not transparent with their data generally, and one should not expect that they are transparent in this case,” he said. To test Meyer-Resende’s theory that less government transparency equals less transparent COVID-19 case data, Al Jazeera used Transparency International’s Corruption Perceptions Index and the Economist Intelligence Unit’s Democracy Index as lenses through which to view the number of reported cases of the coronavirus.

Transparency International’s Corruption Perceptions Index

The examination revealed striking differences in the number of confirmed COVID-19 cases that those nations deemed transparent and democratic reported compared to the numbers reported by nations perceived to be corrupt and authoritarian.

Denmark, with a population of roughly six million, is ranked in the top 10 of the most transparent and democratic countries. On May 1, it reported 9,158 confirmed cases of COVID-19, a rate of 1,581 confirmed cases per million people. That was more than triple the world average for that day – 412 cases per million people – according to available data.


Meanwhile, Turkmenistan, a regular in the basement of governance and corruption indexes, maintains that not one of its roughly six million citizens has been infected with COVID-19, even though it borders and has extensive trade with Iran, a regional epicentre of the pandemic.

Also on May 1, Myanmar, with a population of more than 56 million, reported just 151 confirmed cases of infection, a rate of 2.8 infections per million. That is despite the fact that every day, roughly 10,000 workers cross the border into China, where the pandemic began.

On February 4, Myanmar suspended its air links with Chinese cities, including Wuhan, where COVID-19 is said to have originated last December (however, a recent study reported that the virus may have hit the city as early as August 2019).

“That just seems abnormal, out of the ordinary. Right?” said Roberto Kukutschka, Transparency International’s research coordinator, in reference to the numbers of reported cases.

“In these countries where you have high levels of corruption, there are high levels of discretion as well,” he told Al Jazeera. “It’s counter-intuitive that these countries are reporting so few cases, when all countries that are more open about these things are reporting way more. It’s very strange.”

While Myanmar has started taking steps to address the pandemic, critics say a month of preparation was lost to jingoistic denial. Ten days before the first two cases were confirmed, government spokesman Zaw Htay claimed the country was protected by its lifestyle and diet, and by the fact that purchases are made with cash rather than credit cards.

Turkmenistan’s authorities have reportedly removed almost all mentions of the coronavirus from official publications, including a read-out of a March 27 phone call between Uzbek President Shavkat Mirziyoyev and Turkmen President Gurbanguly Berdimuhamedov.

It is unclear if Turkmenistan even has a testing regime.

Russia, on the other hand, touts the number of tests it claims to have performed, but not how many people have been tested – and that is a key distinction because the same person can be tested more than once. Transparency International places Russia in the bottom third of its corruption index.

On May 1, Russia, with a population just above 145 million, reported that it had confirmed 106,498 cases of COVID-19 after conducting an astounding 3.72 million “laboratory tests”. Just 2.9 percent of the tests produced a positive result.


Remember, Denmark’s population is six million, or half that of Moscow. Denmark had reportedly tested 206,576 people by May 1 and had 9,158 confirmed coronavirus cases, a positive rate of 4.4 percent. Finland, another democracy at the top of the transparency index, has a population of 5.5 million and a positive test result rate of 4.7 percent.
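The per-million and positivity figures quoted in this comparison are simple ratios; the sketch below redoes the arithmetic with the case and test counts reported in the article (the population figures are rounded assumptions).

```python
# Reproducing the article's back-of-the-envelope ratios.
# Populations are rounded assumptions; case/test counts are those quoted above.
# Note the article's caveat: Denmark's figure is people tested, Russia's is tests performed.
countries = {
    #            confirmed cases, tests/people tested, population (approx.)
    "Denmark": (9_158,   206_576,   5_800_000),
    "Russia":  (106_498, 3_720_000, 145_000_000),
}

for name, (cases, tests, pop) in countries.items():
    per_million = cases / pop * 1_000_000
    positivity = cases / tests * 100
    print(f"{name}: {per_million:,.0f} cases per million, {positivity:.1f}% of tests positive")
```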

This discrepancy spurred the editors of PCR News, a Moscow-based Russian-language molecular diagnostics journal, to take a closer look at the Russian test. They reported that in order to achieve a positive COVID-19 result, the sample tested must contain a much higher volume of the virus, or viral load, as compared to the amount required for a positive influenza test result.

In terms of sensitivity or ability to detect COVID-19, the authors wrote: “Is it high or low? By modern standards – low.”

They later added, “The test will not reveal the onset of the disease, or it will be decided too early that the recovering patient no longer releases viruses and cannot infect anyone. And he walks along the street, and he is contagious.”

Ostensibly, if that person then dies, COVID-19 will not be certified as the cause of death.

Good governance experts see a dynamic at play.

“In many of these countries, the legitimacy of the state depends on not going into crisis,” said Kukutschka, adding that he counts countries with world-class health systems among them.

“Countries who test less will be shown as less of a problem. Countries that test badly will seem as if they don’t have a problem,” said Meyer-Resende. “Numbers are very powerful. They seem objective.”

Meyer-Resende highlighted the case of China. “The Chinese government said for a while that it had zero new cases. That’s a very powerful statement. It says it all with a single digit: ‘We have solved the problem’. Except, it hadn’t. It had changed the way of counting cases.”

China – where the pandemic originated – recently escaped a joint US-Australian-led effort at the World Health Assembly to investigate whether Beijing had for weeks concealed a deadly epidemic from the WHO.

China alerted the WHO about the epidemic on December 31, 2019. Researchers at the University of Hong Kong estimated that the actual number of COVID-19 cases in China, where the coronavirus first appeared, could have been four times greater in the beginning of this year than what Chinese authorities had been reporting to the WHO.

“We estimated that by Feb 20, 2020, there would have been 232,000 confirmed cases in China as opposed to the 55,508 confirmed cases reported,” said the researchers’ report, published in The Lancet.

The University of Hong Kong researchers attribute the discrepancy to ever-changing case definitions, the official guidance that tells doctors which symptoms – and therefore patients – can be diagnosed and recorded as COVID-19. China’s National Health Commission issued no fewer than seven versions of these guidelines between January 15 and March 3.

All of which adds to the confusion.

“Essentially, we are moving in a thick fog, and the numbers we have are no more than a small flashlight,” said Meyer-Resende.

What the epidemiological expert thinks

Dr Ghassan Aziz monitors epidemics in the Middle East. He is the Health Surveillance Program manager at the Doctors Without Borders (MSF) Middle East Unit. He spoke to Al Jazeera in his own capacity and not on behalf of the NGO.

“I think Iran, they’re not reporting everything,” he told Al Jazeera. “It’s fair to assume that [some countries] are underreporting because they are under-diagnosing. They report what they detect.”

He later added that US sanctions against Iran, which human rights groups say have drastically constrained Tehran’s ability to finance imports of medicines and medical equipment, could also be a factor.

“Maybe [it’s] on purpose, and maybe because of the sanctions and the lack of testing capacities,” said Aziz.

Once China shared the novel coronavirus genome on January 24, many governments began in earnest to test their populations. Others have placed limits on who can be tested.

In Brazil, due to a sustained lack of available tests, patients using the public health network in April were tested only if they were hospitalised with severe symptoms. On April 1, Brazil reported that 201 people had died from the virus. That number was challenged by doctors and relatives of the dead. A month later, after one minister of health was fired and another resigned after a week on the job, the testing protocols had not changed.

On May 1, Brazil reported that COVID-19 was the cause of death for 5,901 people. On June 5, Brazil’s health ministry took down the website that reported cumulative coronavirus numbers – only to be ordered by the country’s Supreme Court to reinstate the information.

Right-wing President Jair Bolsonaro has repeatedly played down the severity of the coronavirus pandemic, calling it “a little flu”. Brazilian Supreme Court Justice Gilmar Mendes accused the government of attempting to manipulate statistics, calling it “a manoeuvre of totalitarian regimes”.

Brazil currently has the dubious distinction of having the second-highest number of COVID-19 deaths in the world, behind the US. By June 15, the COVID-19 death toll in the country had surpassed 43,300 people.

Dr Aziz contends that even with testing, many countries customarily employ a “denial policy”. He said in his native country, Iraq, health authorities routinely obfuscate health emergencies by changing the names of outbreaks such as cholera to “endemic diarrhoea”, or Crimean-Congo hemorrhagic fever to “epidemic fever”.

“In Iraq, they give this idea to the people that ‘We did our best. We controlled it,'” Dr Aziz said. “When someone dies, ‘Oh. It’s not COVID-19. He was sick. He was old. This is God’s will. It was Allah.’ This is what I find so annoying.”

What the data scientist says

Sarah Callaghan, a data scientist and the editor-in-chief of the data-science journal Patterns, told Al Jazeera the numbers of confirmed cases countries report reflect “the unique testing and environmental challenges that each country is facing”.

But, she cautioned: “Some countries have the resources and infrastructure to carry out widespread testing, others simply don’t. Some countries might have the money and the ability to test, but other local issues come into play, like politics.”

According to Callaghan, even in the best of times and under the best circumstances, collecting data on an infectious disease is both difficult and expensive. But despite the problems with some countries’ data, she remains confident that the data and modelling that are available will contribute much to understanding how COVID-19 spreads, how the virus reacts to different environmental conditions, and which questions still need answers.

Her advice is: “When looking at the numbers, think about them. Ask yourself if you trust the source. Ask yourself if the source is trying to push a political or economic agenda.”

“There’s a lot about this situation that we don’t know, and a lot more misinformation that’s being spread, accidentally or deliberately.”

New model predicts the peaks of the COVID-19 pandemic (Science Daily)

Date: May 29, 2020

Source: Santa Fe Institute

Summary: Researchers describe a single function that accurately describes all existing available data on active COVID-19 cases and deaths — and predicts forthcoming peaks.

As of late May, COVID-19 has killed more than 325,000 people around the world. Even though the worst seems to be over for countries like China and South Korea, public health experts warn that cases and fatalities will continue to surge in many parts of the world. Understanding how the disease evolves can help these countries prepare for an expected uptick in cases.

This week in the journal Frontiers in Physics, researchers describe a single function that accurately describes all existing available data on active cases and deaths — and predicts forthcoming peaks. The tool uses q-statistics, a set of functions and probability distributions developed by Constantino Tsallis, a physicist and member of the Santa Fe Institute’s external faculty. Tsallis worked on the new model together with Ugur Tirnakli, a physicist at Ege University, in Turkey.

“The formula works in all the countries in which we have tested,” says Tsallis.

Neither physicist ever set out to model a global pandemic. But Tsallis says that when he saw the shape of published graphs representing China’s daily active cases, he recognized shapes he’d seen before — namely, in graphs he’d helped produce almost two decades ago to describe the behavior of the stock market.

“The shape was exactly the same,” he says. For the financial data, the function described probabilities of stock exchanges; for COVID-19, it described the daily number of active cases — and fatalities — as a function of time.

Modeling financial data and tracking a global pandemic may seem unrelated, but Tsallis says they have one important thing in common. “They’re both complex systems,” he says, “and in complex systems, this happens all the time.” Disparate systems from a variety of fields — biology, network theory, computer science, mathematics — often reveal patterns that follow the same basic shapes and evolution.

The financial graph appeared in a 2004 volume co-edited by Tsallis and the late Nobelist Murray Gell-Mann. Tsallis developed q-statistics, also known as “Tsallis statistics,” in the late 1980s as a generalization of Boltzmann-Gibbs statistics to complex systems.
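At the heart of q-statistics is the q-exponential, e_q(x) = [1 + (1 − q)x]^(1/(1 − q)), which recovers the ordinary exponential as q approaches 1 and has a fat, power-law tail when q is larger than 1. The short Python sketch below shows that function and a peaked curve built from it; the specific peak form and all parameter values are illustrative assumptions for this post, not the formula or fitted values from the Frontiers in Physics paper.

```python
# Sketch: the q-exponential of Tsallis statistics and a peaked curve built from it.
# The peak form and all parameter values are illustrative assumptions,
# not the fitted values reported in the paper.
import math

def q_exp(x, q):
    """Tsallis q-exponential: [1 + (1-q)x]^(1/(1-q)), reducing to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-9:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def active_cases(t, C=1.0, alpha=3.0, beta=0.15, q=1.1):
    """Power-law rise damped by a q-exponential: rises, peaks, then decays with a fat tail."""
    return C * t ** alpha * q_exp(-beta * t, q)

for t in range(0, 101, 10):
    print(f"day {t:3d}: {active_cases(t):8.1f}")
```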

In the new paper, Tsallis and Tirnakli used data from China, where the active case rate is thought to have peaked, to set the main parameters for the formula. Then, they applied it to other countries including France, Brazil, and the United Kingdom, and found that it matched the evolution of the active cases and fatality rates over time.

The model, says Tsallis, could be used to create useful tools like an app that updates in real-time with new available data, and can adjust its predictions accordingly. In addition, he thinks that it could be fine-tuned to fit future outbreaks as well.

“The functional form seems to be universal,” he says, “Not just for this virus, but for the next one that might appear as well.”

Story Source:

Materials provided by Santa Fe Institute. Note: Content may be edited for style and length.

Journal Reference:

  1. Constantino Tsallis, Ugur Tirnakli. Predicting COVID-19 Peaks Around the World. Frontiers in Physics, 2020; 8 DOI: 10.3389/fphy.2020.00217

The Pandemic Isn’t a Black Swan but a Portent of a More Fragile Global System (New Yorker)

newyorker.com

Bernard Avishai – April 21, 2020

Nassim Nicholas Taleb at his home in Larchmont, N.Y.
Nassim Nicholas Taleb says that his profession is “probability.” But his vocation is showing how the unpredictable is increasingly probable. Photograph by Michael Appleton / NYT / Redux

Nassim Nicholas Taleb is “irritated,” he told Bloomberg Television on March 31st, whenever the coronavirus pandemic is referred to as a “black swan,” the term he coined for an unpredictable, rare, catastrophic event, in his best-selling 2007 book of that title. “The Black Swan” was meant to explain why, in a networked world, we need to change business practices and social norms—not, as he recently told me, to provide “a cliché for any bad thing that surprises us.” Besides, the pandemic was wholly predictable—he, like Bill Gates, Laurie Garrett, and others, had predicted it—a white swan if ever there was one. “We issued our warning that, effectively, you should kill it in the egg,” Taleb told Bloomberg. Governments “did not want to spend pennies in January; now they are going to spend trillions.”

The warning that he referred to appeared in a January 26th paper that he co-authored with Joseph Norman and Yaneer Bar-Yam, when the virus was still mainly confined to China. The paper cautions that, owing to “increased connectivity,” the spread will be “nonlinear”—two key contributors to Taleb’s anxiety. For statisticians, “nonlinearity” describes events very much like a pandemic: an output disproportionate to known inputs (the structure and growth of pathogens, say), owing to both unknown and unknowable inputs (their incubation periods in humans, or random mutations), or eccentric interaction among various inputs (wet markets and airplane travel), or exponential growth (from networked human contact), or all three.

“These are ruin problems,” the paper states, exposure to which “leads to a certain eventual extinction.” The authors call for “drastically pruning contact networks,” and other measures that we now associate with sheltering in place and social distancing. “Decision-makers must act swiftly,” the authors conclude, “and avoid the fallacy that to have an appropriate respect for uncertainty in the face of possible irreversible catastrophe amounts to ‘paranoia.’ ” (“Had we used masks then”—in late January—“we could have saved ourselves the stimulus,” Taleb told me.)

Yet, for anyone who knows his work, Taleb’s irritation may seem a little forced. His profession, he says, is “probability.” But his vocation is showing how the unpredictable is increasingly probable. If he was right about the spread of this pandemic it’s because he has been so alert to the dangers of connectivity and nonlinearity more generally, to pandemics and other chance calamities for which COVID-19 is a storm signal. “I keep getting asked for a list of the next four black swans,” Taleb told me, and that misses his point entirely. In a way, focussing on his January warning distracts us from his main aim, which is building political structures so that societies will be better able to cope with mounting, random events.

Indeed, if Taleb is chronically irritated, it is by those economists, officials, journalists, and executives—the “naïve empiricists”—who think that our tomorrows are likely to be pretty much like our yesterdays. He explained in a conversation that these are the people who, consulting bell curves, focus on their bulging centers, and disregard potentially fatal “fat tails”—events that seem “statistically remote” but “contribute most to outcomes,” by precipitating chain reactions, say. (Last week, Dr. Phil told Fox’s Laura Ingraham that we should open up the country again, noting, wrongly, that “three hundred and sixty thousand people die each year from swimming pools — but we don’t shut the country down for that.” In response, Taleb tweeted, “Drowning in swimming pools is extremely contagious and multiplicative.”) Naïve empiricists plant us, he argued in “The Black Swan,” in “Mediocristan.” We actually live in “Extremistan.”
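The "fat tail" point can be seen in a small simulation. This is not Taleb's own calculation, just a minimal Python sketch under arbitrary assumptions (a half-normal versus a Pareto distribution with shape 1.2), showing that in the fat-tailed sample a handful of extreme draws accounts for a disproportionate share of the total, while in the thin-tailed one their contribution is negligible.

```python
# Toy contrast between a thin-tailed and a fat-tailed world (illustrative only):
# in the fat-tailed sample, a handful of extreme draws accounts for most of the action.
import random

def top_share(draws, k=10):
    """Share of the total contributed by the k largest observations."""
    draws = sorted(draws, reverse=True)
    return sum(draws[:k]) / sum(draws)

rng = random.Random(42)
n = 100_000
gaussian = [abs(rng.gauss(0, 1)) for _ in range(n)]    # thin tails ("Mediocristan")
pareto   = [rng.paretovariate(1.2) for _ in range(n)]  # fat tails  ("Extremistan")

print(f"top 10 of {n:,} half-normal draws contribute {top_share(gaussian):.4%} of the sum")
print(f"top 10 of {n:,} Pareto draws contribute      {top_share(pareto):.4%} of the sum")
```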

Taleb, who is sixty-one, came by this impatience honestly. As a young man, he lived through Lebanon’s civil war, which was precipitated by Palestinian militias escaping a Jordanian crackdown, in 1971, and led to bloody clashes between Maronite Christians and Sunni Muslims, drawing in Shiites, Druze, and the Syrians as well. The conflict lasted fifteen years and left some ninety thousand people dead. “These events were unexplainable, but intelligent people thought they were capable of providing convincing explanations for them—after the fact,” Taleb writes in “The Black Swan.” “The more intelligent the person, the better sounding the explanation.” But how could anyone have anticipated “that people who seemed a model of tolerance could become the purest of barbarians overnight?” Given the prior cruelties of the twentieth century, the question may sound ingenuous, but Taleb experienced sudden violence firsthand. He grew fascinated, and outraged, by extrapolations from an illusory normal—the evil of banality. “I later saw the exact same illusion of understanding in business success and the financial markets,” he writes.

“Later” began in 1983, when, after university in Paris, and a Wharton M.B.A., Taleb became an options trader—“my core identity,” he says. Over the next twelve years, he conducted two hundred thousand trades, and examined seventy thousand risk-management reports. Along the way, he developed an investment strategy that entailed exposure to regular, small losses, while positioning him to benefit from irregular, massive gains—something like a venture capitalist. He explored, especially, scenarios for derivatives: asset bundles where fat tails—price volatilities, say—can either enrich or impoverish traders, and do so exponentially when they increase the scale of the movement.

These were the years, moreover, when, following Japan, large U.S. manufacturing companies were converting to “just-in-time” production, which involved integrating and synchronizing supply-chains, and forgoing stockpiles of necessary components in favor of acquiring them on an as-needed basis, often relying on single, authorized suppliers. The idea was that lowering inventory would reduce costs. But Taleb, extrapolating from trading risks, believed that “managing without buffers was irresponsible,” because “fat-tail events” can never be completely avoided. As the Harvard Business Review reported this month, Chinese suppliers shut down by the pandemic have stymied the production capabilities of a majority of the companies that depend on them.

The coming of global information networks deepened Taleb’s concern. He reserved a special impatience for economists who saw these networks as stabilizing—who thought that the average thought or action, derived from an ever-widening group, would produce an increasingly tolerable standard—and who believed that crowds had wisdom, and bigger crowds more wisdom. Thus networked, institutional buyers and sellers were supposed to produce more rational markets, a supposition that seemed to justify the deregulation of derivatives, in 2000, which helped accelerate the crash of 2008.

As Taleb told me, “The great danger has always been too much connectivity.” Proliferating global networks, both physical and virtual, inevitably incorporate more fat-tail risks into a more interdependent and “fragile” system: not only risks such as pathogens but also computer viruses, or the hacking of information networks, or reckless budgetary management by financial institutions or state governments, or spectacular acts of terror. Any negative event along these lines can create a rolling, widening collapse—a true black swan—in the same way that the failure of a single transformer can collapse an electricity grid.

COVID-19 has initiated ordinary citizens into the esoteric “mayhem” that Taleb’s writings portend. Who knows what will change for countries when the pandemic ends? What we do know, Taleb says, is what cannot remain the same. He is “too much a cosmopolitan” to want global networks undone, even if they could be. But he does want the institutional equivalent of “circuit breakers, fail-safe protocols, and backup systems,” many of which he summarizes in his fourth, and favorite, book, “Antifragile,” published in 2012. For countries, he envisions political and economic principles that amount to an analogue of his investment strategy: government officials and corporate executives accepting what may seem like too-small gains from their investment dollars, while protecting themselves from catastrophic loss.

Anyone who has read the Federalist Papers can see what he’s getting at. The “separation of powers” is hardly the most efficient form of government; getting something done entails a complex, time-consuming process of building consensus among distributed centers of authority. But James Madison understood that tyranny—however distant it was from the minds of likely Presidents in his own generation—is so calamitous to a republic, and so incipient in the human condition, that it must be structurally mitigated. For Taleb, an antifragile country would encourage the distribution of power among smaller, more local, experimental, and self-sufficient entities—in short, build a system that could survive random stresses, rather than break under any particular one. (His word for this beneficial distribution is “fractal.”)

We should discourage the concentration of power in big corporations, “including a severe restriction of lobbying,” Taleb told me. “When one per cent of the people have fifty per cent of the income, that is a fat tail.” Companies shouldn’t be able to make money from monopoly power, “from rent-seeking”—using that power not to build something but to extract an ever-larger part of the surplus. There should be an expansion of the powers of state and even county governments, where there is “bottom-up” control and accountability. This could incubate new businesses and foster new education methods that emphasize “action learning and apprenticeship” over purely academic certification. He thinks that “we should have a national Entrepreneurship Day.”

But Taleb doesn’t believe that the government should abandon citizens buffeted by events they can’t possibly anticipate or control. (He dedicated his book “Skin in the Game,” published in 2018, to Ron Paul and Ralph Nader.) “The state,” he told me, “should not smooth out your life, like a Lebanese mother, but should be there for intervention in negative times, like a rich Lebanese uncle.” Right now, for example, the government should, indeed, be sending out checks to unemployed and gig workers. (“You don’t bail out companies, you bail out individuals.”) He would also consider a guaranteed basic income, much as Andrew Yang, whom he admires, has advocated. Crucially, the government should be an insurer of health care, though Taleb prefers not a centrally run Medicare-for-all system but one such as Canada’s, which is controlled by the provinces. And, like responsible supply-chain managers, the federal government should create buffers against public-health disasters: “If it can spend trillions stockpiling nuclear weapons, it ought to spend tens of billions stockpiling ventilators and testing kits.”

At the same time, Taleb adamantly opposes the state taking on staggering debt. He thinks, rather, that the rich should be taxed as disproportionately as necessary, “though as locally as possible.” The key is “to build on the good days,” when the economy is growing, and reduce the debt, which he calls “intergenerational dispossession.” The government should then encourage an eclectic array of management norms: drawing up political borders, even down to the level of towns, which can, in an epidemiological emergency, be closed; having banks and corporations hold larger cash reserves, so that they can be more independent of market volatility; and making sure that manufacturing, transportation, information, and health-care systems have redundant storage and processing components. (“That’s why nature gave us two kidneys.”) Taleb is especially keen to inhibit “moral hazard,” such as that of bankers who get rich by betting, and losing, other people’s money. “In the Hammurabi Code, if a house falls in and kills you, the architect is put to death,” he told me. Correspondingly, any company or bank that gets a bailout should expect its executives to be fired, and its shareholders diluted. “If the state helps you, then taxpayers own you.”

Some of Taleb’s principles seem little more than thought experiments, or fit uneasily with others. How does one tax more locally, or close a town border? If taxpayers own corporate equities, does this mean that companies might be nationalized, broken up, or severely regulated? But asking Taleb to describe antifragility to its end is a little like asking Thomas Hobbes to nail down sovereignty. The more important challenge is to grasp the peril for which political solutions must be designed or improvised; society cannot endure with complacent conceptions of how things work. “It would seem most efficient to drive home at two hundred miles an hour,” he put it to me. “But odds are you’d never get there.”


Bernard Avishai teaches political economy at Dartmouth and is the author of “The Tragedy of Zionism,” “The Hebrew Republic,” and “Promiscuous,” among other books. He was selected as a Guggenheim fellow in 1987.

Steven Pinker talks Donald Trump, the media, and how the world is better off today than ever before (ABC Australia)


“By many measures of human flourishing the state of humanity has been improving,” renowned cognitive scientist Steven Pinker says, a view often in contrast to the highlights of the 24-hour news cycle and the recent “counter-enlightenment” movement of Donald Trump.

“Fewer of us are dying of disease, fewer of us are dying of hunger, more of us are living in democracies, we’re more affluent, better educated … these are trends that you can’t easily appreciate from the news because they never happen all at once,” he says.

Canadian-American thinker Steven Pinker is the author of Bill Gates’s new favourite book — Enlightenment Now — in which he maintains that historically speaking the world is significantly better than ever before.

But he says the media’s narrow focus on negative anomalies can result in “systematically distorted” views of the world.

Speaking to the ABC’s The World program, Mr Pinker gave his views on Donald Trump, distorted perceptions and the simple arithmetic that proves the world is better than ever before.

Donald Trump’s ‘counter-enlightenment’

“Trumpism is of course part of a larger phenomenon of authoritarian populism. This is a backlash against the values responsible for the progress that we’ve enjoyed. It’s a kind of counter-enlightenment ideology that Trumpism promotes. Namely, instead of universal human wellbeing, it focusses on the glory of the nation, it assumes that nations are in zero-sum competition against each other as opposed to cooperating globally. It ignores the institutions of democracy which were specifically implemented to avoid a charismatic authoritarian leader from wielding power, but subjects him or her to the restraints of a governed system with checks and balances, which Donald Trump seems to think is rather a nuisance to his own ability to voice the greatness of the people directly. So in many ways all of the enlightenment forces we have enjoyed, are being pushed back by Trump. But this is a tension that has been in play for a couple of hundred years. No sooner did the enlightenment happen that a counter-enlightenment grew up to oppose it, and every once in a while it does make reappearances.”

News media can ‘systematically distort’ perceptions

“If your impression of the world is driven by journalism, then as long as various evils haven’t gone to zero there’ll always be enough of them to fill the news. And if journalism isn’t accompanied by a bit of historical context, that is not just what’s bad now but how bad it was in the past, and statistical context, namely how many wars? How many terrorist attacks? What is the rate of homicide? Then our intuitions, since they’re driven by images and narratives and anecdotes, can be systematically distorted by the news unless it’s presented in historical and statistical context.”

‘Simple arithmetic’: The world is getting better

“It’s just a simple matter of arithmetic. You can’t look at how much there is right now and say that it is increasing or decreasing until you compare it with how much took place in the past. When you look at how much took place in the past you realise how much worse things were in the 50s, 60s, 70s and 80s. We don’t appreciate it now when we concentrate on the remaining horrors, but there were horrific wars such as the Iran-Iraq war, the Soviets in Afghanistan, the war in Vietnam, the partition of India, the Bangladesh war of independence, the Korean War, which killed far more people than even the brutal wars of today. And if we only focus on the present, we ought to be aware of the suffering that continues to exist, but we can’t take that as evidence that things have gotten worse unless we remember what happened in the past.”

Don’t equate inequality with poverty

“Globally, inequality is decreasing. That is, if you don’t look within a wealthy country like Britain or the United States, but look across the globe either comparing countries or comparing people worldwide. As best as we can tell, inequality is decreasing because so many poor countries are getting richer faster than rich countries are getting richer. Now within the wealthy countries of the anglosphere, inequality is increasing. And although inequality brings with it a number of serious problems such as disproportionate political power to the wealthy. But inequality itself is not a problem. What we have to focus on is the wellbeing of those at the bottom end of the scale, the poor and the lower middle class. And those have not actually been decreasing once you take into account government transfers and benefits. Now this is a reason we shouldn’t take for granted, the important role of government transfers and benefits. It’s one of the reasons why the non-English speaking wealthy democracies tend to have greater equality than the English speaking ones. But we shouldn’t confuse inequality with poverty.”

Human societies evolve along similar paths (University of Exeter)


Societies ranging from ancient Rome and the Inca empire to modern Britain and China have evolved along similar paths, a huge new study shows.

Despite their many differences, societies tend to become more complex in “highly predictable” ways, researchers said.

These processes of development – often happening in societies with no knowledge of each other – include the emergence of writing systems and “specialised” government workers such as soldiers, judges and bureaucrats.

The international research team, including researchers from the University of Exeter, created a new database of historical and archaeological information using data on 414 societies spanning the last 10,000 years. The database is larger and more systematic than anything that has gone before it.

“Societies evolve along a bumpy path – sometimes breaking apart – but the trend is towards larger, more complex arrangements,” said corresponding author Dr Thomas Currie, of the Human Behaviour and Cultural Evolution Group at the University of Exeter’s Penryn Campus in Cornwall.

“Researchers have long debated whether social complexity can be meaningfully compared across different parts of the world. Our research suggests that, despite surface differences, there are fundamental similarities in the way societies evolve.

“Although societies in places as distant as Mississippi and China evolved independently and followed their own trajectories, the structure of social organisation is broadly shared across all continents and historical eras.”

The measures of complexity examined by the researchers were divided into nine categories. These included:

  • Population size and territory
  • Number of control/decision levels in administrative, religious and military hierarchies
  • Information systems such as writing and record keeping
  • Literature on specialised topics such as history, philosophy and fiction
  • Economic development

The researchers found that these different features showed strong statistical relationships, meaning that variation in societies across space and time could be captured by a single measure of social complexity.

This measure can be thought of as “a composite measure of the various roles, institutions, and technologies that enable the coordination of large numbers of people to act in a politically unified manner”.
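A "single measure" of this kind is, in essence, the sort of score produced by a principal component analysis of the societies-by-measures matrix. The Python sketch below illustrates only that step: the five-column matrix is made-up toy data, not values from the Seshat databank, and the analysis in the paper is far richer.

```python
# Minimal sketch of extracting a single "social complexity" score as the first
# principal component of a societies-by-features matrix. The numbers are made up
# for illustration; they are not Seshat data.
import numpy as np

# rows = societies, columns = complexity measures (population, hierarchy levels, writing, ...)
X = np.array([
    [1e4, 2, 0, 0, 1],
    [1e5, 3, 1, 0, 2],
    [1e6, 4, 1, 1, 3],
    [5e6, 5, 1, 1, 4],
    [2e7, 6, 1, 1, 5],
], dtype=float)

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise each measure
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
explained = S**2 / np.sum(S**2)            # share of variance per component
pc1_scores = Z @ Vt[0]                     # one complexity score per society

print("share of variance captured by the first component:", round(explained[0], 3))
print("complexity scores:", np.round(pc1_scores, 2))
```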

Dr Currie said learning lessons from human history could have practical uses.

“Understanding the ways in which societies evolve over time and in particular how humans are able to create large, cohesive groups is important when we think about state building and development,” he said.

“This study shows how the sciences and humanities, which have not always seen eye-to-eye, can actually work together effectively to uncover general rules that have shaped human history.”


The new database of historical and archaeological information is known as “Seshat: Global History Databank” and its construction was led by researchers from the University of Exeter, the University of Connecticut, the University of Oxford, Trinity College Dublin and the Evolution Institute. More than 70 expert historians and archaeologists have helped in the data collection process.

The paper, published in Proceedings of the National Academy of Sciences, is entitled: “Quantitative historical analysis uncovers a single dimension of complexity that structures global variation in human social organisation.”

50 years of calamities in South America (Pesquisa Fapesp)

Earthquakes and volcanoes kill more people, but droughts and floods affect far larger numbers

MARCOS PIVETTA | ED. 241 | MARCH 2016

A study of the impacts of 863 natural disasters recorded over the last five decades in South America indicates that relatively rare geological phenomena, such as earthquakes and volcanism, produced almost twice as many deaths as the more frequent climatic and meteorological events, such as floods, landslides, storms and droughts. Of the roughly 180,000 deaths resulting from these disasters, 60% were due to earthquakes and volcanic activity, a type of occurrence concentrated in Andean countries such as Peru, Chile, Ecuador and Colombia. Earthquakes and volcanism accounted for, respectively, 11% and 3% of the events counted in the study.

Approximately 32% of the deaths were due to events associated with meteorological or climatic occurrences, a category that encompasses four out of every five natural disasters recorded in the region between 1960 and 2009. Disease epidemics – a type of biological disaster for which data on the region are scarce, according to the survey – claimed 15,000 lives, 8% of the total. In Brazil, 10,225 people died over those five decades as a result of natural disasters, a little more than 5% of the total, most of them in floods and landslides during storms.

Drought in the Northeast...

The study was carried out by geographer Lucí Hidalgo Nunes, a professor at the Institute of Geosciences of the University of Campinas (IG-Unicamp), for her habilitation ("livre-docência") thesis, and resulted in the book Urbanização e desastres naturais – Abrangência América do Sul (Oficina de Textos), published in the middle of last year. “Since the 1960s, South America’s urban population has been larger than its rural population,” says Lucí. “The main stage for natural calamities has been urban space, which keeps growing both in the area occupied by cities and in the number of inhabitants.”

The picture is inverted when the parameter analysed is the number of people affected by each type of disaster rather than the death toll. Of the 138 million non-fatal victims of these events, 1% were affected by epidemics, 11% by earthquakes and volcanism, and 88% by climatic or meteorological phenomena. Droughts and floods were the occurrences that affected the most people. The great droughts hit 57 million people (41% of all those affected), and the floods 52.5 million (38%). Brazil accounted for roughly 85% of the non-fatal victims of droughts, essentially residents of the Northeast, and for one third of those affected by floods, mainly inhabitants of the big cities of the South and Southeast.
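The inversion between the two rankings follows directly from the shares quoted above; a quick sketch recombining the study's headline figures (totals of roughly 180,000 deaths and 138 million people affected, with percentages rounded as reported) makes the contrast explicit.

```python
# The study's headline shares, recombined with the totals quoted above (rounded figures).
deaths_total, affected_total = 180_000, 138_000_000

deaths   = {"geophysical (quakes, volcanoes)": 0.60, "climatic/meteorological": 0.32, "epidemics": 0.08}
affected = {"geophysical (quakes, volcanoes)": 0.11, "climatic/meteorological": 0.88, "epidemics": 0.01}

for cause in deaths:
    print(f"{cause:32s} deaths ~{deaths[cause]*deaths_total:>9,.0f}"
          f"   affected ~{affected[cause]*affected_total:>12,.0f}")
```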

...and flooding in Caracas, Venezuela: these two types of disaster affect the largest numbers of people

Estimated at US$ 44 billion over the five decades, the material losses associated with the nearly 900 disasters counted stemmed, in 80% of cases, from climatic or meteorological phenomena. “Brazil has almost 50% of the territory and more than half of the population of South America. But it was the site of only 20% of the disasters, 5% of the deaths and 30% of the economic losses associated with these events,” says Lucí. “The number of people affected here, however, was high: 53% of everyone affected by disasters in South America. We still have vulnerabilities, though not as severe as in countries such as Peru, Colombia and Ecuador.”

To write the study, the geographer compiled, organised and analysed the records of natural disasters over the last five decades in the countries of South America, plus French Guiana (an overseas department of France), which are stored in Em-Dat – the International Disaster Database. This database gathers information on more than 21,000 natural disasters that have occurred worldwide from 1900 to the present. It is maintained by the Centre for Research on the Epidemiology of Disasters (CRED), based at the School of Public Health of the Catholic University of Louvain, in Brussels (Belgium). “There is no perfect database,” Lucí notes. “Em-Dat is weak, for example, in recording biological disasters.” Its advantage is that it brings together information from different sources – non-governmental agencies, United Nations bodies, insurance companies, research institutes and the media – and files it using a single methodology, an approach that makes comparative studies possible.

What counts as a disaster
Events recorded in Em-Dat as natural disasters must meet at least one of four conditions: cause the death of at least 10 people; affect 100 or more individuals; prompt the declaration of a state of emergency; or be the reason for a request for international assistance. In the study of South America, Lucí organised the disasters into three broad categories, subdivided into 10 types of occurrence. Geophysical phenomena cover earthquakes, volcanic eruptions and dry mass movements (such as a rock falling down a hillside on a rain-free day). Meteorological or climatic events cover storms, floods, landslides on slopes, temperature extremes (unusual heat or cold), droughts and wildfires. Epidemics are the only type of biological disaster counted (see chart).

Climatologist José Marengo, head of the research division of the National Center for Monitoring and Early Warning of Natural Disasters (Cemaden), in Cachoeira Paulista, in the state of São Paulo, points out that, besides natural events, there are disasters classed as technological, as well as hybrid cases. The collapse last November of a tailings dam belonging to the mining company Samarco in Mariana (MG), which killed 19 people and released tonnes of toxic mud into the Rio Doce river basin, had no relation to natural events. It can be classed as a technological disaster, in which human action is linked to the causes of the occurrence. In 2011, the 9.0-magnitude earthquake on the Richter scale, followed by tsunamis, was the largest in Japan's history. It killed almost 16,000 people, injured 6,000 and left 2,500 missing. It also destroyed around 138,000 buildings. One of the structures affected was the Fukushima nuclear plant, whose reactors leaked radioactivity. “In that case, a technological disaster was caused by a natural disaster,” says Marengo.

Decade after decade, records of natural disasters on the continent have been increasing, following what appears to be a global trend. “The quality of information on natural disasters has improved greatly in recent decades. That helps swell the statistics,” says Lucí. “But there seems to be a real increase in the number of events.” According to the study, much of the escalation in tragic events was due to the growing number of high-intensity meteorological and climatic phenomena hitting South America. In the 1960s there were 51 events of this type. In the 2000s the number rose to 257. Over the five decades, the incidence of geophysical disasters, which cause many deaths, remained roughly stable, and cases of epidemics declined.

Urban risk
The number of deaths from extreme events appears to be falling after peaking at 75,000 in the 1970s. In the last decade there were just over 6,000 deaths in South America caused by natural disasters, according to Lucí's survey. Historically, fatalities are concentrated in a few occurrences of enormous proportions, especially earthquakes and volcanic eruptions. The 20 deadliest events (eight in Peru and five in Colombia) accounted for 83% of all deaths linked to natural phenomena between 1960 and 2009. The worst disaster was an earthquake in Peru in May 1970, with 66,000 deaths, followed by a flood in Venezuela in December 1999 (30,000 deaths) and a volcanic eruption in Colombia in November 1985 (20,000 deaths). Brazil accounts for the 9th deadliest event (the meningitis epidemic of 1974, with 1,500 deaths) and the 19th (a landslide triggered by heavy rains that killed 436 people in March 1967 in Caraguatatuba, on the coast of São Paulo state).

There was also a decline in the number of people affected in more recent years, but the figures remain high. In the 1980s, disasters produced around 50 million non-fatal victims in South America. In each of the last two decades, the number fell to around 20 million.

Seven out of every 10 Latin Americans now live in cities, where haphazard land occupation and certain specific geo-climatic characteristics tend to increase the local population's vulnerability to natural disasters. Lucí compared the situation of 56 urban agglomerations in South America with more than 750,000 inhabitants with respect to five factors that raise the risk of calamities: drought, earthquake, flood, landslide and volcanism. Quito, the capital of Ecuador, was the only metropolis exposed to all five factors. Four Colombian cities (Bogotá, Cali, Cúcuta and Medellín) and La Paz, in Bolivia, came next, with four vulnerabilities. Brazilian state capitals presented at most two risk factors, drought and flooding (see chart). “Disasters result from the combination of natural hazards and the vulnerabilities of the occupied areas,” says researcher Victor Marchezini of Cemaden, a sociologist who studies the long-term impacts of these extreme phenomena. “They are a socio-environmental event.”

It is hard to measure the cost of a disaster. But drawing on data from the 2013 edition of the Atlas brasileiro de desastres naturais, which uses a different methodology from the one employed by the Unicamp geographer to count calamities in South America, the group led by Carlos Eduardo Young, of the Institute of Economics of the Federal University of Rio de Janeiro (UFRJ), produced a study late last year. Based on World Bank estimates of losses caused by disasters in some Brazilian states, Young calculated that flash floods, floods and mass movements between 2002 and 2012 caused economic losses of at least R$ 180 billion for the country. In general, the poorest states, such as those of the Northeast, suffered the largest economic losses relative to the size of their GDP. “Vulnerability to disasters can be inversely proportional to a state's degree of economic development,” says the economist. “Climate change may sharpen the issue of regional inequality in Brazil.”

IPEA: Study dispels myths about violence in the country (Boletim FPA)

Year 4 – no. 289 – March 22, 2016 – FUNDAÇÃO PERSEU ABRAMO

A study released today by the Institute for Applied Economic Research (Ipea) discusses lethal violence in the country, which has evolved very unevenly across states and micro-regions, increasingly affecting residents of smaller cities in the interior and in the Northeast. According to the study, in those states where homicides fell, qualitatively consistent public policies were adopted, as in the cases of São Paulo, Pernambuco, Espírito Santo and Rio de Janeiro. The chart below shows the evolution of homicide rates in the country by region from 2004 to 2014.

According to the study, homicides in Brazil in 2014 accounted for more than 10% of the homicides recorded worldwide, making Brazil the country with the highest absolute number of homicides. Brazil would also have one of the 12 highest homicide rates in the world. This tragedy, the study argues, has implications for health, for demographic dynamics and, consequently, for economic and social development, since 53% of deaths of men aged 15 to 19 are caused by homicide.

Regarding schooling, the study shows that the chances of victimisation for 21-year-olds with fewer than eight years of schooling are 5.4 times higher than for the same age group with eight or more years of schooling: education works as a shield against homicide. The study also shows that, at age 21, when the chances of a person being murdered in Brazil peak, black Brazilians are 147% more likely to be homicide victims than white, Asian-Brazilian or indigenous individuals. Between 2004 and 2014 there was also a widening of the gap in lethality between blacks and non-blacks.

The study also draws attention to the specific features of gender violence in the country, which is sometimes rendered invisible by the even higher figures for lethal violence among men, or by the resistance to recognising this issue as a matter for public policy.

In the case of deaths caused by state agents on duty, the study points to “evident under-reporting”. It argues that police lethality is the most dramatic expression of the lack of democratisation of the institutions responsible for public security in the country.

Using a statistical model, the study also indicates that, contrary to what part of the national congress would like, while violent victimisation has taken on the contours of a social tragedy in Brazil, without the Disarmament Statute the tragedy would be even worse.

The Atlas is a rich source of information on violence in the country, countering the common-sense view of the subject.

The Water Data Drought (N.Y.Times)

Then there is water.

Water may be the most important item in our lives, our economy and our landscape about which we know the least. We not only don’t tabulate our water use every hour or every day, we don’t do it every month, or even every year.

The official analysis of water use in the United States is done every five years. It takes a tiny team of people four years to collect, tabulate and release the data. In November 2014, the United States Geological Survey issued its most current comprehensive analysis of United States water use — for the year 2010.

The 2010 report runs 64 pages of small type, reporting water use in each state by quality and quantity, by source, and by whether it’s used on farms, in factories or in homes.

It doesn’t take four years to get five years of data. All we get every five years is one year of data.

The data system is ridiculously primitive. It was an embarrassment even two decades ago. The vast gaps — we start out missing 80 percent of the picture — mean that from one side of the continent to the other, we’re making decisions blindly.

In just the past 27 months, there has been a string of high-profile water crises — poisoned water in Flint, Mich.; polluted water in Toledo, Ohio, and Charleston, W. Va.; the continued drying of the Colorado River basin — that have undermined confidence in our ability to manage water.

In the time it took to compile the 2010 report, Texas endured a four-year drought. California settled into what has become a five-year drought. The most authoritative water-use data from across the West couldn’t be less helpful: It’s from the year before the droughts began.

In the last year of the Obama presidency, the administration has decided to grab hold of this country’s water problems, water policy and water innovation. Next Tuesday, the White House is hosting a Water Summit, where it promises to unveil new ideas to galvanize the sleepy world of water.

The question White House officials are asking is simple: What could the federal government do that wouldn’t cost much but that would change how we think about water?

The best and simplest answer: Fix water data.

More than any other single step, modernizing water data would unleash an era of water innovation unlike anything in a century.

We have a brilliant model for what water data could be: the Energy Information Administration, which has every imaginable data point about energy use — solar, wind, biodiesel, the state of the heating oil market during the winter we’re living through right now — all available, free, to anyone. It’s not just authoritative, it’s indispensable. Congress created the agency in the wake of the 1970s energy crisis, when it became clear we didn’t have the information about energy use necessary to make good public policy.

That’s exactly the state of water — we’ve got crises percolating all over, but lack the data necessary to make smart policy decisions.

Congress and President Obama should pass updated legislation creating inside the United States Geological Survey a vigorous water data agency with the explicit charge to gather and quickly release water data of every kind — what utilities provide, what fracking companies and strawberry growers use, what comes from rivers and reservoirs, the state of aquifers.

Good information does three things.

First, it creates the demand for more good information. Once you know what you can know, you want to know more.

Second, good data changes behavior. The real-time miles-per-gallon gauges in our cars are a great example. Who doesn’t want to edge the M.P.G. number a little higher? Any company, community or family that starts measuring how much water it uses immediately sees ways to use less.

Finally, data ignites innovation. Who imagined that when most everyone started carrying a smartphone, we’d have instant, nationwide traffic data? The phones make the traffic data possible, and they also deliver it to us.

The truth is, we don’t have any idea what detailed water use data for the United States will reveal. But we can be certain it will create an era of water transformation. If we had monthly data on three big water users — power plants, farmers and water utilities — we’d instantly see which communities use water well, and which ones don’t.

We’d see whether tomato farmers in California or Florida do a better job. We’d have the information to make smart decisions about conservation, about innovation and about investing in new kinds of water systems.

Water’s biggest problem, in this country and around the world, is its invisibility. You don’t tackle problems that are out of sight. We need a new relationship with water, and that has to start with understanding it.

Statisticians Found One Thing They Can Agree On: It’s Time To Stop Misusing P-Values (FiveThirtyEight)

Footnotes

  1. Even the Supreme Court has weighed in, unanimously ruling in 2011 that statistical significance does not automatically equate to scientific or policy importance.

Christie Aschwanden is FiveThirtyEight’s lead writer for science.

Drop in homicides in São Paulo is the work of the PCC, not the police, says researcher (BBC Brasil)

Thiago Guimarães
From London

February 12, 2016, 3:21 pm

Military police officers from the Rota unit during an operation on the outskirts of São Paulo. Mario Ângelo/SigmaPress/AE

In a recent announcement, the São Paulo state government reported that it had reached the state’s lowest rate of intentional homicide in 20 years. The 2015 rate stood at 8.73 per 100,000 inhabitants – below 10 per 100,000 for the first time since 2001.

“This is not the work of chance. It is the fruit of a great deal of dedication. Police officers died, lost their lives, anonymous heroes, so that São Paulo could achieve this,” Governor Geraldo Alckmin (PSDB) said at the time.

For one researcher who followed the routine of homicide investigators in São Paulo, the credit lies elsewhere: with organized crime itself – in this case the PCC (Primeiro Comando da Capital), the faction that operates inside and outside the state’s prisons.

“The PCC’s regulation is the main factor governing life and death in São Paulo. The PCC is the product, the producer and the regulator of violence,” says the Canadian researcher Graham Willis, defending a hypothesis that circulates in academic circles and that the São Paulo state government considers “ridiculous”.

A professor at the University of Cambridge (England), Willis sheds new light on the so-called “PCC hypothesis” in an immersive study that followed the routine of officers at São Paulo’s DHPP (Homicide and Personal Protection Precinct) between 2009 and 2012.

The research drew on dozens of internal documents seized from a PCC member and on interviews with residents, shopkeepers and criminals in a community dominated by the faction in São Paulo’s east zone, in 2007 and 2011.

Theories of ‘almost everything’

The study questions theories that, according to Willis, lean on “almost everything” to explain the striking decline in homicidal violence in São Paulo: demographic change, disarmament, falling unemployment, reinforced policing in critical areas.

“The public security system has never established why homicides have fallen over the past 15 years. And it has never told a credible story. They talk about public policies, hotspot policing (critical areas), but that cannot explain it,” he says.

Broadly, Willis’s argument runs as follows: the 73% drop in homicides in the state since 2001, the starting point of the current historical series, is too abrupt to be explained by long-term factors such as socioeconomic progress and changes in the police.

This becomes clear, the researcher says, when one notes that, before the reduction, homicides were disproportionately concentrated in neighborhoods on the periphery of the state capital: Jardim Ângela, Cidade Tiradentes, Capão Redondo, Brasilândia.

The pacification of these areas – with drops of nearly 80% – coincides with the moment, from 2003 onward, when the PCC’s structure branched out and reached everyday life in these regions.

“The drop was so fast that it does not point to a socioeconomic or policing factor, which would be long term. It happened in several parts of the city at roughly the same time. And there are no data on specific public policies in those places that could explain these trends,” says Willis, who based his conclusions on field observations.

Video: http://tvuol.tv/bgdw3x

A channel of authority

Created in 1993 with the declared aim of “fighting oppression in the São Paulo prison system” and “avenging” the 111 deaths of the Carandiru massacre, the PCC began, from the 2000s onward and as it decentralized its decisions, to represent a channel of authority in areas until then marked by the absence of the state.

The pillars of that authority, according to Willis and other researchers who have studied the faction, are relative security, notions of solidarity and structures of social assistance. In that sense, the police, traditionally seen in these places as violent and corrupt, were replaced by another social order.

“When I was in a community controlled by the faction, residents said they could sleep soundly with their doors and windows unlocked,” Willis writes in the recently released The Killing Consensus: Police, Organized Crime and the Regulation of Life and Death in Urban Brazil, the book in which he presents the results of the investigation.

Before the PCC’s dominance, Willis recounts, diffuse and intense violence prevailed in the state capital (which accounts for 25% of homicides in the state). Gangs fought over the drug economy and opened space for widespread criminality. The picture changed when the faction transposed to the streets the rules for controlling violence that it had established inside the prisons.

“For the organization to keep its criminal activities going, it is much better to stay ‘quiet’ so as not to attract attention and to have a controlled security environment, with very strict internal rules that work,” says Willis, who describes the faction’s systems of punishment in the book.

The researcher regards the waves of violence unleashed by the PCC in São Paulo in 2006 and 2012, with attacks on police officers and public buildings, as outliers, episodes of response to state violence.

“They do not turn violent when the problem is the repression of drug trafficking, for example, but when they feel their security is threatened. And the police response is to become more violent, which strengthens the idea among criminals that they need protection. In other words, the more you attack the PCC, the stronger it becomes.”

Counting called into question

Willis criticizes the way São Paulo counts its violent deaths – and says the real picture is probably worse than the official discourse suggests.

He questions, for example, the existence of at least nine potential classifications of violent death (skeletal remains found, suicide, suspicious death, death to be clarified, robbery followed by death/latrocínio, involuntary manslaughter, resistance followed by death and intentional homicide) and says that the multiplicity of categories masks reality.

“In general, a homicide investigation does not take place in every case. Each suspicious death first has to be assessed by a police chief before it is decided whether it will be investigated as a homicide, whereas in many cities around the world any suspicious death is investigated as a homicide.”

For him, there should be more transparency about the homicide clearance rate (which in São Paulo, he says, hovers around 30%, but includes cases shelved without anyone being held responsible) and about the work of the officers who investigate these cases, which he sees as among the most undervalued within the institution.

“People usually think of a homicide division as a top-tier unit. But it is the opposite: it is a deeply undervalued place within the police force, staffed by young officers or officers at the end of their careers who want to get out of there as quickly as possible. Other officers are suspicious of those who work there, partly because they investigate police involved in deaths, but also because the lives they investigate generally have no value – they are people from the poor parts of the city.”

For him, the under-resourcing of homicide investigation contrasts with the structure of the specialized repression battalions, such as the Military Police’s Rota and Força Tática.

“These officers have incredible cars, armored vehicles (‘caveirões’), state-of-the-art weapons. That shows very clearly what the politicians’ priority is: the physical repression of poor, black residents of the periphery. It is not investigating those people’s lives when they die.”

The other side

Critics of the so-called “PCC hypothesis” usually raise the following question: if the retreat in homicides was not the result of police action, how does one explain the fall in other crime indices? According to the government, for example, São Paulo saw an overall drop in crime last year compared with 2014. Would the faction, the critics ask ironically, also be helping to bring those crimes down?

“Statistical variations do not necessarily reflect actions of the state,” says Willis. For him, studies have already shown that more police activity does not always mean less crime.

Willis also says that the statistical variations in these other crimes are not significant, and that the PCC does not depend on cargo, vehicle or bank robberies, but on the small-scale drug trade through which members pay the faction’s mandatory dues.

The São Paulo State Department of Public Security said it considers Willis’s hypothesis about the decline in homicides “ridiculous and widely contradicted by the reality of all the crime indices” in the state.

It states that the rate in the state is almost three times lower than the national average (25.1 cases per 100,000 inhabitants) and that “any researcher with a minimum of rigor knows that proposing a cause-and-effect relationship in this direction means fighting against the basic rules of science”.

The department said that all crimes committed by police officers in the state are punished – it cited 1,445 expulsions, 654 dismissals and 1,849 officers arrested since 2011 – and denied the existence of death squads within its forces.

On the fact that deaths caused by police officers are not included in the official homicide tally, but in a separate category, it said that “all” Brazilian states and “most countries, including the United States” adopt the same methodology.

The department did not comment on Willis’s remarks about the structure of homicide investigation in the state or on the alleged priority given to forces devoted to repression.

Data monitoring and analysis – The crisis in São Paulo’s water sources (Probabit)

Situation as of January 25, 2015

4.2 millimeters of rain fell on January 24, 2015 over São Paulo’s reservoirs (weighted average).

305 billion liters (13.60%) of water in storage. Over 24 hours, the volume rose by 4.4 billion liters (0.19%).

134 days until all stored water runs out, assuming rainfall of 996 mm/year and the system’s current efficiency.

66% is the reduction in consumption needed to balance the system under current conditions, with 33% losses in distribution.


Understanding the crisis

How to read this chart?

The points on the chart show 4,040 one-year intervals of accumulated rainfall and the corresponding change in the total water stock (from January 1, 2003/2004 up to today). The pattern shows that more rain pushes the stock up and less rain pushes it down, as one would expect.

This and the other charts on this page always refer to the total water storage capacity of São Paulo (2.24 trillion liters), that is, the sum of the reservoirs of the Cantareira, Alto Tietê, Guarapiranga, Cotia, Rio Grande and Rio Claro systems. Want to explore the data?

The band of accumulated rainfall between 1,400 mm and 1,600 mm per year concentrates most of the points observed since 2003. This is the usual rainfall pattern the system was designed for. Within that band, the system operates without large deviations from equilibrium: at most 15% up or down over a year. By taking the variation over one year as its reference, this way of looking at the data removes the seasonal rainfall cycle and highlights climate variations of larger amplitude. See the year-by-year patterns.
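To make the construction concrete, here is a minimal sketch of how such one-year intervals could be assembled from a daily consolidated series like the one described under “The data” below. The file name and column names (date, stock_pct, rain_mm) are assumptions for illustration, not the site’s actual layout.

# Minimal sketch: build (annual rainfall, year-over-year stock change) pairs
# from a hypothetical daily consolidated CSV with columns date, stock_pct, rain_mm.
import pandas as pd

df = pd.read_csv("sao_paulo_consolidated.csv", parse_dates=["date"])
df = df.set_index("date").sort_index().asfreq("D")

# Accumulated rainfall over each trailing 365-day window (mm/year)
rain_1y = df["rain_mm"].rolling(365).sum()

# Change in the stored volume versus the same day one year earlier,
# in percentage points of total capacity
stock_delta_1y = df["stock_pct"] - df["stock_pct"].shift(365)

points = pd.DataFrame({"rain_mm_1y": rain_1y,
                       "stock_delta_pct": stock_delta_1y}).dropna()
print(len(points), "one-year intervals")  # on the order of the ~4,040 points described above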

A second layer of information on the same chart is the risk zones. The red zone is bounded by the current water stock in %. All the points inside that area (with their frequency indicated on the right) therefore represent situations that, if repeated, will lead the system to collapse in less than one year. The yellow zone shows the incidence of cases that, if repeated, will lead to a shrinking stock. The system will only truly recover if new points appear above the yellow band.

To put the current moment in context and give a sense of the trend, points connected in blue highlight the reading added today (accumulated rainfall and the change between today and the same day last year) and the readings from 30, 60 and 90 days ago (in progressively lighter shades).


Discussion based on a simple model

Fitting a linear model to the observed cases shows a reasonable correlation between accumulated rainfall and the change in the water stock, as expected.

At the same time, the wide dispersion in the system’s behavior is clear, especially in the rainfall band between 1,400 mm and 1,500 mm. Above 1,600 mm there are two clearly separate paths; the lower one corresponds to the period between 2009 and 2010, when the reservoirs were full and the excess rain could not be stored.

Besides more or less deliberately efficient management of the available water, combined variations in consumption, in losses and in the effectiveness of water capture may contribute to the observed fluctuations. However, there are no data that would allow us to examine the effect of each of these variables separately.
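Continuing the sketch above, fitting the line and reading off the equilibrium rainfall could look like this. It is only an illustration of the simple model, not the site’s exact code, and it reuses the hypothetical points frame from the previous sketch.

# Fit a straight line: one-year stock change as a function of one-year rainfall.
import numpy as np

slope, intercept = np.polyfit(points["rain_mm_1y"], points["stock_delta_pct"], 1)

# Equilibrium rainfall: the annual rainfall at which the predicted change is zero.
equilibrium_rain = -intercept / slope
print(f"fitted slope: {slope:.4f} percentage points per mm")
print(f"estimated equilibrium rainfall: {equilibrium_rain:.0f} mm/year")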

Simulation 1: Effect of increasing the water stock

In this simulation, the additional reserve of the Billings reservoir, with a volume of 998 billion liters (already discounting the “potable” arm of the Rio Grande reservoir), was hypothetically added to the supply system.

Increasing the available stock does not change the equilibrium point, but it does change the slope of the line that represents the relationship between rainfall and the change in stock. The difference in slope between the blue (simulated) and red (actual) lines shows the effect of expanding the stock.

If the Billings reservoir were not today a giant deposit of sewage, we might be out of the critical situation. It is worth stressing, however, that simply increasing the stock cannot stave off scarcity indefinitely if rainfall persists below the equilibrium point.
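One way to reproduce the simulated line, under the assumption that stock changes are expressed as a percentage of total capacity, is to rescale the fitted line by the ratio of the old capacity to the new one; the zero crossing, and hence the equilibrium rainfall, does not move. A sketch, reusing slope and intercept from the fit above:

# Re-express the same absolute volume changes against a larger total capacity:
# the fitted line is scaled by old_capacity / new_capacity, so the slope flattens
# but the rainfall at which it crosses zero stays the same.
old_capacity = 2.24e12      # liters (current six systems)
billings_extra = 0.998e12   # liters (hypothetical Billings addition)
scale = old_capacity / (old_capacity + billings_extra)

sim_slope = slope * scale
sim_intercept = intercept * scale
print(f"simulated equilibrium: {-sim_intercept / sim_slope:.0f} mm/year (unchanged)")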

Simulation 2: Effect of improved efficiency

The only way to keep the stock stable when rainfall becomes scarcer is to change the system’s “efficiency curve”. In other words, it is necessary to consume less and adapt to less water entering the system.

The blue line in the chart alongside indicates the axis around which the points would need to fluctuate for the system to balance with an annual supply of 1,200 mm of rain.

Efficiency can be improved by reducing consumption, reducing losses and improving water-capture technology (for example, by restoring the riparian forests and springs around the water sources).

If the situation seen from 2013 to 2015 persists, with rainfall around 1,000 mm, it will be necessary to reach an efficiency curve far beyond anything that has ever been achieved in practice, above even the best cases observed so far.

With the “design” equilibrium at around 1,500 mm, the arithmetic goes more or less like this: Sabesp loses 500 mm (33% of the water distributed) and the population consumes 1,000 mm. To reach equilibrium quickly at 1,000 mm, consumption would have to be 500 mm, since the losses cannot be eliminated quickly and occur before consumption.

If one third of the distributed water were not systematically lost, there would be no crisis. The 500 mm of rain wasted every year through the precariousness of the distribution system is not missed when 1,500 mm falls, but at 1,000 mm every liter thrown away on one side is a liter that has to be saved on the other.
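The back-of-the-envelope budget in the previous paragraphs can be written out directly; the numbers are taken from the text above, nothing else is assumed.

# Design equilibrium: 1,500 mm of rain covers 500 mm of losses plus 1,000 mm of consumption.
design_inflow = 1500    # mm/year the system was designed around
losses = 500            # mm/year lost in distribution (~33% of distributed water)
consumption = design_inflow - losses           # 1,000 mm/year consumed at equilibrium

# With rainfall around 1,000 mm/year and losses that cannot be cut quickly:
new_inflow = 1000
sustainable_consumption = new_inflow - losses  # 500 mm/year
cut_needed = 1 - sustainable_consumption / consumption
print(f"consumption must fall to {sustainable_consumption} mm/year, a {cut_needed:.0%} cut")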

Simulation 3: Current efficiency and the savings required

To estimate the current efficiency, the last 120 observations of the system’s behavior are used.

The current efficiency curve makes it possible to estimate the system’s current equilibrium point (the highlighted red point).

The blue point indicates the latest observation of annual accumulated rainfall. The difference between the two measures the size of the imbalance.

Just to stop the system losing water, the withdrawal flow needs to be cut by 49%. Since that flow includes all the losses, if everything depends on reducing consumption alone, the saving must be 66% if losses are 33%, or 56% if losses are 17%.

It seems incredible that the system’s efficiency should be so low in the middle of such a serious crisis. Is the attempt to hold back consumption actually increasing consumption? Do smaller, shallower volumes evaporate more? Have people still not grasped the size of the disaster?


Prognosis

Assuming that no new water stocks will be added in the short term, the prognosis of whether and when the water will run out depends on the amount of rain and on the system’s efficiency.

The chart shows how many days of water remain as a function of accumulated rainfall, considering two efficiency curves: the average one and the current one (estimated from the last 120 days).

The highlighted point uses the most recent observation of accumulated rainfall for the year and shows how many days of water remain if current rainfall and efficiency conditions persist.

The prognosis is a reference that shifts as new observations arrive and has no defined probability. It is a projection meant to make clearer what conditions are needed to escape collapse.

Remember, though, that the historical average rainfall in São Paulo is 1,441 mm per year: a curve that crosses that threshold means a system with more than a 50% chance of collapsing in less than a year. Are we capable of avoiding the disaster?
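As a rough illustration of the prognosis logic, and not the site’s exact procedure, one can take the predicted one-year stock change at the current rainfall from a fitted efficiency line and divide the current stock by that drawdown rate. The sketch below reuses slope and intercept from the fit above and plugs in the headline figures quoted at the top of the page.

# Days remaining under an assumed efficiency line: if the line predicts an annual
# drawdown at the current rainfall, divide the current stock by that rate.
current_stock_pct = 13.60   # % of total capacity in storage (figure quoted above)
current_rain = 996          # mm accumulated over the last year (figure quoted above)

predicted_change = slope * current_rain + intercept   # percentage points per year
if predicted_change < 0:
    days_left = current_stock_pct / (-predicted_change) * 365
    print(f"roughly {days_left:.0f} days of water left under these conditions")
else:
    print("under these conditions the stock is not projected to run out")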


The data

The starting point is the data released daily by Sabesp. The original data series, kept up to date, is available here.

There are, however, two important limitations in these data that can distort one’s reading of reality: 1) Sabesp uses only percentages to refer to reservoirs with very different total volumes; 2) the addition of new volumes does not change the base over which those percentages are calculated.

For this reason, it was necessary to correct the percentages in the original series relative to the current total volume, since volumes that used to be inaccessible have become accessible and, let’s face it, were always there in the reservoirs. The corrected series can be obtained here. It contains an additional column with the actual volumes (in billions of liters: hm³).

In addition, we decided to treat the data in consolidated form, as if all the water were held in a single large reservoir. The data series used to generate the charts on this page contains only the weighted sum of the daily stock (%) and rainfall (mm), and it is also available.

These corrections remove the spikes caused by the addition of the dead-storage volumes and make the pattern of stock decline in 2014 much easier to see.
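A minimal sketch of the correction and consolidation described above: convert each reported percentage to an absolute volume using the reporting base in effect on that date, then re-express the consolidated total against today’s full capacity. The raw-file layout and column names are assumptions for illustration, not the actual files.

# Hypothetical raw file: one row per reservoir system and day, with the reported
# percentage and the base volume (billions of liters) Sabesp was using that day.
import pandas as pd

raw = pd.read_csv("sabesp_raw.csv", parse_dates=["date"])
CURRENT_TOTAL = 2240.0   # billions of liters, total capacity of the six systems

raw["volume_bn_l"] = raw["reported_pct"] / 100.0 * raw["base_volume_bn_l"]
daily = raw.groupby("date")["volume_bn_l"].sum()        # consolidate the six systems
corrected_pct = daily / CURRENT_TOTAL * 100.0           # % of today's full capacity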


Year-by-year patterns


Mean and quartiles of the stock over the year


About this study

Worried about water scarcity, I began studying the problem at the end of 2014. I looked for a concise and consistent way of presenting the data, highlighting the three variables that really matter: rainfall, the total stock and the system’s efficiency. The site went live on January 16, 2015. Every day the models and the charts are rebuilt with the new information.

I hope this page helps convey the real scale of São Paulo’s water crisis and encourages more action to confront it.

Mauro Zackiewicz

maurozacgmail.com

scientia probabit – essential data laboratory

Brazilians’ optimism falls for the first time since 2009 (OESP)

By José Roberto de Toledo | Estadão Conteúdo – January 12, 2014

In the year in which President Dilma Rousseff will seek re-election, Brazilians’ optimism is 17 points lower than when the PT politician took office. According to an Ibope poll, 57% expect 2014 to be better than 2013. Although still high, the figure has fallen for the first time in years. In the previous poll, optimists were 72% – the same level, within the margin of error, as in 2011 (74%), 2010 (73%) and 2009 (74%).

Pessimism has practically doubled over the past 12 months. Now, 14% think 2014 will be worse than 2013. A year earlier, only 8% thought 2013 would be worse than 2012. The remaining 24% are betting that this year will be the same as the last (up from 17%).

There are important regional differences in Brazilians’ optimism. It is much higher in the North/Center-West (69%) and the Northeast (67%) than in the Southeast (47%). It stands out in the state capitals (61%) and wilts in the cities on the outskirts of the metropolitan areas (52%). It is the hallmark of young people under 25 (64%) and of the wealthiest (72%).

The Ibope poll is part of a global public opinion survey conducted in 65 countries by the WIN network, which brings together some of the world’s largest polling institutes. Despite the drop in expectations of improvement, Brazil still ranks 7th among the world’s most optimistic nations. This information comes from the newspaper O Estado de S. Paulo.

One Percent of Population Responsible for 63% of Violent Crime, Swedish Study Reveals (Science Daily)

Dec. 6, 2013 — The majority of all violent crime in Sweden is committed by a small number of people. Almost all are male (92%), develop violent criminality early in life, have substance abuse problems, are often diagnosed with personality disorders, and also commit a large number of non-violent crimes. These are the findings of researchers at Sahlgrenska Academy who examined 2.5 million people in Swedish criminal and population registers.

In this study, the Gothenburg researchers matched all convictions for violent crime in Sweden between 1973 and 2004 against the nationwide population register for those born between 1958 and 1980 (2.5 million people).

Of the 2.5 million individuals included in the study, 4 percent were convicted of at least one violent crime, 93,642 individuals in total. Of those convicted at least once, 26 percent were re-convicted three or more times, with the result that 1 percent of the population (23,342 individuals) accounted for 63 percent of all violent crime convictions during the study period.
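As a quick cross-check, the reported counts and the rounded percentages in the paragraph above are roughly consistent with one another:

# Cross-check of the reported figures (the press release rounds its percentages).
population = 2_500_000
convicted_once = 93_642
repeat_offenders = 23_342

print(f"{convicted_once / population:.1%} of the cohort convicted at least once")  # ~3.7%, reported as 4%
print(f"{repeat_offenders / convicted_once:.1%} re-convicted three or more times")  # roughly a quarter
print(f"{repeat_offenders / population:.2%} of the cohort")                         # the '1 percent'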

“Our results show that 4 percent of those who have three or more violent crime convictions have psychotic disorders, such as schizophrenia and bipolar disorder. Psychotic disorders are twice as common among repeat offenders as in the general population, but despite this fact they constitute a very small proportion of the repeat offenders,” says Örjan Falk, researcher at Sahlgrenska Academy.

One finding the Gothenburg researchers present is that “acts of insanity” that receive a great deal of mass media coverage, committed by someone with a severe psychiatric disorder, are not responsible for the majority of violent crimes.

According to the researchers, the study’s results are important to crime prevention efforts.

“This helps us identify which individuals and groups are in need of special attention and extra resources for intervention. A discussion on the efficacy of punishment (prison sentences) for this group is needed as well, and we would like to initiate a debate on what kinds of criminological and medical action could be meaningful to invest in,” says Örjan Falk.

Studies like this one are often used as arguments for more stringent sentences and US principles like “three strikes and you’re out.” What are your views on this?

“Just locking those who commit three or more violent crimes away for life is of course a compelling idea from a societal protective point of view, but could result in some undesirable consequences such as an escalation of serious violence in connection with police intervention and stronger motives for perpetrators of repeat violence to threaten and attack witnesses to avoid life sentences. It is also a fact that a large number of violent crimes are committed inside the penal system.”

“And from a moral standpoint it would mean that we give up on these, in many ways, broken individuals who most likely would be helped by intensive psychiatric treatments or other kind of interventions. There are also other plausible alternatives to prison for those who persistently relapse into violent crime, such as highly intensive monitoring, electronic monitoring and of course the continuing development of specially targeted treatment programs. This would initially entail a higher cost to society, but over a longer period of time would reduce the total number of violent crimes and thereby reduce a large part of the suffering and costs that result from violent crimes,” says Örjan Falk.

“I first and foremost advocate a greater focus on children and adolescents who exhibit signs of developing violent behavior and who are at the risk of later becoming repeat offenders of violent crime.”

Journal Reference:

  1. Örjan Falk, Märta Wallinius, Sebastian Lundström, Thomas Frisell, Henrik Anckarsäter, Nóra Kerekes. The 1% of the population accountable for 63% of all violent crime convictions. Social Psychiatry and Psychiatric Epidemiology, 2013; DOI: 10.1007/s00127-013-0783-y

Flap Over Study Linking Poverty to Biology Exposes Gulfs Among Disciplines (Chronicle of Higher Education)

February 1, 2013

 Photo: iStock.

A study by two economists that used genetic diversity as a proxy for ethnic and cultural diversity has drawn fierce rebuttals from anthropologists and geneticists.

By Paul Voosen

Oded Galor and Quamrul Ashraf once thought their research into the causes of societal wealth would be seen as a celebration of diversity. However it has been described, it has certainly not been celebrated. Instead, it has sparked a dispute among scholars in several disciplines, many of whom are dubious of any work linking societal behavior to genetics. In the latest installment of the debate, 18 Harvard University scientists have called their work “seriously flawed on both factual and methodological grounds.”

Mr. Galor and Mr. Ashraf, economists at Brown University and Williams College, respectively, have long been fascinated by the historical roots of poverty. Six years ago, they began to wonder if a society’s diversity, in any way, could explain its wealth. They probed tracts of interdisciplinary data and decided they could use records of genetic diversity as a proxy for ethnic and cultural diversity. And after doing so, they found that, yes, a bit of genetic diversity did seem to help a society’s economic growth.

Since last fall, when the pair’s work began to filter out into the broader scientific world, their study has exposed deep rifts in how economists, anthropologists, and geneticists talk—and think. It has provoked calls for caution in how economists use genetic data, and calls of persecution in response. And all of this happened before the study was finally published, in the American Economic Review this month.

“Through this analysis, we’re getting a better understanding of how the world operates in order to alleviate poverty,” Mr. Ashraf said. Any other characterization, he added, is a “gross misunderstanding.”

‘Ethical Quagmires’

A barrage of criticism has been aimed at the study since last fall by a team of anthropologists and geneticists at Harvard. The critique began with a short, stern letter, followed by a rejoinder from the economists; now an expanded version of the Harvard critique will appear in February in Current Anthropology.

Fundamentally, the dispute comes down to issues of data selection and statistical power. The paper is a case of “garbage in, garbage out,” the Harvard group says. The indicators of genetic diversity that the economists use stem from only four or five independent points. All the regression analysis in the world can’t change that, said Nick Patterson, a computational biologist at Harvard and MIT’s Broad Institute.

“The data just won’t stand for what you’re claiming,” Mr. Patterson said. “Technical statistical analysis can only do so much for you. … I will bet you that they can’t find a single geneticist in the world who will tell them what they did was right.”

In some respects, the study has become an exemplar for how the nascent field of “genoeconomics,” a discipline that seeks to twin the power of gene sequencing and economics, can go awry. Connections between behavior and genetics rightly need to clear high bars of evidence, said Daniel Benjamin, an economist at Cornell University and a leader in the field who has frequently called for improved rigor.

“It’s an area that’s fraught with an unfortunate history and ethical quagmires,” he said. Mr. Galor and Mr. Ashraf had a creative idea, he added, even if all their analysis doesn’t pass muster.

“I’d like to see more data before I’m convinced that their [theory] is true,” said Mr. Benjamin, who was not affiliated with the study or the critique. The Harvard critics make all sorts of complaints, many of which are valid, he said. “But fundamentally the issue is that there’s just not that much independent data.”

Claims of ‘Outsiders’

The dispute also exposes issues inside anthropology, added Carl Lipo, an anthropologist at California State University at Long Beach who is known for his study of Easter Island. “Anthropologists have long tried to walk the line whereby we argue that there are biological origins to much of what makes us human, without putting much weight that any particular attribute has its origins in genetics [or] biology,” he said.

The debate often erupts in lower-profile ways and ends with a flurry of anthropologists’ putting down claims by “outsiders,” Mr. Lipo said. (Mr. Ashraf and Mr. Galor are “out on a limb” with their conclusions, he added.) The angry reaction speaks to the limits of anthropology, which has been unable to delineate how genetics reaches up through the idiosyncratic circumstances of culture and history to influence human behavior, he said.

Certainly, that reaction has been painful for the newest pair of outsiders.

Mr. Galor is well known for studying the connections between history and economic development. And like much scientific work, his recent research began in reaction to claims made by Jared Diamond, the famed geographer at the University of California at Los Angeles, that the development of agriculture gave some societies a head start. What other factors could help explain that distribution of wealth? Mr. Galor wondered.

Since records of ethnic or cultural diversity do not exist for the distant past, they chose to use genetic diversity as a proxy. (There is little evidence that it can, or can’t, serve as such a proxy, however.) Teasing out the connection to economics was difficult—diversity could follow growth, or vice versa—but they gave it a shot, Mr. Galor said.

“We had to find some root causes of the [economic] diversity we see across the globe,” he said.

They were acquainted with the “Out of Africa” hypothesis, which explains how modern human beings migrated from Africa in several waves to Asia and, eventually, the Americas. Due to simple genetic laws, those serial waves meant that people in Africa have a higher genetic diversity than those in the Americas. It’s an idea that found support in genetic sequencing of native populations, if only at the continental scale.

Combining the genetics with population-density estimates—data the Harvard group says are outdated—along with deep statistical analysis, the economists found that the low and high diversity found among Native Americans and Africans, respectively, was detrimental to development. Meanwhile, they found a sweet spot of diversity in Europe and Asia. And they stated the link in sometimes strong, causal language, prompting another bitter discussion with the Harvard group over correlation and causation.

An ‘Artifact’ of the Data?

The list of flaws found by the Harvard group is long, but it boils down to the fact that no one has ever made a solid connection between genes and poverty before, even if genetics are used only as a proxy, said Jade d’Alpoim Guedes, a graduate student in anthropology at Harvard and the critique’s lead author.

“If my research comes up with findings that change everything we know,” Ms. d’Alpoim Guedes said, “I’d really check all of my input sources. … Can I honestly say that this pattern that I see is true and not an artifact of the input data?”

Mr. Ashraf and Mr. Galor found the response to their study, which they had previewed many times over the years to other economists, to be puzzling and emotionally charged. Their critics refused to engage, they said. They would have loved to present their work to a lecture hall full of anthropologists at Harvard. (Mr. Ashraf, who’s married to an anthropologist, is a visiting scholar this year at Harvard’s Kennedy School.) Their gestures were spurned, they said.

“We really felt like it was an inquisition,” Mr. Galor said. “The tone and level of these arguments were really so unscientific.”

Mr. Patterson, the computational biologist, doesn’t quite agree. The conflict has many roots but derives in large part from differing standards for publication. Submit the same paper to a leading genetics journal, he said, and it would not have even reached review.

“They’d laugh at you,” Mr. Patterson said. “This doesn’t even remotely meet the cut.”

In the end, it’s unfortunate the economists chose genetic diversity as their proxy for ethnic diversity, added Mr. Benjamin, the Cornell economist. They’re trying to get at an interesting point. “The genetics is really secondary, and not really that important,” he said. “It’s just something that they’re using as a measure of the amount of ethnic diversity.”

Mr. Benjamin also wishes they had used more care in their language and presentation.

“It’s not enough to be careful in the way we use genetic data,” he said. “We need to bend over backwards being careful in the way we talk about what the data means; how we interpret findings that relate to genetic data; and how we communicate those findings to readers and the public.”

Mr. Ashraf and Mr. Galor have not decided whether to respond to the Harvard critique. They say they can, point by point, but that ultimately, the American Economic Review’s decision to publish the paper as its lead study validates their work. They want to push forward on their research. They’ve just released a draft study that probes deeper into the connections between genetic diversity and cultural fragmentation, Mr. Ashraf said.

“There is much more to learn from this data,” he said. “It is certainly not the final word.”

Understanding the Historical Probability of Drought (Science Daily)

Jan. 30, 2013 — Droughts can severely limit crop growth, causing yearly losses of around $8 billion in the United States. But it may be possible to minimize those losses if farmers can synchronize the growth of crops with periods of time when drought is less likely to occur. Researchers from Oklahoma State University are working to create a reliable “calendar” of seasonal drought patterns that could help farmers optimize crop production by avoiding days prone to drought.

Historical probabilities of drought, which can point to days on which crop water stress is likely, are often calculated using atmospheric data such as rainfall and temperatures. However, those measurements do not consider the soil properties of individual fields or sites.

“Atmospheric variables do not take into account soil moisture,” explains Tyson Ochsner, lead author of the study. “And soil moisture can provide an important buffer against short-term precipitation deficits.”

In an attempt to more accurately assess drought probabilities, Ochsner and co-authors, Guilherme Torres and Romulo Lollato, used 15 years of soil moisture measurements from eight locations across Oklahoma to calculate soil water deficits and determine the days on which dry conditions would be likely. Results of the study, which began as a student-led class research project, were published online Jan. 29 in Agronomy Journal. The researchers found that soil water deficits more successfully identified periods during which plants were likely to be water stressed than did traditional atmospheric measurements when used as proposed by previous research.

Soil water deficit is defined in the study as the difference between the capacity of the soil to hold water and the actual water content calculated from long-term soil moisture measurements. Researchers then compared that soil water deficit to a threshold at which plants would experience water stress and, therefore, drought conditions. The threshold was determined for each study site since available water, a factor used to calculate threshold, is affected by specific soil characteristics.
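As an illustration of the deficit-and-threshold logic described here, and not the authors’ actual code, the calculation might look like the sketch below; the file name, column names, capacity and threshold values are all assumed for the example.

# Sketch: compute the soil water deficit from long-term soil moisture measurements,
# flag days when it exceeds a site-specific stress threshold, and estimate the
# historical probability of stress for each calendar day across years.
import pandas as pd

obs = pd.read_csv("soil_moisture_site.csv", parse_dates=["date"])  # hypothetical file
FIELD_CAPACITY_MM = 250.0    # water-holding capacity of the soil profile (assumed value)
STRESS_THRESHOLD_MM = 100.0  # site-specific deficit at which plants are stressed (assumed)

obs["deficit_mm"] = FIELD_CAPACITY_MM - obs["soil_water_mm"]
obs["stressed"] = obs["deficit_mm"] > STRESS_THRESHOLD_MM

# Historical probability of drought stress for each day of the year
drought_prob = obs.groupby(obs["date"].dt.dayofyear)["stressed"].mean()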

“The soil water contents differ across sites and depths depending on the sand, silt, and clay contents,” says Ochsner. “Readily available water is a site- and depth-specific parameter.”

Upon calculating soil water deficits and stress thresholds for the study sites, the research team compared their assessment of drought probability to assessments made using atmospheric data. They found that a previously developed method using atmospheric data often underestimated drought conditions, while soil water deficit measurements more accurately and consistently assessed drought probabilities. Therefore, the researchers suggest that soil water data be used whenever it is available to create a picture of the days on which drought conditions are likely.

If soil measurements are not available, however, the researchers recommend that the calculations used for atmospheric assessments be reconfigured to be more accurate. The authors made two such changes in their study. First, they decreased the threshold at which plants were deemed stressed, thus allowing a smaller deficit to be considered a drought condition. They also increased the number of days over which atmospheric deficits were summed. Those two changes provided estimates that better agreed with soil water deficit probabilities.

Further research is needed, says Ochsner, to optimize atmospheric calculations and provide accurate estimations for those without soil water data. “We are in a time of rapid increase in the availability of soil moisture data, but many users will still have to rely on the atmospheric water deficit method for locations where soil moisture data are insufficient.”

Regardless of the method used, Ochsner and his team hope that their research will help farmers better plan the cultivation of their crops and avoid costly losses to drought conditions.

Journal Reference:

  1. Guilherme M. Torres, Romulo P. Lollato, Tyson E. Ochsner. Comparison of Drought Probability Assessments Based on Atmospheric Water Deficit and Soil Water Deficit. Agronomy Journal, 2013; DOI: 10.2134/agronj2012.0295

Fraud Case Seen as a Red Flag for Psychology Research (N.Y. Times)

By BENEDICT CAREY

Published: November 2, 2011

A well-known psychologist in the Netherlands whose work has been published widely in professional journals falsified data and made up entire experiments, an investigating committee has found. Experts say the case exposes deep flaws in the way science is done in a field, psychology, that has only recently earned a fragile respectability.

Joris Buijs/Pve

The psychologist Diederik Stapel in an undated photograph. “I have failed as a scientist and researcher,” he said in a statement after a committee found problems in dozens of his papers.

The psychologist, Diederik Stapel, of Tilburg University, committed academic fraud in “several dozen” published papers, many accepted in respected journals and reported in the news media, according to a report released on Monday by the three Dutch institutions where he has worked: the University of Groningen, the University of Amsterdam, and Tilburg. The journal Science, which published one of Dr. Stapel’s papers in April, posted an “editorial expression of concern” about the research online on Tuesday.

The scandal, involving about a decade of work, is the latest in a string of embarrassments in a field that critics and statisticians say badly needs to overhaul how it treats research results. In recent years, psychologists have reported a raft of findings on race biases, brain imaging and even extrasensory perception that have not stood up to scrutiny. Outright fraud may be rare, these experts say, but they contend that Dr. Stapel took advantage of a system that allows researchers to operate in near secrecy and massage data to find what they want to find, without much fear of being challenged.

“The big problem is that the culture is such that researchers spin their work in a way that tells a prettier story than what they really found,” said Jonathan Schooler, a psychologist at the University of California, Santa Barbara. “It’s almost like everyone is on steroids, and to compete you have to take steroids as well.”

In a prolific career, Dr. Stapel published papers on the effect of power on hypocrisy, on racial stereotyping and on how advertisements affect how people view themselves. Many of his findings appeared in newspapers around the world, including The New York Times, which reported in December on his study about advertising and identity.

In a statement posted Monday on Tilburg University’s Web site, Dr. Stapel apologized to his colleagues. “I have failed as a scientist and researcher,” it read, in part. “I feel ashamed for it and have great regret.”

More than a dozen doctoral theses that he oversaw are also questionable, the investigators concluded, after interviewing former students, co-authors and colleagues. Dr. Stapel has published about 150 papers, many of which, like the advertising study, seem devised to make a splash in the media. The study published in Science this year claimed that white people became more likely to “stereotype and discriminate” against black people when they were in a messy environment, versus an organized one. Another study, published in 2009, claimed that people judged job applicants as more competent if they had a male voice. The investigating committee did not post a list of papers that it had found fraudulent.

Dr. Stapel was able to operate for so long, the committee said, in large measure because he was “lord of the data,” the only person who saw the experimental evidence that had been gathered (or fabricated). This is a widespread problem in psychology, said Jelte M. Wicherts, a psychologist at the University of Amsterdam. In a recent survey, two-thirds of Dutch research psychologists said they did not make their raw data available for other researchers to see. “This is in violation of ethical rules established in the field,” Dr. Wicherts said.

In a survey of more than 2,000 American psychologists scheduled to be published this year, Leslie John of Harvard Business School and two colleagues found that 70 percent had acknowledged, anonymously, to cutting some corners in reporting data. About a third said they had reported an unexpected finding as predicted from the start, and about 1 percent admitted to falsifying data.

Also common is a self-serving statistical sloppiness. In an analysis published this year, Dr. Wicherts and Marjan Bakker, also at the University of Amsterdam, searched a random sample of 281 psychology papers for statistical errors. They found that about half of the papers in high-end journals contained some statistical error, and that about 15 percent of all papers had at least one error that changed a reported finding — almost always in opposition to the authors’ hypothesis.

The American Psychological Association, the field’s largest and most influential publisher of results, “is very concerned about scientific ethics and having only reliable and valid research findings within the literature,” said Kim I. Mills, a spokeswoman. “We will move to retract any invalid research as such articles are clearly identified.”

Researchers in psychology are certainly aware of the issue. In recent years, some have mocked studies showing correlations between activity on brain images and personality measures as “voodoo” science, and a controversy over statistics erupted in January after The Journal of Personality and Social Psychology accepted a paper purporting to show evidence of extrasensory perception. In cases like these, the authors being challenged are often reluctant to share their raw data. But an analysis of 49 studies appearing Wednesday in the journal PLoS One, by Dr. Wicherts, Dr. Bakker and Dylan Molenaar, found that the more reluctant that scientists were to share their data, the more likely that evidence contradicted their reported findings.

“We know the general tendency of humans to draw the conclusions they want to draw — there’s a different threshold,” said Joseph P. Simmons, a psychologist at the University of Pennsylvania’s Wharton School. “With findings we want to see, we ask, ‘Can I believe this?’ With those we don’t, we ask, ‘Must I believe this?’ ”

But reviewers working for psychology journals rarely take this into account in any rigorous way. Neither do they typically ask to see the original data. While many psychologists shade and spin, Dr. Stapel went ahead and drew any conclusion he wanted.

“We have the technology to share data and publish our initial hypotheses, and now’s the time,” Dr. Schooler said. “It would clean up the field’s act in a very big way.”