Tag archive: Death

Poor air quality kills 5.5 million worldwide annually (Science Daily)

Date: February 12, 2016

Source: University of British Columbia

Summary: New research shows that more than 5.5 million people die prematurely every year due to household and outdoor air pollution. More than half of deaths occur in two of the world’s fastest growing economies, China and India.


New research shows that more than 5.5 million people die prematurely every year due to household and outdoor air pollution. More than half of deaths occur in two of the world’s fastest growing economies, China and India. Credit: Institute for Health Metrics and Evaluation (IHME), University of Washington


Power plants, industrial manufacturing, vehicle exhaust and burning coal and wood all release small particles into the air that are dangerous to a person’s health. New research, presented today at the 2016 annual meeting of the American Association for the Advancement of Science (AAAS), found that despite efforts to limit future emissions, the number of premature deaths linked to air pollution will climb over the next two decades unless more aggressive targets are set.

“Air pollution is the fourth highest risk factor for death globally and by far the leading environmental risk factor for disease,” said Michael Brauer, a professor at the University of British Columbia’s School of Population and Public Health in Vancouver, Canada. “Reducing air pollution is an incredibly efficient way to improve the health of a population.”

For the AAAS meeting, researchers from Canada, the United States, China and India assembled estimates of air pollution levels in China and India and calculated the impact on health.

Their analysis shows that the two countries account for 55 per cent of the deaths caused by air pollution worldwide. About 1.6 million people died of air pollution in China and 1.4 million died in India in 2013.

In China, burning coal is the biggest contributor to poor air quality. Qiao Ma, a PhD student at the School of Environment, Tsinghua University in Beijing, China, found that outdoor air pollution from coal alone caused an estimated 366,000 deaths in China in 2013.

Ma also calculated the expected number of premature deaths in China in the future if the country meets its current targets to restrict coal combustion and emissions through a combination of energy policies and pollution controls. She found that air pollution will cause anywhere from 990,000 to 1.3 million premature deaths in 2030 unless even more ambitious targets are introduced.

“Our study highlights the urgent need for even more aggressive strategies to reduce emissions from coal and from other sectors,” said Ma.

In India, a major contributor to poor air quality is the practice of burning wood, dung and similar sources of biomass for cooking and heating. Millions of families, among the poorest in India, are regularly exposed to high levels of particulate matter in their own homes.

“India needs a three-pronged mitigation approach to address industrial coal burning, open burning for agriculture, and household air pollution sources,” said Chandra Venkataraman, professor of Chemical Engineering at the Indian Institute of Technology Bombay, in Mumbai, India.

In the last 50 years, North America, Western Europe and Japan have made massive strides against pollution by using cleaner fuels and more efficient vehicles, limiting coal burning, and putting restrictions on electric power plants and factories.

“Having been in charge of designing and implementing strategies to improve air in the United States, I know how difficult it is. Developing countries have a tremendous task in front of them,” said Dan Greenbaum, president of Health Effects Institute, a non-profit organization based in Boston that sponsors targeted efforts to analyze the health burden from different air pollution sources. “This research helps guide the way by identifying the actions which can best improve public health.”

Video: https://youtu.be/Kwoqa84npsU

Background:

The research is an extension of the Global Burden of Disease study, an international collaboration led by the Institute for Health Metrics and Evaluation (IHME) at the University of Washington that systematically measured health and its risk factors, including air pollution levels, for 188 countries between 1990 and 2013. The air pollution research is led by researchers at the University of British Columbia and the Health Effects Institute.

Additional facts about air pollution:

  • World Health Organization (WHO) air quality guidelines set the daily limit for fine particulate matter (PM2.5) at 25 micrograms per cubic metre.
  • At this time of year, Beijing and New Delhi will see daily levels at or above 300 micrograms per cubic metre, roughly 12 times the WHO daily guideline (see the worked ratio after this list).
  • While air pollution has decreased in most high-income countries in the past 20 years, global levels are up largely because of South Asia, Southeast Asia, and China. More than 85 per cent of the world’s population now lives in areas where the World Health Organization Air Quality Guideline is exceeded.
  • The researchers say that strict control of particulate matter is critical because of changing demographics. Researchers predict that if air pollution levels remain constant, the number of deaths will increase because the population is aging and older people are more susceptible to illnesses caused by poor air quality.
  • According to the Global Burden of Disease study, air pollution causes more deaths than other risk factors like malnutrition, obesity, alcohol and drug abuse, and unsafe sex. It is the fourth greatest risk behind high blood pressure, dietary risks and smoking.
  • Cardiovascular disease accounts for the majority of deaths from air pollution with additional impacts from lung cancer, chronic obstructive pulmonary disease (COPD) and respiratory infections.
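As a quick check of the scale implied by the first two bullet points (a 25 microgram-per-cubic-metre daily guideline against days at or above 300 micrograms per cubic metre):

\[
\frac{300\ \mu\text{g/m}^3}{25\ \mu\text{g/m}^3} = 12, \qquad (12 - 1)\times 100\% = 1{,}100\%\ \text{above the guideline.}
\]

That is, such days sit at roughly twelve times the WHO daily limit.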

Theorizing Embodiment and Making Bodies ‘Matter’ (The Disorder of Things)

JULY 17, 2015, GUEST AUTHORS

Bringing to a close our symposium on Bodies of Violence is Lauren’s rejoinder to all our contributors: Kevin McSorley, Ali Howell, Pablo and Antoine.


First, a huge thank you to the (Dis)order of Things and especially Antoine for organizing this forum and to each of the contributors. It’s been a huge honor to have my work read so carefully and responded to so thoughtfully and I welcome the opportunity to try to clarify some of my work and acknowledge where the contributors have pointed out helpful areas for future research.

As Pablo K and others noticed, Bodies of Violence is not meant to be a general theory of embodiment in IR (I’m not sure such a project is feasible or politically desirable in any event). It is a more specific intervention with a different ambition: both to speak to ‘mainstream’ concerns about theorizing violence, particularly the forms of political violence associated with the ‘war on terror’, and to make not only a theoretical argument about how we might or should theorize embodiment and violence, but also to show that understanding these different ‘modes of violence’ requires such an understanding of the relationship between bodies, subjects and violence. My rationale for using feminist theory to think about the relationship between bodies, subjects and violence in IR was not meant to be exclusive: certainly (other) people working with concepts of biopolitics, as well as anti-colonial/anti-racist theorists, disability theorists, phenomenologists and more, also have much to say on this topic, and some of their insights have been very important in my analysis, if not as fully fleshed out (if you will) as my engagement with feminist theory.[i] For me, the starting point, though far from an ‘ending’, for thinking about the subject of embodiment was a particular reading of feminist theories of embodiment, based not solely on Butler but on a particular feminist problematic in which women, constituted as what Pablo K called the “improperly bodied”, are politically disenfranchised and generally excluded from the status of fully human subjects. Rather, it is, as Kevin noted, “the specific tradition of trying to think through women’s subordination in terms of the relationship between bodies, subjects and power” that feminist theory entails which I wanted to use to think about violence and embodiment, in ways that I hope will speak not only to feminists in IR but also to other critical and more pluralistically and trans-disciplinarily minded scholars in IR and beyond.

Ana Mendieta, Body Tracks

However, this brings us to some of the drawbacks of feminist approaches to violence and embodiment. Ali’s point about the violence of feminist theory is a particularly good one. Feminists working in IR tend to be quite aware of the uses of feminism for violent aims: the Taliban’s oppression and abuse of women in Afghanistan serving as a rationale for war by the US and its allies, with the support of NOW and the Feminist Majority, is a well-known example. Ali’s point about the violence of some feminism(s) against trans people is also well taken; though Butler is hardly a ‘TERF’ by any means, her work has been critiqued by trans theorists for a number of reasons. For the purposes of this book, I don’t necessarily see a conflict between trans theory and Butler’s account of the materialization of bodies and the limits of intelligibility as it bears on the ways in which security practices work to materialize only certain bodies as ‘real,’ often excluding trans people and constituting them as threats. In general, I agree with Ali that we should welcome feminist scholarship and practice that is less defensive with regard to the ‘mainstream’ of the discipline and more willing to seek alliances and interlocutors among a broader range of scholars working on violence, power and embodiment, both within the spaces of IR and outside them.[ii]

Forum contributors also provided some excellent provocations for thinking about aspects of embodiment, or ways of addressing the thorny question of embodiment, that my book did not focus on. Pablo writes, “It is a book thoroughly about bodies, but not therefore necessarily a theory of bodies and embodiment. And it is theory of em-bodies-ment that we may in need of.” On a somewhat different note, Kevin wonders what might happen if the embodied subjects of which I write “could have a more audible place in the analysis.” Of course, it (should) hardly need mentioning that there is a great amount of work influenced by feminist and postcolonial theory that strives to bring the voices and experiences of embodied subjects, particularly of marginalized peoples, into IR as a disciplinary space. I would point, for one example, to the work of Christine Sylvester and others on experience as an embodied concept for theorizing war. However, as Kevin points out, my book has a different, and I would hope complementary, aim: to show the explanatory and critical value of theorizing bodies as both produced by, and productive of, practices of violence in international politics. It is the last point, that bodies are productive of violence, which speaks more to Pablo’s concern about bodies ‘mattering’.

While Bodies of Violence is perhaps most influenced by Butler’s project, as Kevin, Ali and Pablo K have all noted, theories of embodiment (or at least of the relationship between discourse and materiality) such as Elizabeth Grosz’s Volatile Bodies and Barad’s ‘posthumanist performativity’, as well as Donna Haraway’s work, are perhaps more of an influence than appears in the published version of the book, which takes as an overarching frame Butler’s concepts of normative violence and ontological precarity. These other works are concerned, in their own way, with the ways in which matter ‘matters’, or the ways in which embodied subjects exceed their materializations in discourse.[iii]

Marlene Dumas, Measuring Your Own Grave

It is the ‘generative’ or ‘productive’ capacities of bodies that constitute an engagement with ‘new materialisms’ or ‘feminist materialisms’, if you like. One of the aspects of Barad’s work, whom Pablo mentions, that is most appealing is the insistence on intra-activity, with the implication that we cannot meaningfully separate matter from the discursive, as phenomena only exist by virtue of ongoing assemblages and reassemblages of matter and discourse. Bodies ‘matter’: they do things; they have what Diana Coole refers to as ‘agentic capacities’. One reason that Bodies of Violence focuses on actual instances of violence perpetrated on and by bodies in international politics is precisely to take bodies seriously as something other than ‘representations’ or ‘abstractions’ in IR. An example of bodies being ‘productive’ in the book is the way that bodies ‘speak’, which might exceed the intentions of ‘speaking subjects’. Antoine’s discussion of my work on the hunger-striking body in Guantanamo Bay (which I also discussed earlier here on the blog) makes reference to this point: the body in pain as a call for recognition. This is something the body ‘does’ that is not reducible to the intentions of a fully constituted subject, nor to the words spoken by such subjects (this is in addition to the ways in which hunger-striking prisoners such as Samir Naji al Hasan Moqbel have spoken eloquently about their experiences). And yet, while this body’s actions may have certain implications, enable certain politics, and so on, this cannot be understood without recognizing that the body’s capacities are already subject to prior materializations and that their reception will also bear the marks of prior political assemblages.

A key example of this from the book is the embodiment of drone operators, or perhaps more accurately, the legal/technological drone assemblage. While this form of embodiment is what might be termed, following Haraway, a ‘material-semiotic actor’, it is a body, or form of embodiment, that is necessary for the kind of ‘death-world’ that enables the killing of suspected militants as well as those people who can only be named innocent or militant in the aftermath. Both the bodies of drone operators and the people who are killed by drone strikes are intimately connected in this way: the embodiment of drone pilots is productive of the bodies of targets and the ‘uncountable’ bodies whose deaths remain outside of the epistemological framework enabled by this drone assemblage. Thus, there is less of an explicit engagement with ‘new materialisms’ per se than an acknowledgement (one that has been part of feminist theory for decades) that one cannot determine or write bodies ‘all the way down’ and that, in the words of Samantha Frost and Diana Coole, nature ‘pushes back’ in sometimes unexpected ways, but in ways that are nonetheless subject to human interpretation.

Insect swarm picture from wired.com, Lukas Felzmann

Antoine concludes the forum on a forward-looking note that also recalls Ali’s point about the various critical literatures that have much to offer our thinking about bodies and violence beyond feminist literatures: “a growing task of critical scholars in the future may therefore also be that of attentiveness to new forms for the sorting and hierarchizing of bodies, human and otherwise, that are emerging from the production of scientific knowledges.” I agree, and (some of) my current research is aimed precisely at the question of gender, queer theory and ‘the posthuman’. While I am wary of certain tendencies within some of the critical literatures of affect theory, ‘new materialisms’ and the like that suggest, either explicitly or implicitly, that feminist, anti-racist or other such critiques are outmoded, scholars like Rosi Braidotti and Donna Haraway have read the feminist politics of the ‘posthuman’ in ways that engage the shifting materialities and discursive constructions of gendered and sexualized bodies. I’m working on a project now that pursues the question of embodiment and ‘drone warfare’ further, to consider the politics of the insect and the swarm as inspirations for military technological developments, in the manner that Katherine Hayles describes as a double vision that “looks simultaneously at the power of simulation and at the materialities that produce it” in order to “better understand the implication of articulating posthuman constructions together with embodied actualities” (Hayles 1999, 47). This is to say that both the discursive constructions of insects/swarms in culture (particularly their association with death, abjection and the feminine) and the material capabilities of insects, their role in the earth’s ecosystem and its own set of ‘death-worlds’, can and should be thought in tandem. The parameters of this project are not yet fixed (are they ever?), and so I’m grateful for this conversation around Bodies of Violence as I work to further the project of taking embodiment and its relationships with subjectivity and violence seriously in thinking about international political violence in its myriad forms. These contributions are evidence that work on embodiment in IR and related disciplines is becoming a robust research area in which many possibilities exist for dialogue, critique and collaboration.


[i] Also, feminist theorists such as Butler, Grosz, Haraway and Ahmed all engage with a variety of traditions, from psychoanalysis and Foucauldian theory to phenomenology, postcolonial theory and more, so the divisions between ‘feminist theory’ and other kinds of critical theory are far from given; a much longer piece could be written about this.

[ii] Although see recent work by Rose McDermott and Dan Reiter that seems determined to ignore the advances of decades of scholarship on gender, feminism, and war.

[iii] I agree with Pablo K that Butler’s work is ambiguously situated in relationship to the so-called ‘new materialisms’: I make a brief case in the book that it is not incompatible with her approach at times, but I don’t explore this at length in the final version of the text.

The Skeleton Trade: Life, Death, and Commerce in Early Modern Europe (Objects in Motion: Material Culture in Transition)

JULY 9, 2015

Anita Guerrini, Horning Professor of the Humanities and Professor of History at Oregon State University, discusses the fascinating research which she presented at Objects in Motion: Material Culture in Transition.

Although the human skeleton was well known as a symbol before 1500, the articulated skeleton does not seem to have come into its own as an object – scientific and artistic as well as symbolic – until the time of Vesalius. Curiously, although the skeleton is ubiquitous, since everyone has one, it remained largely invisible until anatomists revealed it to view. The well-known illustrations of Vesalius were plagiarized over and over for two centuries after their publication in 1543.

Vesalius, "De humani corporis fabrica", 1543. Credit: Wellcome Library, London.

Vesalius was the first to give detailed instructions on how to make a skeleton, for although it was a natural object, it was also a crafted object whose construction entailed a lot of work. The human body became an object in motion as it travelled from the scaffold to the dissection table to the grisly cauldron where the bones were boiled to remove their flesh. While artists and anatomists employed skeletons for instruction, little evidence of their collection appears before the mid-seventeenth century, when they begin to appear in cabinets and collections. Both the Royal Society and the Paris Academy of Sciences owned several. At the Paris Academy, André Colson, described as an “ébeniste” or furniture maker, was charged with the making and maintenance of the skeleton room, while the physician Nehemiah Grew, who catalogued the Royal Society’s collections in 1681, may also have made its skeletons. By the end of the seventeenth century, a vigorous skeleton trade flourished across Europe, and skeletons often appear in auction catalogues alongside books, works of art, and scientific instruments. At the same time, relics, both old and new, retained their potency in both Catholic and Protestant countries.

After Vesalius, detailed instructions for making a skeleton appeared in many anatomical texts and manuals as part of the education of a physician or surgeon; in the eighteenth century, William Hunter took it for granted that each of his students would need to construct a skeleton for his own use and in addition procure “several skulls.” While such a process would seem to confer anonymity on the finished skeleton, provenance and even identity often clung to the bones along with religious resonances. Most skeletons were of executed criminals, some of them widely known. The skeleton of the “Thief-taker General” Jonathan Wild, executed in 1725, still hangs in the gallery of the College of Surgeons in London, and Hogarth’s famous 1751 “Fourth Stage of Cruelty” shows the skeletons of other malefactors on display in niches at Surgeons’ Hall while a cauldron awaits the bones of Tom Nero, who is being dissected by the surgeons after his conviction for murder.

William Hogarth's "The Fourth Stage of Cruelty", 1751. Credit: Wikimedia.

Widespread demand and changing scientific contexts expanded the market for skeletons (as well as skulls) beyond Europe to encompass much of the known world by the mid-eighteenth century. The prodigious collector Hans Sloane received skulls and bones from contacts throughout the world, including native bones that his Jamaican contacts apparently stumbled across in caves. Sloane’s meticulous catalogues of his collections allow one to trace the provenance of many of his human specimens through other collectors and agents. Such catalogues, along with account books, advertisements, and illustrations, reveal this worldwide commerce in skeletons alongside a continued trade in skeletal relics. Traveling across time and place, skeletons embodied beauty and deformity, crime and punishment, sin and sanctity, science and colonial power, often simultaneously.

18th-century trade card for the skeleton seller and preparator Nathaniel Longbottom of London. Credit: Wellcome Library, London.

Doctors who ‘resuscitate the dead’ want to test their technique on humans (BBC)

The technique, which can extend life by a few hours, has never been tested on humans

“When your body is at a temperature of 10 degrees, with no brain activity, no heartbeat and no blood, it is generally agreed that you are dead,” says Professor Peter Rhee of the University of Arizona. “But even so, we can bring you back.”

Rhee is not exaggerating. Together with Samuel Tisherman of the University of Maryland, in the United States, he has shown that it is possible to keep the body in a “suspended” state for hours.

The procedure has already been tested on animals and is as radical as it gets. It involves draining all of the blood from the body and cooling it to 20 degrees below its normal temperature.

Once the problem in the patient’s body has been fixed, the blood is pumped back in, slowly rewarming the system. When the blood temperature reaches 30 degrees, the heart starts beating again.

The animals that underwent this procedure showed few side effects on waking. “They are a bit groggy for a while, but by the next day they are fine,” says Tisherman.

Tests on humans

Tisherman caused an international stir this year when he announced that he is ready to begin human trials. The first subjects would be gunshot victims in Pittsburgh, Pennsylvania.

These would be patients whose hearts have already stopped beating and who would have no chance of survival with conventional techniques. The American doctor fears that, because of inaccurate headlines in the press, a mistaken idea of his research has taken hold.

Peter Rhee helped create the pioneering technique, which involves draining the patient’s blood

“When people think about the subject, they think of space travellers being frozen and woken up on Jupiter, or Han Solo in Star Wars,” says Tisherman.

“That doesn’t help, because it is important for people to know that this is not science fiction.”

Efforts to bring people back from what is believed to be death have been under way for decades. Tisherman began his studies with Peter Safar, who in the 1960s pioneered cardiopulmonary resuscitation (CPR). With chest compressions, it is possible to keep the heart artificially active for a time.

“We were always brought up to believe that death is an absolute moment, and that when we die there is no coming back,” says Sam Parnia of the State University of New York.

“With the basic discovery of CPR we came to understand that the body’s cells take hours to reach irreversible death. Even after you have become a corpse, there is still a way to rescue you.”

Recently, a 40-year-old man in Texas survived for three and a half hours on CPR.

According to the doctors on duty, “everyone with two arms was called in to take turns doing chest compressions on the patient”.

During the compressions he remained conscious and talking to the doctors, but if the procedure had been stopped he would have died. Eventually he recovered and survived.

This case of resuscitation over such a long period only worked because there was no major injury to the patient’s body. And that is rare.

‘Limbo’

The technique now being developed by Tisherman is based on the idea that low temperatures keep the body alive for longer – around one or two hours.

The blood is drained and replaced with a saline solution that helps lower the body’s temperature to around 10 to 15 degrees Celsius.

In experiments with pigs, about 90% recovered when the blood was pumped back in. Each animal spent more than an hour in “limbo”.

Chest-compression techniques already help extend the lives of people whose hearts have stopped

“It is one of the most incredible things to watch: when the heart starts beating again,” says Rhee.

After the operation, a series of tests was carried out to check for brain damage. Apparently none of the pigs showed problems.

Obtaining permission to test on humans has so far been an enormous challenge. Tisherman and Rhee have finally received approval to test their technique on gunshot victims in Pittsburgh.

One of the problems to be overcome is how patients will cope with another person’s blood. The pigs received their own chilled blood back, but for humans it will be necessary to use blood-bank stock.

If it works, the doctors believe the technique could be applied not only to victims of injuries such as gunshot and stab wounds, but also to people who have suffered heart attacks.

The research is also prompting further studies into which chemical solution would best slow the human body’s metabolism.

Read the original English version of this report on BBC Future.

Scientists find ‘hidden brain signatures’ of consciousness in vegetative state patients (Science Daily)

Date: October 16, 2014

Source: University of Cambridge

Summary: Scientists in Cambridge have found hidden signatures in the brains of people in a vegetative state, which point to networks that could support consciousness even when a patient appears to be unconscious and unresponsive. The study could help doctors identify patients who are aware despite being unable to communicate.

These images show brain networks in two behaviorally similar vegetative patients (left and middle), only one of whom imagined playing tennis (middle panel), alongside a healthy adult (right panel). Credit: Srivas Chennu


There has been a great deal of interest recently in how much patients in a vegetative state following severe brain injury are aware of their surroundings. Although unable to move and respond, some of these patients are able to carry out tasks such as imagining playing a game of tennis. Using a functional magnetic resonance imaging (fMRI) scanner, which measures brain activity, researchers have previously been able to record activity in the pre-motor cortex, the part of the brain which deals with movement, in apparently unconscious patients asked to imagine playing tennis.

Now, a team of researchers led by scientists at the University of Cambridge and the MRC Cognition and Brain Sciences Unit, Cambridge, have used high-density electroencephalography (EEG) and a branch of mathematics known as ‘graph theory’ to study networks of activity in the brains of 32 patients diagnosed as vegetative or minimally conscious and compare them to healthy adults. The findings of the research are published today in the journal PLOS Computational Biology. The study was funded mainly by the Wellcome Trust, the National Institute for Health Research Cambridge Biomedical Research Centre and the Medical Research Council (MRC).
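The study’s actual pipeline is not reproduced in this summary, but the general shape of a graph-theoretic analysis of EEG connectivity (electrodes as nodes, pairwise connectivity strengths as weighted edges, and standard network metrics as summaries) can be sketched roughly as below. This is a minimal illustration in Python using NumPy and NetworkX; the random connectivity matrix, the 91-channel count, the 0.3 threshold and the choice of metrics are placeholder assumptions rather than the study’s methods or parameters.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Placeholder for a channel-by-channel EEG connectivity matrix
# (e.g. spectral coherence between electrodes): symmetric, zero diagonal.
n_channels = 91
conn = rng.random((n_channels, n_channels))
conn = (conn + conn.T) / 2
np.fill_diagonal(conn, 0.0)

# Keep only the stronger connections (arbitrary illustrative threshold).
threshold = 0.3
adj = np.where(conn >= threshold, conn, 0.0)

# Build a weighted graph: nodes are electrodes, edges are retained connections.
G = nx.from_numpy_array(adj)

# Summarise network organisation with common graph-theory metrics.
clustering = nx.average_clustering(G, weight="weight")

# Path length is only defined on a connected graph, so use the largest component.
largest_cc = G.subgraph(max(nx.connected_components(G), key=len))
path_length = nx.average_shortest_path_length(largest_cc)

print(f"mean clustering:            {clustering:.3f}")
print(f"characteristic path length: {path_length:.3f}")
```

Comparing summary metrics of this kind between patients and healthy controls is the basic idea behind the ‘rich and diversely connected networks’ described in the next paragraph.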

The researchers showed that the rich and diversely connected networks that support awareness in the healthy brain are typically — but importantly, not always — impaired in patients in a vegetative state. Some vegetative patients had well-preserved brain networks that look similar to those of healthy adults — these patients were those who had shown signs of hidden awareness by following commands such as imagining playing tennis.

Dr Srivas Chennu from the Department of Clinical Neurosciences at the University of Cambridge says: “Understanding how consciousness arises from the interactions between networks of brain regions is an elusive but fascinating scientific question. But for patients diagnosed as vegetative and minimally conscious, and their families, this is far more than just an academic question — it takes on a very real significance. Our research could improve clinical assessment and help identify patients who might be covertly aware despite being uncommunicative.”

The findings could help researchers develop a relatively simple way of identifying which patients might be aware whilst in a vegetative state. Unlike the ‘tennis test’, which can be a difficult task for patients and requires expensive and often unavailable fMRI scanners, this new technique uses EEG and could therefore be administered at a patient’s bedside. However, the tennis test is stronger evidence that the patient is indeed conscious, to the extent that they can follow commands using their thoughts. The researchers believe that a combination of such tests could help improve accuracy in the prognosis for a patient.

Dr Tristan Bekinschtein from the MRC Cognition and Brain Sciences Unit and the Department of Psychology, University of Cambridge, adds: “Although there are limitations to how predictive our test would be if used in isolation, combined with other tests it could help in the clinical assessment of patients. If a patient’s ‘awareness’ networks are intact, then we know that they are likely to be aware of what is going on around them. But unfortunately, they also suggest that vegetative patients with severely impaired networks at rest are unlikely to show any signs of consciousness.”


Journal Reference:

  1. Chennu S, Finoia P, Kamau E, Allanson J, Williams GB, et al. Spectral Signatures of Reorganised Brain Networks in Disorders of Consciousness. PLOS Computational Biology, 2014; 10 (10): e1003887. DOI: 10.1371/journal.pcbi.1003887

‘The lightning was a punishment’: the mamo who survived the Sierra tragedy (El Tiempo)

EL TIEMPO visited the village where 11 indigenous people died and spoke with its highest authority.

2:29 p.m. | October 7, 2014

Photo: Carlos Capella / EL TIEMPO. In the photo, the mamo Ramón Gil, who lost his son Juan Ramón Gil when the lightning bolt that killed 11 indigenous people struck.

The mamo Ramón Gil, the highest authority of the Wiwa people and one of the best-known traditional indigenous leaders of the Sierra Nevada, says that two years ago nature warned him that they would have to pay for all the logging and looting carried out in these mountains. (See also: Aid reaches Wiwa community after lightning strike in the Sierra Nevada)

That warning came true in the early hours of Monday when, the mamo says, a lightning bolt struck the unguma, the ceremonial hut where some 50 Wiwa from the middle basin of the Guachaca river were gathered, killing 11 indigenous people and leaving another 20 injured.

The Wiwa community of the Sierra Nevada de Santa Marta recovers from the tragedy caused by a lightning strike that killed 11 people and injured 20. Photo: CEET

After the tragedy, on Monday night, the indigenous people left the village for fear that another lightning bolt would punish them again. The bodies were gathered in a hut and laid out on the floor, where they spent the night. This morning, when they heard the sound of the helicopter, they came back down from the mountains to the village. (See also: ‘A thunderclap echoed through the Sierra and within seconds the hut was in flames’)

“On Sunday at six in the evening, when the first lightning flashes came down, I felt they were angry, demanding that everything taken from the Sierra be returned to nature,” the man recounted yesterday among the ashes of the ceremonial hut, from which, despite the recent downpours, small columns of smoke still rise out of the earth, while the smell of burning pervades the 40 huts of Kemakúmake, the ancestral village mourning the tragedy. (See the photos of the area where the lightning struck and the operation to evacuate the injured)

In his account, Ramón, who lost his son, recalls telling the community that the lightning demanded a payment for all the trees felled and the quartz looted. He had also told them that for some time nature had been asking him to collect from all those who had desecrated these sacred places, and that he had not done so. (See on a map the 2,900 lightning strikes recorded in the Sierra Nevada area)

“I told the community: the thunder is angry. It says it sent us the first punishment, the dry season, but because we pleaded so much it sent the rains; still we did not pay, and now a war of nature and of humanity is coming,” the old mamo says, recounting nature’s message.

That night he was talking with the men of the village in the ceremonial hut when he felt the light flood the place and saw everyone slowly falling. “When the fire came toward me my vision clouded over. I got up, I was furious, and I cursed it. Within a few minutes there was only chaos and the fire took over the place,” he recalls. The indigenous people who arrived from the other huts had to pull out the bodies to keep the flames from consuming them. (See also: Around 100 people die from lightning strikes in Colombia every year)

“We took 11 from him so that he would reflect, analyse and speak with the younger brothers and warn them too,” is, according to Ramón, nature’s message.

He calls for a meeting of mamos

The mamo Ramón asked the Government to help them convene a gathering of at least a month with the ancestral and spiritual mamos of the four indigenous peoples of the Sierra Nevada: the Kogui, Arhuaco, Kankuamo and Wiwa, so that, as authorities, they can analyse all the problems the reservations are currently facing.

After the tragedy, on Monday night, the indigenous people left the village for fear that another lightning bolt would punish them again. Photo: CEET

He also acknowledged that the governing councils of these peoples have become a kind of barrier preventing their spiritual authorities and guides from meeting. “We need to analyse and unify a position, internally and spiritually, because the governing councils cannot agree,” he said.

Yesterday, Ramón lamented not knowing how to read or write in Spanish, which would allow him to produce a primer so that everyone could understand the message that nature gives the mamos and so respect the last resources left in the Sierra Nevada.

Aid continues to arrive

At 6:30 a.m. today the first helicopter took off with food, blankets, medicine and hammocks collected by the Civil Defence and sent by the National Risk Management Unit.

From the Army’s First Division, between yesterday and today, some nine helicopter flights were made, evacuating the wounded and carrying aid and journalists. “We are not only here for war, but also for humanitarian aid,” said Army Captain Ómar Pardo, who is in charge of the flights.

The Army and Police, accompanied by the Civil Defence, deliver aid. Photo: CEET

For his part, Colonel Luis Alfonso Quintero Parada, commander of the Santa Marta Metropolitan Police, led the final inspection of the bodies together with the Judicial Police; the bodies will be handed back to the community later today.

“We have a medical team examining the indigenous people, as they requested, to provide support with medicine and treatment. Two forensic doctors from Barranquilla have joined the Judicial Police team to support the work,” said the officer.

Leonardo Herrera Delghams
Special correspondent for EL TIEMPO
Sierra Nevada de Santa Marta

Consciousness may persist for up to three minutes after death, study says (O Globo)

Scientists interviewed patients who were clinically dead but came back to life

BY O GLOBO

A scene from the Rede Globo telenovela “Amor Eterno Amor” depicts the kind of near-death experience studied by the University of Southampton scientists. Photo: Reprodução

RIO – The tunnel with a bright light at the end and the sense of peace described in films and by people who claim to have had near-death experiences may be real. In the largest study ever conducted on the subject, scientists at the University of Southampton say they have shown that human consciousness persists for at least three minutes after biological death. During that interval, patients were able to witness, and later recall, events such as leaving the body and the movements around their hospital room.

Over four years, the specialists examined more than two thousand people who suffered cardiac arrests in 15 hospitals in the United Kingdom, the United States and Austria. About 16% survived, and of these, more than 40% described some kind of “awareness” during the time they were clinically dead, before their hearts started beating again.

The most striking case was that of a man who remembered leaving his body entirely and watching his own resuscitation from the corner of the room. Despite being unconscious and “dead” for three minutes, the patient recounted the actions of the nursing team in detail and described the sound of the machines.

“We know the brain cannot function once the heart has stopped beating. But in this case conscious awareness appears to have continued for up to three minutes during the period when the heart was not beating, even though the brain normally shuts down within 20 to 30 seconds after the heart does,” researcher Sam Parnia explained to the British newspaper The Telegraph.

Of the 2,060 cardiac-arrest patients studied, 330 survived and 140 said they had experienced some kind of awareness while being resuscitated. Although many could not remember specific details, some accounts coincided. One in five said they had felt an unusual sense of tranquillity, while almost a third said time had slowed down or sped up.

Some remembered seeing a bright light, a golden flash or the sun shining. Others reported feelings of fear, of drowning, or of being dragged through deep water. About 13% said they had felt separated from their bodies.

According to Parnia, many more people may have experiences when they are close to death, but the drugs or sedatives used during resuscitation can affect memory:

“Estimates suggest that millions of people have had vivid experiences in relation to death. Many assumed they were hallucinations or illusions, but the accounts seem to correspond to real events. And a larger proportion of people may have vivid death experiences but not remember them because of the effects of brain injury or sedatives on memory circuits.”

Read more: http://oglobo.globo.com/sociedade/saude/consciencia-pode-permanecer-por-ate-tres-minutos-apos-morte-diz-estudo-14166762

Near-death experiences? Results of the world’s largest medical study of the human mind and consciousness at time of death (Science Daily)

Date: October 7, 2014

Source: University of Southampton

Summary: The results of a four-year international study of 2060 cardiac arrest cases across 15 hospitals concludes the following. The themes relating to the experience of death appear far broader than what has been understood so far, or what has been described as so called near-death experiences. In some cases of cardiac arrest, memories of visual awareness compatible with so called out-of-body experiences may correspond with actual events. A higher proportion of people may have vivid death experiences, but do not recall them due to the effects of brain injury or sedative drugs on memory circuits. Widely used yet scientifically imprecise terms such as near-death and out-of-body experiences may not be sufficient to describe the actual experience of death. The recalled experience surrounding death merits a genuine investigation without prejudice.

The results of a four-year international study of 2060 cardiac arrest cases across 15 hospitals are in. Among those who reported a perception of awareness and completed further interviews, 46 per cent experienced a broad range of mental recollections in relation to death that were not compatible with the commonly used term of near death experiences. Credit: © sudok1 / Fotolia

The results of a four-year international study of 2060 cardiac arrest cases across 15 hospitals concludes the following. The themes relating to the experience of death appear far broader than what has been understood so far, or what has been described as so called near-death experiences. In some cases of cardiac arrest, memories of visual awareness compatible with so called out-of-body experiences may correspond with actual events. A higher proportion of people may have vivid death experiences, but do not recall them due to the effects of brain injury or sedative drugs on memory circuits. Widely used yet scientifically imprecise terms such as near-death and out-of-body experiences may not be sufficient to describe the actual experience of death.

Recollections in relation to death, so-called out-of-body experiences (OBEs) or near-death experiences (NDEs), are an often spoken about phenomenon which have frequently been considered hallucinatory or illusory in nature; however, objective studies on these experiences are limited.

In 2008, a large-scale study involving 2060 patients from 15 hospitals in the United Kingdom, United States and Austria was launched. The AWARE (AWAreness during REsuscitation) study, sponsored by the University of Southampton in the UK, examined the broad range of mental experiences in relation to death. Researchers also tested the validity of conscious experiences using objective markers for the first time in a large study to determine whether claims of awareness compatible with out-of-body experiences correspond with real or hallucinatory events.

Results of the study have been published in the journal Resuscitation.

Dr Sam Parnia, Assistant Professor of Critical Care Medicine and Director of Resuscitation Research at The State University of New York at Stony Brook, USA, and the study’s lead author, explained: “Contrary to perception, death is not a specific moment but a potentially reversible process that occurs after any severe illness or accident causes the heart, lungs and brain to cease functioning. If attempts are made to reverse this process, it is referred to as ‘cardiac arrest’; however, if these attempts do not succeed it is called ‘death’. In this study we wanted to go beyond the emotionally charged yet poorly defined term of NDEs to explore objectively what happens when we die.”

Thirty-nine per cent of patients who survived cardiac arrest and were able to undergo structured interviews described a perception of awareness, but interestingly did not have any explicit recall of events.

“This suggests more people may have mental activity initially but then lose their memories after recovery, either due to the effects of brain injury or sedative drugs on memory recall,” explained Dr Parnia, who was an Honorary Research Fellow at the University of Southampton when he started the AWARE study.

Among those who reported a perception of awareness and completed further interviews, 46 per cent experienced a broad range of mental recollections in relation to death that were not compatible with the commonly used term of NDEs. These included fearful and persecutory experiences. Only 9 per cent had experiences compatible with NDEs and 2 per cent exhibited full awareness compatible with OBEs, with explicit recall of ‘seeing’ and ‘hearing’ events.

One case was validated and timed using auditory stimuli during cardiac arrest. Dr Parnia concluded: “This is significant, since it has often been assumed that experiences in relation to death are likely hallucinations or illusions, occurring either before the heart stops or after the heart has been successfully restarted, but not an experience corresponding with ‘real’ events when the heart isn’t beating. In this case, consciousness and awareness appeared to occur during a three-minute period when there was no heartbeat. This is paradoxical, since the brain typically ceases functioning within 20-30 seconds of the heart stopping and doesn’t resume again until the heart has been restarted. Furthermore, the detailed recollections of visual awareness in this case were consistent with verified events.

“Thus, while it was not possible to absolutely prove the reality or meaning of patients’ experiences and claims of awareness, (due to the very low incidence (2 per cent) of explicit recall of visual awareness or so called OBE’s), it was impossible to disclaim them either and more work is needed in this area. Clearly, the recalled experience surrounding death now merits further genuine investigation without prejudice.”

Further studies are also needed to explore whether awareness (explicit or implicit) may lead to long term adverse psychological outcomes including post-traumatic stress disorder.

Dr Jerry Nolan, Editor-in-Chief of Resuscitation, stated: “The AWARE study researchers are to be congratulated on the completion of a fascinating study that will open the door to more extensive research into what happens when we die.”


Journal Reference:

  1. Parnia S, et al. AWARE—AWAreness during REsuscitation—A prospective study. Resuscitation, 2014 DOI: 10.1016/j.resuscitation.2014.09.004

Deadly Algorithms (Radical Philosophy)

Can legal codes hold software accountable for code that kills?

RP 187 (Sept/Oct 2014)

Susan Schuppli

Algorithms have long adjudicated over vital processes that help to ensure our well-being and survival, from pacemakers that maintain the natural rhythms of the heart, and genetic algorithms that optimise emergency response times by cross-referencing ambulance locations with demographic data, to early warning systems that track approaching storms, detect seismic activity, and even seek to prevent genocide by monitoring ethnic conflict with orbiting satellites. [1] However, algorithms are also increasingly being tasked with instructions to kill: executing coding sequences that quite literally execute.
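To make the second of those examples concrete, a genetic algorithm for siting ambulance stations against demographic demand can be sketched in a few dozen lines. The sketch below is purely illustrative: the city grid, the demand points, the station count and the GA parameters are all invented for the example, and it has no connection to any real dispatch system.

```python
import random

random.seed(1)

# Invented demand points: (x, y, population weight) on a 10 x 10 km grid.
DEMAND = [(random.uniform(0, 10), random.uniform(0, 10), random.randint(1, 100))
          for _ in range(200)]
N_STATIONS = 4

def fitness(stations):
    """Negative population-weighted mean distance to the nearest station."""
    total = sum(w * min(((x - sx) ** 2 + (y - sy) ** 2) ** 0.5
                        for sx, sy in stations)
                for x, y, w in DEMAND)
    return -total / sum(w for _, _, w in DEMAND)

def random_solution():
    return [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N_STATIONS)]

def crossover(a, b):
    """Single-point crossover on the list of station coordinates."""
    cut = random.randint(1, N_STATIONS - 1)
    return a[:cut] + b[cut:]

def mutate(solution, rate=0.2):
    """Occasionally relocate a station to a random position."""
    return [(random.uniform(0, 10), random.uniform(0, 10))
            if random.random() < rate else station
            for station in solution]

# Standard generational loop: keep the fitter half, recombine, mutate.
population = [random_solution() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:25]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(25)]
    population = parents + children

best = max(population, key=fitness)
print("best station placement:", [(round(x, 2), round(y, 2)) for x, y in best])
print("weighted mean distance (km):", round(-fitness(best), 2))
```

In this toy version, ‘cross-referencing ambulance locations with demographic data’ amounts to weighting each demand point by population inside the fitness function; the select-recombine-mutate loop then searches for placements that minimise the weighted mean distance.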

Guided by the Obama presidency’s conviction that the War on Terror can be won by ‘out-computing’ its enemies and pre-empting terrorists’ threats using predictive software, a new generation of deadly algorithms is being designed that will both control and manage the ‘kill-list,’ and along with it decisions to strike. [2] Indeed, the recently terminated practice of ‘signature strikes’, in which data analytics was used to determine emblematic ‘terrorist’ behaviour and match these patterns to potential targets on the ground, already points to a future in which intelligence-gathering, assessment and military action, including the calculation of who can legally be killed, will largely be performed by machines based upon an ever-expanding database of aggregated information. As such, this transition to execution by algorithm is not simply a continuation of killing at ever greater distances inaugurated by the invention of the bow and arrow that separated warrior and foe, as many have suggested. [3] It is also a consequence of the ongoing automation of warfare, which can be traced back to the cybernetic coupling of Claude Shannon’s mathematical theory of information with Norbert Wiener’s wartime research into feedback loops and communication control systems. [4] As this new era of intelligent weapons systems progresses, operational control and decision-making are increasingly being outsourced to machines.

Computing terror

In 2011 the US Department of Defense (DOD) released its ‘roadmap’ forecasting the expanded use of unmanned technologies, of which unmanned aircraft systems – drones – are but one aspect of an overall strategy directed towards the implementation of fully autonomous Intelligent Agents. It projects its future as follows:

The Department of Defense’s vision for unmanned systems is the seamless integration of diverse unmanned capabilities that provide flexible options for Joint Warfighters while exploiting the inherent advantages of unmanned technologies, including persistence, size, speed, maneuverability, and reduced risk to human life. DOD envisions unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision making required for the unmanned portion of the force structure. [5]

The document is a strange mix of Cold War caricature and Fordism set against the backdrop of contemporary geopolitical anxieties, which sketches out two imaginary vignettes to provide ‘visionary’ examples of the ways in which autonomy can improve efficiencies through inter-operability across military domains, aimed at enhancing capacities and flexibility between manned and unmanned sectors of the US Army, Air Force and Navy. In these future scenarios, the scripting and casting are strikingly familiar, pitting the security of hydrocarbon energy supplies against rogue actors equipped with Russian technology. One concerns an ageing Russian nuclear submarine deployed by a radicalized Islamic nation-state that is beset by an earthquake in the Pacific, thus contaminating the coastal waters of Alaska and threatening its oil energy reserves. The other involves the sabotaging of an underwater oil pipeline in the Gulf of Guinea off the coast of Africa, complicated by the approach of a hostile surface vessel capable of launching a Russian short-range air-to-surface missile. [6]

These Hollywood-style action film vignettes – fully elaborated across five pages of the report – provide an odd counterpoint to the claims being made throughout the document as to the sober science, political prudence and economic rationalizations that guide the move towards fully unmanned systems. On what grounds are we to be convinced by these visions and strategies? On the basis of a collective cultural imaginary that finds its politics within the CGI labs of the infotainment industry? Or via an evidence-based approach to solving the complex problems posed by changing global contexts? Not surprisingly, the level of detail (and techno-fetishism) used to describe unmanned responses to these risk scenarios is far more exhaustive than that devoted to the three primary challenges which the report identifies as specific to the growing reliance upon and deployment of automated and autonomous systems:

1. Investment in science and technology (S&T) to enable more capable autonomous operations.

2. Development of policies and guidelines on what decisions can be safely and ethically delegated and under what conditions.

3. Development of new Verification and Validation (V&V) and T&E techniques to enable verifiable ‘trust’ in autonomy. [7]

As the second of these ‘challenges’ indicates, the delegation of decision-making to computational regimes is particularly crucial here, in so far as it provokes a number of significant ethical dilemmas but also urgent questions regarding whether existing legal frameworks are capable of attending to the emergence of these new algorithmic actors. This is especially concerning when the logic of precedent that organizes much legal decision-making (within common law systems) has followed the same logic that organized the drone programme in the first place: namely, the justification of an action based upon a pattern of behaviour that was established by prior events.

The legal aporia intersects with a parallel discourse around moral responsibility; a much broader debate that has tended to structure arguments around the deployment of armed drones as an antagonism between humans and machines. As the authors of the entry on ‘Computing and Moral Responsibility’ in the Stanford Encyclopedia of Philosophy put it:

Traditionally philosophical discussions on moral responsibility have focused on the human components in moral action. Accounts of how to ascribe moral responsibility usually describe human agents performing actions that have well-defined, direct consequences. In today’s increasingly technological society, however, human activity cannot be properly understood without making reference to technological artifacts, which complicates the ascription of moral responsibility. [8]

When one poses the question, under what conditions is it morally acceptable to deliberately kill a human being, one is not, in this case, asking whether the law permits such an act for reasons of imminent threat, self-defence or even empathy for someone who is in extreme pain or in a non-responsive vegetative state. The moral register around the decision to kill operates according to a different ethical framework: one that doesn’t necessarily bind the individual to a contract enacted between the citizen and the state. Moral positions can be specific to individual values and beliefs whereas legal frameworks permit actions in our collective name as citizens contracted to a democratically elected body that acts on our behalf but with which we might be in political disagreement. While it is, then, much easier to take a moral stance towards events that we might oppose – US drone strikes in Pakistan – than to justify a claim as to their specific illegality given the anti-terror legislation that has been put in place since 9/11, assigning moral responsibility, proving criminal negligence or demonstrating legal liability for the outcomes of deadly events becomes even more challenging when humans and machines interact to make decisions together, a complication that will only intensify as unmanned systems become more sophisticated and act as increasingly independent legal agents. Moreover, the outsourcing of decision-making to the judiciary as regards the validity of scientific evidence, which followed the 1993 Daubert ruling – in the context of a case brought against Merrell Dow Pharmaceuticals – has, in addition, made it difficult for the law to take an activist stance when confronted with the limitations of its own scientific understandings of technical innovation. At present it would obviously be unreasonable to take an algorithm to court when things go awry, let alone when they are executed perfectly, as in the case of a lethal drone strike.

By focusing upon the legal dimension of algorithmic liability as opposed to more wide-ranging moral questions I do not want to suggest that morality and law should be consigned to separate spheres. However, it is worth making a preliminary effort to think about the ways in which algorithms are not simply reordering the fundamental principles that govern our lives, but might also be asked to provide alternate ethical arrangements derived out of mathematical axioms.

Algorithmic accountability

Law, which has already expanded the category of ‘legal personhood’ to include non-human actors such as corporations, also offers ways, then, to think about questions of algorithmic accountability. [9] Of course many would argue that legal methods are not the best frameworks for resolving moral dilemmas. But then again, nor are the objectives of counter-terrorism necessarily best served by algorithmic oversight. Shifting the emphasis towards a juridical account of algorithmic reasoning might, at any rate, prove useful when confronted with the real possibility that the kill list and other emergent matrices for managing the war on terror will be algorithmically derived as part of a techno-social assemblage in which it becomes impossible to isolate human from non-human agents. It does, however, raise the ‘bar’ for what we would now need to ask the law to do. The degree to which legal codes can maintain their momentum alongside rapid technological change and submit ‘complicated algorithmic systems to the usual process of checks-and-balances that is generally imposed on powerful items that affect society on a large scale’ is of considerable concern. [10] Nonetheless, the stage has already been set for the arrival of a new cast of juridical actors: not agents endowed with free will in the classical sense (that which would provide the conditions for criminal liability), but intelligent systems which are wilfully free in the sense that they have been programmed to make decisions based upon their own algorithmic logic. [11] While armed combat drones are the most publicly visible of the automated military systems that the DOD is rolling out, they are only one of the many remote-controlled assets that will gather, manage, analyse and act on the data that they acquire and process.

Proponents of algorithmic decision-making laud the near-instantaneous response time that allows Intelligent Agents – what some have called ‘moral predators’ – to make micro-second adjustments to avert a lethal drone strike should, for example, children suddenly emerge out of a house that is being targeted as a militant hideout. [12] Indeed robotic systems have long been argued to decrease the error margin of civilian casualties that are often the consequence of actions made by tired soldiers in the field. Nor are machines overly concerned with their own self-preservation, which might likewise cloud judgement under conditions of duress. Yet, as Sabine Gless and Herbert Zech ask, if these ‘Intelligent Agents are often used in areas where the risk of failure and error can be reduced by relying on machines rather than humans … the question arises: Who is liable if things go wrong?’ [13]

Typically when injury and death occur to humans, the legal debate focuses upon the degree to which such an outcome was foreseeable and thus adjudicates on the basis of whether all reasonable efforts and pre-emptive protocols had been built into the system to mitigate such an occurrence. However, programmers cannot of course run all the variables that combine to produce machinic decisions, especially when the degree of uncertainty as to conditions and knowledge of events on the ground is as variable as the shifting contexts of conflict and counter-terrorism. Werner Dahm, chief scientist of the United States Air Force, is typical in stressing the difficulty of designing error-free systems: ‘You have to be able to show that the system is not going to go awry – you have to disprove a negative.’ [14] Given that highly automated decision-making processes involve complex and rapidly changing contexts mediated by multiple technologies, can we then reasonably expect to build a form of ethical decision-making into these unmanned systems? And would an algorithmic approach to managing the ethical dimensions of drone warfare – for example, whether to strike 16-year-old Abdulrahman al-Awlaki in Yemen because his father was a radicalized cleric, a role that he might inherit – entail the same logics that characterized signature strikes, namely that of proximity to militant-like behaviour or activity? [15] The euphemistically rebranded kill list known as the ‘disposition matrix’ suggests that such determinations can indeed be arrived at computationally. As Greg Miller notes: ‘The matrix contains the names of terrorism suspects arrayed against an accounting of the resources being marshaled to track them down, including sealed indictments and clandestine operations.’ [16]

Intelligent systems are arguably legal agents but not as of yet legal persons, although precedents pointing to this possibility have already been set in motion. The idea that an actual human being or ‘legal person’ stands behind the invention of every machine who might ultimately be found responsible when things go wrong, or even when they go right, is no longer tenable and obfuscates the fact that complex systems are rarely, if ever, the product of single authorship; nor do humans and machines operate in autonomous realms. Indeed, both are so thoroughly entangled with each other that the notion of a sovereign human agent functioning outside the realm of machinic mediation seems wholly improbable. Consider for a moment only one aspect of conducting drone warfare in Pakistan – that of US flight logistics – in which we find that upwards of 165 people are required just to keep a Predator drone in the air for twenty-four hours, the half-life of an average mission. These personnel requirements are themselves embedded in multiple techno-social systems composed of military contractors, intelligence officers, data analysts, lawyers, engineers, programmers, as well as hardware, software, satellite communication, and operation centres (CAOC), and so on. This does not take into account the R&D infrastructure that engineered the unmanned system, designed its operating procedures and beta-tested it. Nor does it acknowledge the administrative apparatus that brought all of these actors together to create the event we call a drone strike. [17]

In the case of a fully automated system, decision-making is reliant upon feedback loops that continually pump new information into the system in order to recalibrate it. But perhaps more significantly in terms of legal liability, decision-making is also governed by the system’s innate ability to self-educate: the capacity of algorithms to learn and modify their coding sequences independent of human oversight. Isolating the singular agent who is directly responsible – legally – for the production of a deadly harm (as currently required by criminal law) suggests, then, that no one entity beyond the Executive Office of the President might ultimately be held accountable for the aggregate conditions that conspire to produce a drone strike and with it the possibility of civilian casualties. Given that the USA doesn’t accept the jurisdiction of the International Criminal Court and Article 25 of the Rome Statute governing individual criminal responsibility, what new legal formulations could, then, be created that would be able to account for indirect and aggregate causality born out of a complex chain of events including so-called digital perpetrators? American tort law, which adjudicates over civil wrongs, might be one such place to look for instructive models. In particular, legal claims regarding the use of environmental toxins, which are highly distributed events whose lethal effects often take decades to appear, and involve an equally complex array of human and non-human agents, have been making their way into court, although not typically with successful outcomes for the plaintiffs. The most notable of these litigations have been the mass toxic tort regarding the use of Agent Orange as a defoliant in Vietnam and the Bhopal disaster in India. [18] Ultimately, however, the efficacy of such an approach has to be considered in light of the intended outcome of assigning liability, which in the cases mentioned was not so much deterrence or punishment, but, rather, compensation for damages.

Recoding the law

While machines can be designed with a high degree of intentional behaviour and will out-perform humans in many instances, the development of unmanned systems will need to take into account a far greater range of variables, including shifting geopolitical contexts and murky legal frameworks, when making the calculation that conditions have been met to execute someone. Building in fail-safe procedures that abort when human subjects of a specific size (children) or age and gender (males under the age of 18) appear sets the stage for a proto-moral decision-making regime. But is the design of ethical constraints really where we wish to push back politically when it comes to the potential for execution by algorithm? Or can we work to complicate the impunity that certain techno-social assemblages currently enjoy? As a 2009 report by the Royal Academy of Engineering on autonomous systems argues,

Legal and regulatory models based on systems with human operators may not transfer well to the governance of autonomous systems. In addition, the law currently distinguishes between human operators and technical systems and requires a human agent to be responsible for an automated or autonomous system. However, technologies which are used to extend human capabilities or compensate for cognitive or motor impairment may give rise to hybrid agents … Without a legal framework for autonomous technologies, there is a risk that such essentially human agents could not be held legally responsible for their actions – so who should be responsible? [19]

Implicating a larger set of agents including algorithmic ones that aid and abet such an act might well be a more effective legal strategy, even if expanding the limits of criminal liability proves unwieldy. As the 2009 ECCHR Study on Criminal Accountability in Sri Lanka put it: ‘Individuals, who exercise the power to organise the pattern of crimes that were later committed, can be held criminally liable as perpetrators. These perpetrators can usually be found in civil ministries such as the ministry of defense or the office of the president.’ [20] Moving down the chain of command and focusing upon those who participate in the production of violence by carrying out orders has been effective in some cases (Sri Lanka), but also problematic in others (Abu Ghraib) where the indictment of low-level officers severed the chain of causal relations that could implicate more powerful actors. Of course prosecuting an algorithm alone for executing lethal orders that the system is in fact designed to make is fairly nonsensical if the objective is punishment. The move must, then, be part of an overall strategy aimed at expanding the field of causality and thus broadening the reach of legal responsibility.

My own work as a researcher on the Forensic Architecture project, alongside Eyal Weizman and several others, in developing new methods of spatial and visual investigation for the UN inquiry into the use of armed drones, provides one specific vantage point for considering how machinic capacities are reordering the field of political action and thus calling forth new legal strategies. [21] In taking seriously the agency of things, we must also take seriously the agency of things whose productive capacities are enlisted in the specific decision to kill. Computational regimes, in operating largely beyond the thresholds of human perception, have produced informatic conjunctions that have redistributed and transformed the spaces in which action occurs, as well as the nature of such consequential actions themselves. When algorithms are being enlisted to out-compute terrorism and calculate who can and should be killed, do we not need to produce a politics appropriate to these radical modes of calculation and a legal framework that is sufficiently agile to deliberate over such events?

Decision-making by automated systems will produce new relations of power for which we have as yet inadequate legal frameworks or modes of political resistance – and, perhaps even more importantly, insufficient collective understanding as to how such decisions will actually be made and upon what grounds. Scientific knowledge about technical processes does not belong to the domain of science alone, as the Daubert ruling implies. However, demands for public accountability and oversight will require much greater participation in the epistemological frameworks that organize and manage these new techno-social systems, and that may be a formidable challenge for all of us. What sort of public assembly will be able to prevent the premature closure of a certain ‘epistemology of facts’, as Bruno Latour would say, that are at present cloaked under a veil of secrecy called ‘national security interests’ – the same order of facts that scripts the current DOD roadmap for unmanned systems?

In a recent ABC Radio interview, Sarah Knuckey, director of the Project on Extrajudicial Executions at New York University Law School, emphasized the degree to which drone warfare has strained the limits of international legal conventions and with it the protection of civilians. [22] The ‘rules of warfare’ are ‘already hopelessly out-dated’, she says, and will require ‘new rules of engagement to be drawn up’: ‘There is an enormous amount of concern about the practices the US is conducting right now and the policies that underlie those practices. But from a much longer-term perspective and certainly from lawyers outside the US there is real concern about not just what’s happening now but what it might mean 10, 15, 20 years down the track.’ [23] Could these new rules of engagement – new legal codes – assume a similarly preemptive character to the software codes and technologies that are being evolved – what I would characterize as a projective sense of the law? Might they take their lead from the spirit of the Geneva Conventions protecting the rights of noncombatants, rather than from those protocols (the Hague Conventions of 1899, 1907) that govern the use of weapons of war, and are thus reactive in their formulation and event-based? If so, this would have to be a set of legal frameworks that is not so much determined by precedent – by what has happened in the past – but, instead, by what may take place in the future.

Notes

1. ^ See, for example, the satellite monitoring and atrocity evidence programmes: ‘Eyes on Darfur’ (www.eyesondarfur.org) and ‘The Sentinel Project for Genocide Prevention’ (http://thesentinelproject.org).

2. ^ Cori Crider, ‘Killing in the Name of Algorithms: How Big Data Enables the Obama Administration’s Drone War’, Al Jazeera America, 2014, http://america.aljazeera.com/opinions/2014/3/drones-big-data-waronterrorobama.html; accessed 18 May 2014. See also the flow chart in Daniel Byman and Benjamin Wittes, ‘How Obama Decides Your Fate if He Thinks You’re a Terrorist,’ The Atlantic, 3 January 2013, http://www.theatlantic.com/international/archive/2013/01/how-obama-decides-your-fate-if-he-thinks-youre-a-terrorist/266419.

3. ^ For a recent account of the multiple and compound geographies through which drone operations are executed, see Derek Gregory, ‘Drone Geographies’, Radical Philosophy 183 (January/February 2014), pp. 7–19.

4. ^ Contemporary information theorists would argue that the second-order cybernetic model of feedback and control, in which external data is used to adjust the system, doesn’t take into account the unpredictability of evolutive data internal to the system resulting from crunching ever-larger datasets. See Luciana Parisi’s Introduction to Contagious Architecture: Computation, Aesthetics, and Space, MIT Press, Cambridge MA, 2013. For a discussion of Wiener’s cybernetics in this context, see Reinhold Martin, ‘The Organizational Complex: Cybernetics, Space, Discourse’, Assemblage 37, 1998, p. 110.

5. ^ DOD, Unmanned Systems Integrated Roadmap FY2011–2036, Office of the Undersecretary of Defense for Acquisition, Technology, & Logistics, Washington, DC, 2011, p. 3, http://www.defense.gov/pubs/DOD-USRM-2013.pdf.

6. ^ Ibid., pp. 1–10.

7. ^ Ibid., p. 27.

8. ^ Merel Noorman and Edward N. Zalta, ‘Computing and Moral Responsibility,’ The Stanford Encyclopedia of Philosophy (2014), http://plato.stanford.edu/archives/sum2014/entries/computing-responsibility.

9. ^ See John Dewey, ‘The Historic Background of Corporate Legal Personality’, Yale Law Journal, vol. 35, no. 6, 1926, pp. 656, 669.

10. ^ Data & Society Research Institute, ‘Workshop Primer: Algorithmic Accountability’, The Social, Cultural & Ethical Dimensions of ‘Big Data’ workshop, 2014, p. 3.

11. ^ See Gunther Teubner, ‘Rights of Non-Humans? Electronic Agents and Animals as New Actors in Politics and Law,’ Journal of Law & Society, vol. 33, no. 4, 2006, pp. 497–521.

12. ^ See Bradley Jay Strawser, ‘Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles,’ Journal of Military Ethics, vol. 9, no. 4, 2010, pp. 342–68.

13. ^ Sabine Gless and Herbert Zech, ‘Intelligent Agents: International Perspectives on New Challenges for Traditional Concepts of Criminal, Civil Law and Data Protection’, text for ‘Intelligent Agents’ workshop, 7–8 February 2014, University of Basel, Faculty of Law, http://www.snis.ch/sites/default/files/workshop_intelligent_agents.pdf.

14. ^ Agence-France Presse, ‘The Next Wave in U.S. Robotic War: Drones on Their Own’, Defense News, 28 September 2012, p. 2, http://www.defensenews.com/article/20120928/DEFREG02/309280004/The-Next-Wave-U-S-Robotic-War-Drones-Their-Own.

15. ^ When questioned about the drone strike that killed 16-year-old American-born Abdulrahman al-Awlaki, teenage son of radicalized cleric Anwar al-Awlaki, in Yemen in 2011, Robert Gibbs, former White House press secretary and senior adviser to President Obama’s re-election campaign, replied that the boy should have had ‘a more responsible father’.

16. ^ Greg Miller, ‘Plan for Hunting Terrorists Signals U.S. Intends to Keep Adding Names to Kill Lists’, Washington Post, 23 October 2012, http://www.washingtonpost.com/world/national-security/plan-for-hunting-terrorists-signals-us-intends-to-keep-adding-names-to-kill-lists/2012/10/23/4789b2ae-18b3-11e2-a55c-39408fbe6a4b_story.html.

17. ^ ‘While it might seem counterintuitive, it takes significantly more people to operate unmanned aircraft than it does to fly traditional warplanes. According to the Air Force, it takes a jaw-dropping 168 people to keep just one Predator aloft for twenty-four hours! For the larger Global Hawk surveillance drone, that number jumps to 300 people. In contrast, an F-16 fighter aircraft needs fewer than one hundred people per mission.’ Medea Benjamin, Drone Warfare: Killing by Remote Control, Verso, London and New York, 2013, p. 21.

18. ^ See Peter H. Schuck, Agent Orange on Trial: Mass Toxic Disasters in the Courts, Belknap Press of Harvard University Press, Cambridge MA, 1987. See also: http://www.bhopal.com/bhopal-litigation.

19. ^ Royal Academy of Engineering, Autonomous Systems: Social, Legal and Ethical Issues, RAE, London, 2009, p. 3, http://www.raeng.org.uk/societygov/engineeringethics/pdf/Autonomous_Systems_Report_09.pdf.

20. ^ European Center for Constitutional and Human Rights, Study on Criminal Accountability in Sri Lanka as of January 2009, ECCHR, Berlin, 2010, p. 88.

21. ^ Other members of the Forensic Architecture drone investigative team included Jacob Burns, Steffen Kraemer, Francesco Sebregondi and SITU Research. See http://www.forensic-architecture.org/case/drone-strikes.

22. ^ Bureau of Investigative Journalism, ‘Get the Data: Drone Wars’, http://www.thebureauinvestigates.com/category/projects/drones/drones-graphs.

23. ^ Annabelle Quince, ‘Future of Drone Strikes Could See Execution by Algorithm’, Rear Vision, ABC Radio, edited transcript, pp. 2–3.

Murder Machines: Why Cars Will Kill 30,000 Americans This Year (Collectors Weekly)

By Hunter Oatman-Stanford — March 10th, 2014

There’s an open secret in America: If you want to kill someone, do it with a car. As long as you’re sober, chances are you’ll never be charged with any crime, much less manslaughter. Over the past hundred years, as automobiles have been woven into the fabric of our daily lives, our legal system has undermined public safety, and we’ve been collectively trained to think of these deaths as unavoidable “accidents” or acts of God. Today, despite the efforts of major public-health agencies and grassroots safety campaigns, few are aware that car crashes are the number one cause of death for Americans under 35. But it wasn’t always this way.

“At some point, we decided that somebody on a bike or on foot is not traffic, but an obstruction to traffic.”

“If you look at newspapers from American cities in the 1910s and ’20s, you’ll find a lot of anger at cars and drivers, really an incredible amount,” says Peter Norton, the author of Fighting Traffic: The Dawn of the Motor Age in the American City. “My impression is that you’d find more caricatures of the Grim Reaper driving a car over innocent children than you would images of Uncle Sam.”

Though various automobiles powered by steam, gas, and electricity were produced in the late 19th century, only a handful of these cars actually made it onto the roads due to high costs and unreliable technologies. That changed in 1908, when Ford’s famous Model T standardized manufacturing methods and allowed for true mass production, making the car affordable to those without extreme wealth. By 1915, the number of registered motor vehicles was in the millions.

Top: A photo of a fatal car wreck in Somerville, Massachusetts, in 1933. Via the Boston Public Library. Above: The New York Times coverage of car violence from November 23, 1924.

Within a decade, the number of car collisions and fatalities skyrocketed. In the first four years after World War I, more Americans died in auto accidents than had been killed during battle in Europe, but our legal system wasn’t catching on. The negative effects of this unprecedented shift in transportation were especially felt in urban areas, where road space was limited and pedestrian habits were powerfully ingrained.

For those of us who grew up with cars, it’s difficult to conceptualize American streets before automobiles were everywhere. “Imagine a busy corridor in an airport, or a crowded city park, where everybody’s moving around, and everybody’s got business to do,” says Norton. “Pedestrians favored the sidewalk because that was cleaner and you were less likely to have a vehicle bump against you, but pedestrians also went anywhere they wanted in the street, and there were no crosswalks and very few signs. It was a real free-for-all.”

A typical busy street scene on Sixth Avenue in New York City shows how pedestrians ruled the roadways before automobiles arrived, circa 1903. Via Shorpy.

Roads were seen as a public space, which all citizens had an equal right to, even children at play. “Common law tended to pin responsibility on the person operating the heavier or more dangerous vehicle,” says Norton, “so there was a bias in favor of the pedestrian.” Since people on foot ruled the road, collisions weren’t a major issue: Streetcars and horse-drawn carriages yielded right of way to pedestrians and slowed to a human pace. The fastest traffic went around 10 to 12 miles per hour, and few vehicles even had the capacity to reach higher speeds.

“The real battle is for people’s minds, and this mental model of what a street is for.”

In rural areas, the car was generally welcomed as an antidote to extreme isolation, but in cities with dense neighborhoods and many alternate methods of transit, most viewed private vehicles as an unnecessary luxury. “The most popular term of derision for a motorist was a ‘joyrider,’ and that was originally directed at chauffeurs,” says Norton. “Most of the earliest cars had professional drivers who would drop their passengers somewhere, and were expected to pick them up again later. But in the meantime, they could drive around, and they got this reputation for speeding around wildly, so they were called joyriders.”

Eventually, the term spread to all types of automobile drivers, along with pejoratives like “vampire driver” or “death driver.” Political cartoons featured violent imagery of so-called “speed demons” murdering innocents as they plowed through city streets in their uncontrollable vehicles. Other editorials accused drivers of being afflicted with “motor madness” or “motor rabies,” which implied an addiction to speed at the expense of human life.

This cartoon from 1909 shows the outrage felt by many Americans that wealthy motorists could hurt others without consequence. Via the Library of Congress.

In an effort to keep traffic flowing and solve legal disputes, New York City became the first municipality in America to adopt an official traffic code in 1903, when most roadways had no signage or traffic controls whatsoever. Speed limits were gradually adopted in urban areas across the country, typically with a maximum of 10 mph that dropped to 8 mph at intersections.

By the 1910s, many cities were working to improve their most dangerous crossings. One of the first tactics was regulating left-turns, which was usually accomplished by installing a solid column or “silent policeman” at the center of busy intersections that forced vehicles to navigate around it. Cars had to pass this mid-point before turning left, preventing them from cutting corners and speeding recklessly into oncoming traffic.

Left, a patent for a Silent Policeman traffic post, and right, an ad for the Cutter Company’s lighted post, both from 1918.

A variety of innovative street signals and markings were developed by other cities hoping to tame the automobile. Because they were regularly plowed over by cars, silent policemen were often replaced by domed, street-level lights called “traffic turtles” or “traffic mushrooms,” a style popularized in Milwaukee, Wisconsin. Detroit reconfigured a tennis court line-marker as a street-striping device for dividing lanes. In 1914, Cleveland installed the first alternating traffic lights, which were manually operated by a police officer stationed at the intersection. Yet these innovations did little to protect pedestrians.

“What evil bastard would drive their speeding car where a kid might be playing?”

By the end of the 1920s, more than 200,000 Americans had been killed by automobiles. Most of these fatalities were pedestrians in cities, and the majority of these were children. “If a kid is hit in a street in 2014, I think our first reaction would be to ask, ‘What parent is so neglectful that they let their child play in the street?,’” says Norton.

“In 1914, it was pretty much the opposite. It was more like, ‘What evil bastard would drive their speeding car where a kid might be playing?’ That tells us how much our outlook on the public street has changed—blaming the driver was really automatic then. It didn’t help if they said something like, ‘The kid darted out into the street!,’ because the answer would’ve been, ‘That’s what kids do. By choosing to operate this dangerous machine, it’s your job to watch out for others.’ It would be like if you drove a motorcycle in a hallway today and hit somebody—you couldn’t say, ‘Oh, well, they just jumped out in front of me,’ because the response would be that you shouldn’t operate a motorcycle in a hallway.”

Left, an ad for the Milwaukee-style traffic mushroom, and right, the device in action on Milwaukee streets, circa 1926. Via the Milwaukee Public Library.

In the face of this traffic fatality epidemic, there was a fierce public outcry including enormous rallies, public memorials, vehement newspaper editorials, and even a few angry mobs that attacked motorists following a collision. “Several cities installed public memorials to the children hit by cars that looked like war monuments, except that they were temporary,” says Norton. “To me, that says a lot, because you collectively memorialize people who are considered a public loss. Soldiers killed in battle are mourned by the whole community, and they were doing that for children killed in traffic, which really captures how much the street was considered a public space. People killed in it were losses to the whole community.”

As early as 1905, newspapers were printing cartoons that criticized motor-vehicle drivers.

As the negative press increased and cities called for lower speed limits and stricter enforcement, the burgeoning auto industry recognized a mounting public-relations disaster. The breaking point came in 1923, when 42,000 citizens of Cincinnati signed a petition for a referendum requiring any driver in the city limits to have a speed governor, a mechanical device that would inhibit the fuel supply or accelerator, to keep vehicles below 25 miles per hour. (Studies show that around five percent of pedestrians are killed when hit by vehicles traveling under 20 miles per hour, versus 80 percent for cars going 40 miles an hour or more.)

The Cincinnati referendum logically equated high vehicle speeds with increasing danger, a direct affront to the automobile industry. “Think about that for a second,” Norton says. “If you’re in the business of selling cars, and the public recognizes that anything fast is dangerous, then you’ve just lost your number-one selling point, which is that they’re faster than anything else. It’s amazing how completely the auto industry joined forces and mobilized against it.”

One auto-industry response to the Cincinnati referendum of 1923 was to conflate speed governors with negative stereotypes about China. Via the Cincinnati Post.

“Motordom,” as the collective of special interests including oil companies, auto makers, auto dealers, and auto clubs dubbed itself, launched a multi-pronged campaign to make city streets more welcoming to drivers, though not necessarily safer. Through a series of social, legal, and physical transformations, these groups reframed arguments about vehicle safety by placing blame on reckless drivers and careless pedestrians, rather than the mere presence of cars.

In 1924, recognizing the crisis on America’s streets, Herbert Hoover launched the National Conference on Street and Highway Safety from his position as Commerce Secretary (he would become President in 1929). Any organizations interested or invested in transportation planning were invited to discuss street safety and help establish standardized traffic regulations that could be implemented across the country. Since the conference’s biggest players all represented the auto industry, the group’s recommendations prioritized private motor vehicles over all other transit modes.

A woman poses with a newly installed stop sign in Los Angeles in 1925, built to the specifications recommended at the first National Conference on Street Safety. Via USC Libraries.

Norton suggests that the most important outcome of this meeting was a model municipal traffic ordinance, which was released in 1927 and provided a framework for cities writing their own street regulations. This model ordinance was the first to officially deprive pedestrians of access to public streets. “Pedestrians could cross at crosswalks. They could also cross when traffic permitted, or in other words, when there was no traffic,” explains Norton. “But other than that, the streets were now for cars. That model was presented to the cities of America by the U.S. Department of Commerce, which gave it the stamp of official government recommendation, and it was very successful and widely adopted.” By the 1930s, this legislation represented the new rule of the road, making it more difficult to take legal recourse against drivers.

Meanwhile, the auto industry continued to improve its public image by encouraging licensing to give drivers legitimacy, even though most early licenses required no testing. Norton explains that in addition to the revenue it generated, the driver’s license “would exonerate the average motorist in the public eye, so that driving itself wouldn’t be considered dangerous, and you could direct blame at the reckless minority.” Working with local police and civic groups like the Boy Scouts, auto clubs pushed to socialize new pedestrian behavior, often by shaming or ostracizing people who entered the street on foot. Part of this effort was the adoption of the term “jaywalker,” which originally referred to a clueless person unaccustomed to busy city life (“jay” was slang for a hayseed or country bumpkin).

Left, a cartoon from 1923 mocks jaywalking behavior. Via the National Safety Council. Right, a 1937 WPA poster emphasizes jaywalking dangers.

“Drivers first used the word ‘jaywalker’ to criticize pedestrians,” says Norton, “and eventually, it became an organized campaign by auto dealers and auto clubs to change attitudes about walking in the street wherever you wanted to. They had people dressed up like idiots with sandwich board signs that said ‘jaywalker’ or men wearing women’s dresses pretending to be jaywalkers. They even had a parade where a clown was hit by a Model T over and over again in front of the crowd. Of course, the message was that you’re stupid if you walk in the street.” Eventually, cities began adopting laws against jaywalking of their own accord.

In 1928, the American Automobile Association (AAA) took charge of safety education for children by sending free curricula to every public school in America. “Children would illustrate posters with slogans like, ‘Why I should not play in the street’ or ‘Why the street is for cars’ and so on,” explains Norton. “They took over the school safety patrols at the same time. The original patrols would go out and stop traffic for other kids to cross the street. But when AAA took over, they had kids sign pledges that said, ‘I will not cross the street except at the intersection,’ and so on. So a whole generation of kids grew up being trained that the streets were for cars only.” Other organizations like the Automobile Safety Foundation and the National Safety Council also helped to educate the public on the dangers of cars, but mostly focused on changing pedestrian habits or extreme driver behaviors, like drunk driving.

Street-safety posters produced by AAA in the late 1950s focused on changing behavior of children, rather than drivers.

Once the social acceptance of private cars was ensured, automobile proponents could begin rebuilding the urban environment to accommodate cars better than other transit modes. In the 1920s, America’s extensive network of urban railways was heavily regulated, often with specific fare and route restrictions as well as requirements to serve less-profitable areas. As motor vehicles began invading streetcar routes, these companies pushed for equal oversight of private cars.

“Automobiles could drive on the tracks,” explains Norton, “so this meant that as soon as just five percent of the people in cities were going around by car, they slowed the street railways down significantly, and streetcars couldn’t make their schedules anymore. They could ring a bell and try to make drivers get off their tracks, but if the driver couldn’t move because of other traffic, they were stuck. So the streetcars would just stand in traffic like automobiles.”

GE streetcar ads from 1928, left, and the early 1940s, right, emphasize the efficiency of mass transit over private automobiles.

The final blow was delivered in 1935 with the Public Utility Holding Company Act, which forced electric-utility companies to divest their streetcar businesses. Though intended to reduce corruption and regulate these growing electric utilities, this law removed the subsidies supporting many streetcar companies, and as a result, more than 100 transit companies failed over the next decade.

Even as government assistance was removed from these mass-transit systems, the growing network of city streets and highways was receiving ever more federal funding. Many struggling metro railways were purchased by a front company (operated by General Motors, Firestone Rubber, Standard Oil, and Phillips Petroleum) that ripped up their tracks to make way for fleets of buses, furthering America’s dependency on motor vehicles.

Meanwhile, traffic engineers were reworking city streets to better accommodate motor vehicles, even as they recognized cars as the least equitable and least efficient form of transportation, since automobiles were only available to the wealthy and took up 10 times the space of a transit rider. Beginning in Chicago, traffic engineers coordinated street signals to keep motor vehicles moving smoothly, while making crossing times unfriendly to pedestrians.

An aerial view from 1939 of 14th Street and Pennsylvania Avenue, in Washington, D.C., shows early street markings. Via shorpy.com.

“Long after its victory, Motordom fought to keep control of traffic problems. Its highway engineers defined a good thoroughfare as a road with a high capacity for motor vehicles; they did not count the number of persons moved,” Norton writes in Fighting Traffic. Today our cities still reflect this: The Level of Service (LOS) measurement that most planners use to gauge intersection efficiency is based only on motor-vehicle delays, rather than the impact to all modes of transit.

As in other American industries ranging from health care to education, those with the ability to pay for the best treatment were prioritized over all others. One 1941 traffic-control textbook read: “If people prefer to drive downtown and can afford it, then facilities must be built for them up to their ability to pay. The choice of mode of travel is their own; they cannot be forced to change on the strength of arguments of efficiency or economy.”

All the while, traffic violence continued unabated, with fatalities increasing every year. The exception was during World War II, when fuel shortages and resource conservation led to less driving, hence a drop in the motor-vehicle death rates, which spiked again following the war’s conclusion. By the time the National Interstate and Defense Highways Act was passed in 1956, the U.S. was fully dependent on personal automobiles, favoring the flexibility of cars over the ability of mass transit to carry more people with less energy in a safer manner.

In 1962, Boston formally adopted jaywalking laws to penalize pedestrians, as this photo of city officials shows.

In 1966, Ralph Nader published his best-selling book, Unsafe At Any Speed, which detailed the auto industry’s efforts to suppress safety improvements in favor of profits. In the preface to his book, Nader pointed out the huge costs inflicted by private vehicle collisions, noting that “…these are not the kind of costs which fall on the builders of motor vehicles (excepting a few successful lawsuits for negligent construction of the vehicle) and thus do not pinch the proper foot. Instead, the costs fall to users of vehicles, who are in no position to dictate safer automobile designs.” Instead of directing money at prevention, like vehicle improvements, changing behaviors, and road design, money is spent on treating the symptoms of road violence. Today, the costs of fatal crashes are estimated at over $99 billion in the U.S., or around $500 for every licensed driver, according to the Centers for Disease Control and Prevention (CDC).

Nader suggested that the protection of our “body rights,” or physical safety, needed the same broad support given to civil rights, even in the face of an industry with so much financial power. “A great problem of contemporary life is how to control the power of economic interests which ignore the harmful effects of their applied science and technology. The automobile tragedy is one of the most serious of these man-made assaults on the human body,” Nader wrote.

Dr. David Sleet, who works in the Division of Unintentional Injury Prevention at the CDC, says Nader’s book was a game-changer. “That really started this whole wave of improvements in our highway-safety problem,” says Sleet. “The death rates from vehicle crashes per population just kept steadily increasing from the 1920s until 1966. Two acts of Congress were implemented in 1966, which initiated a national commitment to reducing injuries on the road by creating agencies within the U.S. Department of Transportation to set standards and regulate vehicles and highways. After that, the fatalities started to decline.”

Ralph Nader’s book, “Unsafe at Any Speed,” brought a larger awareness to America’s traffic fatalities, and targeted design issues with the Corvair. A few years prior, in 1962, comedian Ernie Kovacs was killed in a Corvair wagon, seen at right wrapped around a telephone pole.

The same year Nader’s book was published, President Lyndon Johnson signed the National Traffic and Motor Vehicle Safety Act and the Highway Safety Act. This legislation led to the creation of the National Highway Traffic Safety Administration (NHTSA), which set new safety standards for cars and highways. A full 50 years after automobiles had overtaken city streets, federal agencies finally began addressing the violence as a large-scale, public-health issue. In 1969, NHTSA director Dr. William Haddon, a public-health physician and epidemiologist, recognized that like infectious diseases, motor-vehicle deaths were the result of interactions between a host (person), an agent (motor vehicle), and their environment (roadways). As directed by Haddon, the NHTSA enforced changes to features like seat belts, brakes, and windshields that helped improve the country’s fatality rate.

Following the release of Nader’s book, grassroots organizations like Mothers Against Drunk Driving (MADD, 1980) formed to combat car-safety issues that national legislators were not addressing. The CDC began adapting its public-health framework to the issue of motor-vehicle injury prevention in 1985, focusing on high-risk populations like alcohol-impaired drivers, motorcyclists, and teenagers.

In the late 1970s, the NHTSA standardized crash tests, like this 90 mph test of two Volvos.

“I think the perennial problem for us, as a culture, is recognizing that these injuries are both predictable and preventable,” says Sleet. “The public still has not come around to thinking of motor-vehicle crashes as something other than ‘accidents.’ And as long as you believe they’re accidents or acts of fate, then you won’t do anything to prevent them. The CDC continues to stress that motor-vehicle injuries, like diseases, are preventable.”

Sleet says the CDC’s approach is similar to its efforts against smoking: The first step is understanding the magnitude of the problem or threat, the second is identifying risk factors, and the third is developing interventions that can reduce these factors. “The last stage is getting widespread adoption of these known and effective interventions,” explains Sleet. “The reason we think motor-vehicle injuries represent a winnable battle is that there are lots of effective interventions that are just not used by the general public. We’ve been fighting this battle of increasing injuries since cars were first introduced into society, and we still haven’t solved it.

“Public health is a marathon, not a sprint,” adds Sleet. “It’s taken us 50 years since the first surgeon general’s report on smoking to make significant progress against tobacco. We need to stay the course with vehicle injuries.”

Though their advocacy is limited to drunk driving, MADD is one of the few organizations to use violent imagery to promote road safety, as seen in this ad from 2007.

Although organizations like the CDC have applied this public-health approach to the issue for decades now, automobiles remain a huge danger. While the annual fatality rate has dropped significantly from its 1930s high at around 30 deaths for every 100,000 persons to 11 per 100,000 in recent years, car crashes are still a top killer of all Americans. For young people, motor-vehicle collisions remain the most common cause of death. In contrast, traffic fatalities in countries like the United Kingdom, where drivers are presumed to be liable in car crashes, are about a third of U.S. rates.

In 2012, automobile collisions killed more than 34,000 Americans, but unlike our response to foreign wars, the AIDS crisis, or terrorist attacks—all of which inflict fewer fatalities than cars—there’s no widespread public protest or giant memorial to the dead. We fret about drugs and gun safety, but don’t teach children to treat cars as the loaded weapons they are.

“These losses have been privatized, but in the ’20s, they were regarded as public losses,” says Norton. After the auto industry successfully altered street norms in the 1920s, most state Departments of Transportation actually made it illegal to leave roadside markers where a loved one was killed. “In recent years, thanks to some hard work by grieving families, the rules have changed in certain states, and informal markers are now allowed,” Norton adds. “Some places are actually putting in DOT-made memorial signs with the names of victims. The era of not admitting what’s going on is not quite over, but the culture is changing.”

In recent years, white ghost bikes have been installed on roadways across the country where cyclists were killed by motorists, like this bike in Boulder, Colorado, in memory of Matthew Powell in 2008.

“Until recently, there wasn’t any kind of concerted public message around the basic danger of driving,” says Ben Fried, editor of the New York branch of Streetsblog, a national network of journalists chronicling transportation issues. “Today’s street safety advocates look to MADD and other groups that changed social attitudes toward drunk driving in the late ’70s and early ’80s as an example of how to affect these broad views on how we drive. Before you had those organizations advocating for victims’ families, you would hear the same excuses for drunk driving that you hear today for reckless driving.”

Though drunk driving has long been recognized as dangerous, seen in this WPA poster from 1937, reckless driving has been absent from most safety campaigns.

Though anti-drunk-driving campaigns are familiar to Americans, alcohol is involved in only around a third of fatal collisions, while the rest are caused by ordinary human error. Studies also show that reckless drivers who are sober are rarely cited by police, even when they are clearly at fault. In New York City during the last five years, less than one percent of drivers who killed or injured pedestrians and cyclists were ticketed for careless driving. (In most states, “negligent” driving, which includes drunk driving, has different legal consequences than “reckless” driving, though the jargon makes little difference to those hurt by such drivers.)

Increasingly, victims and their loved ones are making the case that careless driving is as reprehensible as drunk driving, advocating a cultural shift that many drivers are reluctant to embrace. As with auto-safety campaigns in the past, this grassroots effort is pushing cities to adopt legislation that protects against reckless drivers, including laws inspired by Sweden’s Vision Zero campaign. First implemented in 1997, Vision Zero is an effort to end all pedestrian fatalities and serious injuries; recently, cities like New York, Chicago, and San Francisco also announced their goals of eliminating traffic deaths within 10 years. Other initiatives are being introduced at the state level, including “vulnerable user laws,” which pin greater responsibility on the road users who wield the most power, whether a car compared to a bicyclist, or a bicyclist compared to a pedestrian.

Fried says that most people are aware of the dangers behind the wheel, but are accustomed to sharing these risks, rather than taking individual responsibility for careless behavior. “So many of us drive and have had the experience of not following the law to a T—going a little bit over the speed limit or rolling through a stop sign,” he explains. “So there’s this tendency to deflect our own culpability, and that’s been institutionalized by things like no-fault laws and car insurance, where we all share the cost for the fact that driving is a dangerous thing.”

This dark political cartoon from “Puck” magazine in 1907 suggested that speeding motorists were chasing death. Via the Library of Congress.

As cities attempt to undo years of car-oriented development by rebuilding streets that better incorporate public transit, bicycle facilities, and pedestrian needs, the existing bias towards automobiles is making the fight to transform streets just as intense as when cars first arrived in the urban landscape. “The fact that changes like redesigning streets for bike lanes set off such strong reactions today is a great analogy to what was going on in the ’20s,” says Fried. “There’s a huge status-quo bias that’s inherent in human nature. While I think the changes today are much more beneficial than what was done 80 years ago, the fact that they’re jarring to people comes from the same place. People are very comfortable with things the way they are.”

However, studies increasingly show that most young people prefer to live in dense, walkable neighborhoods, and are more attuned to the environmental consequences of their transportation than previous generations. Yet in the face of clear evidence that private automobiles are damaging to our health and our environment, most older Americans still cling to their cars. Part of this impulse may be a natural resistance to change, but it’s also reinforced when aging drivers have few viable transportation alternatives, particularly in suburban areas or sprawling cities with terrible public transit.

“People don’t have to smoke,” Sleet says, “whereas people might feel they do need a car to get to work. Our job is to try and make every drive a safe drive. I think we can also reduce the dependency we have on motor vehicles, but that’s not going to happen until we provide other alternatives for people to get from here to there.”

Gory depictions of car violence became rare in the United States after the 1920s, though they persisted in Europe, as seen in this German safety poster from 1930 that reads, “Motorist! Be Careful!” Via the Library of Congress.

Fried says that unlike campaigns for smoking and HIV reduction, American cities aren’t directly pushing people to change their behavior. “You don’t see cities saying outright that driving is bad, or asking people to take transit or ride a bike, in part because they’re getting flack from drivers. No one wants to be seen as ‘anti-car,’ so their message has mostly been about designing streets for greater safety. I think, by and large, this has been a good choice.”

“The biggest reductions in traffic injuries that the New York City DOT has been able to achieve are all due to reallocating space from motor vehicles to pedestrians and bikes,” says Fried. “The protected bike-lane redesigns in New York City are narrowing the right of way for vehicles by at least 8 feet, and sometimes more. If you’re a pedestrian, that’s 8 more feet that you don’t have to worry about when you’re crossing the street. And if you’re driving, the design gives you cues to take it a bit slower because the lanes are narrower. You’re more aware of how close you are to other moving objects, so the incidence of speeding isn’t as high as it used to be. All these changes contribute to a safer street environment.”

Like in the 1920s, these infrastructure changes really start with a new understanding of acceptable street behavior. “That battle for street access of the 1910s and ’20s, while there was a definite winner, it never really ended,” says Norton. “It’s a bit like the street became an occupied country, and you have a resistance movement. There have always been pedestrians who are like, ‘To hell with you, I’m crossing anyway.’

“The people who really get it today, in 2014, know that the battle isn’t to change rules or put in signs or paint things on the pavement,” Norton continues. “The real battle is for people’s minds, and this mental model of what a street is for. There’s a wonderful slogan used by some bicyclists that says, ‘We are traffic.’ It reveals the fact that at some point, we decided that somebody on a bike or on foot is not traffic, but an obstruction to traffic. And if you look around, you’ll see a hundred other ways in which that message gets across. That’s the main obstacle for people who imagine alternatives—and it’s very much something in the mind.”

This 1935 Chevy safety film made the misleading argument that their vehicles were “the safest place to be,” and that all danger was created by careless drivers.

(This article is dedicated to my uncle, Jim Vic Oatman, and friend, Chris Webber, both of whom were killed by car collisions. Learn more about the CDC’s battle against motor-vehicle injuries here, find out how to bring Vision Zero to your city, or scare yourself with the Boston Public Library’s archive of historic car wreck images.)

Learning How to Die in the Anthropocene (New York Times)

November 10, 2013, 3:00 pm

By ROY SCRANTON

I.

Driving into Iraq just after the 2003 invasion felt like driving into the future. We convoyed all day, all night, past Army checkpoints and burned-out tanks, till in the blue dawn Baghdad rose from the desert like a vision of hell: Flames licked the bruised sky from the tops of refinery towers, cyclopean monuments bulged and leaned against the horizon, broken overpasses swooped and fell over ruined suburbs, bombed factories, and narrow ancient streets.

Civilizations have marched blindly toward disaster because humans are wired to believe that tomorrow will be much like today.

With “shock and awe,” our military had unleashed the end of the world on a city of six million — a city about the same size as Houston or Washington. The infrastructure was totaled: water, power, traffic, markets and security fell to anarchy and local rule. The city’s secular middle class was disappearing, squeezed out between gangsters, profiteers, fundamentalists and soldiers. The government was going down, walls were going up, tribal lines were being drawn, and brutal hierarchies savagely established.

I was a private in the United States Army. This strange, precarious world was my new home. If I survived.

Two and a half years later, safe and lazy back in Fort Sill, Okla., I thought I had made it out. Then I watched on television as Hurricane Katrina hit New Orleans. This time it was the weather that brought shock and awe, but I saw the same chaos and urban collapse I’d seen in Baghdad, the same failure of planning and the same tide of anarchy. The 82nd Airborne hit the ground, took over strategic points and patrolled streets now under de facto martial law. My unit was put on alert to prepare for riot control operations. The grim future I’d seen in Baghdad was coming home: not terrorism, not even W.M.D.’s, but a civilization in collapse, with a crippled infrastructure, unable to recuperate from shocks to its system.

And today, with recovery still going on more than a year after Sandy and many critics arguing that the Eastern seaboard is no more prepared for a huge weather event than we were last November, it’s clear that future’s not going away.

This March, Admiral Samuel J. Locklear III, the commander of the United States Pacific Command, told security and foreign policy specialists in Cambridge, Mass., that global climate change was the greatest threat the United States faced — more dangerous than terrorism, Chinese hackers and North Korean nuclear missiles. Upheaval from increased temperatures, rising seas and radical destabilization “is probably the most likely thing that is going to happen…” he said, “that will cripple the security environment, probably more likely than the other scenarios we all often talk about.’’

Locklear’s not alone. Tom Donilon, the national security adviser, said much the same thing in April, speaking to an audience at Columbia’s new Center on Global Energy Policy. James Clapper, director of national intelligence, told the Senate in March that “Extreme weather events (floods, droughts, heat waves) will increasingly disrupt food and energy markets, exacerbating state weakness, forcing human migrations, and triggering riots, civil disobedience, and vandalism.”

On the civilian side, the World Bank’s recent report, “Turn Down the Heat: Climate Extremes, Regional Impacts, and the Case for Resilience,” offers a dire prognosis for the effects of global warming, which climatologists now predict will raise global temperatures by 3.6 degrees Fahrenheit within a generation and 7.2 degrees Fahrenheit within 90 years. Projections from researchers at the University of Hawaii find us dealing with “historically unprecedented” climates as soon as 2047. The climate scientist James Hansen, formerly with NASA, has argued that we face an “apocalyptic” future. This grim view is seconded by researchers worldwide, including Anders Levermann, Paul and Anne Ehrlich, Lonnie Thompson and many, many, many others.

This chorus of Jeremiahs predicts a radically transformed global climate forcing widespread upheaval — not possibly, not potentially, but inevitably. We have passed the point of no return. From the point of view of policy experts, climate scientists and national security officials, the question is no longer whether global warming exists or how we might stop it, but how we are going to deal with it.

II.

There’s a word for this new era we live in: the Anthropocene. This term, taken up by geologists, pondered by intellectuals and discussed in the pages of publications such as The Economist and The New York Times, represents the idea that we have entered a new epoch in Earth’s geological history, one characterized by the arrival of the human species as a geological force. The biologist Eugene F. Stoermer and the Nobel-Prize-winning chemist Paul Crutzen advanced the term in 2000, and it has steadily gained acceptance as evidence has increasingly mounted that the changes wrought by global warming will affect not just the world’s climate and biological diversity, but its very geology — and not just for a few centuries, but for millenniums. The geophysicist David Archer’s 2009 book, “The Long Thaw: How Humans Are Changing the Next 100,000 Years of Earth’s Climate,” lays out a clear and concise argument for how huge concentrations of carbon dioxide in the atmosphere and melting ice will radically transform the planet, beyond freak storms and warmer summers, beyond any foreseeable future.

The Stratigraphy Commission of the Geological Society of London — the scientists responsible for pinning the “golden spikes” that demarcate geological epochs such as the Pliocene, Pleistocene, and Holocene — have adopted the Anthropocene as a term deserving further consideration, “significant on the scale of Earth history.” Working groups are discussing what level of geological time-scale it might be (an “epoch” like the Holocene, or merely an “age” like the Calabrian), and at what date we might say it began. The beginning of the Great Acceleration, in the middle of the 20th century? The beginning of the Industrial Revolution, around 1800? The advent of agriculture?

Every day I went out on mission in Iraq, I looked down the barrel of the future and saw a dark, empty hole.

The challenge the Anthropocene poses is a challenge not just to national security, to food and energy markets, or to our “way of life” — though these challenges are all real, profound, and inescapable. The greatest challenge the Anthropocene poses may be to our sense of what it means to be human. Within 100 years — within three to five generations — we will face average temperatures 7 degrees Fahrenheit higher than today, rising seas at least three to 10 feet higher, and worldwide shifts in crop belts, growing seasons and population centers. Within a thousand years, unless we stop emitting greenhouse gases wholesale right now, humans will be living in a climate the Earth hasn’t seen since the Pliocene, three million years ago, when oceans were 75 feet higher than they are today. We face the imminent collapse of the agricultural, shipping and energy networks upon which the global economy depends, a large-scale die-off in the biosphere that’s already well on its way, and our own possible extinction. If Homo sapiens (or some genetically modified variant) survives the next millenniums, it will be survival in a world unrecognizably different from the one we have inhabited.

Credit: Jeffery DelViscio

Geological time scales, civilizational collapse and species extinction give rise to profound problems that humanities scholars and academic philosophers, with their taste for fine-grained analysis, esoteric debates and archival marginalia, might seem remarkably ill suited to address. After all, how will thinking about Kant help us trap carbon dioxide? Can arguments between object-oriented ontology and historical materialism protect honeybees from colony collapse disorder? Are ancient Greek philosophers, medieval theologians, and contemporary metaphysicians going to keep Bangladesh from being inundated by rising oceans?

Of course not. But the biggest problems the Anthropocene poses are precisely those that have always been at the root of humanistic and philosophical questioning: “What does it mean to be human?” and “What does it mean to live?” In the epoch of the Anthropocene, the question of individual mortality — “What does my life mean in the face of death?” — is universalized and framed in scales that boggle the imagination. What does human existence mean against 100,000 years of climate change? What does one life mean in the face of species death or the collapse of global civilization? How do we make meaningful choices in the shadow of our inevitable end?

These questions have no logical or empirical answers. They are philosophical problems par excellence. Many thinkers, including Cicero, Montaigne, Karl Jaspers, and The Stone’s own Simon Critchley, have argued that studying philosophy is learning how to die. If that’s true, then we have entered humanity’s most philosophical age — for this is precisely the problem of the Anthropocene. The rub is that now we have to learn how to die not as individuals, but as a civilization.

III.

Learning how to die isn’t easy. In Iraq, at the beginning, I was terrified by the idea. Baghdad seemed incredibly dangerous, even though statistically I was pretty safe. We got shot at and mortared, and I.E.D.’s laced every highway, but I had good armor, we had a great medic, and we were part of the most powerful military the world had ever seen. The odds were good I would come home. Maybe wounded, but probably alive. Every day I went out on mission, though, I looked down the barrel of the future and saw a dark, empty hole.

“For the soldier death is the future, the future his profession assigns him,” wrote Simone Weil in her remarkable meditation on war, “The Iliad or the Poem of Force.” “Yet the idea of man’s having death for a future is abhorrent to nature. Once the experience of war makes visible the possibility of death that lies locked up in each moment, our thoughts cannot travel from one day to the next without meeting death’s face.” That was the face I saw in the mirror, and its gaze nearly paralyzed me.

I found my way forward through an 18th-century Samurai manual, Yamamoto Tsunetomo’s “Hagakure,” which commanded: “Meditation on inevitable death should be performed daily.” Instead of fearing my end, I owned it. Every morning, after doing maintenance on my Humvee, I’d imagine getting blown up by an I.E.D., shot by a sniper, burned to death, run over by a tank, torn apart by dogs, captured and beheaded, and succumbing to dysentery. Then, before we rolled out through the gate, I’d tell myself that I didn’t need to worry, because I was already dead. The only thing that mattered was that I did my best to make sure everyone else came back alive. “If by setting one’s heart right every morning and evening, one is able to live as though his body were already dead,” wrote Tsunetomo, “he gains freedom in the Way.”

I got through my tour in Iraq one day at a time, meditating each morning on my inevitable end. When I left Iraq and came back stateside, I thought I’d left that future behind. Then I saw it come home in the chaos that was unleashed after Katrina hit New Orleans. And then I saw it again when Sandy battered New York and New Jersey: Government agencies failed to move quickly enough, and volunteer groups like Team Rubicon had to step in to manage disaster relief.

Now, when I look into our future — into the Anthropocene — I see water rising up to wash out lower Manhattan. I see food riots, hurricanes, and climate refugees. I see 82nd Airborne soldiers shooting looters. I see grid failure, wrecked harbors, Fukushima waste, and plagues. I see Baghdad. I see the Rockaways. I see a strange, precarious world.

Our new home.

The human psyche naturally rebels against the idea of its end. Likewise, civilizations have throughout history marched blindly toward disaster, because humans are wired to believe that tomorrow will be much like today — it is unnatural for us to think that this way of life, this present moment, this order of things is not stable and permanent. Across the world today, our actions testify to our belief that we can go on like this forever, burning oil, poisoning the seas, killing off other species, pumping carbon into the air, ignoring the ominous silence of our coal mine canaries in favor of the unending robotic tweets of our new digital imaginarium. Yet the reality of global climate change is going to keep intruding on our fantasies of perpetual growth, permanent innovation and endless energy, just as the reality of mortality shocks our casual faith in permanence.

The biggest problem climate change poses isn’t how the Department of Defense should plan for resource wars, or how we should put up sea walls to protect Alphabet City, or when we should evacuate Hoboken. It won’t be addressed by buying a Prius, signing a treaty, or turning off the air-conditioning. The biggest problem we face is a philosophical one: understanding that this civilization is already dead. The sooner we confront this problem, and the sooner we realize there’s nothing we can do to save ourselves, the sooner we can get down to the hard work of adapting, with mortal humility, to our new reality.

The choice is a clear one. We can continue acting as if tomorrow will be just like yesterday, growing less and less prepared for each new disaster as it comes, and more and more desperately invested in a life we can’t sustain. Or we can learn to see each day as the death of what came before, freeing ourselves to deal with whatever problems the present offers without attachment or fear.

If we want to learn to live in the Anthropocene, we must first learn how to die.

Patrick Lane: An open letter to all the wild creatures of the Earth (Times Colonist)

PATRICK LANE / TIMES COLONIST

DECEMBER 2, 2013 02:39 PM

Patrick Lane speaks at UVic's convocation ceremony.

Photograph by: University of Victoria

Victoria poet Patrick Lane received an honorary doctor of letters degree from the University of Victoria on Nov. 13. Lane, who has won the Governor General’s Literary Award and numerous other honours, has written 25 volumes of poetry, as well as fiction and non-fiction. He is known for what the university called the “gritty honesty” of his style. In keeping with his unique voice, his convocation speech was moving and powerful. Here is the text of his speech.

It is 65 years ago, you’re 10 years old and sitting on an old, half-blind, grey horse. All you have is a saddle blanket and a rope for reins as you watch a pack of dogs rage at the foot of a Ponderosa pine.

High up on a branch, a cougar lies supine, one paw lazily swatting at the air. He knows the dogs will tire. They will slink away and then the cougar will climb down and go on with its life in the Blue Bush country south of Kamloops. It is a hot summer day. There is the smell of pine needles and Oregon grape and dust. It seems to you that the sun carves the dust from the face of the broken rocks, carves and lifts it into the air where it mixes with the sun. Just beyond you are three men on horses.

The men have saddles and boots and rifles and their horses shy at the clamour of the dogs. The man with the Winchester rifle is the one who owns the dog pack and he is the one who has led you out of the valley, following the dogs through the hills to the big tree where the cougar is trapped. You watch as the man with the rifle climbs down from the saddle and sets his boots among the slippery pine needles. When the man is sure of his footing he lifts the rifle, takes aim, and then … and then you shrink inside a cowl of silence as the cougar falls.

As you watch, the men raise their rifles and shoot them at the sun. You will not understand their triumph, their exultance. Not then. You are too young. It will take years for you to understand. But one day you will step up to a podium in an auditorium at a University on an island far to the west and you will talk about what those men did. You know now they shot at the sun because they wanted to bring a darkness into the world. Knowing that has changed you forever.

Today I look back at their generation. Most of them are dead. They were born into the first Great War of the last century. Most of their fathers did not come home from the slaughter. Most of their mothers were left lost and lonely. Their youth was wasted through the years of the Great Depression when they wandered the country in search of work, a bed or blanket, a friendly hand, a woman’s touch, a child’s quick cry. And then came the Second World War and more were lost. Millions upon millions of men, women, and children died in that old world. But we sometimes forget that untold numbers of creatures died with them: the sparrow and the rabbit, the salmon and the whale, the beetle and the butterfly, the deer and the wolf. And trees died too, the fir and spruce, the cedar and hemlock. Whole forests were sacrificed to the wars.

Those men bequeathed to me a devastated world. When my generation came of age in the mid-century, we were ready for change. And we tried to make it happen, but the ones who wanted change were few. In the end, we did what the generations before us did. We began to eat the world. We devoured the oceans and we devoured the land. We drank the lakes and the seas and we ate the mountains and plains. We ate and ate until there was almost nothing left for you or for your children to come.

The cougar that died that day back in 1949 was a question spoken into my life, and I have tried to answer that question with my teaching, my poems and my stories. Ten years after they killed the cougar, I came of age. I had no education beyond high school, but I had a deep desire to become an artist, a poet. The death of the cougar stayed with me through the years of my young manhood. Then, one moonlit night in 1963, I stepped out of my little trailer perched on the side of a mountain above the North Thompson River. Below me was the sawmill where I worked as a first-aid man. Down a short path, a little creek purled through the trees just beyond my door. I went there under the moon and, kneeling in the moss, cupped water in my hands for a drink. As I looked up I saw a cougar leaning over his paws in the thin shadows. He was six feet away, drinking from the same pool. I stared at the cougar and found myself alive in the eyes of the great cat. The cougar those men had killed when I was a boy came back to me. It was then I swore I would spend my life bearing witness to the past and the years to come.

I stand here looking out over this assembly, and ask myself what I can offer you who are taking from my generation’s hands a troubled world. I am an elder now. There are times many of us old ones feel a deep regret, a profound sorrow, but our sorrow does not have to be yours. You are young and it is soon to be your time. A month ago, I sat on a river estuary in the Great Bear Rainforest north of here as a mother grizzly nursed her cubs. As the little ones suckled, the milk spilled down her chest and belly. As I watched her, I thought of this day and I thought of you who not so long ago nursed at your own mother’s breast. There, in the last intact rainforest on earth, the bear cubs became emblems of hope to me.

Out there are men and women only a few years older than you who are trying to remedy a broken world. I know and respect their passion. You, too, can change things. Just remember there are people who will try to stop you, and when they do you will have to fight for your lives and the lives of the children to come.

Today, you are graduating with the degrees you have worked so hard to attain. They will affect your lives forever. You are also one of the wild creatures of the Earth. I want you for one moment to imagine you are a ten-year-old on a half-blind, grey horse. You are watching a cougar fall from the high limb of a Ponderosa pine into a moil of raging dogs. The ones who have done this, the ones who have brought you here, are shooting at the sun. They are trying to bring a darkness into the world.

It’s your story now.

How do you want it to end?

– See more at: http://www.timescolonist.com/patrick-lane-an-open-letter-to-all-the-wild-creatures-of-the-earth-1.717669

São Paulo City Council approves bill allowing pets to be buried with their owners (Folha de S.Paulo)

May 16, 2013 – 6:00 pm

FROM SÃO PAULO

The São Paulo City Council approved, in a first-round vote this Thursday (16th), a bill that allows domestic animals to be buried in the same grave as their owners in municipal cemeteries.

Yesterday the proposal had already been approved by the Council’s Constitution and Justice Committee. The bill must now pass a second vote in the Council before it can be signed into law by Mayor Fernando Haddad (PT).

According to the bill, authored by council members Roberto Tripoli (PV) and Antonio Goulart (PSD), burial is reserved for the pets of families that already hold plots in the municipal cemeteries.

Goulart says the bill’s aim is to address the city’s current lack of places for the disposal of dead animals.

According to the councilman, many people want to bury their pets in the family plot. “The animal is part of the family.”

The bill was introduced on the Council floor last week. “The bill will pass without problems. It’s a timely issue,” Goulart predicted.

The city’s Funeral Service says a technical study is needed to assess the proposal’s feasibility.

Deeply Held Religious Beliefs Prompting Sick Kids to Be Given ‘Futile’ Treatment (Science Daily)

ScienceDaily (Aug. 13, 2012) — Parental hopes of a “miraculous intervention,” prompted by deeply held religious beliefs, are leading to very sick children being subjected to futile care and needless suffering, suggests a small study in the Journal of Medical Ethics.

The authors, who comprise children’s intensive care doctors and a hospital chaplain, emphasise that religious beliefs provide vital support to many parents whose children are seriously ill, as well as to the staff who care for them.

But they have become concerned that deeply held beliefs are increasingly leading parents to insist on the continuation of aggressive treatment that ultimately is not in the best interests of the sick child.

It is time to review the current ethics and legality of these cases, they say.

They base their conclusions on a review of 203 cases involving end-of-life decisions over a three-year period.

In 186 of these cases, agreement was reached between the parents and healthcare professionals about withdrawing aggressive, but ultimately futile, treatment.

But in the remaining 17 cases, extended discussions with the medical team and local support had failed to resolve differences of opinion with the parents over the best way to continue to care for the very sick child in question.

The parents had insisted on continuing full active medical treatment, while doctors had advocated withdrawing or withholding further intensive care on the basis of the overwhelming medical evidence.

The cases in which withdrawal or withholding of intensive care was considered to be in the child’s best interests were consistent with the Royal College of Paediatrics and Child Health guidance.

Eleven of these cases (65%) involved directly expressed religious claims that intensive care should not be stopped because of the expectation of divine intervention and a complete cure, together with the conviction that the opinion of the medical team was overly pessimistic and wrong.

Various different faiths were represented among the parents, including Christian fundamentalism, Islam, Judaism, and Roman Catholicism.

Five of the 11 cases were resolved after meeting with the relevant religious leaders outside the hospital, and intensive care was withdrawn in a further case after a High Court order.

But five cases were not resolved, so intensive care was continued. Four of these children eventually died; one survived with profound neurological disability.

The six of the 17 cases in which religious belief was not a cited factor were all resolved without further recourse to legal, ethical, or socio-religious support. Intensive care was withdrawn in all of these children: five died and one survived, but with profound neurological disability.

The authors emphasise that parental reluctance to allow treatment to be withdrawn is “completely understandable as [they] are defenders of their children’s rights, and indeed life.”

But they argue that when children are too young to be able to actively subscribe to their parents’ religious beliefs, a default position in which parental religion is not the determining factor might be more appropriate.

They cite Article 3 of the Human Rights Act, which aims to ensure that no one is subjected to torture or inhumane or degrading treatment or punishment.

“Spending a lifetime attached to a mechanical ventilator, having every bodily function supervised and sanitised by a carer or relative, leaving no dignity or privacy to the child and then adult, has been argued as inhumane,” they argue.

And they conclude: “We suggest it is time to reconsider current ethical and legal structures and facilitate rapid default access to courts in such situations when the best interests of the child are compromised in expectation of the miraculous.”

In an accompanying commentary, the journal’s editor, Professor Julian Savulescu, advocates: “Treatment limitation decisions are best made, not in the alleged interests of patients, but on distributive justice grounds.”

In a publicly funded system with limited resources, these should be given to those whose lives could be saved rather than to those who are very unlikely to survive, he argues.

“Faced with the choice between providing an intensive care bed to a [severely brain damaged] child and one who has been at school and was hit by a cricket ball and will return to normal life, we should provide the bed to the child hit by the cricket ball,” he writes.

In further commentaries, Dr Steve Clarke of the Institute for Science and Ethics maintains that doctors should engage with devout parents on their own terms.

“Devout parents, who are hoping for a miracle, may be able to be persuaded, by the lights of their own personal…religious beliefs, that waiting indefinite periods of time for a miracle to occur while a child is suffering, and while scarce medical equipment is being denied to other children, is not the right thing to do,” he writes.

Leading ethicist Dr Mark Sheehan argues that these ethical dilemmas are not confined to fervent religious belief, and that to polarise the issue as medicine versus religion is unproductive, and something of a “red herring.”

Referring to the title of the paper, Charles Foster, of the University of Oxford, suggests that the authors have asked the wrong question. “The legal and ethical orthodoxy is that no beliefs, religious or secular, should be allowed to stonewall the best interests of the child,” he writes.

Teen Survival Expectations Predict Later Risk-Taking Behavior (Science Daily)

ScienceDaily (Aug. 1, 2012) — Some young people’s expectations that they will not live long, healthy lives may actually foreshadow such outcomes.

New research published August 1 in the open access journal PLOS ONE reports that, for American teens, the expectation of death before the age of 35 predicted increased risk behaviors including substance abuse and suicide attempts later in life and a doubling to tripling of mortality rates in young adulthood.

The researchers, led by Quynh Nguyen of Northeastern University in Boston, found that one in seven participants in grades 7 to 12 reported perceiving a 50-50 chance or less of surviving to age 35. Upon follow-up interviews over a decade later, the researchers found that low expectations of longevity at young ages predicted increased suicide attempts and suicidal thoughts as well as heavy drinking, smoking, and use of illicit substances later in life relative to their peers who were almost certain they would live to age 35.

“The association between early survival expectations and detrimental outcomes suggests that monitoring survival expectations may be useful for identifying at-risk youth,” the authors state.

The study compared data collected from 19,000 adolescents in 1994-1995 to follow-up data collected from the same respondents 13-14 years later. The cohort was part of the National Longitudinal Study of Adolescent Health (Add Health), conducted by the Carolina Population Center and funded by the National Institutes of Health and 23 other federal agencies and foundations.

Journal Reference:

Quynh C. Nguyen, Andres Villaveces, Stephen W. Marshall, Jon M. Hussey, Carolyn T. Halpern, Charles Poole. Adolescent Expectations of Early Death Predict Adult Risk Behaviors. PLoS ONE, 2012; 7 (8): e41905. DOI: 10.1371/journal.pone.0041905

The Political Effects of Existential Fear (Science Daily)

ScienceDaily (Oct. 20, 2011) — Why did the approval ratings of President George W. Bush — who was perceived as indecisive before September 11, 2001 — soar over 90 percent after the terrorist attacks? Because Americans were acutely aware of their own deaths. That is one lesson from the psychological literature on “mortality salience” reviewed in a new article called “The Politics of Mortal Terror.”

The paper, by psychologists Florette Cohen of the City University of New York’s College of Staten Island and Sheldon Solomon of Skidmore College, appears in October’s Current Directions in Psychological Science, a journal published by the Association for Psychological Science.

The fear people felt after 9/11 was real, but it also made them ripe for psychological manipulation, experts say. “We all know that fear tactics have been used by politicians for years to sway votes,” says Cohen. Now psychological research offers insight into the chillingly named “terror management.”

The authors cite studies showing that awareness of mortality tends to make people feel more positive toward heroic, charismatic figures and more punitive toward wrongdoers. In one study, Cohen and her colleagues asked participants to think of death and then gave them statements from three fictional political figures. One was charismatic: he appealed to the specialness of the person and the group to which she belonged. One was a technocrat, offering practical solutions to problems. The third stressed the value of participation in democracy. After thinking about death, support for the charismatic leader shot up eightfold.

Even subliminal suggestions of mortality have similar effects. Subjects who saw the numbers 911 or the letters WTC had higher opinions of a Bush statement about the necessity of invading Iraq. This was true of both liberals and conservatives.

Awareness of danger and death can bias even peaceful people toward war or aggression. Iranian students in a control condition preferred the statement of a person preaching understanding and the value of human life over a jihadist call to suicide bombing. But primed to think about death, they grew more positive toward the bomber. Some even said that they might consider becoming a martyr.

As time goes by and the memory of danger and death grows fainter, however, “mortality salience” tends to polarize people politically, leading them to cling to their own beliefs and demonize others who hold opposing beliefs — seeing in them the cause of their own endangerment.

The psychological research should make voters wary of emotional political appeals and even of their own emotions in response, Cohen says. “We encourage all citizens to vote with their heads rather than their hearts. Become an educated voter. Look at the candidate’s positions and platforms. Look at who you are voting for and what they stand for.”