Tag archive: Power

A Brief History of the “Testocracy,” Standardized Testing and Test-Defying (Truthout)

Wednesday, 25 March 2015 00:00

By Jesse Hagopian, Haymarket Books | Book Excerpt 


Demonstrators rally against school closings and testing in Chicago, April 24, 2013. (Photo: Sarah Jane Rhee)

“We are experiencing the largest ongoing revolt against high-stakes standardized testing in US history,” according to Jesse Hagopian, high school history teacher, education writer and editor of More Than a Score. This remarkable book introduces the educators, students, parents and others who make up the resistance movement pushing back against the corporate “testocracy.” Click here to order More Than a Score today by making a donation to Truthout!

In this excerpt from More Than a Score, Jesse Hagopian explains who the “testocracy” are, what they want – for everybody else’s children and for their own – and why more people than ever before are resisting tests and working collectively to reclaim public education.

Who are these testocrats who would replace teaching with testing? The testocracy, in my view, does not only refer to the testing conglomerates—most notably the multibillion-dollar Pearson testing and textbook corporation—that directly profit from the sale of standardized exams. The testocracy is also the elite stratum of society that finances and promotes competition and privatization in public education rather than collaboration, critical thinking, and the public good. Not dissimilar to a theocracy, under our current testocracy a deity—in this case the exalted norm-referenced bubble exam—is officially recognized as the civil ruler of education, whose policy is governed by officials who regard test results as divine. The testocratic elite are committed to reducing the intellectual and emotional process of teaching and learning to a single number—a score they subsequently use to sacrifice education on the altar devoted to high-stakes testing by denying students promotion or graduation, firing teachers, converting schools into privatized charters, or closing schools altogether. You’ve heard of this program; the testocracy refers to it as “education reform.”


Among the most prominent members of the testocracy are some of the wealthiest people the world has ever known. Its tsars include billionaires Bill Gates, Eli Broad, and members of the Walton family (the owners of Walmart), who have used their wealth to circumvent democratic processes and impose test-and-punish policies in public education. They fund a myriad of organizations—such as Michelle Rhee’s StudentsFirst, Teach for America, and Stand for Children—that serve as shock troops to enforce the implementation of high-stakes testing and corporate education reform in states and cities across the nation. Secretary of Education Arne Duncan serves to help coordinate and funnel government money to the various initiatives of the testocracy. The plan to profit from public schools was expressed by billionaire media executive Rupert Murdoch, when he said in a November 2010 press release: “When it comes to K through 12 education, we see a $500 billion sector in the U.S. alone that is waiting desperately to be transformed by big breakthroughs that extend the reach of great teaching.”

Testing companies got the memo and are working diligently to define great teaching as preparing students for norm-referenced exams—available to districts across the country if the price is right. The textbook and testing industry generates between $20 billion and $30 billion per year. Pearson, a multinational corporation based in Britain, brings in more than $9 billion annually and is the world’s largest education company and book publisher. But it’s not the only big testing company poised to profit from the testocracy. Former president George W. Bush’s brother Neil and his parents founded a company called Ignite! Learning to sell test products after the passage of No Child Left Behind.

“An Invalid Measure”: The Fundamental Flaws of Standardized Testing

The swelling number of test-defiers is rooted in the proliferation of profoundly flawed standardized exams. Often, these tests don’t reflect the concepts emphasized in the students’ classes and, just as often, the results are not available until after the student has already left the teacher’s classroom, rendering the test score useless as a tool for informing instruction. Yet the problem of standardized bubble tests’ usefulness for educators extends well beyond the lag time (which can be addressed by computerized tests that immediately calculate results). A standardized bubble test does not help teachers understand how a student arrived at answer choice “C.” The student may have selected the right answer but not known why it was right, or conversely, may have chosen the wrong answer but had sophisticated reasoning that shows a deeper understanding of the concept than someone else who randomly guessed correctly. Beyond the lack of utility of standardized testing in facilitating learning, there is a more fundamental flaw. A norm-referenced, standardized test compares each individual student to everyone else taking the test, and the score is then usually reported as a percentile. Alfie Kohn describes the inherent treachery of the norm-referenced test:

No matter how many students take an NRT [norm-referenced test], no matter how well or poorly they were taught, no matter how difficult the questions are, the pattern of results is guaranteed to be the same: Exactly 10 percent of those who take the test will score in the top 10 percent. And half will always fall below the median. That’s not because our schools are failing; that’s because of what the word median means.

And as professor of education Wayne Au explained in 2011, when he was handed a bullhorn at the Occupy Education protest outside the headquarters of the Gates Foundation, “If all the students passed the test you advocate, that test would immediately be judged an invalid metric, and any measure of students which mandates the failure of students is an invalid measure.”


Unsurprisingly, the Gates Foundation was not swayed by the logic of Au’s argument. That is because standardized testing serves to reinforce the mythology of a meritocracy in which those on the top have achieved their position rightfully—because of their hard work, their dedication to hitting the books, and their superior intelligence as proven by their scores. But what researchers have long known is that what standardized tests measure above all else is a student’s access to resources. The most damning truth about standardized tests is that they are a better indicator of a student’s zip code than a student’s aptitude. Wealthier, and predominantly whiter, districts score better on tests. Their scores do not reflect the intelligence of wealthier, mostly white students when compared to those of lower-income students and students of color, but do reflect the advantages that wealthier children have—books in the home, parents with more time to read with them, private tutoring, access to test-prep agencies, high-quality health care, and access to good food, to name a few. This is why attaching high stakes to these exams only serves to exacerbate racial and class inequality. As Boston University economics professors Olesya Baker and Kevin Lang reveal in their 2013 study, “The School to Prison Pipeline Exposed,” increased use of high-stakes standardized high school exit exams is linked to higher incarceration rates. Arne Duncan’s refusal to address the concerns raised by this study exposes the bankruptcy of testocratic policy.

Hypocrisy of the Testocracy

At first glance it would be easy to conclude that the testocracy’s strategy for public schools is the result of profound ignorance. After all, members of the testocracy have never smelled a free or reduced-price lunch yet throw a tantrum when public school advocates suggest poverty is a substantial factor in educational outcomes. The testocracy has never had to puzzle over the conundrum of having more students than available chairs in the classroom, yet they are the very same people who claim class size doesn’t matter in educational outcomes. The bubble of luxury surrounding the testocracy has convinced many that most testocrats are too far removed from the realities facing the majority of US residents to ever understand the damage caused by the high-stakes bubble tests they peddle. While it is true that the corporate reform moguls are completely out of touch with the vast majority of people, their strategy for remaking our schools on a business model is not the result of ignorance but of arrogance, not of misunderstanding but of the profit motive, not of silliness but rather of a desire for supremacy.

In fact, you could argue that the MAP test boycott did not actually begin at Garfield High School. A keen observer might recognize that the boycott of the MAP test—and so many other standardized tests—began in earnest at schools like Seattle’s elite private Lakeside High School, alma mater of Bill Gates and the school where he sends his children. Of course, Lakeside, like one-percenter schools elsewhere, would never inundate its students with standardized tests. These academies, predominantly serving the children of the financially fortunate, shield students from standardized tests because they want their children to be allowed to think outside the bubble test, to develop critical thinking skills and prioritize time to explore art, music, drama, athletics, and debate. Gates values Lakeside because of its lovely campus, where the average class size is sixteen, the library contains some twenty thousand volumes, and the new sports facility offers cryotherapy and hydrotherapy spas. Moreover, while Gates, President Obama, and Secretary of Education Duncan are all parents of school-age children, none of those children attend schools that use the Common Core State Standards (CCSS) or take Common Core exams. As Dao X. Tran, then PTA co-chair at Castle Bridge Elementary School, put it (in chapter 20 of More Than a Score): “These officials don’t even send their children to public schools. They are failing our children, yet they push for our children’s teachers to be accountable based on children’s test data. All while they opt for their own children to go to schools that don’t take these tests, that have small class sizes and project-based, hands-on, arts-infused learning—that’s what we want for our children!” The superrich are not failing to understand the basics of how to provide a nurturing education for the whole child. The problem is that they believe this type of education should be reserved only for their own children.

A Brief History of Test-defying

The United States has a long history of using standardized testing for the purposes of ranking and sorting youth into different strata of society. In fact, standardized tests originally entered the public schools with the eugenics movement, a white-supremacist ideology cloaked in the shabby garments of fraudulent science that became fashionable in the late nineteenth and early twentieth centuries. As Rethinking Schools editorialized,

The United States has a long history of using intelligence tests to support white supremacy and class stratification. Standardized tests first entered the public schools in the 1920s, pushed by eugenicists whose pseudoscience promoted the “natural superiority” of wealthy, white, U.S.-born males. High-stakes standardized tests have disguised class and race privilege as merit ever since. The consistent use of test scores to demonstrate first a “mental ability” gap and now an “achievement” gap exposes the intrinsic nature of these tests: They are built to maintain inequality, not to serve as an antidote to educational disparities.

When the first “common schools” began in the late 1800s, industrialists quickly recognized an opportunity to shape the schools in the image of their factories. These early “education reformers” recognized the value of using standardized tests—first developed in the form of IQ tests used to sort military recruits for World War I—to evaluate the efficiency of the teacher workforce in producing the “student-product.” Proud eugenicist and Princeton University professor Carl Brigham left his school during World War I to implement IQ testing as an army psychologist. Upon returning to Princeton, Brigham developed the SAT exam as the admissions gatekeeper to Princeton, and the test confirmed in his mind that whites born in the United States were the most intelligent of all peoples. As Alan Stoskopf wrote, “By the early 1920s, more than 2 million American school children were being tested primarily for academic tracking purposes. At least some of the decisions to allocate resources and select students for academic or vocational courses were influenced by eugenic notions of student worth.”


Resistance to these exams surely began the first time a student bubbled in every “A” on the page in defiance of the entire testing process. Yet, beyond these individual forms of protest, an active minority of educators, journalists, labor groups, and parents resisted these early notions of using testing to rank intelligence. Some of the most important early voices in opposition to intelligence testing—especially in service of ranking the races—came from leading African American scholars such as W. E. B. Du Bois, Horace Mann Bond, and Howard Long. Du Bois recalled in 1940, “It was not until I was long out of school and indeed after the [First] World War that there came the hurried use of the new technique of psychological tests, which were quickly adjusted so as to put black folk absolutely beyond the possibility of civilization.”

In a statement that is quite apparently lost on today’s testocracy, Horace Mann Bond, in his work “Intelligence Tests and Propaganda,” wrote:

But so long as any group of men attempts to use these tests as funds of information for the approximation of crude and inaccurate generalizations, so long must we continue to cry, “Hold!” To compare the crowded millions of New York’s East Side with the children of Morningside Heights [an upper-class neighborhood at the time] indeed involves a great contradiction; and to claim that the results of the tests given to such diverse groups, drawn from such varying strata of the social complex, are in any wise accurate, is to expose a fatuous sense of unfairness and lack of appreciation of the great environmental factors of modern urban life.

This history of test-defiers was largely buried until the mass uprisings of the civil rights and Black Power movements of the 1950s, ’60s, and ’70s transformed public education. In the course of these broad mass movements, parents, students, teachers, and activists fought to integrate the schools, budget for equitable funding, institute ethnic studies programs, and even to redefine the purpose of school.

In the Jim Crow–segregated South, literacy was inherently political and employed as a barrier to prevent African Americans from exercising their right to vote. The great activist and educator Myles Horton was a founder of the Highlander Folk School in Tennessee that would go on to help organize the Citizenship Schools of the mid-1950s and 1960s. The Citizenship Schools’ mission was to create literacy programs to help disenfranchised Southern blacks achieve access to the voting booth. Hundreds of thousands of African Americans attended the Citizenship Schools, which launched one of the most important educational programs of the civil rights movement, redefining the purpose of education and the assessment of educational outcomes. Horton described one of the Citizenship Schools he helped to organize, saying, “It was not a literacy class. It was a community organization. . . . They were talking about using their citizenship to do something, and they named it a Citizenship School, not a literacy school. That helped with the motivation.” By the end of the class more than 80 percent of those students passed the final examination, which was to go down to the courthouse and register to vote!

What the Testocracy Wants

The great civil rights movements of the past have reimagined education as a means to creating a more just society. The testocracy, too, has a vision for reimagining the education system, and it is flat-out chilling. The testocracy is relentlessly working on new methods to reduce students to data points that can be used to rank, punish, and manipulate. Like something out of a dystopian sci-fi film, the Bill and Melinda Gates Foundation spent $1.4 million to develop biometric bracelets designed to send a small current across the skin to measure changes in electrical charges as the sympathetic nervous system responds to stimuli. These “Q Sensors” would then be used to monitor a student’s “excitement, stress, fear, engagement, boredom and relaxation through the skin.” Presumably, then, value-added measurement (VAM) assessments could be extended to evaluate teachers based on this biometric data. As Diane Ravitch explained to Reuters when the story broke in the spring of 2012, “They should devote more time to improving the substance of what is being taught . . . and give up all this measurement mania.”

But the testocracy remains relentless in its quest to give up on teaching and devote itself to data collection. In a 2011 TIME magazine feature on the future of education, readers are asked to “imagine walking into a classroom and seeing no one in the front of the classroom. Instead you’re led to a computer terminal at a desk and told this will be your teacher for the course. The only adults around are a facilitator to make sure that you stay on task and to fix any tech problems that may arise.” TIME goes on to point out, “For some Florida students, computer-led instruction is a reality. Within the Miami-Dade County Public School district alone, 7,000 students are receiving this form of education, including six middle and K–8 schools, according to the New York Times.” This approach to schooling is known as “e-learning labs,” and from the perspective of the testocracy, if education is about getting a high score, then one hardly needs nurturing, mentorship, or human contact to succeed. Computers can be used to add value—the value of rote memorization, discipline, and basic literacy skills—to otherwise relatively worthless students. Here, then, is a primary objective of an education system run by the testocracy: replace the compassionate hand of the educator with the cold, invisible, all-thumbs hand of the free market.


What will post-democracy look like? (The Sociological Imagination)

January 19, 2015

As anyone who reads my blog regularly might have noticed, I’m a fan of Colin Crouch’s notion of post-democracy. I’ve interviewed him about it a couple of times: once in 2010 and again in 2013. Whereas he’d initially offered the notion to illuminate a potential trajectory, in the sense that we risk becoming post-democratic, more recently we see a social order that might be said to have become post-democratic. He intends the term to function analogously to post-industrial: it is not that democracy is gone but that it has been hollowed out:

The term was indeed a direct analogy with ‘post-industrial’. A post-industrial society is not a non-industrial one. It continues to make and to use the products of industry, but the energy and innovative drive of the system have gone elsewhere. The same applies in a more complex way to post-modern, which is not the same as anti-modern or of course pre-modern. It implies a culture that uses the achievements of modernism but departs from them in its search for new possibilities. A post-democratic society therefore is one that continues to have and to use all the institutions of democracy, but in which they increasingly become a formal shell. The energy and innovative drive pass away from the democratic arena and into small circles of a politico-economic elite. I did not say that we were now living in a post-democratic society, but that we were moving towards such a condition.

Crouch is far from the only theorist to have made such a claim. But I think there’s a precision to his argument which distinguishes it from the manner in which someone like, say, Bauman talks about depoliticisation. My current, slightly morbid, interest in representations of civilisational collapse has left me wondering what entrenched post-democracy would look like. This question does not ask about an absence of democracy, for which endless examples are possible, but rather for a more detailed sketch of what a social order which was once democratic but is now post-democratic would look like. While everyday life might look something like that which can be seen in Singapore, ‘the city of rules’ as this Guardian article puts it, I think there’s more to be said than this. However, we can see in Singapore a vivid account of how micro-regulation can be deployed to facilitate a city in which ‘nothing goes wrong, but nothing really happens’, as one ex-pat memorably phrases it in that article. Is it so hard to imagine efficiency and orderliness being used to secure consent, at least amongst some, for a similar level of social control in western Europe or America?

Perhaps we’d also see the exceptional justice that intruded into UK life after the 2011 riots, with courts being kept open 24/7 in order to better facilitate the restoration of social order. There’s something akin to this in mega sporting events: opaque centralised planning overwhelms democratic consultation, ‘world cup courts’ dish out ad hoc justice, the social structure contorts itself for the pleasure of an international oligopoly upon whom proceedings depend, specialised security arrangements are intensively deployed in the interests of the event’s success and we often see a form of social cleansing (destruction of whole neighbourhoods) presented as a technocratic exercise in event management. We also see pre-arrests and predictive policing deployed to these ends and only a fool would not expect to see more of this as the technological apparatus and the political pressures encouraging them grow over time.

These security arrangements point to another aspect of a post-democratic social order: the economic vibrancy of the security sector. There is a technological dimension to this, with long-term growth fuelled by the ‘war on terror’ coupled with an increasing move towards ‘disruptive policing’ that offers technical solutions at a time of fiscal retrenchment, but we shouldn’t forget the more mundane side of the security industry and its interests in the privatisation of policing. This is how Securitas, one of the world’s largest security companies, describes the prospects of the security industry. Note the title of the page: taking advantage of changes.

The global security services market employs several million people and is projected to reach USD 110 billion by 2016. Security services are in demand all over the world, in all industries and in both the public and private sectors. Demand for our services is closely linked to global economic development and social and demographic trends. As the global economy grows and develops, so do we.

Historically, the security market has grown 1–2 percent faster than GDP in mature markets. In recent years, due to current market dynamics and the gradual incorporation of technology into security solutions, security markets in Europe and North America have grown at the same pace as GDP. This trend is likely to continue over the next three to five years.

Market growth is crucial to Securitas’ future profitability and growth, but capitalizing on trends and changes in demand is also important. Developing new security solutions with a higher technology content and improved cost efficiency will allow the private security industry to expand the market by assuming responsibility for work presently performed by the police or other authorities. This development will also be a challenge for operations with insourced security services and increase interest in better outsourced solutions.

Consider this against a background of terrorism, as the spectacular narrative of the ‘war on terror’ comes to be replaced by the prospect of a state of alert without end. We’ve not seen the end of the ‘war on terror’; we’ve seen a spectacular narrative become a taken-for-granted part of everyday life. It doesn’t need to be narrativised any more because it’s here to stay. Against this backdrop, we’re likely to see an authoritarian slide in political culture, supplementing the institutional arrangements already in place, in which ‘responsibility’ becomes the key virtue in the exercise of freedoms – as I heard someone say on the radio yesterday, “it’s irresponsible to say democracy is the only thing that matters when we face a threat like this” (or words to that effect).

Crucially, I don’t think this process is inexorable, and it’s certainly not the unfolding of an historical logic. It’s enacted by people at every level – including those who reinforce the slide at the micro level of everyday social interaction. The intractability of the problem comes from the fact that the process itself involves a hollowing out of contestation at the highest level, such that the corporate agents pursuing this changing social order also benefit from it, as potential sources of resistance become increasingly absent, or at least passive, at the macro level. This is how Wolfgang Streeck describes this institutional project, as inflected through management of the financial crisis:

The utopian ideal of present day crisis management is to complete, with political means, the already far-advanced depoliticization of the economy; anchored in recognised nation-states under the control of international governmental and financial diplomacy insulated from democratic participation, with a population that would have learned, over years of hegemonic re-education, to regard the distributional outcomes of free markets as fair, or at least as without alternative.

— Buying Time, p. 46

Nudge: The gentle science of good governance (New Scientist)

25 June 2013

Magazine issue 2922

NOT long before David Cameron became UK prime minister, he famously prescribed some holiday reading for his colleagues: a book modestly entitled Nudge.

Cameron wasn’t the only world leader to find it compelling. US president Barack Obama soon appointed one of its authors, Cass Sunstein, a social scientist at the University of Chicago, to a powerful position in the White House. And thus the nudge bandwagon began rolling. It has been picking up speed ever since (see “Nudge power: Big government’s little pushes”).

So what’s the big idea? We don’t always do what’s best for ourselves, thanks to cognitive biases and errors that make us deviate from rational self-interest. The premise of Nudge is that subtly offsetting or exploiting these biases can help people to make better choices.

If you live in the US or UK, you’re likely to have been nudged towards a certain decision at some point. You probably didn’t notice. That’s deliberate: nudging is widely assumed to work best when people aren’t aware of it. But that stealth breeds suspicion: people recoil from the idea that they are being stealthily manipulated.

There are other grounds for suspicion. It sounds glib: a neat term for a slippery concept. You could argue that it is a way for governments to avoid taking decisive action. Or you might be concerned that it lets them push us towards a convenient choice, regardless of what we really want.

These don’t really hold up. Our distaste for being nudged is understandable, but is arguably just another cognitive bias, given that our behaviour is constantly being discreetly influenced by others. What’s more, interventions only qualify as nudges if they don’t create concrete incentives in any particular direction. So the choice ultimately remains a free one.

Nudging is a less blunt instrument than regulation or tax. It should supplement rather than supplant these, and nudgers must be held accountable. But broadly speaking, anyone who believes in evidence-based policy should try to overcome their distaste and welcome governance based on behavioural insights and controlled trials, rather than carrot-and-stick wishful thinking. Perhaps we just need a nudge in the right direction.

Decolonizing thought (Ciência Hoje)

In an interview with CH, the Brazilian anthropologist Cláudio Pinheiro analyzes the cultural domination that Europe and the United States exert over less developed countries, such as Brazil, and points to changes that could lead to a multipolar production of ideas and knowledge.

By Henrique Kugler, Ciência Hoje / RJ

Published March 20, 2014 | Updated March 20, 2014


‘Table Bay’, a painting by Samuel Scott dated 1730. In the wake of colonization, less developed countries, Brazil among them, import cultural standards and political and intellectual structures from Europe and the United States.

Let us be honest: we Brazilians have become passive practitioners of a kind of postcolonial mimicry. We imitate European and US patterns in almost everything, from seemingly trivial details, such as the clothes we wear or the music we listen to, to political and intellectual structures reproduced from Northern templates. And academia is no exception. The authors we read, after all, are almost always the classics of the Old World.

In the winds of the twenty-first century, however, the geopolitical peripheries are calling for a multipolar world, and this movement increasingly shapes the new global reality. Still, the international scene remains cleaved by dated dichotomies that reinforce the segregation of the world into two symbolic hemispheres.

On this intriguing topic, Ciência Hoje spoke with the historian and anthropologist Cláudio Pinheiro, director of Sephis, a Dutch agency devoted to training intellectuals from countries of the global South, now based at the Fórum de Ciência e Cultura of the Universidade Federal do Rio de Janeiro. Pinheiro denounces the late colonialism from which we are only beginning to free ourselves. And, with conversation as pertinent as it is sophisticated, he places his bets on the southern countries as promising spaces of political, cultural, and intellectual enunciation.

Is it fair to say that in Brazil, as in many developing countries, we are still intellectually colonized?

This intellectual and academic colonization we experience is not a new conversation. Its systematic denunciation dates back to the 1960s. But the idea is now being developed with far more substance and continuity. Two years ago, one of the leading intellectuals debating the idea of the South came to Brazil: the Australian anthropologist Raewyn Connell. Do you know what she said? “At the academic event I attended here, the book stalls were selling the same things I would find at an academic event in Australia: Pierre Bourdieu, Jürgen Habermas, in short, the classic European authors. But what I would really like to read are classic Brazilian authors! And African and Indian authors too…”

If this debate is already four decades old, why does the colonization persist?

Agendas of thought are deeply anchored in sets of theories, themes, categories of analysis, and funding agendas for scientific production that refer to one particular historical experience: that of the North Atlantic, both European and North American. It is on these experiences that we in the periphery end up basing our sociological, anthropological, political, and historiographical discourse.

One of the great authors to denounce this, in the 1990s, was the Indian historian Dipesh Chakrabarty, of the University of Chicago. In 2000 he published a book called Provincializing Europe [Princeton University Press, not translated into Portuguese]. The basic argument is in the title: Europe is a parish. But that parish made itself worldwide through a long historical process bound up with colonialism, and we came to believe that it held some kind of great truth.

We know more about the fall of the Bastille than about the great African revolutions

Think of a high school student. What does he study in history? European history. African studies only entered our curriculum recently, in 2003, by a government measure. Fine: the student then knows about Europe and Africa. What is missing? Everything. We know more details about the fall of the Bastille than about the great African revolutions, which pass entirely outside our knowledge. How can one study world history without studying the history of Africa? How can one understand the impact the African diaspora had on the Americas and on Africa itself, and how it undermined, for generations and centuries, Africa’s capacity to rebuild its economy? Even our way of dating time is marked by the European experience. We understand the world in terms of ancient, medieval, modern and contemporary history, and it is on that train that we locate ourselves: Brazil comes into existence in the world with modern history – during the European expansion.

With the emergence of new geopolitical forces, such as the BRICS (Brazil, Russia, India, China and South Africa), can these “categories of analysis” be remodeled?

However much countries like the BRICS grow in importance on the international political scene, they still do not own the framework that defines how knowledge itself is known: the way of dating time, the way of classifying societies, the categories for understanding the world. An example: if we say “family,” a high school student thinks of father, mother, grandparents, uncles and aunts, children, grandchildren. In many societies it is like that. In many others it is not. For Brazil’s native peoples, or for Asian societies, the notion of family encompasses broader relations, which may even include animals.

The Western concept, based on the European experience, cannot account for the whole of reality

The Western concept, based on the European experience, cannot account for the whole of reality. The trouble is that the other models are rendered invisible by the ones that make us understand the world in a rigid way. This holds not only for the idea of family but also for those of the state, politics and democracy. For some authors, it is not money that makes a society count as “peripheral,” but its lack of mastery over the categories that organize thought, politics and society.

This subaltern imitation is very noticeable in academia…

Almost every undergraduate in Brazil (from nursing to agronomy, by way of engineering) studies social science as a required course. In many cases this means reading the “classics”: Karl Marx [1818-1883], Max Weber [1864-1920], Émile Durkheim [1858-1917]. They are fascinating, no doubt. But it looks like a church with its principal saints. Where are the saints of the periphery? Which authors thought about the societies that are peripheral today? Bringing other classics into teaching and debate is a contemporary challenge. Much is lost because the structures for knowing the “other” are marked by the experience of one province, one particular parish: Europe. The vocabulary of analytical categories must be universalized so that the world becomes more polyphonic.

You have read only the beginning of the interview published in CH 312. Click the icon below to download the full version in PDF.

Noam Chomsky is right: It’s the so-called serious who devastate the planet and cause the wars (Salon)

MONDAY, JAN 27, 2014 11:52 AM -0200

Fear the sober voices on the New York Times Op-Ed page and in the think tanks — they’re more dangerous than hawks


Noam Chomsky (Credit: AP/Hatem Moussa)

A captain ready to drive himself and all around him to ruin in the hunt for a white whale. It’s a well-known story, and over the years, mad Ahab in Herman Melville’s most famous novel, Moby-Dick, has been used as an exemplar of unhinged American power, most recently of George W. Bush’s disastrous invasion of Iraq.

But what’s really frightening isn’t our Ahabs, the hawks who periodically want to bomb some poor country, be it Vietnam or Afghanistan, back to the Stone Age.  The respectable types are the true “terror of our age,” as Noam Chomsky called them collectively nearly 50 years ago.  The really scary characters are our soberest politicians, scholars, journalists, professionals, and managers, men and women (though mostly men) who imagine themselves as morally serious, and then enable the wars, devastate the planet, and rationalize the atrocities.  They are a type that has been with us for a long time.  More than a century and a half ago, Melville, who had a captain for every face of empire, found their perfect expression — for his moment and ours.

For the last six years, I’ve been researching the life of an American seal killer, a ship captain named Amasa Delano who, in the 1790s, was among the earliest New Englanders to sail into the South Pacific.  Money was flush, seals were many, and Delano and his fellow ship captains established the first unofficial U.S. colonies on islands off the coast of Chile.  They operated under an informal council of captains, divvied up territory, enforced debt contracts, celebrated the Fourth of July, and set up ad hoc courts of law.  When no Bible was available, the collected works of William Shakespeare, found in the libraries of most ships, were used to swear oaths.

From his first expedition, Delano took hundreds of thousands of sealskins to China, where he traded them for spices, ceramics, and tea to bring back to Boston.  During a second, failed voyage, however, an event took place that would make Amasa notorious — at least among the readers of the fiction of Herman Melville.

Here’s what happened: One day in February 1805 in the South Pacific, Amasa Delano spent nearly a full day on board a battered Spanish slave ship, conversing with its captain, helping with repairs, and distributing food and water to its thirsty and starving voyagers, a handful of Spaniards and about 70 West African men and women he thought were slaves. They weren’t.

Those West Africans had rebelled weeks earlier, killing most of the Spanish crew, along with the slaver taking them to Peru to be sold, and demanded to be returned to Senegal.  When they spotted Delano’s ship, they came up with a plan: let him board and act as if they were still slaves, buying time to seize the sealer’s vessel and supplies.  Remarkably, for nine hours, Delano, an experienced mariner and distant relative of future president Franklin Delano Roosevelt, was convinced that he was on a distressed but otherwise normally functioning slave ship.

Having barely survived the encounter, he wrote about the experience in his memoir, which Melville read and turned into what many consider his “other” masterpiece.  Published in 1855, on the eve of the Civil War, Benito Cereno is one of the darkest stories in American literature.  It’s told from the perspective of Amasa Delano as he wanders lost through a shadow world of his own racial prejudices.

One of the things that attracted Melville to the historical Amasa was undoubtedly the juxtaposition between his cheerful self-regard — he considers himself a modern man, a liberal opposed to slavery — and his complete obliviousness to the social world around him.  The real Amasa was well meaning, judicious, temperate, and modest.

In other words, he was no Ahab, whose vengeful pursuit of a metaphysical whale has been used as an allegory for every American excess, every catastrophic war, every disastrous environmental policy, from Vietnam and Iraq to the explosion of the BP oil rig in the Gulf of Mexico in 2010.

Ahab, whose peg-legged pacing of the quarterdeck of his doomed ship enters the dreams of his men sleeping below like the “crunching teeth of sharks.”  Ahab, whose monomania is an extension of the individualism born out of American expansion and whose rage is that of an ego that refuses to be limited by nature’s frontier.  “Our Ahab,” as a soldier in Oliver Stone’s movie Platoon calls a ruthless sergeant who senselessly murders innocent Vietnamese.

Ahab is certainly one face of American power. In the course of writing a book on the history that inspired Benito Cereno, I’ve come to think of it as not the most frightening — or even the most destructive of American faces.  Consider Amasa.

Killing Seals

Since the end of the Cold War, extractive capitalism has spread over our post-industrialized world with a predatory force that would shock even Karl Marx.  From the mineral-rich Congo to the open-pit gold mines of Guatemala, from Chile’s until recently pristine Patagonia to the fracking fields of Pennsylvania and the melting Arctic north, there is no crevice where some useful rock, liquid, or gas can hide, no jungle forbidden enough to keep out the oil rigs and elephant killers, no citadel-like glacier, no hard-baked shale that can’t be cracked open, no ocean that can’t be poisoned.

And Amasa was there at the beginning.  Seal fur may not have been the world’s first valuable natural resource, but sealing represented one of young America’s first experiences of boom-and-bust resource extraction beyond its borders.

With increasing frequency starting in the early 1790s and then in a mad rush beginning in 1798, ships left New Haven, Norwich, Stonington, New London, and Boston, heading for the great half-moon archipelago of remote islands running from Argentina in the Atlantic to Chile in the Pacific.  They were on the hunt for the fur seal, which wears a layer of velvety down like an undergarment just below an outer coat of stiff gray-black hair.

In Moby-Dick, Melville portrayed whaling as the American industry.  Brutal and bloody but also humanizing, work on a whale ship required intense coordination and camaraderie.  Out of the gruesomeness of the hunt, the peeling of the whale’s skin from its carcass, and the hellish boil of the blubber or fat, something sublime emerged: human solidarity among the workers.  And like the whale oil that lit the lamps of the world, divinity itself glowed from the labor: “Thou shalt see it shining in the arm that wields a pick or drives a spike; that democratic dignity which, on all hands, radiates without end from God.”

Sealing was something else entirely.  It called to mind not industrial democracy but the isolation and violence of conquest, settler colonialism, and warfare.  Whaling took place in a watery commons open to all.  Sealing took place on land.  Sealers seized territory, fought one another to keep it, and pulled out what wealth they could as fast as they could before abandoning their empty and wasted island claims.  The process pitted desperate sailors against equally desperate officers in as all-or-nothing a system of labor relations as can be imagined.

In other words, whaling may have represented the promethean power of proto-industrialism, with all the good (solidarity, interconnectedness, and democracy) and bad (the exploitation of men and nature) that went with it, but sealing better predicted today’s postindustrial extracted, hunted, drilled, fracked, hot, and strip-mined world.

Seals were killed by the millions and with a shocking casualness.  A group of sealers would get between the water and the rookeries and simply start clubbing.  A single seal makes a noise like a cow or a dog, but tens of thousands of them together, so witnesses testified, sound like a Pacific cyclone.  Once we “began the work of death,” one sealer remembered, “the battle caused me considerable terror.”

South Pacific beaches came to look like Dante’s Inferno.  As the clubbing proceeded, mountains of skinned, reeking carcasses piled up and the sands ran red with torrents of blood.  The killing was unceasing, continuing into the night by the light of bonfires kindled with the corpses of seals and penguins.

And keep in mind that this massive kill-off took place not for something like whale oil, used by all for light and fire.  Seal fur was harvested to warm the wealthy and meet a demand created by a new phase of capitalism: conspicuous consumption.  Pelts were used for ladies’ capes, coats, muffs, and mittens, and gentlemen’s waistcoats.  The fur of baby pups wasn’t much valued, so some beaches were simply turned into seal orphanages, with thousands of newborns left to starve to death.  In a pinch though, their downy fur, too, could be used — to make wallets.

Occasionally, elephant seals would be taken for their oil in an even more horrific manner: when they opened their mouths to bellow, their hunters would toss rocks in and then begin to stab them with long lances.  Pierced in multiple places like Saint Sebastian, the animals’ high-pressured circulatory system gushed “fountains of blood, spouting to a considerable distance.”

At first the frenetic pace of the killing didn’t matter: there were so many seals.  On one island alone, Amasa Delano estimated, there were “two to three millions of them” when New Englanders first arrived to make “a business of killing seals.”

“If many of them were killed in a night,” wrote one observer, “they would not be missed in the morning.”  It did indeed seem as if you could kill every one in sight one day, then start afresh the next.  Within just a few years, though, Amasa and his fellow sealers had taken so many seal skins to China that Canton’s warehouses couldn’t hold them.  They began to pile up on the docks, rotting in the rain, and their market price crashed.

To make up the margin, sealers further accelerated the pace of the killing — until there was nothing left to kill.  In this way, oversupply and extinction went hand in hand.  In the process, cooperation among sealers gave way to bloody battles over thinning rookeries.  Previously, it only took a few weeks and a handful of men to fill a ship’s hold with skins.  As those rookeries began to disappear, however, more and more men were needed to find and kill the required number of seals and they were often left on desolate islands for two- or three-year stretches, living alone in miserable huts in dreary weather, wondering if their ships were ever going to return for them.

“On island after island, coast after coast,” one historian wrote, “the seals had been destroyed to the last available pup, on the supposition that if sealer Tom did not kill every seal in sight, sealer Dick or sealer Harry would not be so squeamish.”  By 1804, on the very island where Amasa estimated that there had been millions of seals, there were more sailors than prey.  Two years later, there were no seals at all.

The Machinery of Civilization

There exists a near perfect inverse symmetry between the real Amasa and the fictional Ahab, with each representing a face of the American Empire.  Amasa is virtuous, Ahab vengeful.  Amasa seems trapped by the shallowness of his perception of the world.  Ahab is profound; he peers into the depths.  Amasa can’t see evil (especially his own). Ahab sees only nature’s “intangible malignity.”

Both are representatives of the most predatory industries of their day, their ships carrying what Delano once called the “machinery of civilization” to the Pacific, using steel, iron, and fire to kill animals and transform their corpses into value on the spot.

Yet Ahab is the exception, a rebel who hunts his white whale against all rational economic logic.  He has hijacked the “machinery” that his ship represents and rioted against “civilization.”  He pursues his quixotic chase in violation of the contract he has with his employers.  When his first mate, Starbuck, insists that his obsession will hurt the profits of the ship’s owners, Ahab dismisses the concern: “Let the owners stand on Nantucket beach and outyell the Typhoons. What cares Ahab?  Owners, Owners?  Thou art always prating to me, Starbuck, about those miserly owners, as if the owners were my conscience.”

Insurgents like Ahab, however dangerous to the people around them, are not the primary drivers of destruction.  They are not the ones who will hunt animals to near extinction — or who are today forcing the world to the brink.  Those would be the men who never dissent, who either at the frontlines of extraction or in the corporate backrooms administer the destruction of the planet, day in, day out, inexorably, unsensationally without notice, their actions controlled by an ever greater series of financial abstractions and calculations made in the stock exchanges of New York, London, and Shanghai.

If Ahab is still the exception, Delano is still the rule.  Throughout his long memoir, he reveals himself as ever faithful to the customs and institutions of maritime law, unwilling to take any action that would injure the interests of his investors and insurers.  “All bad consequences,” he wrote, describing the importance of protecting property rights, “may be avoided by one who has a knowledge of his duty, and is disposed faithfully to obey its dictates.”

It is in Delano’s reaction to the West African rebels, once he finally realizes he has been the target of an elaborately staged con, that the distinction separating the sealer from the whaler becomes clear.  The mesmeric Ahab — the “thunder-cloven old oak” — has been taken as a prototype of the twentieth-century totalitarian, a one-legged Hitler or Stalin who uses an emotional magnetism to convince his men to willingly follow him on his doomed hunt for Moby Dick.

Delano is not a demagogue.  His authority is rooted in a much more common form of power: the control of labor and the conversion of diminishing natural resources into marketable items.  As seals disappeared, however, so too did his authority.  His men first began to grouse and then conspire.  In turn, Delano had to rely ever more on physical punishment, on floggings even for the most minor of offences, to maintain control of his ship — until, that is, he came across the Spanish slaver.  Delano might have been personally opposed to slavery, yet once he realized he had been played for a fool, he organized his men to retake the slave ship and violently pacify the rebels.  In the process, they disemboweled some of the rebels and left them writhing in their viscera, using their sealing lances, which Delano described as “exceedingly sharp and as bright as a gentleman’s sword.”

Caught in the pincers of supply and demand, trapped in the vortex of ecological exhaustion, with no seals left to kill, no money to be made, and his own crew on the brink of mutiny, Delano rallied his men to the chase — not of a white whale but of black rebels.  In the process, he reestablished his fraying authority.  As for the surviving rebels, Delano re-enslaved them.  Propriety, of course, meant returning them and the ship to its owners.

Our Amasas, Ourselves

With Ahab, Melville looked to the past, basing his obsessed captain on Lucifer, the fallen angel in revolt against the heavens, and associating him with America’s “manifest destiny,” with the nation’s restless drive beyond its borders.  With Amasa, Melville glimpsed the future.  Drawing on the memoirs of a real captain, he created a new literary archetype, a moral man sure of his righteousness yet unable to link cause to effect, oblivious to the consequences of his actions even as he careens toward catastrophe.

They are still with us, our Amasas.  They have knowledge of their duty and are disposed faithfully to follow its dictates, even unto the ends of the Earth.

TomDispatch regular Greg Grandin’s new book, The Empire of Necessity:  Slavery, Freedom, and Deception in the New World, has just been published. 


Greg Grandin is a professor of history at New York University and the author, most recently, of “Fordlandia: The Rise and Fall of Henry Ford’s Forgotten Jungle City”. Check out a TomDispatch audio interview with Grandin about Henry Ford’s strange adventure in the Amazon by clicking here.

Government regulates use of the Armed Forces against social demonstrations (Vox Política)

The directive has been in force since December 20, 2013. Defense Minister Celso Amorim approved the document.

Thursday, January 23, 2014 – 2:30


Late last year, Defense Minister Celso Amorim approved a directive (Portaria) regulating the use of the Armed Forces (Army, Navy and Air Force) in social demonstrations, protests and other situations that may compromise “public order.”

The rule, contained in the “Guarantee of Law and Order” manual validated along with the directive, has been in force since December 20, the date of its publication in the Diário Oficial da União. Right in its second chapter, the document stresses that, despite its regard for the concept of non-war, operations may involve “the use of force in a limited way.”

The Armed Forces would be authorized to take part in such operations “in situations where the instruments provided for the purpose are exhausted,” that is, “when, at a given moment, they are formally recognized by the respective head of the federal or state executive branch as unavailable, nonexistent or insufficient for the regular performance of their constitutional mission.”

Among the main threats listed by the Ministry of Defense, two stand out for referring to the World Cup and the 2013 demonstrations: combating the blockade of public thoroughfares and countering sabotage at the sites of major events. To that end, soldiers are authorized to control even the flow of citizens.

The annex on “Controlling Disturbances in Urban Environments” is the one that most bluntly addresses opposition to popular protest groups.

Under “Scenario,” as shown in the image highlighted at the beginning of this report, the state alert envisions the “action of members of social movements – whether grievance-driven, oppositional or protest-oriented – compromising public order,” reserving to the state and federal governments the right to draw the limits. In the appendix on psychological operations, social movements receive an even worse classification: opposing forces.

How the Inquisition operated in Brazil (Fapesp)

A thesis supported by FAPESP and awarded a prize by Capes unravels the mechanisms that allowed the Tribunal of the Holy Office to establish a vast network of agents in the territory (Fundação Biblioteca Nacional, RJ)


By José Tadeu Arantes

Agência FAPESP – How did the Tribunal of the Inquisition, headquartered in Lisbon, manage to make itself present even in the far reaches of colonial Brazil, collecting denunciations, arresting people and taking them to be tried in Portugal? With which institutions did the Inquisition interact? Which social sectors cooperated with it? These were the questions that inspired the thesis “Ecclesiastical power and Inquisition in the Luso-Brazilian eighteenth century: agents, careers and mechanisms of social promotion,” presented by Aldair Carlos Rodrigues to the History Department of the Faculdade de Filosofia, Letras e Ciências Humanas of the Universidade de São Paulo for his doctorate.

The study – supervised by Laura de Mello e Souza as part of the Thematic Project “Dimensions of the Portuguese Empire” – received the 2013 Capes Prize (History area) and the Darcy Ribeiro Capes Grand Prize for Theses (covering the broad areas of Humanities; Applied Social Sciences; Linguistics, Letters and Arts; and Multidisciplinary Teaching and Interdisciplinary Studies), awarded by the Coordination for the Improvement of Higher Education Personnel (Capes) of the Ministry of Education (MEC).

“The main conclusion was that the Inquisition managed to operate in Brazil because it had effective mechanisms and offered posts that attracted the elites of colonial society. The tribunal had no seat here, but its activity branched out through a network of agents created in the colony,” Rodrigues told Agência FAPESP.

According to the researcher, what drew people into this network were the social distinction and the privileges its members came to enjoy. “It was not easy to join the Inquisition’s ranks, because the institution had several highly exclusionary requirements. So whoever made it through the filters acquired a distinguished social status,” he said.

For members of the elites of the time, both in the Iberian Peninsula and in the American colonies, it was very important to prove “purity of blood” – that is, to prove that one did not belong to the “races” deemed “infected” (Jews, Muslims, blacks, indigenous people).

And the Inquisition was regarded as the institution most rigorous in verifying “purity of blood.” Joining its ranks was tantamount to presenting the whole of society with a certificate of “pure blood.” “That made belonging to the Inquisition very attractive. Through the ‘statute of purity of blood,’ the Inquisition played an extremely important role in the formation and structuring of Brazil’s social elite during the eighteenth century,” Rodrigues said.

He considers this the great novelty of his study. “Most of the research done on the Inquisition has focused on its victims. My research set out to investigate the other side: the Inquisition’s social embeddedness, its role in structuring Brazilian society and constituting its hierarchies. The two approaches are complementary. Studying the Inquisition from the angle of social insertion allowed me to understand how the institution could last so long. The Portuguese Inquisition was established in 1536 and was only abolished in 1821, in the context of the so-called Liberal Revolution of Porto,” he said.

The beginnings of the Inquisition’s activity in the Iberian Peninsula are better understood when one considers that the states taking shape in that period were founded on unity of faith. Constituted amid the struggle of Christians against Muslims, they identified deeply with the Catholic faith. The survival of other religious confessions in the same territory called that unity of faith – and, by extension, political unity – into question.

“The creation of institutions charged with the violent repression of religious dissent, as the tribunals of the Spanish and Portuguese Inquisitions were, belongs to that context,” Rodrigues explained.

The survival of those same tribunals and their apparatus into times as late as the first half of the nineteenth century – and not only in the Iberian metropoles but also in the American colonies – demands a different kind of explanation. Here, the social attraction derived from the application of the “statutes of purity of blood,” together with a series of privileges such as tax exemption, helps explain how the institution could structure the vast network of agents that perpetuated its activity.

“The seal of ‘purity of blood’ that belonging to the Inquisition’s ranks conferred exerted an enormous attraction on the colonial elites – both those already consolidated and the emerging ones. To give an idea: according to my survey for the eighteenth century, there were roughly 200 commissaries of the Inquisition in the ecclesiastical sector across the whole colony. In the same period, the number of civilian agents linked to the institution reached about 2,000 – 457 of them in Minas Gerais alone,” Rodrigues stressed.

According to him, these civilian agents, known as “familiars of the Holy Office,” were mainly people who were growing rich from commercial activities but still lacked social status. For such people, joining the Inquisition was a quick and effective route to social ascent.

“When we focus the analysis only on the number of people sentenced, these mechanisms of the Inquisition’s deep insertion in the social fabric tend to go unnoticed. My research made me realize that the institution was far more deeply rooted in colonial society than had been supposed,” he said.

Rodrigues spent about nine months studying the vast documentation held in the Torre do Tombo archive in Lisbon – first with the support of the Instituto Camões (Cátedra Jaime Cortesão), then with the support of FAPESP. One focus of this study was the communication system established between the Tribunal of the Inquisition in Lisbon and the ecclesiastical network installed in Brazil.

“I studied 1,165 records of correspondence dispatched in the eighteenth century. And I could see that there was an efficient communication system linking the Lisbon Tribunal to the territory of Brazil, a system deeply grounded in the institutional hierarchy of the dioceses,” he said.

“Each diocese was divided into several ecclesiastical districts (comarcas), which did not necessarily coincide with the civil ones; the districts, in turn, were divided into parishes, and the parishes into chapels. When the Inquisition distributed a printed edict for the purpose of collecting denunciations, the edict had to be read at the end of Mass and then posted on the church door or the sacristy door, remaining there until a new edict arrived,” he explained.

In the case of the Center-South, the edicts arrived in Rio de Janeiro and were distributed from there to the entire region. From the seat of each diocese, the printed sheets reached the ecclesiastical districts. The “vigário da vara,” the district’s chief agent, forwarded them to the parishes, and the parish priests passed them on to the chapels. When the chaplain read the edict, he had to sign a receipt recording the date and, in some cases, even the time of the reading.

“This mechanism allowed the Inquisition, from Lisbon, to have full knowledge of the entire path followed by its correspondence. And, over time, this flow was optimized,” Rodrigues said.

In addition, there was the cooperation of episcopal justice. The bishops’ agents did not handle the prosecution of “crimes of heresy,” only of “moral offenses.” But when they came across a suspicion of heresy in the episcopal court, they forwarded the denunciation to Lisbon. And the prelates had a supplementary mechanism for imposing Catholic orthodoxy: the “diocesan visits.”

“The bishop would set out on a journey through his entire diocese, parish by parish, inspecting the conduct of the clergy and the population. One of his duties was to check the doors of churches and sacristies to verify that the Inquisition’s edicts were posted there. If they were not, there were penalties for whoever was responsible for the ‘lapse.’ This meant that the inquisitor in Lisbon had control even over the church doors of Brazil,” Rodrigues emphasized.

This previously little-known mechanism has now been brought to light by Rodrigues’s thesis. “I hope my work contributes to the revision of textbooks, eliminating the false idea that the Inquisition was practically absent from Brazil,” he said.

Publication in book form, also with FAPESP support, is scheduled for February 2014.

There is no reason for the police to reproduce the structure of the Army, says Luiz Eduardo Soares (Sul 21)

Date: December 11, 2013, 2:43 pm

Photo: Bernardo Jardim Ribeiro/Sul21

Samir Oliveira

Anthropologist and political scientist Luiz Eduardo Soares was in Porto Alegre on Tuesday (10), where he gave a talk on police demilitarization at the City Council, at the invitation of councilwoman Fernanda Melchionna (PSOL). One of the country’s foremost experts on public security and human rights, Luiz Eduardo was Rio de Janeiro’s secretary of security under governor Anthony Garotinho – who fired him live during a TV news interview – and served as national secretary for the area under former president Lula (PT).

Speaking to the large audience that packed the Ana Terra plenary hall, the specialist explained why he advocates the demilitarization and unification of Brazil's police forces. He argues that, constitutionally and democratically, there is no reason for the police to reproduce the organizational structure of the Army. "In Cartesian terms, only one hypothesis could justify the military character of the uniformed police: if its purposes were the same as the Army's. If we read the Constitution with even minimal rigor, we understand that the purpose of the police is to guarantee citizens' rights and defend legality. We are not talking about war," he said.

Photo: Bernardo Jardim Ribeiro/Sul21

Luiz Eduardo defended the approval of constitutional amendment bill PEC 51, authored by senator Lindbergh Farias (PT-RJ), which he helped draft. The bill demilitarizes the military police, unifies the police forces into a single-cycle structure (combining patrol and investigative duties) and also unifies the careers of their personnel.

He reported that the proposal enjoys broad support among rank-and-file civil and military police officers. A survey of 65,000 officers found that 70% of them support the initiative. However, he noted that the bill faces opposition from police chiefs and commissioned officers and lacks backing from the federal government.

For the anthropologist, the PEC would remedy injustices he identifies against the police themselves. He notes that military police officers are subject to harsh discipline and a rigorous code of conduct, one that even prevents them from forming collective organizations and voicing opinions. Luiz Eduardo cited cases in which officers were jailed in barracks for raising debates about changes in the profession.

"They feel outraged by this situation. So when we ask them to respect human rights, they don't understand what we're talking about. That is not the reality they live in, not how they are trained," he explained.

He also explains that police career structures do not favor professional advancement for the rank and file. "Someone who joins the Military Police may, with a lot of luck, become a sergeant in 20 years. In the Civil Police, a 22-year-old fresh out of law school, who knows nothing about public security or management, can pass a civil-service exam and become a police chief, commanding 30 officers who have been in the field for more than 20 years," he sums up.

Photo: Bernardo Jardim Ribeiro/Sul21

The source of hatred between police and protesters "lies far" from the protests

For Luiz Eduardo Soares, this year's demonstrations in Brazil were positive. At a press conference before the lecture, he told journalists that politicians and part of the media adopted a tactic of dividing demonstrators into those considered good and those considered vandals. He believes some activists "took the bait," which ended up discouraging the majority of the population from joining the protests.

As for the police repression seen during the demonstrations, he believes it predates them. "We often see the protester hating the police officer and the police officer hating the protester, when the source of both hatreds lies far from there," he said. For the anthropologist, the conditions of the conflict between the two sides "generate their own history and produce their own pain," creating "treacherous situations" for protesters and police alike.

Photo: Bernardo Jardim Ribeiro/Sul21

"We are caging young, poor, black people with ferocious voracity"

Luiz Eduardo Soares advocates ending the war-on-drugs policy and the prohibition of substances currently deemed illicit. He pointed out that this is one of the main factors driving the growth of Brazil's prison population, the fourth largest in the world, estimated at more than 550,000 people.

"We had 140,000 prisoners in the mid-1990s. Today we have 550,000. Who is in prison? Young, poor, black people. We are caging young, poor, black people with ferocious voracity," he lamented.

The specialist maintains that jailing young people tied to what he calls the "retail end of the drug trade" is the easiest answer governments and the public security apparatus have found to middle-class and media pressure for "less impunity and more security." "The media holds governments to account based on the problems of the middle class; that is the frame," he asserts.

According to him, drug prohibition and the current design of the security forces generate enormous social and racial discrimination in the country. "Drug prohibition, connected with our police model, constitutes a nucleus for the reproduction of inequality and racism in Brazil. There is no arena in which racism is more pronounced in this country than public security," he criticizes.

Political Motivations May Have Evolutionary Links to Physical Strength (Science Daily)

May 15, 2013 — Men’s upper-body strength predicts their political opinions on economic redistribution, according to new research published in Psychological Science, a journal of the Association for Psychological Science.

The principal investigators of the research — psychological scientists Michael Bang Petersen of Aarhus University, Denmark and Daniel Sznycer of University of California, Santa Barbara — believe that the link may reflect psychological traits that evolved in response to our early ancestral environments and continue to influence behavior today.

“While many think of politics as a modern phenomenon, it has — in a sense — always been with our species,” says Petersen.

In the days of our early ancestors, decisions about the distribution of resources weren’t made in courthouses or legislative offices, but through shows of strength. With this in mind, Petersen, Sznycer and colleagues hypothesized that upper-body strength — a proxy for the ability to physically defend or acquire resources — would predict men’s opinions about the redistribution of wealth.

The researchers collected data on bicep size, socioeconomic status, and support for economic redistribution from hundreds of people in the United States, Argentina, and Denmark.

In line with their hypotheses, the data revealed that wealthy men with high upper-body strength were less likely to support redistribution, while less wealthy men of the same strength were more likely to support it.

“Despite the fact that the United States, Denmark and Argentina have very different welfare systems, we still see that — at the psychological level — individuals reason about welfare redistribution in the same way,” says Petersen. “In all three countries, physically strong males consistently pursue the self-interested position on redistribution.”

Men with low upper-body strength, on the other hand, were less likely to support their own self-interest. Wealthy men of this group showed less resistance to redistribution, while poor men showed less support.

“Our results demonstrate that physically weak males are more reluctant than physically strong males to assert their self-interest — just as if disputes over national policies were a matter of direct physical confrontation among small numbers of individuals, rather than abstract electoral dynamics among millions,” says Petersen.

Interestingly, the researchers found no link between upper-body strength and redistribution opinions among women. Petersen argues that this is likely due to the fact that, over the course of evolutionary history, women had less to gain, and also more to lose, from engaging in direct physical aggression.

Together, the results indicate that an evolutionary perspective may help to illuminate political motivations, at least those of men.

“Many previous studies have shown that people’s political views cannot be predicted by standard economic models,” Petersen explains. “This is among the first studies to show that political views may be rational in another sense, in that they’re designed by natural selection to function in the conditions recurrent over human evolutionary history.”

Co-authors on this research include Aaron Sell, Leda Cosmides, and John Tooby of the University of California, Santa Barbara.

This research was supported by a grant from the Danish Research Council and a Director’s Pioneer Award from the National Institutes of Health.

Journal Reference:

  1. M. B. Petersen, D. Sznycer, A. Sell, L. Cosmides, J. Tooby. The Ancestral Logic of Politics: Upper-Body Strength Regulates Men’s Assertion of Self-Interest Over Economic Redistribution. Psychological Science, 2013; DOI: 10.1177/0956797612466415

Women Make Better Decisions Than Men, Study Suggests (Science Daily)

Mar. 25, 2013 — Women’s abilities to make fair decisions when competing interests are at stake make them better corporate leaders, researchers have found.

A survey of more than 600 board directors showed that women are more likely to consider the rights of others and to take a cooperative approach to decision-making. This approach translates into better performance for their companies.

The study, which was published this week in the International Journal of Business Governance and Ethics, was conducted by Chris Bart, professor of strategic management at the DeGroote School of Business at McMaster University, and Gregory McQueen, a McMaster graduate and senior executive associate dean at A.T. Still University’s School of Osteopathic Medicine in Arizona.

“We’ve known for some time that companies that have more women on their boards have better results,” explains Bart. “Our findings show that having women on the board is no longer just the right thing but also the smart thing to do. Companies with few female directors may actually be shortchanging their investors.”

Bart and McQueen found that male directors, who made up 75% of the survey sample, prefer to make decisions using rules, regulations and traditional ways of doing business or getting along.

Female directors, in contrast, are less constrained by these parameters and are more prepared to rock the boat than their male counterparts.

In addition, women corporate directors are significantly more inclined to make decisions by taking the interests of multiple stakeholders into account in order to arrive at a fair and moral decision. They will also tend to use cooperation, collaboration and consensus-building more often — and more effectively — in order to make sound decisions.

Women seem to be predisposed to be more inquisitive and to see more possible solutions. At the board level where directors are compelled to act in the best interest of the corporation while taking the viewpoints of multiple stakeholders into account, this quality makes them more effective corporate directors, explains McQueen.

Globally, women make up approximately 9% of corporate board memberships. Arguments for gender equality, quotas and legislation have done little to increase female representation in the boardroom, despite evidence showing that their presence has been linked to better organizational performance, higher rates of return, more effective risk management and even lower rates of bankruptcy. Bart’s and McQueen’s finding that women’s higher quality decision-making ability makes them more effective than their male counterparts gives boards a method to deal with the multifaceted social issues and concerns currently confronting corporations.

The International Journal of Business Governance and Ethics is available online.

How do people make decisions?

  • Personal interest reasoning: The decision maker is motivated by ego, selfishness and the desire to avoid trouble. This method is most often exhibited by young children who largely tend to be motivated to seek pleasure and avoid pain.
  • Normative reasoning: The decision maker tries to avoid “rocking the boat” by adhering to rules, laws or norms. Stereotypical examples of groups that use this form of reasoning include organizations with strong established cultures like Mary Kay or the US Marines.
  • Complex moral reasoning: The decision maker acknowledges and considers the rights of others in the pursuit of fairness by using a social cooperation and consensus building approach that is consistently applied in a non-arbitrary fashion.

Why should boards have more female directors?

  • Boards with high female representation experience a 53% higher return on equity, a 66% higher return on invested capital and a 42% higher return on sales (Joy et al., 2007).
  • Having just one female director on the board cuts the risk of bankruptcy by 20% (Wilson, 2009).
  • When women directors are appointed, boards adopt new governance practices earlier, such as director training, board evaluations, and director succession planning structures (Singh and Vinnicombe, 2002).
  • Women make other board members more civilized and sensitive to other perspectives (Fondas and Sassalos, 2000) and reduce ‘game playing’ (Singh, 2008).
  • Female directors are more likely to ask questions rather than nodding through decisions (Konrad et al., 2008).

Journal Reference:

  1. Chris Bart, Gregory McQueen. Why Women Make Better Directors. International Journal of Business Governance and Ethics, 2013; 8 (1): 93. DOI: 10.1504/IJBGE.2013.052743

Inaudible Seismographs of Changing Societies (Afkar/Ideas)

Driss Ksikes – Afkar / Ideas 34 – /06/2012

The Arab art scene brims with marginal experiments built around a simple idea: returning art to the heart of the city, to free it from petty politicians.

Louis-Ferdinand Céline called them "the noble dogs": those robust creatures that pull sleds at the North Pole, the only ones able to smell, from 20 leagues away, a trench hidden beneath the apparently hard and flat glacial surface. Edgar Morin, for his part, speaks of "moles" (not in the sense of secret agents), so embedded in the ground itself that they feel the barely perceptible, muffled tremors occurring far away. These animal metaphors underscore the hypersensitivity of beings who sense the hidden threat from a distance, intuitively, without any science or model of rationality that can be recognized and transmitted to others. It is entirely possible, if we think of the telluric literature of the great Moroccan poet (above all his texts Agadir and Le déterreur), to speak of seismographs that detect, long before everyone else, the coming social, political, collective shock.

The anti-experts

On the occasion of the Arab 2011, I read many articles circling around the same litany: "We saw nothing coming." It is undeniable that the so-called "experts," accustomed to classifying reality and formatting it into convenient interpretive boxes, did not exactly display exceptional lucidity. Some missed the mark entirely, predicting resistance where the overthrow of a rais was all but imminent (most notably the Arabists and other Orientalists who spoke out before the fall of Hosni Mubarak, denying any similarity between Cairo and Tunis). By basing their readings on visible political movements or geopolitical interactions, they lacked the sociological and anthropological perspective needed to see what was brewing in the interstices of our societies. Some artists and writers, free from the canons of science, proved more clear-sighted. Without claiming for them the status of soothsayers, in this article I propose a brief review of three "seismographs," practically inaudible to the crowd, who glimpsed a new pattern or sought to take the pulse of a turbulent era.

A regicide on stage

Let us begin with Fadhel Yaibi, the Tunisian director and playwright who, over four decades, has established himself as one of the most incisive iconoclastic creators in Arab society. In 2010, whether through a burst of lucidity or synchronous chance, he produced, with the complicity of Yalila Baccar, a premonitory play, Amnesia. A dictator, Yahya Yaich, adored and praised by his courtiers, collapses and is subjected to humiliation and torture in a psychiatric hospital, surrounded by his guard dogs, now turned scavengers. They even beg him, as he rushes to catch his plane, to turn back. The play, staged months before Ben Ali's departure, was a great success, above all for its aesthetic force and for revealing, through art, a generalized weariness. His extreme affability keeps the Tunisian seismograph, Yaibi, from claiming any role other than that of artist: meddlesome, skeptical, humanist, attuned to what is brewing around him, eager to show another side of events. The side of an unbearable political reality sublimated through a regicide on stage is necessarily imperceptible to strategists and inaudible to institutions, even academic ones, that underestimate emotional intelligence. Yet it points to something that more and more thinkers, such as Bruno Latour, consider urgent: reconnecting art with politics, not as its endorser, but as a reminder that art is in essence a political act, beautiful in its gratuitousness, its altruism and, above all, its social resonance, beyond the conventional walls of the establishment.

Against the patriotism of the "secret police"

In Egypt another figure has established himself, through his writings and other media, in Cairo's literary life, to the point of being considered one of the talismans of the Tahrir Square revolt: the novelist Alaa el Aswany. After his best-seller The Yacoubian Building, and later Chicago, the dentist and late-blooming writer stands out for his aversion to the patriotism of the "secret police" and to the literalist Islamism that straitjacket Egyptian society. In 2010 he gathered momentum and published a series of short stories under a provocative title, Why Don't Egyptians Rebel? Explaining how quickly he shed Marxist dogmatism without burying Marx, he deconstructs the identitarian mold that keeps a people submissive to its dictator. A regular at a much-loved Cairo literary café, El Aswany was able, in the two years before the revolution, to establish himself as a contrarian humanist, a widely heard and widely quoted author. In Tahrir, he played the role of the sage to whom disoriented young people turned. Drawing on the five stages of a dictator's fall foretold by Gabriel García Márquez (denial, patriotism of recuperation, half-concessions, an "I have understood you" confession, and flight), he managed to convince them that, however much he intended to resist, Mubarak would end up fleeing. Clearly, the conscience of this honest man carried more weight than hundreds of human development reports which, though tolling the death knell, never reached the actors. There, too, lies the strength of a seismograph: in its closeness to the ground, so far removed from the bureaucrats.

Temporary Autonomous Zones

What these experiments share is, without doubt, subversion. As in the days of the Beat generation in the United States, where the Temporary Autonomous Zones were born, the Arab art scene has for years brimmed with marginal experiments built around a simple idea: returning art to the heart of the city, to free it from petty politicians. The sublime German writer Friedrich Hölderlin called it "making the world poetically habitable." Behind this utopia, two experiments deserve mention. The first, born in Tunis in 2008, is called Dream City. It is not street art, but the street placed at the disposal of artists. For a week, citizens confront the unforeseeable, the improbable, in order to live differently in their enclosed spaces. It was one of those rare, unexpected occasions in the Ben Ali era when people gathered and talked freely.

The second experiment, the citizens' DABATEATR, was born in Rabat in 2009. In it, theater is reclaimed as a public arena of controversy. The various arts are revisited in order to bring the audience back to the root of civic questioning. And the dramaturgy reworks current events to bring out the universality lurking in the news. Before the movement emerged, the activists of the February 20 Movement had in a way already found each other in this space, debating freely among bloggers. Not much shouting was needed for the wave of indignation to rise.

These unusual, singular, but scarce experiments emerge neither in the university nor in conventional venues. They are the fruit of the trials and experimentation of artists who remain connected to reality without losing sight of utopia.

Juridiquês (Sopro 83)

 Alexandre Nodari

Had it been possible to build the Tower of Babel without climbing it to the top, it would have been permitted

1. A bill is making its way through the National Congress, authored by Maria do Rosário, which seeks to add to article 458 of the Code of Civil Procedure, concerning the "essential requirements of the judgment," a fourth clause, making mandatory "the reproduction of the operative part of the judgment in colloquial language, without the use of terms exclusive to technical-legal language, together with whatever considerations the judicial authority deems necessary, so that the judicial decision can be fully understood by any person of the people." The proposal obviously aims to broaden access to Justice and has a democratizing intent. Yet if, on its own, the bill seems reasonable, set against the torrent of laws and bills seeking to regulate every aspect of human life, from cigarettes to language (a few years ago, the communist-ruralist Aldo Rebelo tried to ban foreign loanwords from Portuguese), we cannot help but regard it with at least some skepticism toward this normative inflation that seeks to purify every aspect of human life. The desire for cleansing, for hygiene, for clarity, runs through society as a whole, and that desire serves the ends of power, or at least is channeled by it. Dominique Laporte, in his History of Shit, recalls that it was in the same year, 1539, that France: 1) first required that laws, administrative acts, judicial proceedings and notarial documents be written in the vernacular, eliminating the ambiguities and uncertainties of Latin and making "clarity" possible; 2) and, immediately afterwards, forbade citizens from throwing their excrement, their feces and urine, into the street. Cleansing language and cleansing the city: the centralization of power that would culminate in what we commonly call absolutism has its roots in this will to purity and cleanliness, this crystalline ideal.
Beyond this "desire for clarity," however, it is worth noting a kind of Freudian slip contained in the bill's "Justification"; perhaps it is not in fact a slip but something intentional, which matters little. The final paragraph of the justification speaks of "translating the technical text of the judicial decision into the common vernacular," as if judgments were not written in Portuguese. There is an essential truth about Law here: it is a language different from the "common vernacular." In the famous Apology of Socrates, the old sage, speaking before the tribunal that accused him of impiety, says he is "a stranger to the language" spoken there, and asks to be treated as a foreigner who does not know Greek. Law is not a foreign language the way English or Latin are foreign to Portuguese or Greek: Law is the Portuguese or Greek language under another regime of operation. Before our own country's law, we are like foreigners who do not know their own language. But what is the regime of operation of that language which, in the "common vernacular," goes by the name of "juridiquês" (legalese)?

2. In a beautiful text on the figure of the notary, Salvatore Satta, one of the most brilliant jurists of the twentieth century, summed up the "drama" of the clerk or scrivener, those mediators between laypeople and jurists, as follows: "To know the will that the one who wills does not know." It is not that "the one who wills" does not know his own will; he simply does not know how to translate it juridically. That is, Satta continues, what the notary actually does is "reduce the will of the party to the will of the legal order." Hence the meaning of the Latin maxim Da mihi factum, dabo tibi jus ("Give me the facts and I will give you the law"): to reduce the "volition directed at a practical goal that the party intends to achieve" to a juridical, juridically typified will; in other words, to translate a will, a fact, an act of life into legal types. Law does not deal with facts or acts as such, but with juridical facts or acts, those corresponding to certain prescribed types. To carry an act or fact of life into Law is to typify it. In this sense, the type is perhaps the basic grammatical element of legal language. But what exactly is a type? The person who best reflected on the notion of "type" was not a jurist but a sociologist, Max Weber, who grounded his method, with his "ideal types," in opposition to Durkheim's empirical-comparative method. For Weber, pure or ideal types could not be found "in reality"; what existed "in fact" was always a more or less hybrid composite of types which (hence their circular nature) were themselves constructed from elements scattered throughout the very "reality" to which they were applied. The etymology of type already points to this ambiguous character, between the empirical and the abstract: the Greek typos means image, vestige, trace; that is, absence, the index of an immemorial presence. To use an example from Vilém Flusser: "typoi are like the traces a bird's feet leave on the sand of the beach. The word thus means that these traces can be used as models for classifying the bird in question."
The two forms of Law known to the West are the two faces of the type: the Roman-Germanic tradition starts from statutes, from abstraction, from the type, to arrive at the empirical case; the Common Law, on the contrary, starts from empirical cases and converts them into typical, abstract ones. But, as Satta says, in typification there is a reduction; something is lost, including ordinary language.

3. The type answers a basic need of the workings of Law, and of the modus operandi of its specific (or typical) language: prescription. "If" type X occurs or is present, "then" the consequence, the sanction, is Y. The problem of every trial lies in determining whether event A of life corresponds to type X so that consequence Y may follow. Since norms are grounded in types, which are nothing but language with no necessary relation to the things and facts of life, a discursive construction is needed to connect the event of life to the legal type. If Law were pure subsumption, Giorgio Agamben reminds us, we could dispense with that immense judicial apparatus called the trial, which involves not only the judge, the lawyer and the prosecutor, but countless other mediators between ordinary language and legal language (the notary, the stenographer, etc.). That is why, for this typification to occur, not only must the legally relevant fact be cast in the form of a type, but so must everything surrounding it, so that singularity is reduced to typification, that is, to the reproduction of that typical case (in the form of precedent). We know well how this works: from police reports to judgments, the facts of life are narrated in a language that renders them typical, abstract, and reproducible. Italo Calvino masterfully summed up this "unsettling" process of translation:

The clerk sits at the typewriter. The person being questioned, seated across from him, answers the questions with a slight stammer, but careful to say, as exactly as possible, everything he has to say and not a word more: "Early this morning I was going down to the cellar to light the heater when I found all those bottles of wine behind the coal bin. I took one to drink at dinner. I didn't know the liquor store upstairs had been broken into." Impassive, the clerk taps out his faithful transcription at speed: "The undersigned, having proceeded to the cellar in the early hours of the morning to initiate the operation of the thermal installation, declares that he chanced upon a quantity of vinicultural products, located behind the receptacle intended for the storage of combustible material, and that he effected the removal of one of the said articles with the intention of consuming it during the evening meal, being unaware of the occurrence of the break-in at the commercial establishment situated above."

Calvino called this "semantic terror," or the "anti-language": "the flight from every word that has a meaning of its own." The danger, in his view, was that this "anti-language" would invade ordinary life. But in this flight from the word that has a meaning of its own, there is an advance toward words that embrace more than one meaning and can therefore be reproduced in multiple situations. This reproducibility is, as we have stressed, essential to a language based on types; it is what distinguishes, according to Flusser, the notion of type from the notion of character, which privileges what is characteristic, that is, what is proper.

4. The type, then, as the basic element of legal grammar, serves to make norms reproducible in the face of the singularity of life's events; but to do so, it abstracts those events, and abstracts from them. Trials and norms, composed of countless types, thus run alongside life, as if they were a fictional narrative. The great Romanist Yan Thomas argues that "fiction is a procedure that (…) belongs to the pragmatics of law." The ancient Romans, Thomas continues, had no qualms, when faced with an exceptional situation in which they did not want to apply a given rule, about juridically changing the situation rather than amending the rule. One example among many: seeking to validate the wills of citizens who had died while in enemy custody, which by law invalidated such wills, the Lex Cornelia of 81 BC chose to create a fiction, of which we know two versions: 1) the first, a positive fiction, was to treat the wills as if the citizens had died under the normal status of citizenship; 2) the second, a negative fiction, held the wills valid as if the citizens had not died in the enemy's power. Why this discursive distancing from "reality," from life? Why does Law, in its narrative or in its form, depart from ordinary storytelling and create another reality, almost a parallel dimension? Here the second element of the prescriptive language that characterizes Law comes in: the sanction, the "then Y." The function of Law, as we know, is to alter reality, life, through language, through the word; that is, to create effective words, even if enforcing a statute or a judgment requires the use of public force. (Indeed, no common vernacular is plain enough to explain to "any person of the people" that the judgment ruling in their favor must still be executed, in a procedure that will take several more years.)
It is from this function of Law, altering reality through language, that the retrospective illusion arises of a pre-juridical stage in which religion, magic and law coincided. In truth, what Law and Magic share is the same modus operandi of language, the performative ("I swear," "I sentence you," "I promise"), in which, in Agamben's words, "the meaning of an utterance (…) coincides with the reality that is itself produced by the act of utterance." In this sense, Law is still magical today. Jurists' taste for ornamental language, for maxims, for ritual language and euphemism stems from this connection: reality can be created out of an empty language (or one emptied, removed from reality). We could therefore say that Law is at once the quasi-magical knowledge of this modus operandi and that which guarantees that such performative language is converted into act: that contracts are honored, that laws are applied, and so on. Yet for Law to operate magically upon reality, it must distance itself from reality; for its language to produce effects upon life, it must distance itself from the language that communicates or expresses, the "common vernacular."

5. Perhaps, then, "legalese" is not (merely) a judicial practice that goes back to bacharelismo and pseudo-erudition, an antique residue that could simply be removed. Rather, it may be a judicial practice constitutive of what we know as Law. Émile Benveniste, dwelling on the fact that the Latin verb iurare (to swear) corresponds to the noun ius, which we are accustomed to translating as "law," argues that ius should in fact mean "the formula of conformity": "ius, in general, is really a formula, and not an abstract concept." It is worth noting that Benveniste finds in the ius of Roman law the "magical" character we have been pointing out — separation from ordinary language combined with the production of effects upon reality — and shows that this character is present in the very document jurists usually consider one of the cornerstones of Western law, the Law of the Twelve Tables. Benveniste writes: "iura is the collection of judgments of law. (…) These iura (…) are formulas that pronounce a decision of authority; and wherever these terms [ius, iura] are taken in their strict sense, we find (…) the notion of fixed texts, of established formulas, whose possession is the privilege of certain individuals, certain families, certain corporations. The exemplary type of these iura is represented by the most ancient code of Rome, the Law of the Twelve Tables, originally composed of judgments formulating the state of ius and pronouncing: ita ius esto. Here is the empire of the word, manifested in terms of concordant meaning; in Latin, iu-dex. (…) It is not doing, but always pronouncing, that is constitutive of 'law': ius dicere, iu-dex lead us back to this constant connection. (…) It is through this act of speech, ius dicere, that the whole terminology of the judicial process develops: iudex, iudicare, iudicium, iuris-dictio, etc." Thus the type, typification, is one of the modes by which language is converted into formula.
The formulaic functioning of language in Law, its complete removal from ordinary language, can best be seen in crimes that concern language itself. Two examples, one ancient and one very recent, show how this belongs to the very logic of Law. The first comes from the famous Greek orator Lysias, who lived at the turn of the fifth to the fourth century BCE. In his speech Against Theomnestus, Lysias argues that the law against slander was toothless, in that it prohibited calling someone a "murderer" (androfonon) but was incapable of punishing someone who, like Theomnestus, accused another of having "killed" (apektonenai) his father. The other case occurred in March 2010, before the Brazilian Supreme Federal Tribunal. Arguing against racial quotas, former senator Demóstenes Torres claimed that "black women (slaves) maintained 'consensual relations' with white men (their masters)." What consent, we may ask, is possible between parties standing in a relation of master and slave? Yet, tellingly, none of the eleven justices of "unblemished reputation" and "notable legal learning" saw racism there. Had the argument been put differently (with a reference, say, to the "natural concupiscence" of black women, to take an example from the Brazilian judiciary's nefarious racist tradition), it might have amounted to a juridical occurrence of racism. For something to be inscribed in the sphere of Law, it must be formalized, or better, formularized — turned into formula. This is not merely a matter of inscription in legislation, in a statute enacted by the legislature. Law can exist — and remain grounded in formalism — even where there is no statute in the strict sense, as customary law proves. Formalization is a process larger than statute, encompassing the entire judicial machine: judges, judicial decisions, lawyers, jurists, so-called "doctrine," reaching all the way to society.
It consists in fixing permitted or prohibited contents in formulas, a procedure that, as we saw with types, allows their reproduction. This is the paradox of what is usually called, mostly pejoratively, "political correctness": while it produces undeniable material advances, it remains confined to its own formality. That is, the formulas — what one may (not) do or say — bear upon the world and modify it, but they never lose their dimension as formulas. Those who defend Law as a mechanism of social transformation (or even just as a progressive tool) sooner or later run into this paradox: Law guarantees only what is embodied in formulas (and it is precisely formulas that, at times, block social transformation). The moment one advocates the legal recognition of rights that Law does not recognize, one is advocating the formalization of those rights. Indeed, the opposition between substantive law and formal law is idle: insofar as the formalization of rights is a historical process, every formal right was once merely a substantive one, and may become so again. No one is convicted for uttering racist content (substance) — the crime of racism exists only when that content is enunciated in a certain form, through a certain formula.

6. Every jurist knows Hans Kelsen's normative "pyramid," in which norms are ordered hierarchically (the lower strata derive their validity from the higher ones) and at whose apex stands the "basic norm." The problem, as is well known, is that this basic norm is empty of content — presupposed, imaginary, fictional (to postulate the status of the basic norm, Kelsen drew on Vaihinger's Philosophy of As If, for which even scientific discourse ultimately rests on some fiction). It is, in other words, a way of giving validity to the system, of referring it back to the One (even if some want to tie it to the principle that agreements must be kept — pacta sunt servanda — and others, far more narrow-minded, to the Constitution). We would thus have a system of norms with content grounded in a contentless, fictitious norm. Perhaps, though, it would be more productive to understand Law the other way around: a system of empty norms grounded in a single norm with content — namely, that the fiction we know as Law is true. In the present historical moment, we might say that this basic norm crystallizes into two principles: that ignorance of the law is no excuse (closure), and that a judge may not decline to decide a case (openness). That is, the content of the basic norm would be that Law is a system at once (but not paradoxically) open and closed — which is to say: potentially Total. Closure and dissemination are connected in Law. To be "true," Law cannot admit its status as pure language; rather, it must annul that status, endowing all language with a potential "efficacy." Since norms and processes are nothing but language with no necessary relation to things, this principle is needed to establish that some relation between words (norms) and things (facts) must obtain.
It is from this empty character of norms and processes, from their grounding in language (and not in things), that normative inflation derives, a process inherent to Law. Norms and processes are, at bottom, nothing but formulas invoked to try to establish this or that nexus between words and things — but all of them invoke, as presupposition, the very name of Law, that is, the basic norm: that the fiction is true. Hence formulas, types, maxims — in short, legalese — are the means by which the fiction is maintained, and by which life, ordinary language, is captured within the sphere of Law at the same time as it is held apart from it. In Kafka's fictions, the confrontation, and even the entanglement, of fiction and law is a recurring theme. The unfinished novel The Trial stages this confrontation and entanglement well. At the beginning of the novel, when the officers of the law come to detain the protagonist K., he imagines they are merely a theatrical troupe playing a birthday prank at his friends' request. At the end, when his executioners arrive to fetch him, K. again wants to believe they are only actors staging a scene at his expense. And indeed the entire judicial apparatus narrated in the novel seems to be one great fiction: dark cellars, hearings in tenements, moribund lawyers. At no point does the Law itself appear; K. never manages to enter the Law. At no point does K. learn what he is accused of. The whole novel is built on the figure of the mediators — clerks, lawyers, officials — who stage a grandiloquent and pathetic trial, a fiction from which K. could walk away at any moment. Law and legal process are just great fictional narratives — but these stagings, unlike theatrical ones, take lives. Legalese both is and is not merely a performance put on by certain jurists. It is only the mode of narrating a fiction; but that fiction answers to the name of Law, which captures and reduces life, stripping it of its singularity and reproducing it as a type.
To the "if" of the juridical prescription there corresponds a "then." A "then" that is absent from true fiction, which is always and only an "as if."

Should Doctors Treat Lack of Exercise as a Medical Condition? Expert Says ‘Yes’ (Science Daily)

ScienceDaily (Aug. 13, 2012) — A sedentary lifestyle is a common cause of obesity, and excessive body weight and fat in turn are considered catalysts for diabetes, high blood pressure, joint damage and other serious health problems. But what if lack of exercise itself were treated as a medical condition? Mayo Clinic physiologist Michael Joyner, M.D., argues that it should be. His commentary is published this month in The Journal of Physiology.

Physical inactivity affects the health not only of many obese patients, but also of people of normal weight, such as workers with desk jobs, patients immobilized for long periods after injuries or surgery, and women on extended bed rest during pregnancies, among others, Dr. Joyner says. Prolonged lack of exercise can leave the body deconditioned, with wide-ranging structural and metabolic changes: heart rate may rise excessively during physical activity, bones and muscles may atrophy, physical endurance may wane, and blood volume may decline.

When deconditioned people try to exercise, they may tire quickly and experience dizziness or other discomfort, then give up trying to exercise and find the problem gets worse rather than better.

“I would argue that physical inactivity is the root cause of many of the common problems that we have,” Dr. Joyner says. “If we were to medicalize it, we could then develop a way, just like we’ve done for addiction, cigarettes and other things, to give people treatments, and lifelong treatments, that focus on behavioral modifications and physical activity. And then we can take public health measures, like we did for smoking, drunken driving and other things, to limit physical inactivity and promote physical activity.”

Several chronic medical conditions are associated with poor capacity to exercise, including fibromyalgia, chronic fatigue syndrome and postural orthostatic tachycardia syndrome, better known as POTS, a syndrome marked by an excessive heart rate and flu-like symptoms when standing or at a given level of exercise. Too often, medication rather than progressive exercise is prescribed, Dr. Joyner says.

Texas Health Presbyterian Hospital Dallas and University of Texas Southwestern Medical Center researchers found that three months of exercise training can reverse or improve many POTS symptoms, Dr. Joyner notes. That study offers hope for such patients and shows that physicians should consider prescribing carefully monitored exercise before medication, he says.

If physical inactivity were treated as a medical condition itself rather than simply a cause or byproduct of other medical conditions, physicians may become more aware of the value of prescribing supported exercise, and more formal rehabilitation programs that include cognitive and behavioral therapy would develop, Dr. Joyner says.

For those who have been sedentary and are trying to get into exercise, Dr. Joyner advises doing it slowly and progressively.

“You just don’t jump right back into it and try to train for a marathon,” he says. “Start off with achievable goals and do it in small bites.”

There’s no need to join a gym or get a personal trainer: build as much activity as possible into daily life. Even walking just 10 minutes three times a day can go a long way toward working up to the 150 minutes a week of moderate physical activity the typical adult needs, Dr. Joyner says.
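As a quick check of that arithmetic (the figures are the article's; the helper function is ours):

```python
# Sketch of the arithmetic behind the walking suggestion above.
# 150 minutes/week and 10 minutes x 3/day come from the article.

def weekly_minutes(minutes_per_walk: int, walks_per_day: int, days_per_week: int = 7) -> int:
    """Total minutes of activity accumulated in a week."""
    return minutes_per_walk * walks_per_day * days_per_week

target = 150  # moderate-activity minutes per week for a typical adult
accumulated = weekly_minutes(minutes_per_walk=10, walks_per_day=3)

print(accumulated)            # 210 minutes
print(accumulated >= target)  # True: three 10-minute walks a day clears the target
```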

Social Identification, Not Obedience, Might Motivate Unspeakable Acts (Science Daily)

ScienceDaily (July 18, 2012) — What makes soldiers abuse prisoners? How could Nazi officials condemn thousands of Jews to gas chamber deaths? What’s going on when underlings help cover up a financial swindle? For years, researchers have tried to identify the factors that drive people to commit cruel and brutal acts and perhaps no one has contributed more to this knowledge than psychological scientist Stanley Milgram.

Just over 50 years ago, Milgram embarked on what were to become some of the most famous studies in psychology. In these studies, which ostensibly examined the effects of punishment on learning, participants were assigned the role of “teacher” and were required to administer shocks to a “learner” that increased in intensity each time the learner gave an incorrect answer. As Milgram famously found, participants were willing to deliver supposedly lethal shocks to a stranger, just because they were asked to do so.

Researchers have offered many possible explanations for the participants’ behavior and the take-home conclusion that seems to have emerged is that people cannot help but obey the orders of those in authority, even when those orders go to the extremes.

This obedience explanation, however, fails to account for a very important aspect of the studies: why, and under what conditions, people did not obey the experimenter.

In a new article published in Perspectives on Psychological Science, a journal of the Association for Psychological Science, researchers Stephen Reicher of the University of St. Andrews and Alexander Haslam and Joanne Smith of the University of Exeter propose a new way of looking at Milgram’s findings.

The researchers hypothesized that, rather than obedience to authority, the participants’ behavior might be better explained by their patterns of social identification. They surmised that conditions that encouraged identification with the experimenter (and, by extension, the scientific community) led participants to follow the experimenters’ orders, while conditions that encouraged identification with the learner (and the general community) led participants to defy the experimenters’ orders.

As the researchers explain, this suggests that participants’ willingness to engage in destructive behavior is “a reflection not of simple obedience, but of active identification with the experimenter and his mission.”

Reicher, Haslam, and Smith wanted to examine whether participants’ willingness to administer shocks across variants of the Milgram paradigm could be predicted by the extent to which the variant emphasized identification with the experimenter and identification with the learner.

For their study, the researchers recruited two different groups of participants. The expert group included 32 academic social psychologists from two British universities and one Australian university. The nonexpert group included 96 first-year psychology students who had not yet learned about the Milgram studies.

All participants were read a short description of Milgram’s baseline study and they were then given details about 15 variants of the study. For each variant, they were asked to indicate the extent to which that variant would lead participants to identify with the experimenter and the scientific community and the extent to which it would lead them to identify with the learner and the general community.

The results of the study confirmed the researchers’ hypotheses. Identification with the experimenter was a very strong positive predictor of the level of obedience displayed in each variant. On the other hand, identification with the learner was a strong negative predictor of the level of obedience. The relative identification score (identification with experimenter minus identification with learner) was also a very strong predictor of the level of obedience.

According to the authors, these new findings suggest that we need to rethink obedience as the standard explanation for why people engage in cruel and brutal behavior. This new research “moves us away from a dominant viewpoint that has prevailed within and beyond the academic world for nearly half a century — a viewpoint suggesting that people engage in barbaric acts because they have little insight into what they are doing and conform slavishly to the will of authority,” they write.

These new findings suggest that social identification provides participants with a moral compass and motivates them to act as followers. This followership, as the authors point out, is not thoughtless — “it is the endeavor of committed subjects.”

Looking at the findings this way has several advantages, Reicher, Haslam, and Smith argue. First, it mirrors recent historical assessments suggesting that functionaries in brutalizing regimes — like the Nazi bureaucrat Adolf Eichmann — do much more than merely follow orders. And it simultaneously accounts for why participants are more likely to follow orders under certain conditions than others.

The researchers acknowledge that the methodology used in this research is somewhat unorthodox — the most direct way to examine the question of social identification would involve recreating the Milgram paradigm and varying different aspects of the paradigm to manipulate social identification with both experimenter and learner. But this kind of research involves considerable ethical challenges. The purpose of the article, the authors say, is to provide a strong theoretical case for such research, “so that work to address the critical question of why (and not just whether) people still prove willing to participate in brutalizing acts can move forward.”

*   *   *

Most People Will Administer Shocks When Prodded By ‘Authority Figure’

ScienceDaily (Dec. 22, 2008) — Nearly 50 years after one of the most controversial behavioral experiments in history, a social psychologist has found that people are still just as willing to administer what they believe are painful electric shocks to others when urged on by an authority figure.

Jerry M. Burger, PhD, replicated one of the famous obedience experiments of the late Stanley Milgram, PhD, and found that compliance rates in the replication were only slightly lower than those found by Milgram. And, like Milgram, he found no difference in the rates of obedience between men and women.

Burger’s findings are reported in the January issue of American Psychologist. The issue includes a special section reflecting on Milgram’s work 24 years after his death on Dec. 20, 1984, and analyzing Burger’s study.

“People learning about Milgram’s work often wonder whether results would be any different today,” said Burger, a professor at Santa Clara University. “Many point to the lessons of the Holocaust and argue that there is greater societal awareness of the dangers of blind obedience. But what I found is the same situational factors that affected obedience in Milgram’s experiments still operate today.”

Stanley Milgram was an assistant professor at Yale University in 1961 when he conducted the first in a series of experiments in which subjects – thinking they were testing the effect of punishment on learning – administered what they believed were increasingly powerful electric shocks to another person in a separate room. An authority figure conducting the experiment prodded the first person, who was assigned the role of “teacher” to continue shocking the other person, who was playing the role of “learner.” In reality, both the authority figure and the learner were in on the real intent of the experiment, and the imposing-looking shock generator machine was a fake.

Milgram found that, after hearing the learner’s first cries of pain at 150 volts, 82.5 percent of participants continued administering shocks; of those, 79 percent continued to the shock generator’s end, at 450 volts. In Burger’s replication, 70 percent of the participants had to be stopped as they continued past 150 volts – a difference that was not statistically significant.

“Nearly four out of five of Milgram’s participants who continued after 150 volts went all the way to the end of the shock generator,” Burger said. “Because of this pattern, knowing how participants react at the 150-volt juncture allows us to make a reasonable guess about what they would have done if we had continued with the complete procedure.”
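The arithmetic behind Burger's inference can be checked directly; the percentages are the ones quoted above, and the variable names are ours:

```python
# Arithmetic behind Burger's inference, using the figures quoted above.
p_continue_past_150 = 0.825  # Milgram: kept going after the first cries at 150 volts
p_finish_given_past = 0.79   # ...of whom this fraction went on to 450 volts

# Implied share of Milgram's participants who went all the way:
p_full_obedience = p_continue_past_150 * p_finish_given_past
print(round(p_full_obedience, 3))  # 0.652

# Burger stopped everyone at 150 volts; his 70% continuation rate is compared
# with Milgram's 82.5% at the same juncture, not with the 450-volt endpoint.
burger_rate, milgram_rate = 0.70, 0.825
print(round(milgram_rate - burger_rate, 3))  # 0.125 -- reported as not statistically significant
```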

Milgram’s techniques have been debated ever since his research was first published. As a result, there is now an ethics code for psychologists, and other controls have been placed on experimental research that have effectively prevented any precise replication of Milgram’s work. “No study using procedures similar to Milgram’s has been published in more than three decades,” according to Burger.

Burger implemented a number of safeguards that enabled him to win approval for the work from his university’s institutional review board. First, he determined that while Milgram allowed his subjects to administer “shocks” of up to 450 volts in 15-volt increments, 150 volts appeared to be the critical point where nearly every participant paused and indicated reluctance to continue. Thus, 150 volts was the top range in Burger’s study.

In addition, Burger screened out any potential subjects who had taken more than two psychology courses in college or who indicated familiarity with Milgram’s research. A clinical psychologist also interviewed potential subjects and eliminated anyone who might have a negative reaction to the study procedure.

In Burger’s study, participants were told at least three times that they could withdraw from the study at any time and still receive the $50 payment. Also, these participants were given a lower-voltage sample shock to show the generator was real – 15 volts, as compared to 45 volts administered by Milgram.

Several of the psychologists writing in the same issue of American Psychologist questioned whether Burger’s study is truly comparable to Milgram’s, although they acknowledge its usefulness.

“…there are simply too many differences between this study and the earlier obedience research to permit conceptually precise and useful comparisons,” wrote Arthur G. Miller, PhD, of Miami University in Oxford, Ohio.

“Though direct comparisons of absolute levels of obedience cannot be made between the 150-volt maximum of Burger’s research design and Milgram’s 450-volt maximum, Burger’s ‘obedience lite’ procedures can be used to explore further some of the situational variables studied by Milgram, as well as look at additional variables,” wrote Alan C. Elms, PhD, of the University of California, Davis. Elms assisted Milgram in the summer of 1961.

In Rousseau’s footsteps: David Graeber and the anthropology of unequal society (The Memory Bank)

By Keith Hart

July 4, 2012, 11:14 pm

A review of David Graeber Debt: The first 5,000 years (Melville House, New York, 2011, 534 pages)

Debt is everywhere today. What is “sovereign debt” and why must Greece pay up, but not the United States? Who decides that the national debt will be repaid through austerity programmes rather than job-creation schemes? Why do the banks get bailed out, while students and home-owners are forced to repay loans? The very word debt speaks of unequal power; and the world economic crisis since 2008 has exposed this inequality more than any other since the 1930s. David Graeber has written a searching book that aims to place our current concerns within the widest possible framework of anthropology and world history. He starts from a question: why do we feel that we must repay our debts? This is a moral issue, not an economic one. In market logic, the cost of bad loans should be met by creditors as a discipline on their lending practices. But paying back debts is good for the powerful few, whereas the mass of debtors have at times sought and won relief from them.

What is debt? According to Graeber, it is an obligation with a figure attached and hence debt is inseparable from money. This book devotes a lot of attention to where money comes from and what it does. States and markets each play a role in its creation, but money’s form has fluctuated historically between virtual credit and metal currency. Above all Graeber’s enquiry is framed by our unequal world as a whole. He resists the temptation to offer quick remedies for collective suffering, since this would be inconsistent with the timescale of his argument. Nevertheless, readers are offered a worldview that clearly takes the institutional pillars of our societies to be rotten and deserving of replacement. It is a timely and popular view. Debt: The first 5,000 years is an international best-seller. The German translation recently sold 30,000 copies in the first two weeks.

I place the book here in a classical tradition that I call “the anthropology of unequal society” (Hart 2006), before considering what makes David Graeber a unique figure in contemporary intellectual politics. A summary of the book’s main arguments is followed by a critical assessment, focusing on the notion of a “human economy”.

The anthropology of unequal society

Modern anthropology was born to serve the coming democratic revolution against the Old Regime. A government by the people for the people should be based on what they have in common, their “human nature” or “natural rights”. Writers from John Locke (1690) to Karl Marx (1867) identified the contemporary roots of inequality with money’s social dominance, a feature that we now routinely call “capitalism”. For Locke money was a store of wealth that allowed some individuals to accumulate property far beyond their own immediate needs. For Marx “capital” had become the driving force subordinating the work of the many to machines controlled by a few. In both cases, accumulation dissolved the old forms of society, but it also generated the conditions for its own replacement by a more just society, a “commonwealth” or “communism”. It was, however, the philosophers of the eighteenth-century liberal enlightenment who developed a systematic approach to anthropology as an intellectual source for remaking the modern world.

Following Locke’s example, they wanted to found democratic societies in place of the class system typical of agrarian civilizations. How could arbitrary social inequality be abolished and a more equal society founded on their common human nature? Anthropology was the means of answering that question. The great Victorian synthesizers, such as Morgan, Tylor and Frazer, stood on the shoulders of predecessors motivated by an urgent desire to make world society less unequal. Kant’s Anthropology from a Pragmatic Point of View, a best-seller when published in 1798, was the culmination of that Enlightenment project; but it played almost no part in the subsequent history of the discipline. The main source for nineteenth-century anthropology was rather Jean-Jacques Rousseau. He revolutionized our understanding of politics, education, sexuality and the self in four books published in the 1760s: The Social Contract, Emile, Julie and The Confessions. He was forced to flee for his life from hit squads encouraged by the church. But he made his reputation earlier through two discourses, of which the second, Discourse on the Origins and Foundations of Inequality among Men (1754), deserves to be seen as the source for an anthropology that combines the critique of unequal society with a revolutionary politics of democratic emancipation.

Rousseau was concerned here not with individual variations in natural endowments which we can do little about, but with the conventional inequalities of wealth, honour and the capacity to command obedience which can be changed. In order to construct a model of human equality, he imagined a pre-social state of nature, a sort of hominid phase of human evolution in which men were solitary, but healthy, happy and above all free. This freedom was metaphysical, anarchic and personal: original human beings had free will, they were not subject to rules of any kind and they had no superiors. At some point humanity made the transition to what Rousseau calls “nascent society”, a prolonged period whose economic base can best be summarized as hunter-gathering with huts. This second phase represents his ideal of life in society close to nature.

The rot set in with the invention of agriculture or, as Rousseau puts it, wheat and iron. Here he contradicted both Hobbes and Locke. The formation of a civil order (the state) was preceded by a war of all against all marked by the absence of law, which Rousseau insisted was the result of social development, not an original state of nature. Cultivation of the land led to incipient property institutions which, far from being natural, contained the seeds of entrenched inequality. Their culmination awaited the development of political society. He believed that this new social contract was probably arrived at by consensus, but it was a fraudulent one in that the rich thereby gained legal sanction for transmitting unequal property rights in perpetuity. From this inauspicious beginning, political society then usually moved, via a series of revolutions, through three stages:

The establishment of law and the right of property was the first stage, the institution of magistrates the second and the transformation of legitimate into arbitrary power the third and last stage. Thus the status of rich and poor was authorized by the first epoch, that of strong and weak by the second and by the third that of master and slave, which is the last degree of inequality and the stage to which all the others finally lead, until new revolutions dissolve the government altogether and bring it back to legitimacy (Rousseau 1984:131).

One-man-rule closes the circle. “It is here that all individuals become equal again because they are nothing, here where subjects have no longer any law but the will of the master” (Ibid: 134). For Rousseau, the growth of inequality was just one aspect of human alienation in civil society. We need to return from division of labour and dependence on the opinion of others to subjective self-sufficiency. His subversive parable ends with a ringing indictment of economic inequality which could well serve as a warning to our world: “It is manifestly contrary to the law of nature, however defined… that a handful of people should gorge themselves with superfluities while the hungry multitude goes in want of necessities” (Ibid: 137).

Lewis H. Morgan (1877) drew on Rousseau’s model for his own fiercely democratic synthesis of human history, Ancient Society, which likewise used an evolutionary classification that we now call bands, tribes and states, each stage more unequal than the one before. Morgan’s work is normally seen as the launch of modern anthropology proper because of his ability to enrol contemporary ethnographic observations of the Iroquois in an analysis of the historical structures underlying western civilization’s origins in Greece and Rome. Marx and Engels enthusiastically took up Morgan’s work as confirmation of their own critique of the state and capitalism; and the latter, drawing on Marx’s extensive annotations of Ancient Society, made the argument more accessible as The Origin of the Family, Private Property and the State (1884). Engels’s greater emphasis on gender inequality made this a fertile source for the feminist movement in the 1960s and after.

The traditional home of inequality is supposed to be India and Andre Beteille, in Inequality among Men (1977) and other books, has made the subject his special domain, merging social anthropology with comparative sociology. In the United States, Leslie White at Michigan and Julian Steward at Columbia led teams, including Wolf, Sahlins, Service, Harris and Mintz, who took the evolution of the state and class society as their chief focus. Probably the single most impressive work coming out of this American school was Eric Wolf’s Europe and the People without History (1982). But one man tried to redo Morgan in a single book and that was Claude Lévi-Strauss in The Elementary Structures of Kinship (1949). In Tristes Tropiques (1955), Lévi-Strauss acknowledged Rousseau as his master. The aim of Elementary Structures was to revisit Morgan’s three-stage theory of social evolution, drawing on a new and impressive canvas, “the Siberia-Assam axis” and all points southeast as far as the Australian desert. Lévi-Strauss took as his motor of development the forms of marriage exchange and the logic of exogamy. The “restricted reciprocity” of egalitarian bands gave way to the unstable hierarchies of “generalized reciprocity” typical of the Highland Burma tribes. The stratified states of the region turned inwards to endogamy, to the reproduction of class differences and the negation of social reciprocity.

Jack Goody has tried to lift our profession out of a myopic ethnography into an engagement with world history that went out of fashion with the passing of the Victorian founders. Starting with Production and Reproduction (1976), he has produced a score of books over the last three decades investigating why Sub-Saharan Africa differs so strikingly from the pre-industrial societies of Europe and Asia, with a later focus on refuting the West’s claim to being exceptional, especially when compared with Asia (Hart 2006, 2011). The common thread of Goody’s compendious work links him through the Marxist pre-historian Gordon Childe (1954) to Morgan-Engels and ultimately Rousseau. The key to understanding social forms lies in production, which for us means machine production. Civilization or human culture is largely shaped by the means of communication — once writing, now an array of mechanized forms. The site of social struggles is property, now principally conflicts over intellectual property. And his central issue of reproduction has never been more salient than at a time when the aging citizens of rich countries depend on the proliferating mass of young people out there. Kinship needs to be reinvented too.

David Graeber: the first 50 years

Graeber brings his own unique combination of interests and engagements to renewing this “anthropology of unequal society”. Who is he? He spent the 1960s as the child of working-class intellectuals and activists in New York and was a teenager in the 1970s, which turned out to be the hinge decade of our times, leading to a “neoliberal” counter-revolution against post-war social democracy. This decade was framed at one end by the US dollar being taken off the gold standard in 1971 and at the other by a massive interest rate increase in 1979 induced by a second oil price hike. The world economy has been depressed ever since, especially at its western core. Graeber says that he embraced anarchism at sixteen.

The debt crisis of the 1980s was triggered by irresponsible lending of the oil surplus by western banks to Third World kleptocrats (Hart 2000: 142-143) and by the new international regime of high interest rates. In market theory, bad loans are supposed to discipline lenders, but the IMF and World Bank insisted on every penny of added interest being repaid by the governments of poor countries. This was also the time when structural adjustment policies forced those governments to open up their national economies to the free flow of money and commodities, with terrible consequences for public welfare programmes and jobs. If the anti-colonial revolution inspired my generation in the 1960s, Graeber’s internationalism was shaped by this wholesale looting of the successor states. He took an active part in demonstrations against this new phase of “financial globalization”, part of what is now often called the “alter-globalization movement” (Pleyers 2010), though he and his fellow activists call it the “global justice movement”. Its public impact peaked in the years following the financial crisis of 1997-98 (involving Southeast Asia, Russia, Brazil and the failure of a US hedge fund, Long-Term Capital Management), notably through mass mobilizations in Seattle, Genoa and elsewhere. In the Debt book, Graeber claims that they took on the IMF and won.

David Graeber received a doctorate in anthropology from the University of Chicago based on ethnographic and historical research on a former slave village in Madagascar. This was eventually published as a long and exemplary monograph, Lost People: Magic and the legacy of slavery in Madagascar (Graeber 2007a). The history of the slave trade, colonialism and the post-colony figure prominently in how he illustrates global inequality through a focus on debt. Before that, he published a strong collection of essays on value, Toward an Anthropological Theory of Value: The false coin of our own dreams (Graeber 2001), in which he sought to relate economic value (especially value as measured impersonally by money) and the values that shape our subjectivity in society. This hinged on revisiting both Karl Marx and Marcel Mauss, providing the main account in English of how the latter’s cooperative socialism shaped his famous work on the gift (Mauss 1925). A theme of both books is the role of magic and money fetishism in sustaining unequal society.

Politics forms a central strand of Graeber’s work, with four books published so far and more in the works: Fragments of an Anarchist Anthropology (2004), Possibilities: Essays on hierarchy, rebellion, and desire (2007b), Direct Action: An ethnography (2009a) and Revolutions in Reverse: Essays on politics, violence, art, and imagination (2011c). These titles reveal a range of political interests that take in violence, aesthetics and libido. He insists on the “elective affinity” between anthropological theory and method and an anarchist programme of resistance, rebellion and revolution; and this emphasis on “society against the state” makes him a worthy successor to Pierre Clastres (1974). Graeber’s academic career has been fitful, most notoriously when he was “let go” by Yale despite his obvious talent and productivity. This fed rumours about the academic consequences of his political activities. These have led to numerous brushes with the police, but so far not to prolonged incarceration, although his inability to find a job in American universities could be seen as a form of exile.

Debt: The first 5,000 years was published in summer 2011 and Graeber began a year’s sabbatical leave from his teaching job in London by moving to New York, where he became a ubiquitous presence in the print media, television and blogs. In August-September he helped form the first New York City General Assembly which spawned the Occupy Wall Street movement. He has been credited with being the author of that movement’s slogan, “We are the 99%”, and helped to give it an anarchist political style. OWS generated a wave of imitations in the United States and around the world, known collectively as “the Occupy movement”, inviting comparison with the “Arab Spring” and Madrid’s Los Indignados in what seemed then to be a global uprising. Some shared features of this series of political events, such as an emphasis on non-violence, consensual decision-making and the avoidance of sectarian division, evoke Jean-Jacques Rousseau’s idea of the “general will”; and it is not wholly fanciful to compare David Graeber’s career so far with his great predecessor’s.

Graeber and Rousseau both detested the mainstream institutions of the world they lived in and devoted their intellectual efforts to building revolutionary alternatives. This means not being satisfied with reporting how the world is, but rather exploring the dialectic linking the actual to the possible. This in turn implies being willing to mix established genres of research and writing and to develop new ones. Both are prolific writers with an accessible prose style aimed at reaching a mass audience. Both achieved unusual fame for an intellectual and their political practice got them into trouble. Both suffered intimidation, neglect and exile for their beliefs. Both attract admiration and loathing in equal measure. Their originality is incontestable, yet each can at times be silly. There is no point in considering their relative significance. The personal parallels that I point to here reinforce my claim that Graeber’s Debt book should be seen as a specific continuation of that “anthropology of unequal society” begun by Rousseau two and a half centuries ago.

Debt: the argument

Much of the contemporary world revolves round the claims we make on each other and on things: ownership, obligations, contracts and payment of taxes, wages, rents, fees etc. David Graeber’s book, Debt: The first 5,000 years, aims to illuminate these questions through a focus on debt seen in very wide historical perspective. It is of course a central issue in global politics today, at every level of society. Every day sees another example of a class struggle between debtors and creditors to shape the distribution of costs after a long credit boom went dramatically bust.

We might be indebted to God, the sovereign or our parents for the gift of life, but Graeber rightly insists that the social logic of debt is revealed most clearly when money is involved. He cites approvingly an early twentieth-century writer who insisted that “money is debt”. This book of over 500 pages is rich in argument and knowledge. The notes and references are compendious, ranging over five millennia of the main Eurasian civilizations (ancient Mesopotamia, Egypt and the Mediterranean, medieval Europe, China, India and Islam) and the ethnography of stateless societies in Africa, the Americas and the Pacific. Its twelve chapters are framed by an introduction to our moral confusion concerning debt and a concluding sketch of the present rupture in world history that began in the early 1970s. Graeber’s case is founded on anthropological and historical comparison more than his grasp of contemporary political economy, although he has plenty to say in passing about that. There is also a current of populist culture running through the book and this is reinforced by a prose style aimed at closing the gap between author and reader that his formidable scholarship might otherwise open up.

Perhaps this aspect of the book may be illustrated by introducing a recent short film. Paul Grignon’s Money as Debt (2006, 47 minutes) — an underground hit in activist circles — seeks to explain where money comes from. Most of the money in circulation is issued by banks whenever they make a loan. The real basis of money, the film claims, is thus our signature whenever we promise to repay a debt. The banks create that money by a stroke of the pen and the promise is then bought and sold in increasingly complex ways. The total debt incurred by government, corporations, small businesses and consumers spirals continuously upwards since interest must be paid on it all. Although the general idea is an old one, it has taken on added salience at a time when the supply of money, which could once plausibly be represented as public currency in circulation, has been overtaken by the creation of private debt.

The film’s attempt to demystify money is admirable, but its message is misleading. Debt and credit are two sides of the same coin, the one evoking passivity in the face of power, the other individual empowerment. The origin of money in France and Germany is considered to be debt, whereas in the United States and Britain it is traditionally conceived of as credit. Either term alone is loaded, missing the dialectical character of the relations involved. Money as Debt demonizes the banks and interest in particular, letting the audience off the hook by not showing the active role most of us play in sustaining the system. Money today is issued by a dispersed global network of economic institutions of many kinds; and the norm of economic growth is fed by a widespread desire for self-improvement, not just by bank interest.

David Graeber offers a lot more than this, of course; but his book feeds off popular currents too, which is not surprising given how much time he spends outside the classroom and his study. His analytical framework is spelled out in great detail over six chapters. The first two tackle the origins of money in barter and “primordial debt” respectively. He shows, forcefully and elegantly, how implausible the standard liberal origin myth of money as a medium of exchange is; but he also rejects as a nationalist myth the main opposing theory that traces money’s origins as a means of payment and unit of account to state power. In the first case he follows Polanyi (1944), but by distancing himself from the second, he highlights the interdependence of states and markets in money’s origins. A short chapter shows that money was always both a commodity and a debt-token (“the two sides of the coin”, Hart 1986), giving rise to a lot of political and moral contestation, especially in the ancient world. Following Nietzsche, Graeber argues that money introduced for the first time a measure of the unequal relations between buyer and seller, creditor and debtor. Whereas Rousseau traced inequality to the invention of property, he locates the roots of human bondage, slavery, tribute and organized violence in debt relations. The contradictions of indebtedness, fed by money and markets, led the first world religions to articulate notions of freedom and redemption in response to escalating class conflict between creditors and debtors, often involving calls for debt cancellation.

The author now lays out his positive story to counter the one advanced by mainstream liberal economics. “A brief treatise on the moral grounds of economic relations” makes explicit his critique of the attempt to construct “the economy” as a sphere separate from society in general. This owes something to Polanyi’s (1957) universal triad of distributive mechanisms – reciprocity, redistribution and market – here identified as “everyday communism”, hierarchy and reciprocity. By the first Graeber means a human capacity for sharing or “baseline sociality”; the second is sometimes confused with the third, since unequal relations are often represented as an exchange – you give me your crops in return for not being beaten up. The difference between hierarchy and reciprocity is that debt is permanent in the first case, but temporary in the second. The western middle classes train their children to say please and thank you as a way of limiting the debt incurred by being given something. All three principles are present everywhere, but their relative emphasis is coloured by dominant economic forms. Thus “communism” is indispensable to modern work practices, but capitalism is a lousy way of harnessing our human capacity for cooperation.

The next two chapters introduce what is for me the main idea of the book, the contrast between “human economies” and those dominated by money and markets (Graeber prefers to call them “commercial economies” and sometimes “capitalism”). First he identifies the independent characteristics of human economies and then shows what happens when they are forcefully incorporated into the economic orbit of larger “civilisations”, including our own. This is to some extent a great divide theory of history, although, as Mauss would insist, elements of human economy persist in capitalist societies. There is a sense in which “human economies” are a world we have lost, but might recover after the revolution. Graeber is at pains to point out that these societies are not necessarily more humane, just that “they are economic systems primarily concerned not with the accumulation of wealth, but with the creation, destruction, and rearranging of human beings” (2011a: 130). They use money, but mainly as “social currencies” whose aim is to maintain relations between people rather than to purchase things.

“In a human economy, each person is unique and of incomparable value, because each is a unique nexus of relations with others” (Ibid: 158). Yet their money forms make it possible to treat people as quantitatively identical in exchange and that requires a measure of violence. Brutality — not just conceptual, but physical too — is omnipresent, more in some cases than others. Violence is inseparable from money and debt, even in the most “human” of economies, where ripping people out of their familiar context is commonplace. This, however, gets taken to another level when they are drawn into systems like the Atlantic slave trade or the western colonial empires of yesteryear. The following extended reflection on slavery and freedom — a pair that Graeber sees as being driven by a culture of honour and indebtedness — culminates in the ultimate contradiction underpinning modern liberal economics, a worldview that conceives of individuals as being socially isolated in a way that could only be prepared for by a long history of enslaving conquered peoples. Since we cannot easily embrace this account of our own history, it is not surprising that we confuse morality and power when thinking about debt.

So far, Graeber has relied heavily on anthropological material, especially from African societies, to illustrate the world that the West transformed, although his account of money’s origins draws quite heavily on the example of ancient Mesopotamia. Now he formalizes his theory of money to organize a compendious review of world history in four stages. These are: the era from c.3000 BC that saw the first urban civilizations; the “Axial Age” which he, rather unusually, dates from 800 BC to 600 AD; the Middle Ages (600-1450 AD); and the age of “the great capitalist empires”, from 1450 AD to the US dollar’s symbolic rupture with the gold standard in 1971. As this last date suggests, the periodization relies heavily on historical oscillations between broad types of money. Graeber calls these “credit” and “bullion”, that is, money as a virtual measure of personal relations, like IOUs, and as currency or impersonal things made from precious metals for circulation.

Money started out as a unit of account, administered by institutions such as temples and banks, as well as states, largely as a way of measuring debt relations between people. Coinage was introduced in the first millennium as part of a complex linking warfare, mercenary soldiers, slavery, looting, mines, trade and the provisioning of armies on the move. Graeber calls this “the military-coinage-slavery complex” of which Alexander the Great, for example, was a master. Hence our word “soldier”, which derives from the coin with which such men were paid. The so-called “dark ages” offered some relief from this regime and for most of the medieval period, metal currencies were in very short supply and money once again took the dominant form of virtual credit. India, China and the Islamic world are enlisted here to supplement what we know of Europe. But then the discovery of the new world opened up the phase we are familiar with from the last half-millennium, when western imperialism revived the earlier tradition of warfare and slavery lubricated by bullion.

The last four decades are obviously transitional, but the recent rise of virtual credit money suggests the possibility of another long swing of history away from the principles that underpinned the world the West made. It could be a multi-polar world, more like the middle ages than the last two centuries. It could offer more scope for “human economies” or at least “social currencies”. The debt crisis might provoke revolutions and then, who knows, debt cancellation along the lines of the ancient jubilee. Perhaps the whole institutional complex based on states, money and markets or capitalism will be replaced by forms of society more directly responsive to ordinary people and their capacity for “everyday communism”.

All of this is touched on in the final chapter. But Graeber leaves these “policy conclusions” deliberately vague. His aim in this book has been to draw his readers into a vision of human history that runs counter to what makes their social predicament supposedly inevitable. It is a vision inspired in part by his profession as an anthropologist, in part by his political engagement as an activist. Both commitments eschew drawing up programmes for others to follow. Occupy Wall Street has been criticized for its failure to enumerate a list of “demands”. No doubt much the same could be said of this book; but then readers, including this reviewer, will be inspired by it in concrete ways to imagine possibilities that its author could not have envisaged.

Towards a human economy

David Graeber and I came up with the term “human economy” independently during the last decade (Graeber 2009b, 2011a; Hart 2008; Hart, Laville and Cattani 2010). As editors of The Human Economy: A citizen’s guide, we distanced ourselves, in the introduction and our editorial approach, from any “revolutionary” eschatology that suggested society had reached the end of something and would soon be launched on a quite new trajectory. The idea of a “human economy” drew attention to the fact that people do a lot more for themselves than an exclusive focus on the dominant economic institutions would suggest. Against a singular notion of the economy as “capitalism”, we argued that all societies combine a plurality of economic forms and several of these are distributed across history, even if their combination is strongly coloured by the dominant economic form in particular times and places.

For example, in his famous essay on The Gift (1925), Marcel Mauss showed that other economic principles were present in capitalist societies and that understanding this would provide a sounder basis for building non-capitalist alternatives than the Bolshevik revolution’s attempt to break with markets and money entirely. Karl Polanyi too, in his various writings, insisted that the human economy throughout history combined a number of mechanisms of which the market was only one. We argued therefore that the idea of radical transformation of an economy conceived of monolithically as capitalism into its opposite was an inappropriate way to approach economic change. We should rather pay attention to the full range of what people are doing already and build economic initiatives around giving these a new direction and emphasis, instead of supposing that economic change has to be reinvented from scratch. Although this looks like a gradualist approach to economic improvement, its widespread adoption would have revolutionary consequences.

David Graeber’s anarchist politics inform his economic analysis; and he has always taken an anti-statist and anti-capitalist position, with markets and money usually being subsumed under the concept of capitalism. That is, he sees the future as being based on the opposite of our capitalist states. The core of his politics is “direct action” which he has practised and written about as an ethnographer (Graeber 2009a). In The Human Economy, we argued that people everywhere rely on a wide range of organizations in their economic lives: markets, nation-states, corporations, cities, voluntary associations, families, virtual networks, informal economies, crime. We should be looking for a more progressive mix of these things. We can’t afford to turn our backs on institutions that have helped humanity make the transition to modern world society. Large-scale bureaucracies co-exist with varieties of popular self-organization and we have to make them work together rather than at cross-purposes, as they often do now.

Graeber also believes, as we have seen, that economic life everywhere is based on a plural combination of moral principles which take on a different complexion when organized by dominant forms. Thus, helping each other as equals is essential to capitalist societies, but capitalism distorts and marginalizes this human propensity. Yet he appears to expect a radical rupture with capitalist states fairly soon and this is reflected in a stages theory of history, with categories to match. At first sight, these positions (let’s call them “reform” and “revolution”) are incompatible, but recent political developments (the “Arab Spring” and Occupy movements of 2011, however indeterminate their immediate outcomes) point to the need to transcend such an opposition.

The gap between our approaches to making the economy human is therefore narrowing. Even so, there are differences of theory and method that point to some residual reservations I have about the Debt book. The first of these concerns Graeber’s preference for lumping together states, money, markets, debt and capitalism, along with violence, war and slavery as their habitual bedfellows. Money and markets have redemptive qualities that in my view (Hart 2000) could be put to progressive economic ends in non-capitalist forms; nor do I imagine that modern institutions such as states, corporations and bureaucracy will soon die away. Anti-capitalism as a revolutionary strategy begs the question of the plurality of modern economic institutions. As Mauss showed (Hart 2007), human economies exist in the cracks of capitalist societies. David Graeber seems to agree, at least when it comes to finding “everyday communism” there and, by refusing to sanitize “human economies” in their pristine form, he modifies the categorical and historical division separating them and commercial economies. Revolutionary binaries seem to surface at various points in his book, but an underlying tendency to discern continuity in human economic practices is just as much a feature of David Graeber’s anthropological vision.

An argument of Debt’s scope hasn’t been made by a professional anthropologist for the best part of a century, certainly not one with as much contemporary relevance. The discipline largely abandoned “conjectural history” in the twentieth century in order to embrace the narrower local perspectives afforded by ethnographic fieldwork. Works of broad comparison such as Wolf’s and Goody’s were the exception to this trend. Inevitably Graeber’s methods will come under scrutiny, not just from fellow professionals, but from the general public too. (He tells me that academics don’t read footnotes any more, but laymen do.) To this reader, the first half of the book — which relies heavily on ethnographic sources to spell out the argument — is more systematic, in terms of both analytical coherence and documentation, than the second, concerned as it is with fleshing out his cycles of history. In either case, little attempt is made to analyse contemporary political economy, although Graeber makes more explicit reference to this than, for example, does Mauss in The Gift, where readers’ understanding of capitalist markets is taken for granted. Nowhere in the book is any reference made to the digital revolution in communications of our times and its scope to transform economies, whether human or commercial (Hart 2000, 2005).

Well, that is not quite true, for the author does occasionally introduce anecdotes based on common knowledge or his personal experience. The problem is that many readers who take on trust what he has to say about ancient Mesopotamia or the Tiv may find these stories contradicted by their own knowledge. It is something akin to “Time magazine syndrome”: we accept what Time has to say about the world in general until it impinges on what we know ourselves and then its credibility dissolves. Thus:

Apple Computers is a famous example: it was founded by (mostly Republican) computer engineers who broke from IBM in Silicon Valley in the 1980s, forming little democratic circles of twenty to forty people with their laptops in each other’s garages (Graeber 2011a: 96).

The veracity of this anecdote has been challenged by numerous Californian bloggers and the author’s scholarship with it. Graeber is aware of the pitfalls of making contemporary allusions. In the final chapter (Ibid: 362-3), he cleverly introduces an urban myth he often heard about the gold stored under the World Trade Centre and then (almost) rehabilitates that myth using documented sources. Fortunately, David Graeber has not been deterred by the pedants from crossing the line between academic and general knowledge in this book and his readers benefit immensely as a result. I contributed to the publisher’s blurb for this book and said that he is “the finest anthropological scholar I know”. I stand by that. The very long essay he recently published on the divine kingship of the Shilluk (Graeber 2011b) covers the same ground as a number of famous anthropologists from Frazer onwards, but with an unsurpassed range of scholarship, as well as a democratic political perspective. Inevitably in a book like this one, the fact police will catch him out sometimes. But it is a work of immense erudition and deserves to be celebrated as such.

Our world is still massively unequal and we may be entering a period of war and revolution comparable to the “Second Thirty Years War” of 1914-1945 which came after the last time that several decades of financial imperialism went bust. Capitalism itself sometimes seems today to have reverted to a norm of rent-seeking that resembles the arbitrary inequality of the Old Regime more than Victorian industry. The pursuit of economic democracy is more elusive than ever; yet humanity has also devised universal means of communication at last adequate to the expression of universal ideas. Jean-Jacques Rousseau would have leapt at the chance to make use of this opportunity and several illustrious successors did so in their own way during the last two centuries. We need an anthropology that rises to the challenge posed by our common human predicament today. No-one has done more to meet that challenge than David Graeber, in his work as a whole, but especially in this book.


Beteille, Andre   1977   Inequality among Men. Blackwell: Oxford.

Childe, V. Gordon   1954   What Happened in History. Penguin: Harmondsworth.

Clastres, Pierre    1989 (1974)    Society against the State: Essays in political anthropology. Zone Books: New York.

Engels, Friedrich   1972 (1884)   The Origin of the Family, Private Property, and the State. Pathfinder: New York.

Goody, Jack   1976   Production and Reproduction: A Comparative Study of the Domestic Domain. Cambridge University Press: Cambridge.

Graeber, David   2001   Toward an Anthropological Theory of Value: The false coin of our own dreams. Palgrave: New York.

——    2004    Fragments of an Anarchist Anthropology. Prickly Paradigm: Chicago.

——    2007a   Lost People: Magic and the legacy of slavery in Madagascar. Indiana University Press: Bloomington IN.

——   2007b   Possibilities: Essays on hierarchy, rebellion, and desire. AK Press: Oakland CA.

——    2009a   Direct Action: An ethnography. AK Press: Baltimore MD.

——    2009b   Debt, Violence, and Impersonal Markets: Polanyian Meditations. In Chris Hann and K. Hart editors Market and Society: The Great Transformation today. Cambridge University Press: Cambridge, 106-132.

——   2011a    Debt: The first 5,000 years. Melville House: New York.

——   2011b   The divine kingship of the Shilluk: On violence, utopia, and the human condition or elements for an archaeology of sovereignty, Hau: Journal of Ethnographic Theory 1.1: 1-62.

——   2011c   Revolutions in Reverse: Essays on politics, violence, art, and imagination. Autonomedia: New York.

Hann, Chris and K. Hart   2011   Economic Anthropology: History, ethnography, critique. Polity: Cambridge.

Hart, Keith   1986   Heads or tails? Two sides of the coin. Man 21 (3): 637–56.

——   2000   The Memory Bank: Money in an unequal world. Profile: London; republished in 2001 as Money in an Unequal World. Texere: New York.

——   2005   The Hit Man’s Dilemma: Or, business, personal and impersonal. Prickly Paradigm: Chicago.

——   2006   Agrarian civilization and world society. In D. Olson and M. Cole (eds.), Technology, Literacy and the Evolution of Society: Implications of the work of Jack Goody. Lawrence Erlbaum: Mahwah, NJ, 29–48.

——   2007   Marcel Mauss: in pursuit of the whole – a review essay. Comparative Studies in Society and History 49 (2): 473–85.

——   2008   The human economy. ASAonline 1.

——   2011   Jack Goody’s vision of world history and African development today (Jack Goody Lecture 2011). Halle/Saale: Max Planck Institute for Social Anthropology, Department II.

Hart, Keith, J-L. Laville and A. Cattani editors   2010   The Human Economy: A citizen’s guide. Polity: Cambridge.

Kant, Immanuel   2006   Anthropology from a Pragmatic Point of View. Cambridge University Press: Cambridge.

Lévi-Strauss, Claude   1969 (1949)   The Elementary Structures of Kinship. Beacon: Boston.

——    1973 (1955) Tristes Tropiques. Cape: London.

Locke, John   1960 (1690)   Two Treatises of Government. Cambridge University Press: Cambridge.

Marx, Karl   1970 (1867)   Capital Volume 1. Lawrence and Wishart: London.

Mauss, Marcel   1990 (1925)  The Gift: The form and reason for exchange in archaic societies. Routledge: London.

Morgan, Lewis H.   1964 (1877)   Ancient Society. Belknap Press: Cambridge MA.

Pleyers, Geoffrey   2010   Alter-globalization: Becoming actors in a global age. Polity: Cambridge.

Polanyi, Karl   2001 (1944)   The Great Transformation: The political and economic origins of our times. Beacon: Boston.

——   1957   The economy as instituted process. In K. Polanyi, C. Arensberg and H. Pearson editors Trade and Market in the early Empires. Free Press: Glencoe IL, 243-269.

Rousseau, Jean-Jacques   1984 (1754)   Discourse on Inequality. Penguin: Harmondsworth.

Jean-Luc Godard on Digital Media, Monday 10 October 2011

Jean-Luc Godard, Director: “The so-called ‘digital’ is not a mere technical medium, but a medium of thought. And when modern democracies turn technical thought into a separate domain, those modern democracies incline towards totalitarianism.”

The Internet Is Increasingly Political (Folha de S.Paulo)

JC e-mail 4464, March 27, 2012.


The lawyer Marcel Leonardi was one of the main contributors to the public debate that produced the Marco Civil da Internet, a bill proposed by the Ministry of Justice to establish principles such as neutrality and privacy for the Brazilian internet. Some time later, Leonardi was hired as Google’s director of public policy in Brazil.

In other words, he is responsible for talking to the government, coordinating the defense of users in cases such as the royalty claims by the Central Collection and Distribution Office (Ecad) over YouTube videos embedded in blogs, and bringing basic internet principles into the public sphere.

So much so that he shuttles back and forth to Brasília and takes part in public hearings to present Google’s opinion – and his own – on bills under discussion that affect how people use the internet, such as the Consumer Protection Code, the Copyright Law, and the Marco Civil da Internet itself.

The lawyer also answers official inquiries on Google’s behalf. Recently, the Ministry of Justice demanded explanations about the changes to the company’s privacy rules. The company, after all, is funded by advertising – and in this model, users’ personal data is highly valuable. This is where the company’s interests and users’ interests diverge. Leonardi says it is a matter of making users aware of the new rules.

Wearing a T-shirt and jeans instead of his usual suit, Google’s point man makes one thing clear: today companies also do politics. More and more.

The Ministry of Justice questioned the changes to Google’s privacy policy. How did you respond?

We are willing to work with the authorities. There is a lot of apprehension about what we do with regard to privacy, but little understanding. Google used to have separate policies for each product. But all of them, with two exceptions, already said that data from one service could be used in other services. So the unification changed nothing. The data we collect is the same. The exceptions were YouTube, which had its own policy, and search history, which now can expressly be used in other Google products.

Which is worrying.

We don’t consider it alarming, because we give users the tools to control it. Users go to the dashboard and choose whether or not to keep their search history. You can disable it completely. It would be alarming if it happened without users knowing what was going on. Every company in the industry adopts this model.

Personal data is valuable, and people have no idea what is done with that information.

The change involved the largest notification effort in Google’s history. We announced it on January 24, and the new rules only took effect on March 1. Throughout that period there was a notice on every page. The idea was to cut down the legalese, because the internet industry has always been told that its policies and terms of use should be clearer. We trimmed them radically, but then you run into this problem: at what point can you force someone to read? People always say they are concerned about privacy, but they act differently.

Google was recently convicted over a post on Orkut. Is holding companies liable for user content a recurring issue?

It’s an old debate. Worldwide there is the concept that the platform is not liable. In the US and Europe, the law says so expressly. Brazil does not yet have a specific law. One of the proposals is the Marco Civil da Internet, which says liability only arises from failure to comply with a court order. In the absence of laws, the courts analyze case by case. Google always appeals to show that, by logic and common sense, the platform bears no liability.

How does the content removal process work – say, for a blog post?

In copyright cases, Google receives a notification from someone who demonstrates that they hold the rights and that the use was not authorized, and there is a check on whether it is in fact a violation. But there are requirements. Under US law, the requirements of the DMCA (Digital Millennium Copyright Act, the copyright law enacted in 1998); in Brazil, those of the copyright law.

Google itself does the verification?

There are internal teams that review it. If there is an infringement, the removal happens without judicial intervention, because it is in line with our policy of not allowing copyright violations.

Do you agree with the Ministry of Culture’s proposal, in the new Copyright Law, to institutionalize a notification mechanism?

It is still controversial. They intended to include a mechanism that turns into law a practice many companies already adopt. The problem with that model is that it leaves a lot of room for abuse. We see a lot of that in the US. Everyone tries to frame their own situation as a violation to justify a removal.

Why did you take a stand against Ecad’s royalty claims over YouTube videos?

We saw a distortion in Ecad’s position. We thought it was extremely important to make public our view that we did not go along with it, that their interpretation of the law was wrong. The big problem is that new business models want to flourish, but they run into an outdated interpretation of copyright law, and that keeps them from growing. Spotify is an example. You pay 10 euros and get access to millions of songs. Often piracy is nothing more than pent-up demand that the market is not meeting.

Is the copyright law reform a step forward?

It’s an open question. My impression is that the intermediate version is somewhat more open and friendly to these models. It had compulsory licensing, which was interesting, and language that would allow more flexible use.

Did you weigh in on that text?

We take part in the debates, but after the public consultation the process becomes closed. In Congress you can talk, and that matters. In fact, if it weren’t for the activists, much of internet regulation in Brazil would have turned out differently. All the opposition to the Azeredo bill, all the pressure for the Marco Civil, is the fruit of that engagement. In the US, the SOPA case was interesting. The fact that Wikipedia went dark scared a lot of people. Only then did awareness of the law’s risks take hold.

That US bill sparked a movement in defense of internet principles. Are companies taking on a political role?

There is no way for us not to think politically today. You can’t just stare at your own navel and assume that, as long as business is good, there is no need to talk. Because there are larger questions at stake. That is what thinking politically means: all the companies in the sector tend to talk and to better understand how this works.

Is there a need for an updated cybercrime law?

There is a need for judges and criminal law practitioners to understand the internet better, because most of what is in the law already works. We cannot run the risk of adopting a text so generic that you could be poking around on your phone, accidentally access a system, and be told you committed a crime.

Is Brazil still the leader in content removal requests?

Yes. Our transparency report lists every government or court request for content removal. Brazil leads in removals because here it is easy. You can go to a small-claims court, at no cost and without a lawyer, and ask for an injunction to take a blog offline. Beyond that, many people are used to a culture of “when in doubt, ask for removal.”

Which can amount to censorship.

Yes. We have already encountered alarming cases. There is a growing number of companies criticized by consumers that file suit to remove any negative reference.

(Folha de S.Paulo)

Revealed – the capitalist network that runs the world (New Scientist)

19 October 2011 by Andy Coghlan and Debora MacKenzie

The 1318 transnational corporations that form the core of the economy. Superconnected companies are red, very connected companies are yellow. The size of the dot represents revenue (Image: PLoS One)

AS PROTESTS against financial power sweep the world this week, science may have confirmed the protesters’ worst fears. An analysis of the relationships between 43,000 transnational corporations has identified a relatively small group of companies, mainly banks, with disproportionate power over the global economy.

The study’s assumptions have attracted some criticism, but complex systems analysts contacted by New Scientist say it is a unique effort to untangle control in the global economy. Pushing the analysis further, they say, could help to identify ways of making global capitalism more stable.

The idea that a few bankers control a large chunk of the global economy might not seem like news to New York’s Occupy Wall Street movement and protesters elsewhere (see photo). But the study, by a trio of complex systems theorists at the Swiss Federal Institute of Technology in Zurich, is the first to go beyond ideology to empirically identify such a network of power. It combines the mathematics long used to model natural systems with comprehensive corporate data to map ownership among the world’s transnational corporations (TNCs).

“Reality is so complex, we must move away from dogma, whether it’s conspiracy theories or free-market,” says James Glattfelder. “Our analysis is reality-based.”

Previous studies have found that a few TNCs own large chunks of the world’s economy, but they included only a limited number of companies and omitted indirect ownerships, so could not say how this affected the global economy – whether it made it more or less stable, for instance.

The Zurich team can. From Orbis 2007, a database listing 37 million companies and investors worldwide, they pulled out all 43,060 TNCs and the share ownerships linking them. Then they constructed a model of which companies controlled others through shareholding networks, coupled with each company’s operating revenues, to map the structure of economic power.

The work, to be published in PLoS One, revealed a core of 1318 companies with interlocking ownerships (see image). Each of the 1318 had ties to two or more other companies, and on average they were connected to 20. What’s more, although they represented 20 per cent of global operating revenues, the 1318 appeared to collectively own through their shares the majority of the world’s large blue chip and manufacturing firms – the “real” economy – representing a further 60 per cent of global revenues.

When the team further untangled the web of ownership, it found much of it tracked back to a “super-entity” of 147 even more tightly knit companies – all of their ownership was held by other members of the super-entity – that controlled 40 per cent of the total wealth in the network. “In effect, less than 1 per cent of the companies were able to control 40 per cent of the entire network,” says Glattfelder. Most were financial institutions. The top 20 included Barclays Bank, JPMorgan Chase & Co, and The Goldman Sachs Group.
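The core-detection step can be sketched in miniature. The firm names and toy graph below are invented for illustration, and the real study weighted edges by shareholding stakes and operating revenue; this reduced sketch shows only the structural idea, that a “super-entity” is a set of firms whose ownership loops back on itself (in graph terms, a strongly connected component):

```python
from collections import defaultdict

# Toy directed ownership graph: an edge A -> B means A holds shares in B.
# All names are hypothetical; the Zurich team's actual data came from Orbis.
edges = [
    ("BankA", "BankB"), ("BankB", "BankC"), ("BankC", "BankA"),  # mutually-owning core
    ("BankA", "FirmX"), ("BankC", "FirmY"),                      # core owns periphery
    ("FirmZ", "FirmX"),                                          # unconnected outsider
]

graph = defaultdict(set)
for src, dst in edges:
    graph[src].add(dst)

def reachable(start):
    """All nodes reachable from `start` by following chains of shareholdings."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

nodes = {n for edge in edges for n in edge}

# Two firms belong to the same tightly knit core if each can reach
# the other through ownership chains - ownership "held by other members".
core = {u for u in nodes
        if any(u in reachable(v) and v in reachable(u)
               for v in nodes if v != u)}

print(sorted(core))  # the three banks form the core; the firms do not
```

On the toy data this isolates the three banks as the interlocked core, while the peripheral firms they own (and the outsider) are excluded, mirroring how the 147-company super-entity sits inside the larger 1318-company web.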

John Driffill of the University of London, a macroeconomics expert, says the value of the analysis is not just to see if a small number of people controls the global economy, but rather its insights into economic stability.

Concentration of power is not good or bad in itself, says the Zurich team, but the core’s tight interconnections could be. As the world learned in 2008, such networks are unstable. “If one [company] suffers distress,” says Glattfelder, “this propagates.”

“It’s disconcerting to see how connected things really are,” agrees George Sugihara of the Scripps Institution of Oceanography in La Jolla, California, a complex systems expert who has advised Deutsche Bank.

Yaneer Bar-Yam, head of the New England Complex Systems Institute (NECSI), warns that the analysis assumes ownership equates to control, which is not always true. Most company shares are held by fund managers who may or may not control what the companies they part-own actually do. The impact of this on the system’s behaviour, he says, requires more analysis.

Crucially, by identifying the architecture of global economic power, the analysis could help make it more stable. By finding the vulnerable aspects of the system, economists can suggest measures to prevent future collapses spreading through the entire economy. Glattfelder says we may need global anti-trust rules, which now exist only at national level, to limit over-connection among TNCs. Bar-Yam says the analysis suggests one possible solution: firms should be taxed for excess interconnectivity to discourage this risk.

One thing won’t chime with some of the protesters’ claims: the super-entity is unlikely to be the intentional result of a conspiracy to rule the world. “Such structures are common in nature,” says Sugihara.

Newcomers to any network connect preferentially to highly connected members. TNCs buy shares in each other for business reasons, not for world domination. If connectedness clusters, so does wealth, says Dan Braha of NECSI: in similar models, money flows towards the most highly connected members. The Zurich study, says Sugihara, “is strong evidence that simple rules governing TNCs give rise spontaneously to highly connected groups”. Or as Braha puts it: “The Occupy Wall Street claim that 1 per cent of people have most of the wealth reflects a logical phase of the self-organising economy.”
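The “rich get richer” dynamic Braha describes can be simulated in a few lines. This is a generic preferential-attachment toy model (my assumption, not the Zurich team’s actual method): each new firm links to an existing one with probability proportional to its current number of connections, and connectivity spontaneously concentrates in a few hubs:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

# Start with two linked firms; each firm's "degree" counts its connections.
degree = {0: 1, 1: 1}
links = [(0, 1)]

for new in range(2, 500):
    # Pick an attachment target weighted by current degree:
    # already-connected firms are proportionally more likely to gain links.
    target = random.choices(list(degree), weights=list(degree.values()))[0]
    links.append((new, target))
    degree[new] = 1
    degree[target] += 1

# Share of all connections held by the 10 best-connected firms out of 500.
top10 = sorted(degree.values(), reverse=True)[:10]
share = sum(top10) / sum(degree.values())
print(f"top 10 of 500 firms hold {share:.0%} of all connections")
```

Under a uniform attachment rule the top 10 firms would hold roughly 2 per cent of the links; with preferential attachment their share is many times larger, which is the self-organising concentration the NECSI researchers point to.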

So, the super-entity may not result from conspiracy. The real question, says the Zurich team, is whether it can exert concerted political power. Driffill feels 147 is too many to sustain collusion. Braha suspects they will compete in the market but act together on common interests. Resisting changes to the network structure may be one such common interest.

The top 50 of the 147 superconnected companies

1. Barclays plc
2. Capital Group Companies Inc
3. FMR Corporation
4. AXA
5. State Street Corporation
6. JP Morgan Chase & Co
7. Legal & General Group plc
8. Vanguard Group Inc
10. Merrill Lynch & Co Inc
11. Wellington Management Co LLP
12. Deutsche Bank AG
13. Franklin Resources Inc
14. Credit Suisse Group
15. Walton Enterprises LLC
16. Bank of New York Mellon Corp
17. Natixis
18. Goldman Sachs Group Inc
19. T Rowe Price Group Inc
20. Legg Mason Inc
21. Morgan Stanley
22. Mitsubishi UFJ Financial Group Inc
23. Northern Trust Corporation
24. Société Générale
25. Bank of America Corporation
26. Lloyds TSB Group plc
27. Invesco plc
28. Allianz SE
29. TIAA
30. Old Mutual Public Limited Company
31. Aviva plc
32. Schroders plc
33. Dodge & Cox
34. Lehman Brothers Holdings Inc*
35. Sun Life Financial Inc
36. Standard Life plc
37. CNCE
38. Nomura Holdings Inc
39. The Depository Trust Company
40. Massachusetts Mutual Life Insurance
41. ING Groep NV
42. Brandes Investment Partners LP
43. Unicredito Italiano SPA
44. Deposit Insurance Corporation of Japan
45. Vereniging Aegon
46. BNP Paribas
47. Affiliated Managers Group Inc
48. Resona Holdings Inc
49. Capital Group International Inc
50. China Petrochemical Group Company

* Lehman still existed in the 2007 dataset used


Rick Perry officials spark revolt after doctoring environment report (The Guardian)

Scientists ask for names to be removed after mentions of climate change and sea-level rise taken out by Texas officials

Suzanne Goldenberg, US environment correspondent, Friday 14 October 2011 13.05 BST

Republican presidential hopeful Texas Gov. Rick Perry

Rick Perry’s administration deleted references to climate change and sea-level rise from the report. Photograph: Evan Vucci/AP

Officials in Rick Perry’s home state of Texas have set off a scientists’ revolt after purging mentions of climate change and sea-level rise from what was supposed to be a landmark environmental report. The scientists said they were disowning the report on the state of Galveston Bay because of political interference and censorship from Perry appointees at the state’s environmental agency.

By academic standards, the protest amounts to the beginnings of a rebellion: every single scientist associated with the 200-page report has demanded their names be struck from the document. “None of us can be party to scientific censorship so we would all have our names removed,” said Jim Lester, a co-author of the report and vice-president of the Houston Advanced Research Centre.

“To me it is simply a question of maintaining scientific credibility. This is simply antithetical to what a scientist does,” Lester said. “We can’t be censored.” Scientists see Texas as at high risk because of climate change, from the increased exposure to hurricanes and extreme weather on its long coastline to this summer’s season of wildfires and drought.

However, Perry, in his run for the Republican nomination, has elevated denial of science, from climate change to evolution, to an art form. He opposes any regulation of industry, and has repeatedly challenged the authority of the Environmental Protection Agency.

Texas is the only state to refuse to sign on to the federal government’s new regulations on greenhouse gas emissions. “I like to tell people we live in a state of denial in the state of Texas,” said John Anderson, an oceanographer at Rice University and author of the chapter targeted by the government censors.

That state of denial percolated down to the leadership of the Texas Commission on Environmental Quality. The agency chief, who was appointed by Perry, is known to doubt the science of climate change. “The current chair of the commission, Bryan Shaw, commonly talks about how human-induced climate change is a hoax,” said Anderson.

But scientists said they still hoped to avoid a clash by simply avoiding direct reference to human causes of climate change and by sticking to materials from peer-reviewed journals. However, that plan began to unravel when officials from the agency made numerous unauthorised changes to Anderson’s chapter, deleting references to climate change, sea-level rise and wetlands destruction.

“It is basically saying that the state of Texas doesn’t accept science results published in Science magazine,” Anderson said. “That’s going pretty far.”

Officials even deleted a reference to the sea level at Galveston Bay rising five times faster than the long-term average – 3mm a year compared to 0.5mm a year – which Anderson noted was a scientific fact. “They just simply went through and summarily struck out any reference to climate change, any reference to sea level rise, any reference to human influence – it was edited or eliminated,” said Anderson. “That’s not scientific review, that’s just straightforward censorship.”

Mother Jones has tracked the changes. The agency has defended its actions. “It would be irresponsible to take whatever is sent to us and publish it,” Andrea Morrow, a spokeswoman said in an emailed statement. “Information was included in a report that we disagree with.”

She said Anderson’s report had been “inconsistent with current agency policy”, and that he had refused to change it. She refused to answer any questions. Campaigners said the censorship by the Texas state authorities was a throwback to the George Bush era when White House officials also interfered with scientific reports on climate change.

In the last few years, however, such politicisation of science has spread to the states. In the most notorious case, Virginia’s attorney general Ken Cuccinelli, who is a professed doubter of climate science, has spent a year investigating grants made to a prominent climate scientist Michael Mann, when he was at a state university in Virginia.

Several courts have rejected Cuccinelli’s demands for a subpoena for the emails. In Utah, meanwhile, Mike Noel, a Republican member of the Utah state legislature, called on the state university to sack a physicist who had criticised climate science doubters.

The university rejected Noel’s demand, but the physicist, Robert Davies, said such actions had had a chilling effect on climate science in the state. “We do have very accomplished scientists in this state who are quite fearful of retribution from lawmakers, and who consequently refuse to speak up on this very important topic. And the loser is the public,” Davies said in an email.

“By employing these intimidation tactics, these policymakers are, in fact, successful in censoring the message coming from the very institutions whose expertise we need.”