Tag archive: Direito (Law)

Indigenous jury in Roraima acquits defendant of attempted homicide (G1)

24/04/2015 09:56 – Updated on 24/04/2015 12:18

Emily Costa – From G1 RR

The jury was held at the Malocão da Demarcação, in the interior of Raposa Serra do Sol, northeastern Roraima (Photo: Emily Costa/G1 RR)

Under the 18,000 buriti-palm thatch leaves of the Malocão da Homologação, in the interior of the Raposa Serra do Sol Indigenous Reserve, in Roraima, the first indigenous popular jury in Brazil acquitted one defendant accused of attempted homicide and convicted the other defendant in the case of minor bodily injury. The two, who are brothers and indigenous, were accused of attacking a third indigenous man. The trial, which lasted more than 13 hours, took place this Thursday (23) and was attended by around 200 people, according to a Military Police estimate. The Roraima Public Prosecutor's Office (MPRR) said it will appeal the decision.

The defendants in the case, Elsio and Valdemir da Silva Lopes, were accused of trying to kill Antônio Alvino Pereira. The three, who belong to the Macuxi ethnic group, got into a fight in the municipality of Uiramutã, in Raposa Serra do Sol, on the afternoon of January 23, 2013. During the brawl, Elsio and Valdemir cut Antônio's neck and arm, respectively. After the fight, the brothers claimed self-defense against Antônio, saying the victim was possessed by the indigenous entity Canaimé. At the time, they were arrested in the act and held for 10 days at the Monte Cristo Agricultural Penitentiary, in Boa Vista.

The defendants are brothers from the Macuxi ethnic group; they declined to give interviews to the press (Photo: Emily Costa/G1 RR)

During the trial, the so-called Sentencing Council, made up exclusively of indigenous people from the reserve itself, found Elsio culpable and acknowledged that he intended to kill Antônio. Even so, it acquitted him of the attempted homicide. Valdemir, by contrast, was convicted, but the charge of serious bodily injury was reduced to simple bodily injury. He was sentenced to three months under an open prison regime and may appeal the decision while remaining free.

In all, besides the defendants and the victim, 10 witnesses were heard in the case. All of them testified before the jury of four men and three women from the Macuxi, Ingaricó, Patamona and Taurepang ethnic groups. Among the witnesses were the victim's son, the owner of the bar where the attempted homicide took place, and the man who, according to the defendants, had said the victim was under the influence of the Canaimé.

Speaking to G1, the judge in charge of the case, Aluizio Ferreira, limited himself to saying that "the jury's decision is sovereign and must be respected." He stressed that the jury was valid and legal, held in accordance with the Federal Constitution and the Penal Code.

Indigenous people followed the indigenous popular jury in Raposa Serra do Sol, in northeastern Roraima (Photo: Emily Costa/G1 RR)

"It was a very particular way of trying to resolve a conflict; it was distinctive, and in my view it is something that should be repeated. Obviously, that depends on the judiciary and on my peers, but I believe this jury prompts reflection," he said.

The defendants and the victim declined to give interviews to the press.

Defense celebrated the verdict

State public defender José João and attorney Thais Lutterbak, who represented Valdemir and Elsio respectively, considered the jury's result 'positive,' despite the conviction of one of the defendants.

"In fact, the defense's argument prevailed, because we maintained that Valdemir did not commit the crime of serious bodily injury, as the prosecution claimed. The jury found that there was a minor bodily injury, which requires a formal complaint by the victim, and the deadline for that has already lapsed," said José João.

According to the defender, for there to be punishment in the case, the victim would have had to file a formal complaint against the aggressor. However, the deadline for doing so is six months from learning who committed the act, a period that, according to José João, has already run out.

Asked about the claim of self-defense against the action of the Canaimé, Thaís, the attorney for the acquitted defendant, reiterated that her client confessed to the act but justified it by the threat posed by the indigenous entity.

The defense celebrated the verdict; the public defender considers that, in practice, both defendants were acquitted (Photo: Emily Costa/G1 RR)

"The defense never denied the authorship or the materiality of the act. The jury understood that there was a context that justified the offense. Of course, we are not saying the victim is a Canaimé, but rather that there was a context that underpinned the defendants' actions," she said.

During the trial, Valdemir testified that the crime happened because he and his brother were defending themselves against the Canaimé. Elsio, in turn, confessed to the jurors that he struck the victim's neck with an "orange-cutting" knife.

Prosecutors argue the jury was illegal

From the start of the trial, MPRR prosecutors Diego Oquendo and Carlos Paixão argued that the jury is subject to annulment, because selecting a jury panel made up exclusively of indigenous people excludes members of other groups in society, contrary to Article 436 of the Code of Criminal Procedure.

"If a resident of a favela in Rio de Janeiro commits a crime, will he be judged only by members of that community? No. So why should that happen in an indigenous community?" Paixão asked during a press conference.

On the jury's final decision, Paixão and Oquendo said the verdict runs contrary to the evidence in the case, which in their view made clear that the acquitted defendant did inflict serious bodily injury. They attributed his acquittal to the jurors' failure to understand the questions put to them during the trial.

In a popular jury trial, it is standard procedure that, after the closing arguments, the judge puts a series of simple questions about the crime to the jurors, a stage known as 'quesitação.' The jurors must answer each question 'yes' or 'no.'

To the initial questions about Elsio, the jury answered that there was an attempted homicide and attributed it to him but, despite that, decided to acquit him. For that reason, prosecutor Carlos Paixão called the decision 'legally valid, but disjointed.'

"Look at the incongruity: did so-and-so suffer the injury? Yes. Did the other man cause the injury? Yes. Did he intend to kill? Yes. Then comes the question: do you acquit him? Yes," he argued, adding that the Public Prosecutor's Office will appeal the verdict within the five-day deadline.

Clockwise: indigenous leader Zedoeli Alexandre and the judge in charge of the case, Aluizio Ferreira; they gave a press conference before the trial began (Photo: Emily Costa/G1 RR)

'It is brutal,' indigenous leader says of trial
Speaking to G1, the regional coordinator of the serras region, Zedoeli Alexandre, described the 'white people's' way of holding a trial as brutal. Even so, he said, holding the jury will change how indigenous people deal with conflicts from now on.

"We achieved our goal of helping ourselves resolve our own problems. What stood out, however, was the way white people conduct a trial. It is brutal and very different from our own more respectful and educational way of judging," Zedoeli explained.

On the Canaimé's involvement in the case, Zedoeli maintained that the reference to the entity in the proceedings did not unsettle the jurors. "We have no way of confirming the Canaimé's involvement; after all, it is part of traditional indigenous culture. We cannot say whether it was him or not. So I believe everything was clarified and we are at peace with the trial's conclusion," he said.

*   *   *

In indigenous jury in RR, defendant claims self-defense against a spirit (G1)

23/04/2015 22:56 – Updated on 23/04/2015 23:08

Inaê Brandão and Emily Costa – From G1 RR

Maturuca community, in Raposa Serra do Sol (Photo: RCCaleffi/Coordcom/UFRR)

Valdemir da Silva Lopes, one of the indigenous men accused of trying to kill another indigenous man in January 2013 in the municipality of Uiramutã, northeastern Roraima, testified at the indigenous popular jury taking place this Thursday (23) that the crime happened because he and his brother, Elsio da Silva Lopes, were defending themselves against an evil spirit called the 'Canaimé.' Elsio, also a defendant in the case, confessed to the jurors that he struck the victim's neck with an "orange-cutting" knife.

The trial began on Thursday morning in the Maturuca community, in the Raposa Serra do Sol Indigenous Territory, located in the municipality where the crime occurred, and there is no estimate of when it will end. According to the Roraima Court of Justice (TJRR), the trial is unprecedented in Brazil, since it is taking place on indigenous land and the jury is composed exclusively of indigenous people.

Since the case became public, the defense has maintained that the crime was motivated by the defendants' belief that the victim, Antônio Alvino Pereira, was 'possessed' by the spirit of the evil entity 'Canaimé.' The trial, which proceeded calmly through the morning, grew tense during Elsio's testimony.

Asked why he stabbed the victim, Elsio answered that he did it "because he was threatened." The prosecutor on the case, Diego Oquendo, questioned Elsio about the 'Canaimé' claim. The defendant was advised by his lawyer not to answer any further questions. Faced with this, the prosecution declined to ask anything else and Elsio's testimony was closed.

During the hearing of the second defendant, Valdemir da Silva Lopes, he explained that he was with his brother and a third man, an eyewitness to the incident, at the bar where the crime took place. He said the victim came over "striking up a conversation" and kept up an "aggressive posture." In his testimony, Valdemir stated that the victim had told the third man that he "killed children," which aroused the brothers' suspicion.

Valdemir also recounted that about a month before the attempted homicide, an indigenous leader and a child had been killed by the 'Canaimé,' since, according to him, they had marks on their necks and leaves in their throats, signs characteristic of the entity according to indigenous belief.

Given the man's claim that Antônio Pereira was said to be a killer, the brothers concluded that the victim was 'possessed' by the evil spirit and attacked him with a knife.

With the defendants' testimony concluded, the trial moved on to the closing arguments of the Roraima Public Prosecutor's Office and of the defense of the men accused of the attempted homicide.

Canaimé
According to anthropologist Leda Leitão Martins, the 'Canaimé' is an evil being. "It is a very powerful entity that has a physical body and can travel long distances. A person can be, or can become, the Canaimé. No one knows a Canaimé. Either you are one or you are its victim," she explained.

The trial
The attempted homicide now before the popular jury took place on January 23, 2013, in a bar in the municipality of Uiramutã.

Thirty indigenous people, including 5 alternates, from the Macuxi, Ingaricó, Patamona and Taurepang ethnic groups were chosen to take part in the jury pool, and on Thursday morning 7 were drawn to make up the jury panel.

According to the judge in charge of the case, Aluizio Ferreira, the indigenous leaders of the region met in assembly and opted together for the popular jury. "Last December, at least 270 of them were in favor of the hearing. So holding the jury is the result of a collective choice; it is not ethnocentrism or an imposition."


Deadly Algorithms (Radical Philosophy)

Can legal codes hold software accountable for code that kills?

RP 187 (Sept/Oct 2014)

Susan Schuppli

Algorithms have long adjudicated over vital processes that help to ensure our well-being and survival, from pacemakers that maintain the natural rhythms of the heart, and genetic algorithms that optimise emergency response times by cross-referencing ambulance locations with demographic data, to early warning systems that track approaching storms, detect seismic activity, and even seek to prevent genocide by monitoring ethnic conflict with orbiting satellites. [1] However, algorithms are also increasingly being tasked with instructions to kill: executing coding sequences that quite literally execute.

Guided by the Obama presidency’s conviction that the War on Terror can be won by ‘out-computing’ its enemies and pre-empting terrorists’ threats using predictive software, a new generation of deadly algorithms is being designed that will both control and manage the ‘kill-list,’ and along with it decisions to strike. [2] Indeed, the recently terminated practice of ‘signature strikes’, in which data analytics was used to determine emblematic ‘terrorist’ behaviour and match these patterns to potential targets on the ground, already points to a future in which intelligence-gathering, assessment and military action, including the calculation of who can legally be killed, will largely be performed by machines based upon an ever-expanding database of aggregated information. As such, this transition to execution by algorithm is not simply a continuation of killing at ever greater distances inaugurated by the invention of the bow and arrow that separated warrior and foe, as many have suggested. [3] It is also a consequence of the ongoing automation of warfare, which can be traced back to the cybernetic coupling of Claude Shannon’s mathematical theory of information with Norbert Wiener’s wartime research into feedback loops and communication control systems. [4] As this new era of intelligent weapons systems progresses, operational control and decision-making are increasingly being outsourced to machines.

Computing terror

In 2011 the US Department of Defense (DOD) released its ‘roadmap’ forecasting the expanded use of unmanned technologies, of which unmanned aircraft systems – drones – are but one aspect of an overall strategy directed towards the implementation of fully autonomous Intelligent Agents. It projects its future as follows:

The Department of Defense’s vision for unmanned systems is the seamless integration of diverse unmanned capabilities that provide flexible options for Joint Warfighters while exploiting the inherent advantages of unmanned technologies, including persistence, size, speed, maneuverability, and reduced risk to human life. DOD envisions unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision making required for the unmanned portion of the force structure. [5]

The document is a strange mix of Cold War caricature and Fordism set against the backdrop of contemporary geopolitical anxieties, which sketches out two imaginary vignettes to provide ‘visionary’ examples of the ways in which autonomy can improve efficiencies through inter-operability across military domains, aimed at enhancing capacities and flexibility between manned and unmanned sectors of the US Army, Air Force and Navy. In these future scenarios, the scripting and casting are strikingly familiar, pitting the security of hydrocarbon energy supplies against rogue actors equipped with Russian technology. One concerns an ageing Russian nuclear submarine deployed by a radicalized Islamic nation-state that is beset by an earthquake in the Pacific, thus contaminating the coastal waters of Alaska and threatening its oil energy reserves. The other involves the sabotaging of an underwater oil pipeline in the Gulf of Guinea off the coast of Africa, complicated by the approach of a hostile surface vessel capable of launching a Russian short-range air-to-surface missile. [6]

These Hollywood-style action film vignettes – fully elaborated across five pages of the report – provide an odd counterpoint to the claims being made throughout the document as to the sober science, political prudence and economic rationalizations that guide the move towards fully unmanned systems. On what grounds are we to be convinced by these visions and strategies? On the basis of a collective cultural imaginary that finds its politics within the CGI labs of the infotainment industry? Or via an evidence-based approach to solving the complex problems posed by changing global contexts? Not surprisingly, the level of detail (and techno-fetishism) used to describe unmanned responses to these risk scenarios is far more exhaustive than that devoted to the three primary challenges which the report identifies as specific to the growing reliance upon and deployment of automated and autonomous systems:

1. Investment in science and technology (S&T) to enable more capable autonomous operations.

2. Development of policies and guidelines on what decisions can be safely and ethically delegated and under what conditions.

3. Development of new Verification and Validation (V&V) and T&E techniques to enable verifiable ‘trust’ in autonomy. [7]

As the second of these ‘challenges’ indicates, the delegation of decision-making to computational regimes is particularly crucial here, in so far as it provokes a number of significant ethical dilemmas but also urgent questions regarding whether existing legal frameworks are capable of attending to the emergence of these new algorithmic actors. This is especially concerning when the logic of precedent that organizes much legal decision-making (within common law systems) has followed the same logic that organized the drone programme in the first place: namely, the justification of an action based upon a pattern of behaviour that was established by prior events.

The legal aporia intersects with a parallel discourse around moral responsibility – a much broader debate that has tended to structure arguments around the deployment of armed drones as an antagonism between humans and machines. As the authors of the entry on ‘Computing and Moral Responsibility’ in the Stanford Encyclopedia of Philosophy put it:

Traditionally philosophical discussions on moral responsibility have focused on the human components in moral action. Accounts of how to ascribe moral responsibility usually describe human agents performing actions that have well-defined, direct consequences. In today’s increasingly technological society, however, human activity cannot be properly understood without making reference to technological artifacts, which complicates the ascription of moral responsibility. [8]

When one poses the question, under what conditions is it morally acceptable to deliberately kill a human being, one is not, in this case, asking whether the law permits such an act for reasons of imminent threat, self-defence or even empathy for someone who is in extreme pain or in a non-responsive vegetative state. The moral register around the decision to kill operates according to a different ethical framework: one that doesn’t necessarily bind the individual to a contract enacted between the citizen and the state. Moral positions can be specific to individual values and beliefs whereas legal frameworks permit actions in our collective name as citizens contracted to a democratically elected body that acts on our behalf but with which we might be in political disagreement. While it is, then, much easier to take a moral stance towards events that we might oppose – US drone strikes in Pakistan – than to justify a claim as to their specific illegality given the anti-terror legislation that has been put in place since 9/11, assigning moral responsibility, proving criminal negligence or demonstrating legal liability for the outcomes of deadly events becomes even more challenging when humans and machines interact to make decisions together, a complication that will only intensify as unmanned systems become more sophisticated and act as increasingly independent legal agents. Moreover, the outsourcing of decision-making to the judiciary as regards the validity of scientific evidence, which followed the 1993 Daubert ruling – in the context of a case brought against Merrell Dow Pharmaceuticals – has, in addition, made it difficult for the law to take an activist stance when confronted with the limitations of its own scientific understandings of technical innovation. At present it would obviously be unreasonable to take an algorithm to court when things go awry, let alone when they are executed perfectly, as in the case of a lethal drone strike.

By focusing upon the legal dimension of algorithmic liability as opposed to more wide-ranging moral questions I do not want to suggest that morality and law should be consigned to separate spheres. However, it is worth making a preliminary effort to think about the ways in which algorithms are not simply reordering the fundamental principles that govern our lives, but might also be asked to provide alternate ethical arrangements derived out of mathematical axioms.

Algorithmic accountability

Law, which has already expanded the category of ‘legal personhood’ to include non-human actors such as corporations, also offers ways, then, to think about questions of algorithmic accountability. [9] Of course many would argue that legal methods are not the best frameworks for resolving moral dilemmas. But then again nor are the objectives of counter-terrorism necessarily best serviced by algorithmic oversight. Shifting the emphasis towards a juridical account of algorithmic reasoning might, at any rate, prove useful when confronted with the real possibility that the kill list and other emergent matrices for managing the war on terror will be algorithmically derived as part of a techno-social assemblage in which it becomes impossible to isolate human from non-human agents. It does, however, raise the ‘bar’ for what we would now need to ask the law to do. The degree to which legal codes can maintain their momentum alongside rapid technological change and submit ‘complicated algorithmic systems to the usual process of checks-and-balances that is generally imposed on powerful items that affect society on a large scale’ is of considerable concern. [10] Nonetheless, the stage has already been set for the arrival of a new cast of juridical actors endowed not so much with free will in the classical sense (that would provide the conditions for criminal liability), but intelligent systems which are wilfully free in the sense that they have been programmed to make decisions based upon their own algorithmic logic. [11] While armed combat drones are the most publicly visible of the automated military systems that the DOD is rolling out, they are only one of the many remote-controlled assets that will gather, manage, analyse and act on the data that they acquire and process.

Proponents of algorithmic decision-making laud the near instantaneous response time that allows Intelligent Agents – what some have called ‘moral predators’ – to make micro-second adjustments to avert a lethal drone strike should, for example, children suddenly emerge out of a house that is being targeted as a militant hideout. [12] Indeed robotic systems have long been argued to decrease the error margin of civilian casualties that are often the consequence of actions made by tired soldiers in the field. Nor are machines overly concerned with their own self-preservation, which might likewise cloud judgement under conditions of duress. Yet, as Sabine Gless and Herbert Zech ask, if these ‘Intelligent Agents are often used in areas where the risk of failure and error can be reduced by relying on machines rather than humans … the question arises: Who is liable if things go wrong?’ [13]

Typically when injury and death occur to humans, the legal debate focuses upon the degree to which such an outcome was foreseeable and thus adjudicates on the basis of whether all reasonable efforts and pre-emptive protocols had been built into the system to mitigate against such an occurrence. However, programmers cannot of course run all the variables that combine to produce machinic decisions, especially when the degree of uncertainty as to conditions and knowledge of events on the ground is as variable as the shifting contexts of conflict and counter-terrorism. Werner Dahm, chief scientist at the United States Air Force, typically stresses the difficulty of designing error-free systems: ‘You have to be able to show that the system is not going to go awry – you have to disprove a negative.’ [14] Given that highly automated decision-making processes involve complex and rapidly changing contexts mediated by multiple technologies, can we then reasonably expect to build a form of ethical decision-making into these unmanned systems? And would an algorithmic approach to managing the ethical dimensions of drone warfare – for example, whether to strike 16-year-old Abdulrahman al-Awlaki in Yemen because his father was a radicalized cleric; a role that he might inherit – entail the same logics that characterized signature strikes, namely that of proximity to militant-like behaviour or activity? [15] The euphemistically rebranded kill list known as the ‘disposition matrix’ suggests that such determinations can indeed be arrived at computationally. As Greg Miller notes: ‘The matrix contains the names of terrorism suspects arrayed against an accounting of the resources being marshaled to track them down, including sealed indictments and clandestine operations.’ [16]

Intelligent systems are arguably legal agents but not as of yet legal persons, although precedents pointing to this possibility have already been set in motion. The idea that an actual human being or ‘legal person’ stands behind the invention of every machine who might ultimately be found responsible when things go wrong, or even when they go right, is no longer tenable and obfuscates the fact that complex systems are rarely, if ever, the product of single authorship; nor do humans and machines operate in autonomous realms. Indeed, both are so thoroughly entangled with each other that the notion of a sovereign human agent functioning outside the realm of machinic mediation seems wholly improbable. Consider for a moment only one aspect of conducting drone warfare in Pakistan – that of US flight logistics – in which we find that upwards of 165 people are required just to keep a Predator drone in the air for twenty-four hours, the half-life of an average mission. These personnel requirements are themselves embedded in multiple techno-social systems composed of military contractors, intelligence officers, data analysts, lawyers, engineers, programmers, as well as hardware, software, satellite communication, and operation centres (CAOC), and so on. This does not take into account the R&D infrastructure that engineered the unmanned system, designed its operating procedures and beta-tested it. Nor does it acknowledge the administrative apparatus that brought all of these actors together to create the event we call a drone strike. [17]

In the case of a fully automated system, decision-making is reliant upon feedback loops that continually pump new information into the system in order to recalibrate it. But perhaps more significantly in terms of legal liability, decision-making is also governed by the system’s innate ability to self-educate: the capacity of algorithms to learn and modify their coding sequences independent of human oversight. Isolating the singular agent who is directly responsible – legally – for the production of a deadly harm (as currently required by criminal law) suggests, then, that no one entity beyond the Executive Office of the President might ultimately be held accountable for the aggregate conditions that conspire to produce a drone strike and with it the possibility of civilian casualties. Given that the USA doesn’t accept the jurisdiction of the International Criminal Court and Article 25 of the Rome Statute governing individual criminal responsibility, what new legal formulations could, then, be created that would be able to account for indirect and aggregate causality born out of a complex chain of events including so-called digital perpetrators? American tort law, which adjudicates over civil wrongs, might be one such place to look for instructive models. In particular, legal claims regarding the use of environmental toxins, which are highly distributed events whose lethal effects often take decades to appear, and involve an equally complex array of human and non-human agents, have been making their way into court, although not typically with successful outcomes for the plaintiffs. The most notable of these litigations have been the mass toxic tort regarding the use of Agent Orange as a defoliant in Vietnam and the Bhopal disaster in India. [18] Ultimately, however, the efficacy of such an approach has to be considered in light of the intended outcome of assigning liability, which in the cases mentioned was not so much deterrence or punishment, but, rather, compensation for damages.

Recoding the law

While machines can be designed with a high degree of intentional behaviour and will out-perform humans in many instances, the development of unmanned systems will need to take into account a far greater range of variables, including shifting geopolitical contexts and murky legal frameworks, when making the calculation that conditions have been met to execute someone. Building in fail-safe procedures that abort when human subjects of a specific size (children) or age and gender (males under the age of 18) appear, sets the stage for a proto-moral decision-making regime. But is the design of ethical constraints really where we wish to push back politically when it comes to the potential for execution by algorithm? Or can we work to complicate the impunity that certain techno-social assemblages currently enjoy? As a 2009 report by the Royal Academy of Engineering on autonomous systems argues,

Legal and regulatory models based on systems with human operators may not transfer well to the governance of autonomous systems. In addition, the law currently distinguishes between human operators and technical systems and requires a human agent to be responsible for an automated or autonomous system. However, technologies which are used to extend human capabilities or compensate for cognitive or motor impairment may give rise to hybrid agents … Without a legal framework for autonomous technologies, there is a risk that such essentially human agents could not be held legally responsible for their actions – so who should be responsible? [19]

Implicating a larger set of agents including algorithmic ones that aid and abet such an act might well be a more effective legal strategy, even if expanding the limits of criminal liability proves unwieldy. As the 2009 ECCHR Study on Criminal Accountability in Sri Lanka put it: ‘Individuals, who exercise the power to organise the pattern of crimes that were later committed, can be held criminally liable as perpetrators. These perpetrators can usually be found in civil ministries such as the ministry of defense or the office of the president.’ [20] Moving down the chain of command and focusing upon those who participate in the production of violence by carrying out orders has been effective in some cases (Sri Lanka), but also problematic in others (Abu Ghraib) where the indictment of low-level officers severed the chain of causal relations that could implicate more powerful actors. Of course prosecuting an algorithm alone for executing lethal orders that the system is in fact designed to make is fairly nonsensical if the objective is punishment. The move must, then, be part of an overall strategy aimed at expanding the field of causality and thus broadening the reach of legal responsibility.

My own work as a researcher on the Forensic Architecture project, alongside Eyal Weizman and several others, in developing new methods of spatial and visual investigation for the UN inquiry into the use of armed drones, provides one specific vantage point for considering how machinic capacities are reordering the field of political action and thus calling forth new legal strategies.[21] In taking seriously the agency of things, we must also take seriously the agency of things whose productive capacities are enlisted in the specific decision to kill. Computational regimes, in operating largely beyond the thresholds of human perception, have produced informatic conjunctions that have redistributed and transformed the spaces in which action occurs, as well as the nature of such consequential actions themselves. When algorithms are being enlisted to out-compute terrorism and calculate who can and should be killed, do we not need to produce a politics appropriate to these radical modes of calculation and a legal framework that is sufficiently agile to deliberate over such events?

Decision-making by automated systems will produce new relations of power for which we have as yet inadequate legal frameworks or modes of political resistance – and, perhaps even more importantly, insufficient collective understanding as to how such decisions will actually be made and upon what grounds. Scientific knowledge about technical processes does not belong to the domain of science alone, as the Daubert ruling implies. However, demands for public accountability and oversight will require much greater participation in the epistemological frameworks that organize and manage these new techno-social systems, and that may be a formidable challenge for all of us. What sort of public assembly will be able to prevent the premature closure of a certain ‘epistemology of facts’, as Bruno Latour would say, that are at present cloaked under a veil of secrecy called ‘national security interests’ – the same order of facts that scripts the current DOD roadmap for unmanned systems?

In a recent ABC Radio interview, Sarah Knuckey, director of the Project on Extrajudicial Executions at New York University Law School, emphasized the degree to which drone warfare has strained the limits of international legal conventions and with it the protection of civilians. [22] The ‘rules of warfare’ are ‘already hopelessly out-dated’, she says, and will require ‘new rules of engagement to be drawn up’: ‘There is an enormous amount of concern about the practices the US is conducting right now and the policies that underlie those practices. But from a much longer-term perspective and certainly from lawyers outside the US there is real concern about not just what’s happening now but what it might mean 10, 15, 20 years down the track.’ [23] Could these new rules of engagement – new legal codes – assume a similarly preemptive character to the software codes and technologies that are being evolved – what I would characterize as a projective sense of the law? Might they take their lead from the spirit of the Geneva Conventions protecting the rights of noncombatants, rather than from those protocols (the Hague Conventions of 1899, 1907) that govern the use of weapons of war, and are thus reactive in their formulation and event-based? If so, this would have to be a set of legal frameworks that is not so much determined by precedent – by what has happened in the past – but, instead, by what may take place in the future.

Notes

1. ^ See, for example, the satellite monitoring and atrocity evidence programmes: ‘Eyes on Darfur’ (www.eyesondarfur.org) and ‘The Sentinel Project for Genocide Prevention’ (http://thesentinelproject.org).

2. ^ Cori Crider, ‘Killing in the Name of Algorithms: How Big Data Enables the Obama Administration’s Drone War’, Al Jazeera America, 2014, http://america.aljazeera.com/opinions/2014/3/drones-big-data-waronterrorobama.html; accessed 18 May 2014. See also the flow chart in Daniel Byman and Benjamin Wittes, ‘How Obama Decides Your Fate if He Thinks You’re a Terrorist’, The Atlantic, 3 January 2013, http://www.theatlantic.com/international/archive/2013/01/how-obama-decides-your-fate-if-he-thinks-youre-a-terrorist/266419.

3. ^ For a recent account of the multiple and compound geographies through which drone operations are executed, see Derek Gregory, ‘Drone Geographies’, Radical Philosophy 183 (January/February 2014), pp. 7–19.

4. ^ Contemporary information theorists would argue that the second-order cybernetic model of feedback and control, in which external data is used to adjust the system, doesn’t take into account the unpredictability of evolutive data internal to the system resulting from crunching ever-larger datasets. See Luciana Parisi’s Introduction to Contagious Architecture: Computation, Aesthetics, and Space, MIT Press, Cambridge MA, 2013. For a discussion of Wiener’s cybernetics in this context, see Reinhold Martin, ‘The Organizational Complex: Cybernetics, Space, Discourse’, Assemblage 37, 1998, p. 110.

5. ^ DOD, Unmanned Systems Integrated Roadmap Fy2011–2036, Office of the Undersecretary of Defense for Acquisition, Technology, & Logistics, Washington, DC, 2011, p. 3, http://www.defense.gov/pubs/DOD-USRM-2013.pdf.

6. ^ Ibid., pp. 1–10.

7. ^ Ibid., p. 27.

8. ^ Merel Noorman, ‘Computing and Moral Responsibility’, in Edward N. Zalta, ed., The Stanford Encyclopedia of Philosophy (Summer 2014), http://plato.stanford.edu/archives/sum2014/entries/computing-responsibility.

9. ^ See John Dewey, ‘The Historic Background of Corporate Legal Personality’, Yale Law Journal, vol. 35, no. 6, 1926, pp. 656, 669.

10. ^ Data & Society Research Institute, ‘Workshop Primer: Algorithmic Accountability’, The Social, Cultural & Ethical Dimensions of ‘Big Data’ workshop, 2014, p. 3.

11. ^ See Gunther Teubner, ‘Rights of Non-Humans? Electronic Agents and Animals as New Actors in Politics and Law’, Journal of Law & Society, vol. 33, no. 4, 2006, pp. 497–521.

12. ^ See Bradley Jay Strawser, ‘Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles,’ Journal of Military Ethics, vol. 9, no. 4, 2010, pp. 342–68.

13. ^ Sabine Gless and Herbert Zech, ‘Intelligent Agents: International Perspectives on New Challenges for Traditional Concepts of Criminal, Civil Law and Data Protection’, text for ‘Intelligent Agents’ workshop, 7–8 February 2014, University of Basel, Faculty of Law, http://www.snis.ch/sites/default/files/workshop_intelligent_agents.pdf.

14. ^ Agence-France Presse, ‘The Next Wave in U.S. Robotic War: Drones on Their Own’, Defense News, 28 September 2012, p. 2, http://www.defensenews.com/article/20120928/DEFREG02/309280004/The-Next-Wave-U-S-Robotic-War-Drones-Their-Own.

15. ^ When questioned about the drone strike that killed 16-year-old American-born Abdulrahman al-Awlaki, teenage son of radicalized cleric Anwar al-Awlaki, in Yemen in 2011, Robert Gibbs, former White House press secretary and senior adviser to President Obama’s re-election campaign, replied that the boy should have had ‘a more responsible father’.

16. ^ Greg Miller, ‘Plan for Hunting Terrorists Signals U.S. Intends to Keep Adding Names to Kill Lists’, Washington Post, 23 October 2012, http://www.washingtonpost.com/world/national-security/plan-for-hunting-terrorists-signals-us-intends-to-keep-adding-names-to-kill-lists/2012/10/23/4789b2ae-18b3-11e2-a55c-39408fbe6a4b_story.html.

17. ^ ‘While it might seem counterintuitive, it takes significantly more people to operate unmanned aircraft than it does to fly traditional warplanes. According to the Air Force, it takes a jaw-dropping 168 people to keep just one Predator aloft for twenty-four hours! For the larger Global Hawk surveillance drone, that number jumps to 300 people. In contrast, an F-16 fighter aircraft needs fewer than one hundred people per mission.’ Medea Benjamin, Drone Warfare: Killing by Remote Control, Verso, London and New York, 2013, p. 21.

18. ^ See Peter H. Schuck, Agent Orange on Trial: Mass Toxic Disasters in the Courts, Belknap Press of Harvard University Press, Cambridge MA, 1987. See also: http://www.bhopal.com/bhopal-litigation.

19. ^ Royal Academy of Engineering, Autonomous Systems: Social, Legal and Ethical Issues, RAE, London, 2009, p. 3, http://www.raeng.org.uk/societygov/engineeringethics/pdf/Autonomous_Systems_Report_09.pdf.

20. ^ European Center for Constitutional and Human Rights, Study on Criminal Accountability in Sri Lanka as of January 2009, ECCHR, Berlin, 2010, p. 88.

21. ^ Other members of the Forensic Architecture drone investigative team included Jacob Burns, Steffen Kraemer, Francesco Sebregondi and SITU Research. See http://www.forensic-architecture.org/case/drone-strikes.

22. ^ Bureau of Investigative Journalism, ‘Get the Data: Drone Wars’, http://www.thebureauinvestigates.com/category/projects/drones/drones-graphs.

23. ^ Annabelle Quince, ‘Future of Drone Strikes Could See Execution by Algorithm’, Rear Vision, ABC Radio, edited transcript, pp. 2–3.

It’s Time to Destroy Corporate Personhood (IO9)

July 21, 2014


The United States is the only country in the world that recognizes corporations as persons. It’s a so-called “legal fiction” that’s meant to uphold the rights of groups and to smooth business processes. But it’s a dangerous concept that’s gone too far — and could endanger social freedoms in the future.

Illustration from Judge Dredd: Mega City Two by Ulises Farinas

Corporate personhood is a legal concept that’s used in the U.S. to recognize corporations as individuals in the eyes of the law. Like actual people, corporations hold and exercise certain rights and protections under the law and the U.S. Constitution. As legal persons, they can sue and be sued, have the right to appear in court, enter into contracts, and own property — and they can do this separately from their members or shareholders. At the same time, personhood provides a single entity for taxation and regulation and simplifies complex transactions — advantages that didn’t exist in the era of sole proprietorships and partnerships, when the owners were held personally liable for the debts and affairs of the business.

That said, a corporation does not have the full suite of rights afforded to persons of flesh-and-blood. Corporations cannot vote, run for office, or bear arms — nor can they contribute to federal political campaigns. What’s more, the concept doesn’t claim that corporations are biological people in the literal sense of the term.

A “Legal Fiction”


“Corporations are ‘legal fictions’ — a fact or facts assumed or created by courts, used to create rights for convenience and to serve the ends of justice,” says ethicist and attorney-at-law Linda MacDonald Glenn. “The idea of ‘corporations as persons’, though, all started because of a headnote mistake in the 1886 case of Santa Clara County v. Southern Pacific Railroad Co., 118 U.S. 394 — a mistake that has been perpetuated with profound consequences.”

Mistake or no mistake, the doctrine was affirmed in 1888 in Pembina Consolidated Silver Mining Co. v. Pennsylvania, when the Court stated that, “Under the designation of ‘person’ there is no doubt that a private corporation is included [in the Fourteenth Amendment]. Such corporations are merely associations of individuals united for a special purpose and permitted to do business under a particular name and have a succession of members without dissolution.”

It’s a doctrine that’s held ever since, one that works off the conviction that corporations are organizations of people, and that people should not be deprived of their constitutional rights when they act collectively.

The concept may seem strange and problematic, but UCLA Law Professor Adam Winkler says corporate personhood has had profound and beneficial economic consequences:

It means that the obligations the law imposes on the corporation, such as liability for harms caused by the firm’s operations, are not generally extended to the shareholders. Limited liability protects the owners’ personal assets, which ordinarily can’t be taken to pay the debts of the corporation. This creates incentives for investment, promotes entrepreneurial activity, and encourages corporate managers to take the risks necessary for growth and innovation. That’s why the Supreme Court, in business cases, has held that “incorporation’s basic purpose is to create a legally distinct entity, with legal rights, obligations, powers, and privileges different from those of the natural individuals who created it, who own it, or whom it employs.”

Of course, other nations don’t employ this “fiction”, yet they’ve found ways to cope with these challenges.

Living in a World of Make-believe

Moreover, the problem with invoking a fiction is that it can lead us down some strange paths. By living in a world of make-believe, courts have extended rights to corporations well beyond those necessary. It’s hardly a fiction anymore, with “person” now having a wider meaning than ever before.


(YanLev/Shutterstock)

Here’s what Judge O’Dell-Seneca said last year in the Hallowich v. Range case:

Corporations, companies and partnerships have no spiritual nature, feelings, intellect, beliefs, thoughts, emotions or sensations because they do not exist in the manner that humankind exists… They cannot be ‘let alone’ by government because businesses are but grapes, ripe upon the vine of the law, that the people of this Commonwealth raise, tend and prune at their pleasure and need.

To this list of attributes, MacDonald Glenn adds a lack of conscience.

“I’ve heard it said that if a corporation had a psychological profile done, it would be a psychopath,” she told io9. “The concept of corporations was created partially to shield natural persons from liability; and it allowed individuals to create something, a business, that was larger than themselves and could exist in perpetuity. But it’s twisted reasoning to allow them to have equal or higher status than ‘natural’ persons or other sentient beings. A corporation cannot laugh or love; it doesn’t enjoy the warm breezes of summer, or mourn the loss of a loved one. In short, corporations are not sentient beings; they are artifacts.”

Similarly, solicitor general Elena Kagan has warned against expanding the notion of corporate personhood. In 2009 she said: “Few of us are only our economic interests. We have beliefs. We have convictions. [Corporations] engage the political process in an entirely different way, and this is what makes them so much more damaging.”

The New York Times has also come out in condemnation of the concept:

The law also gives corporations special legal status: limited liability, special rules for the accumulation of assets and the ability to live forever. These rules put corporations in a privileged position in producing profits and aggregating wealth. Their influence would be overwhelming with the full array of rights that people have.

One of the main areas where corporations’ rights have long been limited is politics. Polls suggest that Americans are worried about the influence that corporations already have with elected officials. The drive to give corporations more rights is coming from the court’s conservative bloc — a curious position given their often-proclaimed devotion to the text of the Constitution.

The founders of this nation knew just what they were doing when they drew a line between legally created economic entities and living, breathing human beings. The court should stick to that line.

Causing Harm

I asked MacDonald Glenn if the concept of corporate personhood is demeaning or damaging to bona fide persons, particularly women.

“It’s about sentience — the ability to feel pleasure and pain,” she responded. “Corporate personhood emphasizes profits, property, assets. It should be noted that corporations were given legal status as persons before women were.”

MacDonald Glenn says that although the Declaration of Independence starts out idealistically with the words, “We hold these truths to be self-evident, that all men are created equal”, we still live in very hierarchical class-based society.

“Although we have made significant strides towards recognizing the value of all persons, generally speaking, the wealthier you are, the more powerful you are, the more influence you exert,” she says. “So, if corporations are the ones with the money, they become the ones who have the power and influence. The recent Supreme Court decisions reinforce that and, sadly, encourage social stratification — a system not very different from those portrayed in recent movies, such as The Hunger Games or Elysium. No notion of ‘all (wo)men are created equal’ there.”


The notion of fictitious persons can be harmful to women in other ways as well. If it can be argued that artifacts are persons — objects devoid of an inner psychological life — it’s conceivable that other crazy fictions can be devised as well — such as fetal personhood. It’s something that should make pro-life advocates very nervous.

At the same time, while corporations are treated as persons, an entire class of nonhuman animals arguably deserving of personhood status is denied recognition as such. In the future, the concept could lead to personhood being attributed to artificial intelligences or robots devoid of sentient capacities. Furthermore, the practice of recognizing artifacts as persons diminishes what it truly means to be a genuine person.

Clearly, corporations deserve rights and protections, but certainly not under the rubric of something as precious and cherished as personhood.

The Hobby Lobby Decision

Which brings us to the controversial Hobby Lobby case — a prime example of what can happen when corporate personhood is taken too far. In this case, the owners of a craft-store chain claimed that their personal religious beliefs would be offended if they had to provide certain forms of birth-control coverage to employees.


(Nicholas Eckhart)

“The purpose of extending rights to corporations is to protect the rights of people associated with the corporation, including shareholders, officers, and employees,” Justice Samuel Alito wrote in the ensuing decision. “Protecting the free-exercise rights of closely held corporations thus protects the religious liberty of the humans who own and control them.”

Of course, the Supreme Court justices failed to acknowledge a number of principles integral to the U.S. Constitution, including the right to be free from religion, not to mention the fact that corporate personhood was never the intention of the Founding Fathers in the first place.

Indeed, as Washington Post’s Dana Milbank recently pointed out, the decision went way too far: “…corporations enjoy rights that ‘natural persons’ do not. The act of incorporating allows officers to avoid personal responsibility for corporate actions. Corporations have the benefits of personhood without those pesky responsibilities.”

And as MacDonald Glenn told me, the decision doesn’t protect the religious liberties of individuals — it gives an artifact human rights previously reserved only for natural persons.

“It’s a form of corporate idolatry,” MacDonald Glenn told io9. “Granting the rights of citizens to corporate structures creates a disproportionate impact where the rights of those with wealth supersede the rights of those without.”


USP professor defends the 1964 coup, and students storm the class (O Globo)

JC e-mail 4925, 2 April 2014

The Canto Geral collective protested against a speech by lecturer Eduardo Gualazzi paying tribute to the ‘revolution’. The Administrative Law professor says the students had asked for a class about the dictatorship

On Monday, the day the 1964 military coup turned 50, Eduardo Lobo Botelho Gualazzi, professor of Administrative Law at USP, decided to pay homage to what he called the “revolution”. After he began reading a speech in which he declared that he had supported “humbly, in firm silence, the revolution of 31 March 1964”, a group of students started making noise outside the classroom, simulating torture scenes common during the dictatorship. Watch the video posted on YouTube above.

The students then entered the classroom hooded, singing “Opinião”, Zé Ketti’s anthem against the military regime. The professor pulled the hood off one student and tried to restrain another. Antonio Carlos Fon, a former militant who was imprisoned and tortured under the dictatorship and had been invited to the protest, addressed Gualazzi, saying that they did not agree with what he was saying, “but nobody brought an electric-shock machine or is going to set up a pau de arara”. The professor then left the room, while the Coletivo Canto Geral protest continued.

According to Camila Sátolo, a student and one of the organizers of the protest, the professor has for some years been making references and apologias to what he calls the “revolution of 64”. According to the student, who is a member of Canto Geral, Gualazzi had scheduled a special class for the date and had begun announcing it to his class some weeks earlier, saying he would speak about his memories of the “revolution”.

– Many students who felt uncomfortable tried to organize to challenge the class the professor had planned for the very day the country was trying to re-signify its memory, recalling the resistance to the military regime and fighting for truth and justice. Minutes before the class we learned that the professor had prepared an official document in which he confirmed his support, among other outrages. The professor, however, was not even willing to listen to what Fon had to say and left the room – said Camila, a fifth-term law student.

Professor says he finds the repercussion of a ‘banal subject’ strange
A professor of administrative law for 40 years, Eduardo Gualazzi told O GLOBO that he “found it strange that so much attention was being given to such a banal subject”.

– It was an ordinary class, about events I witnessed when I was 17 years old. They are vague recollections of a remote past. Any human being born in Brazil has memories of that time. I don’t know why they keep giving my class an importance it does not have – said Gualazzi, for whom at any moment one can find in the university courtyard “a series of students demonstrating for or against anything”.

– They are exercising the right to demonstrate; they are young, just starting out in life. They are beginning to develop capacities for logical, legal and administrative argument. All of this is absolutely normal – he added.

Asked about the relationship between the text read in class and the course he teaches at the university, the professor said he was responding to a request from the students themselves:

– I prepared a historical text. The students asked me for a class about it, because they know my age. They are youngsters, curious, they want to know what happened at that time. As they know I am 67, they asked me. Some even praised it and consider what the others did to me an offence.

The USP Law School said late on Tuesday afternoon that it had not yet decided whether it would make a statement on the matter.

(Lauro Neto and Thiago Herdy / O Globo)
http://oglobo.globo.com/educacao/professor-da-usp-defende-golpe-de-64-alunos-invadem-aula-12057932#ixzz2xjUT2de4

Bill seeks to punish “terrorists” and strikers during the World Cup (Agência Pública)

27 February 2012, by Andrea Dip

Photo: Daniel Kfouri. Street art by Esqueleto Coletivo

“It is FIFA’s transitory dictatorship,” says the president of the Human Rights Commission of the OAB-SP, about a bill moving through the Senate in parallel with the General World Cup Law

While attention is focused on the General World Cup Law bill (2,330/11), being voted on in the Chamber of Deputies this Tuesday (28), senators Marcelo Crivella (PRB-RJ), Ana Amélia (PP-RS) and Walter Pinheiro (PT-BA) are pressing ahead with another bill in the Senate, known among social movements as the “AI-5 of the World Cup” because, among other things, it bans strikes during the period of the matches and includes “terrorism” in the list of crimes, with harsh punishments and long sentences for anyone who “provokes terror or generalized panic”.

Bill 728/2011, introduced in the Senate in December 2011, still awaits the vote of its rapporteur, Álvaro Dias (PSDB-PR), in the Senate’s Education, Culture and Sport Committee. If approved, it will create eight new criminal offences that do not appear in the Brazilian Penal Code, such as “terrorism”, “violation of computer systems” and “illegal ticket resale”, setting specific penalties for them. This law – a transitory one – would apply only during the FIFA matches.

In the bill’s justification, the senators argue that the General World Cup Law leaves out the definition of a series of offences that is necessary to “guarantee security during the matches”.

The bill further provides that anyone who “commits crimes against the integrity of delegations, referees, volunteers or public sports authorities with the aim of intimidating or influencing the result of a football match may face between two and five years in prison”.

For anyone who “violates, blocks or hinders access to web pages, computer systems or databases used by the organization of the events”, the penalty would be one to four years in prison, plus a fine. And to make the application of the penalties even more efficient, the bill provides for an “incident of procedural celerity” (art. 15), an urgency regime under which an offence could be reported by electronic message or telephone call, operating on weekends and holidays as well.

Martim Sampaio, president of the Human Rights Commission of the São Paulo chapter of the OAB, considers the bill “an assault on the democratic rule of law”. “It is an absurd bill that wants to place market interests above popular sovereignty. A law to protect FIFA rather than citizens, and one that, on top of everything, opens the door to injustice through its vague definitions,” says the lawyer.

For Thiago Hoshino, legal adviser to the human rights organization Terra de Direitos and a member of the Curitiba Popular World Cup Committee, the issue is even more complicated. He believes that bundling so many subjects into a single bill is an attempt to pass long-standing proposals that toughen criminal legislation in particular: “It is a dangerous package that violates basic guarantees of the Constitution. And there is always the risk that these transitory laws become permanent. World Cup legislation is, in fact, a great laboratory of legal innovations. Afterwards, whatever proves useful can remain. It is easier to make a transitory law permanent than to draft and approve a new one,” he explains.

Terrorism

What stands out immediately in the bill is the definition of “terrorism”, which to this day does not exist in the Brazilian penal code. The bill defines it as “the act of provoking terror or generalized panic by means of an offence against a person’s physical integrity or deprivation of a person’s liberty, for ideological, religious or political motives or out of racial, ethnic or xenophobic prejudice”, punishable by a minimum of 15 and a maximum of 30 years’ imprisonment. Martim Sampaio says this is the most dangerous article, because it gives no precise definition of the term: “The way it stands in the law, any demonstration, march, protest, or individual or collective act can be construed as terrorism. It is a blank cheque in the hands of FIFA and the state.”

Documents released by WikiLeaks revealed American pressure for Brazil to create a “terrorism” law, above all to secure the mega-events. In a report by Lisa Kubiske, counsellor at the American Embassy in Brasília, sent to the USA on 24 December 2010, the diplomat expresses concern about statements by Vera Alvarez, head of the Itamaraty’s General Coordination of Sports Exchange and Cooperation, because the Brazilian official “admits that terrorists may attack Brazil because of the Olympics, an unusual statement from a government that believes there is no terrorism in the country”.

Bankers have also been pressing the state for some time to create an anti-terrorism law. Also in 2010, the lack of specific terrorism legislation was the main focus of a congress on money laundering and the financing of extremist groups organized by the Brazilian Federation of Banks (Febraban) in São Paulo. The issue could cost Brazil exclusion from the Financial Action Task Force (Gafi), the multinational body that works to prevent such crimes.

Strikes

The bill also aims to curtail the right to strike, providing for an expansion of the services deemed essential to the population during the World Cup – such as the operation of ports and airports, hotel services and security – and restricting the legality of strikes by workers in these sectors, including those working on World Cup construction sites, from three months before the events until the end of the matches. If it passes, unions that decide to stage a stoppage will have to give 15 days’ notice and keep at least 70 per cent of workers on the job. The government will also be authorized to hire replacement workers to maintain services, which is prohibited by Law 7,783/1989, currently in force in the country, which requires 72 hours’ notice of a strike and sets no minimum percentage of employees who must remain at work during stoppages.

Eli Alves, president of the Labour Law Commission of the OAB-SP, notes that the right to strike is also guaranteed by the Federal Constitution and says the impression left is that “Brazil is being rented out to FIFA, bending its own rules in order to host the World Cup”. Martim Sampaio recalls that strikes were banned during the military dictatorship: “We won this right with the end of the dictatorship; many lives were lost in that process. It cannot be that we now create a transitory dictatorship of FIFA.” And he issues a call: “The only way to keep this law from being approved is popular pressure. We have good examples that this works, such as the Clean Record law. Democracy has to be won every day.”

Opening photo kindly provided by Daniel Kfouri

Legislated to Health? If People Don’t Take Their Health Into Their Own Hands, Governments May Use Policies to Do It for Them (Science Daily)

ScienceDaily (Aug. 31, 2012) — Obesity rates in North America are a growing concern for legislators. Expanded waistlines mean rising health-care costs for maladies such as diabetes, heart disease and some cancers. One University of Alberta researcher says that if people do not take measures to get healthy, they may find that governments will throw their weight into administrative measures designed to help us trim the fat.

Nola Ries of the Faculty of Law’s Health Law and Science Policy Group has recently published several articles exploring potential policy measures that could be used to promote healthier behaviour. From zoning restrictions on new fast-food outlet locations and mandatory menu labels to levies on items such as chips and pop and cash incentives for leading a more healthy and active lifestyle, she says governments at all levels are looking to adopt measures that will help combat both rising health-care costs and declining fitness levels. But she cautions that finding a solution to such a widespread, complex problem will require a multi-layered approach.

“Since eating and physical activity behaviour are complex and influenced by many factors, a single policy measure on its own is not going to be the magic bullet,” said Ries. “Measures at multiple levels — directed at the food and beverage industry, at individuals, at those who educate and those who restrict — must work together to be effective.”

Junk-food tax: A lighter wallet equals a lighter you?

Ries notes that several countries have already adopted tax measures against snack foods and beverages, similar to “sin taxes” placed on alcohol and tobacco. Although Canada has imposed its GST on various sugary and starchy snacks (no tax is charged on basic groceries such as meats, vegetables and fruits), Ries points to other countries such as France and Romania, where the tax rate is much higher. She says taxing products such as sugar-sweetened beverages would likely not only reduce consumption (and curb some weight gain) if the tax is high enough, but also provide a revenue stream to combat the problem on other levels.

“Price increases through taxation do help discourage consumption of ‘sin’ products, especially for younger and lower-income consumers,” said Ries. “Such taxes would provide a source of government revenue that could be directed to other programs to promote healthier lifestyles.”

Warning: This menu label may make you eat healthier

Ries notes that prevailing thought says putting nutrition-value information where consumers can see it will enable them to make better food choices. She says many locales in the United States have already implemented mandatory menu labelling. Even though some studies say menu labels do not have a significant impact on consumer behaviour, nutrition details might help some people make more informed eating choices.

“Providing information is less coercive than taxation and outright bans, so governments should provide information along with any other more restrictive measure,” said Ries. “If a more coercive policy is being implemented, it’s important for citizens to understand the rationale for it.”

Coaxing our way to good health?

Ries notes that some programs designed to create more active citizens, such as the child fitness tax credit, do not seem to have the desired effect. Yet, she says, offering incentives for living healthier and exercising more may have a greater impact on getting people active. She points to similar programs used for weight loss and smoking cessation, which had a positive effect on behaviour change, at least in the short term. More work needs to be done to establish an incentive plan with longer-term effects, one that might let people accumulate points for healthy behaviours and redeem them for health- and fitness-related products and services. She says investing money in more direct incentive programs may be more effective than messages that simply give general advice about healthy lifestyles.

“Instead of spending more money on educational initiatives to tell people what they already know — like eat your greens and get some exercise — I suggest it’s better to focus on targeted programs that help people make and sustain behaviour change,” said Ries. “Financial incentive programs are one option; the question there is how best to target such programs and to design them to support long-term healthy behaviour.”