All posts by renzotaddei


About renzotaddei

Anthropologist, professor at the Federal University of São Paulo

Spending on disasters in Brazil grows 15-fold in six years (O Estado de São Paulo)

JC e-mail 4564, August 17, 2012.

An IPCC report finds that extreme events, combined with high human exposure to risk situations, can multiply tragedies.

Over the past 30 years, the rising occurrence of natural disasters worldwide produced losses that jumped from a few billion dollars in 1980 to more than 200 billion in 2010. In Brazil, in only six years (2004-2010), spending by the three levels of government on rebuilding infrastructure damaged in such events grew from US$ 65 million to more than US$ 1 billion, an increase of more than 15 times.
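As a quick sanity check on the article's figures (a sketch using only the two totals reported above, not part of the original report):

```python
# Reported spending on disaster reconstruction in Brazil (article figures).
spending_2004 = 65e6   # US$ 65 million
spending_2010 = 1e9    # "more than US$ 1 billion" (lower bound)

growth_factor = spending_2010 / spending_2004
print(f"growth factor: {growth_factor:.1f}x")  # about 15.4x, consistent with "more than 15 times"
```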

The figures were cited yesterday at an event presenting the Special Report on Managing the Risks of Extreme Events and Disasters (SREX) by the Intergovernmental Panel on Climate Change (IPCC). The report was commissioned precisely because of this already observed rise in disasters and losses. The warning, however, concerns the future: these situations are expected to occur ever more frequently as a consequence of global warming.

Some of the report's authors were in São Paulo yesterday, at an event organized by Fapesp and the National Institute for Space Research (Inpe), to present the findings specific to Latin America and the Caribbean to the scientific community and decision makers. The main conclusion is that preventing natural disasters takes much more than dealing with the climate.

Vulnerability – "There is nothing natural about a natural disaster. It is the conjunction of a natural event with the vulnerability and exposure of populations to critical situations," says Vicente Barros, of the University of Buenos Aires and one of the report's coordinators.

According to him, the number of extremely hot days and days of extreme rainfall has been increasing since 1950. Even so, says climatologist Carlos Nobre, a co-author of the report, the decisive factor behind the disasters was greater human exposure driven by increasing urban density. In the end, it becomes a problem of urban planning.

Based on existing research, it is not yet possible to say with a high degree of confidence that this increase in extreme events is already a result of climate change. For the future, however, the indication is that warming will likely intensify them. Situations considered extreme today may become more common: rains or droughts that now occur every 20 years could recur every five years, every two, or even annually. Another tendency is that they may alternate, with heavy rain one year and drought the next.

Regardless of the climate, though, the report warns that disaster risk will keep rising, since more people will be in vulnerable situations. "That is where the problems will come from. It is a call to think about forms of adaptation. The Northeast had a major drought this year, and what did the government do? It sent food parcels. That way, the population does not adapt," says researcher José Marengo, of Inpe.

Besides urging government action, the researchers also stressed the need for more regional studies. Confidence about what is most likely to happen, especially in the Amazon, is still not high. One tool for that is the development of regional climate models. One such model, adapted to Brazilian conditions, is being coordinated by Inpe and Fapesp and could be ready within a year.

Social inequality in Argentina (Luis Nassif)

Posted by luisnassif, Fri, 17/08/2012 – 14:17

Argentina needs unified social public policies that make human rights effective

By Maíra Vasconcelos, special to the blog

The strong economic growth Argentina experienced between 2003 and 2007, after the 2001/2002 crisis, did not translate into social development on the same scale. Although socioeconomic indicators show some improvement in poverty and indigence rates, specialists stress the need to build public policy projects that unify social demands and aim at the full realization of human rights.

About 33% of Argentine children and adolescents under 18, in urban and rural areas, live at the poverty line, and 8.5% in indigence. These 2011 figures represent drops of 7.2% and 4.5%, respectively, compared with 2010. The data were presented on August 14 in the report "Argentine Childhood as a Subject of Rights," by the Observatory of Argentine Social Debt (ODSA), at the headquarters of the Pontifical Catholic University of Argentina (UCA).

According to researcher Laura Pautassi, a member of the National Council for Scientific and Technical Research (Conicet) and of the A. Gioja Institute of Legal and Social Research at the University of Buenos Aires (UBA) Law School, the state's social policies lack integration, which would allow these programs to work together to address shortfalls that go beyond income alone.

"We have a set of important economic improvements since the 2001 crisis, but public policies are very fragmented. There is one set for formal wage earners and another of welfare policies, with many programs, some of income transfer and others more comprehensive, such as the Asignación Universal por Hijo," said Pautassi.

The Asignación Universal por Hijo program was created in 2009, during President Cristina Kirchner's first term; today it benefits around 3.5 million children and young people.

Among the results released by ODSA on childhood and adolescence, one highlight is the significant jump in internet access among adolescents aged 13 to 17, which rose from 29.3% in 2007 to 52.7% in 2011.

On the other hand, family life shows serious problems: physical aggression suffered at home, for example, rose from 31.6% in 2007 to 36.4% in 2011, out of a total of roughly 12.3 million children and adolescents.

Human rights indicators

The goal is to measure poverty and indigence with indicators that are not merely socioeconomic but also make visible how far human rights are being fulfilled in society. To combat and eradicate poverty and indigence in Argentina, researchers argue, a purely "monetary view" limits the perception of what is needed to build working tools that support demands and proposals to the state.

According to Pautassi, the Organization of American States (OAS) recently approved an instrument to monitor compliance with obligations by the 16 nations that ratified the Protocol of San Salvador. Specific indicators will therefore have to be developed that cover not only socioeconomic data but also allow the fulfillment or violation of human rights to be measured.

"Today we can see inequalities that were not measured before: ethnic, socioeconomic, and gender inequality. But the variables used to assess human rights differ from those used to measure socioeconomic indices, because what they assess is the effective implementation of rights," Pautassi noted.

Climate change and the Syrian uprising (Bulletin of the Atomic Scientists)

BY SHAHRZAD MOHTADI | 16 AUGUST 2012

Article Highlights

  • A drought unparalleled in recent Syrian history lasted from 2006 to 2010 and led to an unprecedented mass migration of 1.5 million people from farms to urban centers.
  • Because the Assad regime’s economic policies had largely ignored water issues and sustainable agriculture, the drought destroyed many farming communities and placed great strain on urban populations.
  • Although not the leading cause of the Syrian rebellion, the drought-induced migration from farm to city clearly contributed to the uprising and serves as a warning of the potential impact of climate change on political stability.

Two days short of Egyptian leader Hosni Mubarak’s resignation, Al Jazeera published an article, headlined “A Kingdom of Silence,” that contended an uprising was unlikely in Syria. The article cited the country’s “popular president, dreaded security forces, and religious diversity” as reasons that the regime of Bashar al-Assad would not be challenged, despite the chaos and leadership changes already wrought by the so-called Arab Spring. Less than one month later, security forces arrested a group of schoolchildren in the Syrian city of Dara’a, the country’s southern agricultural hub, for scrawling anti-government slogans on city walls. Subsequent protests illustrated the chasm between the regime’s public image — encapsulated in the slogan “Unity, Freedom and Socialism” — and a reality of widespread public disillusion with Assad and his economic policies.

Among the many historical, political, and economic factors contributing to the Syrian uprising, one has been devastating to Syria, yet remains largely unnoticed by the outside world. That factor is the complex and subtle, yet powerful role that climate change has played in affecting the stability and longevity of the state.

The land now encompassed by Syria is widely credited as being the place where humans first experimented with agriculture and cattle herding, some 12,000 years ago. Today, the World Bank predicts the area will experience alarming effects of climate change, with the annual precipitation level shifting toward a permanently drier condition, increasing the severity and frequency of drought.

From 1900 until 2005, there were six droughts of significance in Syria; the average monthly level of winter precipitation during these dry periods was approximately one-third of normal. All but one of these droughts lasted only one season; the exception lasted two. Farming communities were thus able to withstand dry periods by falling back on government subsidies and secondary water resources. The seventh and most recent drought, however, lasted from 2006 to 2010, an astounding four seasons — a true anomaly in the past century. Furthermore, the average level of precipitation in these four years was the lowest of any drought-ridden period in the last century.

While impossible to deem one instance of drought as a direct result of anthropogenic climate change, a 2011 report from the National Oceanic and Atmospheric Administration regarding this recent Syrian drought states: “Climate change from greenhouse gases explained roughly half the increased dryness of 1902-2010.” Martin Hoerling, the lead researcher of the study, explains: “The magnitude and frequency of the drying that has occurred is too great to be explained by natural variability alone. This is not encouraging news for a region that already experiences water stress, because it implies natural variability alone is unlikely to return the region’s climate to normal.” The Intergovernmental Panel on Climate Change predicts that global warming will induce droughts even more severe in this region in the coming decades.

It is estimated that the Syrian drought has displaced more than 1.5 million people; entire families of agricultural workers and small-scale farmers moved from the country’s breadbasket region in the northeast to urban peripheries of the south. The drought tipped the scale of an unbalanced agricultural system that was already feeling the weight of policy mismanagement and unsustainable environmental practices. Further, lack of contingency planning contributed to the inability of the system to cope with the aftermath of the drought. Decades of poorly planned agricultural policies now haunt Syria’s al-Assad regime.

An unsustainable history. Hafez al-Assad — the father of the current president, Bashar al-Assad — ruled Syria for three decades in a fairly non-religious and paradoxical way. To some degree, he modernized the nation’s economy and opened it to the outside world; at the same time, his regime was infamous for repression and the murder of citizens. The elder al-Assad relied on support from the rural masses to maintain his authority, and during his rule, the agricultural sector became one of the most important pillars of the economy. In a 1980 address to the nation, he said: “I am first and last — and of this I hope every Syrian citizen and every Arab outside of Syria will take cognizance — a peasant and the son of a peasant. To lie amidst the spikes of grain or on the threshing floor is, in my eyes, worth all the palaces in the world.” Hafez al-Assad assured the Syrian people of their right to food security and economic stability, granting subsidies to reduce the price of food, oil, and water. The regime emphasized food self-sufficiency, first achieved with wheat in the 1980s. Cotton, a water-intensive crop requiring irrigation, was heavily promoted as a “strategic crop,” at one point becoming Syria’s second-largest export, after oil. As agricultural production swelled, little to no attention was paid to the environmental effects of such short-term, unsustainable agricultural goals.

With a steadfast emphasis on quick agricultural and industrial advancements, the Baathist regime did little to promote the sustainable use of water. In the two decades before the current drought, the state invested heavily in irrigation systems — yet they remain underdeveloped, extremely inefficient, and insufficient. The majority of irrigation systems use groundwater as their main source, because the amount of water from rivers is inadequate. As of 2005, the government began requiring licenses to dig agricultural wells. There are claims that the regime wishes to keep the Kurdish-majority region in the northeast of the country underdeveloped and has denied licenses to some farmers in the region. Whatever the reasons, well licenses are generally difficult to obtain; as a result, more than half the country’s wells are dug illegally and are therefore unregulated. Groundwater reserves in the years leading up to the drought were rapidly depleted.

Unheeded warnings. In 2001, the World Bank warned, “The (Syrian) Government will need to recognize that achieving food security with respect to wheat and other cereals in the short-term as well as the encouragement of water-intensive cotton appear to be undermining Syria’s security over the long-term by depleting available groundwater resources.” With energy and water heavily subsidized by the state, farmers were further encouraged to increase production rather than set sustainable goals.

The price of wheat skyrocketed in 2005, and an overconfident Syrian government sold much of its emergency wheat reserve. In 2008, due to the drought, the Syrian government was forced to concede that its policy of self-sufficiency had failed, and for the first time in two decades it began importing wheat. Meanwhile, nearly 90 percent of the barley crop failed, doubling the price of animal feed in the first year of the drought alone. Small livestock herders in the northeast have lost 70 percent or more of their herds, and many have been forced to migrate. According to the UN Food and Agriculture Organization, one-fourth of the country’s herds were lost as a result of the drought.

In recent years, Assad’s promises of food security have vanished; the United Nations reports that the diet of 80 percent of those severely affected by the drought now consists largely of bread and sugared tea. For those who have remained in the nearly deserted rural communities of Syria’s northeast, food prices have skyrocketed, and 80 percent of residents in the drought-stricken regions are living under the poverty threshold. In 2003, agriculture accounted for one-fourth of Syria’s gross domestic product; in 2008, a year into the drought, that fraction was just 17 percent. The government’s drought management has been reactive, untimely, poorly coordinated, and poorly targeted, according to the UN Office for Disaster Risk Reduction.

The chaotic result. Since the drought began, temporary settlements composed largely of displaced rural people have formed on the outskirts of Damascus, Hama, Homs, Aleppo, and Dara’a — the latter city being the site of the first significant protest in the country in March 2011. This migration has exacerbated economic strains already caused by nearly two million refugees from neighboring Iraq and Palestine. A confidential cable from the American embassy in Damascus to the US State Department, written shortly after the drought began, warned of the unraveling social and economic fabric of Syria’s rural farming communities due to the drought. It noted that the mass migration “could act as a multiplier on social and economic pressures already at play and undermine stability in Syria.” Reporting during the uprising in late 2011, the late New York Times correspondent Anthony Shadid recounts: “There’s that sense of corruption in the society itself, that the society itself is falling apart, being pulled apart; that the countryside is miserable; that there’s nothing being done to make lives better there.” Reports show that the earliest points of unrest were those that were most economically devastated by the drought and served as migratory settlement points.

“The regime’s failure to put in place economic measures to alleviate the effects of drought was a critical driver in propelling such massive mobilizations of dissent,” concludes Suzanne Saleeby, a contributor to Jadaliyya, a digital magazine produced by the Arab Studies Institute. “In these recent months, Syrian cities have served as junctures where the grievances of displaced rural migrants and disenfranchised urban residents meet and come to question the very nature and distribution of power.”

The considerations that impel an individual to protest in streets that are known to be lined by armed security forces extend beyond an abstract desire for democracy. Only a sense of extreme desperation and hopelessness can constitute the need — rather than a mere desire — to bring change to a country’s economic, political, and social systems. A combination of stress factors resulting from policies of economic liberalization — including growing income disparities and the geographic limitations of the economic reforms — shattered the Syrian regime’s projected image of stability. Even if it was not the leading cause of the Syrian rebellion, the drought and resulting migration played an important role in triggering the civil unrest now underway in Syria.

The drought in Syria is one of the first modern events in which a climatic anomaly resulted in mass migration and contributed to state instability. This is a lesson and a warning for the greater catalyst that climate change will become in a region already under the strains of cultural polarity, political repression, and economic inequity.

In the Vale do Ribeira, the Public Defender's Office defends traditional communities against corruption and the carbon market (Racismo Ambiental)

By racismoambiental, 24/06/2012 11:45

Tania Pacheco*

"Placed before all these men gathered together, all these women, all these children (be fruitful, multiply, and fill the earth, so they had been commanded), whose sweat came not from the work they did not have but from the unbearable agony of not having it, God repented of the evils he had done and permitted, to such a point that, in a fit of contrition, he wished to change his name to a more humane one. Speaking to the multitude, he announced: 'From today you shall call me Justice.' And the multitude answered: 'Justice we already have, and it does not heed us.' God said: 'In that case, I shall take the name of Law.' And the multitude answered again: 'Law we already have, and it does not know us.' And God: 'Then I shall keep the name of Charity, which is a pretty name.' Said the multitude: 'We have no need of charity; what we want is a Justice that is enforced and a Law that respects us.'" José Saramago (preface to Terra, by Sebastião Salgado).

The passage above is taken from a legal filing: a writ of mandamus with a request for an injunction, filed on June 6 by public defenders Thiago de Luna Cury and Andrew Toshio Hayama, of the 2nd and 3rd Public Defender's Offices of Registro, São Paulo, against the mayor of Iporanga, in the Lageado area of the Vale do Ribeira. Its goal: to prevent the municipal authority, following a practice that is becoming routine in the state, from expelling traditional communities and expropriating vast tracts of land, turning them into natural parks to be traded on the carbon market.

To make money at any cost, it does not matter whether those lands are home to traditional communities, quilombolas, and peasants. It does not matter whether the right to free, prior, and informed consultation established by ILO Convention 169 was respected. It does not even matter whether, had public hearings been held, the communities would have been able to fully understand what was being proposed and to decide whether it was in their interest to abandon their territories, traditions, and people, given that this type of strict conservation unit allows no residents. In partnership with companies and sham NGOs, the scheme is set up, decreed at the stroke of a pen, and the profit is guaranteed and divided among the members of the gangs.

But that is not quite how it went in Iporanga. The Public Defender's Office acted, and acted for Justice and for the Law, in a manner indignant, learned, forceful, poetic and, always, very well grounded in legislation. And it fell to Judge Raphael Garcia Pinto, of Eldorado, São Paulo, to recognize this in a ruling issued on June 11, 2012.

This blog uncompromisingly defends the "democratization of the justice system." Both the writ and the ruling are an example of exactly that: democracy practiced by legal professionals. That is why we insist on sharing them, not only as a tribute to defenders Thiago de Luna Cury and Andrew Toshio Hayama (and also to Judge Raphael Garcia Pinto), but also as an example to be followed across Brazil, as a way of defending the communities and honoring all of us.

To read the writ of mandamus, click HERE. To read the ruling, click HERE. Happy reading.

* Based on information sent by Luciana Zaffalon.

Exotic traits of the 'God particle' surprise physicists (Folha de São Paulo)

JC e-mail 4559, August 10, 2012.

A study with Brazilian participation indicates that the Higgs boson may not fit today's most widely accepted theory. A preliminary analysis hints at still-unknown particles; other scientists urge caution with the data.

The God particle, it seems, is behaving just the way the devil likes: badly. That is what a preliminary analysis of data collected at the LHC, the world's largest particle accelerator, indicates.

The work, by Oscar Éboli of the Physics Institute at the University of São Paulo (USP), suggests that the so-called Higgs boson, thought to be responsible for giving mass to everything that exists, is not behaving as it should according to the theory that predicted its existence, the Standard Model. If confirmed, the particle's anomalous behavior would be the cue for a new era of physics.

The discovery of the possible boson, announced with great fanfare last month, was celebrated as the conclusion of a glorious chapter in the study of the fundamental particles of matter. Its existence, in short, would explain why the Sun can produce its energy and why creatures like us can exist.

Given its importance for the consistency of the Universe (and drawing an analogy with the biblical story of the Tower of Babel), Nobel laureate physicist Leon Lederman nicknamed the boson the "God particle."

To analyze the Higgs boson, one must first collide protons at extremely high speed, the LHC's primary function. The high-energy impact then produces heaps of new particles, among them the Higgs, which quickly "decays," as physicists say.

Because it is highly unstable, the boson "decomposes" as the collision energy dissipates, and other particles appear in its place. It is this byproduct that can be detected and indicate the existence of the Higgs boson. This, however, requires a great many collisions before the statistics begin to reveal the sought-after boson.

The data collected so far are enough to point to the particle's existence, but its specific characteristics could not yet be determined. "We are still at an early stage of exploring its properties," says Éboli. "There is, however, an indication that the Higgs decays into two photons [particles of light] more often than would be expected in the Standard Model."

The results of this preliminary analysis were posted on Arxiv.org, an online repository of physics studies, and covered in the magazine Pesquisa Fapesp.

A welcome surprise – The news excites scientists. "For most physicists, the Standard Model is a good representation of nature, but it is not the final theory," says Éboli. "If it is indeed confirmed that the Higgs is decaying into two photons more than expected, that could mean new particles are within the LHC's discovery reach."

It could be the first glimpse of a new "zoo" of elementary building blocks of matter. Such exotic particles were predicted to start appearing at the LHC's elevated energies.

All very interesting, but nothing settled. "It is very serious work, but I think it is still too early to draw any conclusion about whether or not this is the standard Higgs," says Sérgio Novaes, a researcher at Unesp who takes part in one of the experiments that detected the Higgs boson. "By the end of the year things will be a bit clearer," he reckons.

Brazil's indigenous population grew 205% in two decades (Agência Brasil)

JC e-mail 4559, August 10, 2012.

In the context of August 9, the International Day of Indigenous Peoples, indigenous leaders held a protest at the headquarters of the Office of the Attorney General of the Union (AGU) to call for the suspension of Directive 303, which authorizes intervention in indigenous lands without any need to consult the indigenous peoples.

Today (10th), the Brazilian Institute of Geography and Statistics (IBGE) released 2010 Census data showing that indigenous people in Brazil number 896,900, from 305 ethnic groups, speaking 274 indigenous languages. It is the first time the agency has collected information on the peoples' ethnicity. The survey also resumes the investigation of indigenous languages, halted for 60 years.

Based on the 2010 Census, the IBGE reports that the country's indigenous population has grown 205% since 1991, when the first survey in the current format was conducted. At the time, indigenous people numbered 294,000. The figure reached 734,000 in the 2000 Census, a 150% increase over 1991.
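The two growth percentages follow from the three census counts cited above; a quick check (a sketch based on the article's figures only):

```python
# Indigenous population counts from the IBGE censuses cited in the article.
pop_1991 = 294_000
pop_2000 = 734_000
pop_2010 = 896_900

def pct_growth(old, new):
    """Percentage growth from old to new."""
    return (new - old) / old * 100

print(f"1991 -> 2000: {pct_growth(pop_1991, pop_2000):.0f}%")  # ~150%
print(f"1991 -> 2010: {pct_growth(pop_1991, pop_2010):.0f}%")  # ~205%
```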

The survey shows that, of the country's 896,900 indigenous people, more than half (63.8%) live in rural areas, the reverse of 2000, when more than half (52%) lived in urban areas.

In the IBGE's assessment, the explanation for the growth of the indigenous population may lie in the falling fertility rate of women in rural areas, although the 2010 figure is not yet final. Between 1991 and 2000, that rate fell from 6.4 children per woman to 5.8.

Another factor that may explain the increase is ethnogenesis, the "reconstruction of indigenous communities" that supposedly no longer existed, explains José Maurício Arruti, professor of anthropology at the University of Campinas (Unicamp).

The IBGE data indicate that most indigenous people (57.7%) live in the 505 indigenous lands recognized by the government as of December 31, 2010, the survey's reference date. These areas cover 12.5% of the national territory, most of it in the North region, which has the largest indigenous population (342,000). In the Southeast, by contrast, 84% of the 99,100 indigenous people live outside their original lands, followed by the Northeast (54%).

To arrive at the total, the IBGE added those who declared themselves indigenous (817,900) to 78,900 people who live on indigenous lands but had not chosen that classification when answering the question on color or race. This group was asked a second question: whether the respondent considered himself or herself indigenous. The aim was to avoid distortions.
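The two-step count described above can be sanity-checked with the article's figures (a sketch; the small gap to the headline total comes from rounding in the components):

```python
# The census total combines two groups (article figures, in thousands):
self_declared = 817.9   # answered "indigenous" on the color/race question
on_lands_extra = 78.9   # living on indigenous lands, added via the follow-up question

total = self_declared + on_lands_extra
print(f"total: {total:.1f} thousand")  # 896.8 thousand, matching the reported ~896.9 thousand after rounding
```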

Nilza Pereira, in charge of the survey, explained that the category "Indians" was invented by the non-indigenous population, which is why some respondents were confused and did not initially declare themselves indigenous. "For an indigenous person, he is a xavante, a kaiapó, of brown, green, or even maroon color," she explained.

The most populous indigenous land in the country is Yanomami, with 25,700 inhabitants (5% of the total) spread across Amazonas and Roraima. The Tikúna (AM) are the most numerous ethnic group, with 46,000 people, 39,300 of them on the indigenous land and the rest outside. Next come the Guarani Kaiowá (MS), with 43,000, of whom 35,000 are on the indigenous land and 8,100 live outside.

The 2010 Census also revealed that 37.4% of indigenous people over age 5 speak indigenous languages, despite years of contact with non-indigenous people. About 120,000 do not speak Portuguese. Peoples considered isolated were not interviewed, given the constraints of the contact policy itself, which aims to protect them, and are not counted in the 2010 Census.

Public schools send 45% of students to federal universities (Valor Econômico)

JC e-mail 4559, August 10, 2012.

According to the study "Socioeconomic and Cultural Profile of Undergraduate Students at Brazilian Federal Universities," completed by the National Forum of Deans of Community and Student Affairs (Fonaprace) in July 2011, 45% of the roughly 900,000 students enrolled at the country's 59 federal higher education institutions came from public secondary schools.

As regards specifically the requirement to reserve 50% of federal university places for students who attended public secondary schools, the quota law approved on Tuesday (7th) will have a much smaller impact on the current enrollment system than the controversy it has generated suggests.

The Fonaprace survey behind the study was conducted between October and November 2010 and completed in July 2011.

The third edition of the study (the first was produced in 1996-1997) shows that the highest shares of students from public schools are in the North, at 71.5%, and the South (50.5%). The Northeast and Center-West follow, with 41.5% and 40.5% respectively. The Southeast has the lowest rate: 37%.

The "Socioeconomic Profile" is a Fonaprace sample survey of federal university students enrolled in on-campus undergraduate programs. It adopted a 95% confidence level and a 5% sampling error per institution. The database was supplied by the Ministry of Education's Higher Education Secretariat (Sesu-MEC) and went through a validation process with each participating university, which answered quantitative and qualitative questionnaires in an online system.
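The article states the confidence level and margin of error but not how the sample was sized. Under the standard (Cochran) formula for estimating a proportion with maximum variance, an assumption not confirmed by the source, those parameters imply roughly:

```python
import math

# Illustrative reconstruction: Cochran sample-size formula for a proportion,
# assuming the most conservative proportion p = 0.5. The article does not
# state the formula Fonaprace used; this only shows what the stated
# parameters imply before any finite-population correction.
z = 1.96   # z-score for a 95% confidence level
p = 0.5    # most conservative proportion (maximum variance)
e = 0.05   # 5% margin of error

n = math.ceil(z**2 * p * (1 - p) / e**2)
print(n)  # 385 respondents per institution
```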

Rosana Pereira Parente, dean of undergraduate studies at the Federal University of Amazonas (Ufam) and a Fonaprace member, observes that a state affirmative-action policy to equalize access to higher education matters because it fights inequality in the country, but that a single model for different realities can be considered "a complicated strategy." "Some particularities must be observed. Here in the North region, the private basic-education market is not as strong as in the big centers, and we have more indigenous people than black and brown people. With the law, affirmative actions we have here are harmed," Rosana notes.

At Ufam, the creation of new quota policies is debated by a dedicated group within the university council. At present, the institution fills 50% of its spots through the National High School Exam (Enem), while its continuous admissions process assigns the other 50% to high-school students, who take “mini entrance exams” starting in the first year of high school. “But we want to establish actions that benefit low-income students,” Rosana adds.

One of the study’s main conclusions is that the number of black, brown, indigenous and poor students has grown in recent years, groups also covered by the quota law approved this week. Those responsible for the survey suggest that, in light of this result, it is urgent to expand investment in student-assistance policy.

“There are already studies in Brazil that try to monitor the progress of affirmative policies in education. Now, with a national law, the new model has to come with student-assistance actions that guarantee not only access but also the retention of this ‘new’ student,” says Dalila Andrade Oliveira, professor at the School of Education of the Federal University of Minas Gerais (UFMG) and president of the National Association of Graduate Studies and Research in Education (Anped).

From 2008 to 2012, the MEC budget for the National Student Assistance Program (Pnaes), which provides monthly stipends and financial aid for food, housing and course materials, grew 300% in nominal terms, to R$ 500 million. But federal-university leaders say the resources are insufficient. “Pnaes would need to rise to R$ 1.5 billion to meet current needs. The new law, which infringes on university autonomy, could have come with a provision guaranteeing matching funds for universities that take in poorer students,” criticizes Gustavo Balduíno, of the leadership of the National Association of Directors of Federal Higher Education Institutions (Andifes).

On the racial dimension, the Fonaprace survey shows that in 2010 white students were still the majority at federal universities, at 54%, down from 59% in the previous survey, in 2004. Black students rose from 5.9% in 2004 to 8.7% in 2010, a share that grew in every region of the country, most notably in the North, which nearly doubled its percentage (13.4%, up from 6.8% in 2004), and the Northeast, where the figure went from 8.6% to 12.5%.

About 45% of federal-university students belong to classes C, D and E. Class A students account for 15% of 2010 enrollments, with the highest concentration in the Center-West region (22%). Students in class B represent 41% of the total.

Doctors Often Don’t Disclose All Possible Risks to Patients Before Treatment (Science Daily)

ScienceDaily (Aug. 7, 2012) — Most informed consent disputes involve disagreements about who said what and when, not stand-offs over whether a particular risk ought to have been disclosed. But doctors may “routinely underestimate the importance of a small set of risks that vex patients” according to international experts writing in this week’s PLoS Medicine.

Increasingly, doctors are expected to advise and empower patients to make rational choices by sharing information that may affect treatment decisions, including risks of adverse outcomes. However, authors from Australia and the US led by David Studdert from the University of Melbourne argue that doctors, especially surgeons, are often unsure which clinical risks they should disclose and discuss with patients before treatment.

To understand more about the clinical circumstances in which disputes arise between doctors and patients in this area, the authors analyzed 481 malpractice claims and patient complaints from Australia involving allegations of deficiencies in the process of obtaining informed consent.

The authors found that 45 (9%) of the cases studied were disputed duty cases — that is, they involved head-to-head disagreements over whether a particular risk ought to have been disclosed before treatment. Two-thirds of these disputed duty cases involved surgical procedures, and the majority (38/45) of them related to five specific outcomes that had quality of life implications for patients, including chronic pain and the need for re-operation.

The authors found that the most common justifications doctors gave for not telling patients about particular risks before treatment were that they considered such risks too rare to warrant discussion or the specific risk was covered by a more general risk that was discussed.

However, nine in ten of the disputes studied centered on factual disagreements — arguments over who said what, and when. The authors say: “Documenting consent discussions in the lead-up to surgical procedures is particularly important, as most informed consent claims and complaints involved factual disagreements over the disclosure of operative risks.”

The authors say: “Our findings suggest that doctors may systematically underestimate the premium patients place on understanding certain risks in advance of treatment.”

They conclude: “Improved understanding of these situations helps to spotlight gaps between what patients want to hear and what doctors perceive patients want — or should want — to hear. It may also be useful information for doctors eager to avoid medico-legal disputes.”

Rooting out Rumors, Epidemics, and Crime — With Math (Science Daily)

ScienceDaily (Aug. 10, 2012) — A team of EPFL scientists has developed an algorithm that can identify the source of an epidemic or information circulating within a network, a method that could also be used to help with criminal investigations.

Investigators are well aware of how difficult it is to trace an unlawful act to its source. The job was arguably easier with old, Mafia-style criminal organizations, as their hierarchical structures more or less resembled predictable family trees.

In the Internet age, however, the networks used by organized criminals have changed. Innumerable nodes and connections escalate the complexity of these networks, making it ever more difficult to root out the guilty party. EPFL researcher Pedro Pinto of the Audiovisual Communications Laboratory and his colleagues have developed an algorithm that could become a valuable ally for investigators, criminal or otherwise, as long as a network is involved. The team’s research was published August 10, 2012, in the journal Physical Review Letters.

Finding the source of a Facebook rumor

“Using our method, we can find the source of all kinds of things circulating in a network just by ‘listening’ to a limited number of members of that network,” explains Pinto. Suppose you come across a rumor about yourself that has spread on Facebook and been sent to 500 people — your friends, or even friends of your friends. How do you find the person who started the rumor? “By looking at the messages received by just 15-20 of your friends, and taking into account the time factor, our algorithm can trace the path of that information back and find the source,” Pinto adds. This method can also be used to identify the origin of a spam message or a computer virus using only a limited number of sensors within the network.
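The idea of tracing backwards from a few timestamped observations can be illustrated with a deliberately simplified sketch. This is not the authors' estimator (which models random propagation delays statistically); here every edge adds exactly one time unit, and each candidate source is scored by how consistently its distances explain the observed arrival times. The graph and times are invented for the example:

```python
from collections import deque

# Toy contact network (a tree, so the answer is unambiguous); hypothetical data.
graph = {
    0: [1, 2], 1: [0, 3, 4], 2: [0, 5, 6],
    3: [1], 4: [1], 5: [2], 6: [2],
}

def bfs_dist(src):
    """Hop distance from src to every node (one time unit per edge)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def infer_source(observations):
    """observations maps observer node -> arrival time of the rumor.
    For the true source, arrival_time - distance equals the same unknown
    start time at every observer, so we pick the candidate whose offsets
    have the smallest spread."""
    best, best_spread = None, float("inf")
    for cand in graph:
        d = bfs_dist(cand)
        offsets = [t - d[o] for o, t in observations.items()]
        spread = max(offsets) - min(offsets)
        if spread < best_spread:
            best, best_spread = cand, spread
    return best

# Rumor starts at node 0 at time 10; only three observers report arrival times.
start, true_d = 10, bfs_dist(0)
obs = {o: start + true_d[o] for o in (3, 4, 5)}
print(infer_source(obs))  # → 0
```

The real method has to cope with noisy, random delays rather than unit hops, which is what makes the statistical machinery in the paper necessary, but the underlying logic of triangulating from a sparse set of observers is the same.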

Trace the propagation of an epidemic

Out in the real world, the algorithm can be employed to find the primary source of an infectious disease, such as cholera. “We tested our method with data on an epidemic in South Africa provided by EPFL professor Andrea Rinaldo’s Ecohydrology Laboratory,” says Pinto. “By modeling water networks, river networks, and human transport networks, we were able to find the spot where the first cases of infection appeared by monitoring only a small fraction of the villages.”

The method would also be useful in responding to terrorist attacks, such as the 1995 sarin gas attack in the Tokyo subway, in which poisonous gas released in the city’s subterranean tunnels killed 13 people and injured nearly 1,000 more. “Using this algorithm, it wouldn’t be necessary to equip every station with detectors. A sample would be sufficient to rapidly identify the origin of the attack, and action could be taken before it spreads too far,” says Pinto.

Identifying the brains behind a terrorist attack

Computer simulations of the telephone conversations that could have occurred during the terrorist attacks on September 11, 2001, were used to test Pinto’s system. “By reconstructing the message exchange inside the 9/11 terrorist network extracted from publicly released news, our system spit out the names of three potential suspects — one of whom was found to be the mastermind of the attacks, according to the official enquiry.”

The method’s validity has thus been demonstrated a posteriori. But according to Pinto, it could also be used preventatively — for example, to understand an outbreak before it gets out of control. “By carefully selecting points in the network to test, we could more rapidly detect the spread of an epidemic,” he points out. It could also be a valuable tool for advertisers who use viral marketing strategies, leveraging the Internet and social networks to reach customers. For example, the algorithm would allow them to identify the specific Internet blogs that are most influential for their target audience and to understand how the information in those articles spreads throughout the online community.

Populations Survive Despite Many Deleterious Mutations: Evolutionary Model of Muller’s Ratchet Explored (Science Daily)

ScienceDaily (Aug. 10, 2012) — From protozoans to mammals, evolution has created more and more complex structures and better-adapted organisms. This is all the more astonishing as most genetic mutations are deleterious. Especially in small asexual populations that do not recombine their genes, unfavourable mutations can accumulate. This process is known as Muller’s ratchet in evolutionary biology. The ratchet, proposed by the American geneticist Hermann Joseph Muller, predicts that the genome deteriorates irreversibly, leaving populations on a one-way street to extinction.

Equilibrium of mutation and selection processes: A population can be divided into groups of individuals that carry different numbers of deleterious mutations. Groups with few mutations are amplified by selection but lose members to other groups by mutation. Groups with many mutations don’t reproduce as much, but gain members by mutation. (Credit: © Richard Neher/MPI for Developmental Biology)

In collaboration with colleagues from the US, Richard Neher from the Max Planck Institute for Developmental Biology has shown mathematically how Muller’s ratchet operates and he has investigated why populations are not inevitably doomed to extinction despite the continuous influx of deleterious mutations.

The great majority of mutations are deleterious. “Due to selection, individuals with more favourable genes reproduce more successfully and deleterious mutations disappear again,” explains the population geneticist Richard Neher, leader of an independent Max Planck research group at the Max Planck Institute for Developmental Biology in Tübingen, Germany. However, in small populations, such as an asexually reproducing virus early during infection, the situation is not so clear-cut. “It can then happen by chance, by stochastic processes alone, that deleterious mutations in the viruses accumulate and the mutation-free group of individuals goes extinct,” says Richard Neher. This is known as a click of Muller’s ratchet, which is irreversible — at least in Muller’s model.

Muller published his model on the evolutionary significance of deleterious mutations in 1964. Yet until now, a quantitative understanding of the ratchet’s dynamics has been lacking. Richard Neher and Boris Shraiman from the University of California in Santa Barbara have now published a new theoretical study on Muller’s ratchet. They chose a comparatively simple model with only deleterious mutations, all having the same effect on fitness. The scientists assumed selection against those mutations and analysed how fluctuations in the group of the fittest individuals affected the less fit ones and the whole population. Richard Neher and Boris Shraiman discovered that the key to understanding Muller’s ratchet lies in a slow response: if the number of the fittest individuals is reduced, the mean fitness decreases only after a delay. “This delayed feedback accelerates Muller’s ratchet,” Richard Neher comments on the results. It clicks more and more frequently.
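A click of the ratchet is easy to reproduce in a toy Wright-Fisher simulation (an illustrative sketch with invented parameters, not the authors' analysis): each generation the population is resampled in proportion to fitness (1 - s)^k, where k is an individual's count of deleterious mutations, and new mutations arrive at rate u. Because there is no back mutation, the least-loaded class can only shrink or vanish; each time it is lost for good, the minimum load increases by one click:

```python
import random

random.seed(1)
N, s, u = 200, 0.02, 0.1   # population size, selection cost, mutation rate (toy values)

def generation(pop):
    """One Wright-Fisher step: fitness-weighted resampling, then mutation."""
    weights = [(1 - s) ** k for k in pop]   # fitness of an individual with k mutations
    children = random.choices(pop, weights=weights, k=N)
    # Each child independently gains one new deleterious mutation with probability u
    return [k + (random.random() < u) for k in children]

pop = [0] * N                  # start with a mutation-free population
least_loaded = [min(pop)]
for _ in range(500):
    pop = generation(pop)
    least_loaded.append(min(pop))

# min(pop) never decreases: every increase is one irreversible "click" of the ratchet
print("clicks after 500 generations:", least_loaded[-1])
```

With these parameters the mutation-free class is small enough that chance fluctuations wipe it out repeatedly, which is exactly the stochastic accumulation the article describes; increasing N or s slows the clicking down.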

“Our results are valid for a broad range of conditions and parameter values — for a population of viruses as well as a population of tigers.” However, he does not expect to find the model’s conditions one-to-one in nature. “Models are made to understand the essential aspects, to identify the critical processes,” he explains.

In a second study, Richard Neher, Boris Shraiman and several other US scientists from the University of California in Santa Barbara and Harvard University in Cambridge investigated how a small asexual population could escape Muller’s ratchet. “Such a population can only stay in a steady state for a long time when beneficial mutations continually compensate for the negative ones that accumulate via Muller’s ratchet,” says Richard Neher. For their model the scientists assumed a steady environment and suggest that there can be a mutation-selection balance in every population. They calculated the rate of favourable mutations required to maintain the balance. The result was surprising: even under unfavourable conditions, a comparatively small proportion of positive mutations, in the range of a few percent, is sufficient to sustain a population.

These findings could explain the long-term maintenance of mitochondria, the so-called power plants of the cell, which have their own genome and divide asexually. By and large, evolution is driven by random events, or, as Richard Neher says: “Evolutionary dynamics are very stochastic.”

Why Do Organisms Build Tissues They Seemingly Never Use? (Science Daily)

ScienceDaily (Aug. 10, 2012) — Why, after millions of years of evolution, do organisms build structures that seemingly serve no purpose?

A study conducted at Michigan State University and published in the current issue of The American Naturalist investigates the evolutionary reasons why organisms go through developmental stages that appear unnecessary.

“Many animals build tissues and structures they don’t appear to use, and then they disappear,” said Jeff Clune, lead author and former doctoral student at MSU’s BEACON Center of Evolution in Action. “It’s comparable to building a roller coaster, razing it and building a skyscraper on the same ground. Why not just skip ahead to building the skyscraper?”

Why humans and other organisms retain seemingly unnecessary stages in their development has been debated among biologists since 1866. This study explains that organisms jump through these extra hoops to avoid disrupting a developmental process that works. Clune’s team called this concept the “developmental disruption force.” But Clune says it also could be described as “if the shoe fits, don’t change a thing.”

“In a developing embryo, each new structure is built in a delicate environment that consists of everything that has already developed,” said Clune, who is now a postdoctoral fellow at Cornell University. “Mutations that alter that environment, such as by eliminating a structure, can thus disrupt later stages of development. Even if a structure is not actually used, it may set the stage for other functional tissues to grow properly.”

Going back to the roller coaster metaphor, even though the roller coaster gets torn down, the organism needs the parts from that teardown to build the skyscraper, he added.

“An engineer would simply skip the roller coaster step, but evolution is more of a tinkerer and less of an engineer,” Clune said. “It uses whatever parts that are lying around, even if the process that generates those parts is inefficient.”

An interesting consequence is that newly evolved traits tend to get added at the end of development, because there is less risk of disrupting anything important. That, in turn, means that there is a similarity between the order things evolve and the order they develop.

A new technology called computational evolution allowed the team to conduct experiments that would be impossible to reproduce in nature.

Rather than observe embryos grow, the team of computer scientists and biologists used BEACON’s Avida software to perform experiments with evolution inside a computer. The Avidians — self-replicating computer programs — mutate, compete for resources and evolve, mimicking natural selection in real-life organisms. Using this software, Clune’s team observed as Avidians evolved to perform logic tasks. They recorded the order that those tasks evolved in a variety of lineages, and then looked at the order those tasks developed in the final, evolved organism.

The results helped settle an age-old debate, showing that developmental order does resemble evolutionary order, at least in this computationally evolving system. Because thousands of generations can happen overnight in a computer, the team was able to repeat the experiment many times and document that this similarity occurs consistently.

Additional MSU researchers contributing to the study included BEACON colleagues Richard Lenski, Robert Pennock and Charles Ofria. The research was funded by the National Science Foundation.

USDA: Ongoing Drought Causes Significant Crop Yield Declines (Science Daily)

ScienceDaily (Aug. 10, 2012) — Corn production will drop 13 percent to a six-year low, the U.S. Agriculture Department said today (Aug. 10), confirming what many farmers already knew — they are having a very bad year, Ohio State University Extension economist Matt Roberts said.

Drought’s impact on corn. (Credit: Image courtesy of OSU Extension)

In its monthly crops report, the USDA today cut its projected U.S. corn production to 10.8 billion bushels, down 17 percent from last month’s forecast of nearly 13 billion bushels and 13 percent lower than last year. Soybean production is also forecast to fall, to 2.69 billion bushels, 12 percent lower than last year and below the 3.05 billion bushels the USDA forecast last month.

The projections mean this year’s corn crop will be the smallest since 2006 and the soybean crop the smallest since 2003, Roberts said. The USDA said it expects corn growers to average 123.4 bushels per acre, down 24 bushels from last year, while soybean growers are expected to average 36.1 bushels per acre, down 5.4 bushels from last year.

In Ohio, those numbers translate into a projected 126 bushels per acre yield, which is down 32 bushels per acre from last year for corn, he said. Soybeans are projected at 42 bushels per acre, down from last year’s 47.5 bushels per acre yield.

The impact on growers is going to be tough, Roberts said.

“I don’t think this is a surprise to anyone, especially growers,” he said. “For most farmers, this is the year that they will lose much of the profits they’ve made over five good years.

“I don’t expect to see a lot of bankruptcies, but certainly there will be a lot of belt-tightening among farmers this year. With crop insurance so widespread, it will help ensure that we don’t see a lot of bankruptcies and help farmers weather this storm.”

This comes as Ohioans have suffered through multiple days of record-setting temperatures of over 100 degrees this summer, with scant rainfall that has left crop fields parched. In fact, with an average temperature of 77.6 degrees, July was the hottest month ever recorded nationwide, breaking a record set during the Dust Bowl of the 1930s, according to the National Climatic Data Center.

Most of Ohio except for some counties near the Kentucky, West Virginia and Pennsylvania borders is experiencing moderate drought, with some counties near the Indiana and Michigan borders experiencing severe and extreme drought as of Aug. 7, according to the most recent U.S. Drought Monitor. Nationwide, 80 percent of the U.S. is experiencing drought conditions, up from 40 percent in May, according to the monitor.

Topsoil moisture in Ohio was rated 45 percent very short, 41 percent short and 14 percent adequate, with no surplus, according to the latest U.S. Department of Agriculture Weekly Crop Report.

The lack of rainfall has decimated many corn crops, which were damaged by insufficient rain during the crucial pollination period. So even though growers planted a record acreage of corn this year in anticipation of a strong year with record yields, the shortage of rain has caused yield forecasts to continue to decline, Roberts said.

And while soybeans weren’t as negatively impacted by the lack of rain earlier in the growing season, ongoing drought conditions are taking a toll on crops, which are seeing yield estimates decline as well, he said, noting that further yield declines are likely as the growing season continues.

The corn and soybean forecasts are largely in line with market expectations, Roberts said.

Corn prices have risen 63 percent since mid-June, reaching an all-time high today (Aug. 10) of $8.49 a bushel on the Chicago Board of Trade.

“Most analysts in February expected a corn yield of 163 bushels per acre, meaning there has now been a 40-bushel-per-acre yield cut since the beginning of the year, with many analysts expecting yields to go below 120 bushels per acre when it is all said and done,” he said. “That means there’s just a lot less corn around than what we expected.

“That leaves 2.3 billion fewer bushels of corn to be consumed than in 2011, which means that consumption has to be rationed out. And even though ethanol will be down about 10 percent and exports will be down by 25 percent from two years ago, we will still end up with extremely tight inventories.”

For livestock farmers, the situation is even worse, Roberts said.

“Livestock producers will feel more pain from higher feed prices and negative profit margins,” he said. “We will see a lot more stress on the entire livestock end, from poultry all the way up to cows.

“Cow/calf producers are in a very difficult situation because of poor pasture conditions and high hay costs as a result of this historic drought. Overall, it’s going to be a very bad year for the farm economy. While there will be pockets of growers that don’t feel it as bad, livestock farmers will feel it just all around because of the overall feed costs.”

NOAA Raises Hurricane Season Prediction Despite Expected El Niño (Science Daily)

ScienceDaily (Aug. 10, 2012) — This year’s Atlantic hurricane season got off to a busy start, with 6 named storms to date, and may have a busy second half, according to the updated hurricane season outlook issued Aug. 9, 2012 by NOAA’s Climate Prediction Center, a division of the National Weather Service. The updated outlook still indicates a 50 percent chance of a near-normal season, but increases the chance of an above-normal season to 35 percent and decreases the chance of a below-normal season to only 15 percent from the initial outlook issued in May.

Satellite image of Hurricane Ernesto taken on Aug. 7, 2012 in the Gulf of Mexico. (Credit: NOAA)

Across the entire Atlantic Basin for the season — June 1 to November 30 — NOAA’s updated seasonal outlook projects a total (which includes the activity-to-date of tropical storms Alberto, Beryl, Debbie, Florence and hurricanes Chris and Ernesto) of:

  • 12 to 17 named storms (top winds of 39 mph or higher), including:
  • 5 to 8 hurricanes (top winds of 74 mph or higher), of which:
  • 2 to 3 could be major hurricanes (Category 3, 4 or 5; winds of at least 111 mph)

These numbers are higher than the initial outlook in May, which called for 9-15 named storms, 4-8 hurricanes and 1-3 major hurricanes. Based on a 30-year average, a normal Atlantic hurricane season produces 12 named storms, six hurricanes, and three major hurricanes.

“We are increasing the likelihood of an above-normal season because storm-conducive wind patterns and warmer-than-normal sea surface temperatures are now in place in the Atlantic,” said Gerry Bell, Ph.D., lead seasonal hurricane forecaster at the Climate Prediction Center. “These conditions are linked to the ongoing high activity era for Atlantic hurricanes that began in 1995. Also, strong early-season activity is generally indicative of a more active season.”

However, NOAA seasonal climate forecasters also announced today that El Niño will likely develop in August or September.

“El Niño is a competing factor, because it strengthens the vertical wind shear over the Atlantic, which suppresses storm development. However, we don’t expect El Niño’s influence until later in the season,” Bell said.

“We have a long way to go until the end of the season, and we shouldn’t let our guard down,” said Laura Furgione, acting director of NOAA’s National Weather Service. “Hurricanes often bring dangerous inland flooding as we saw a year ago in the Northeast with Hurricane Irene and Tropical Storm Lee. Even people who live hundreds of miles from the coast need to remain vigilant through the remainder of the season.”

“It is never too early to prepare for a hurricane,” said Tim Manning, FEMA’s deputy administrator for protection and national preparedness. “We are in the middle of hurricane season and now is the time to get ready. There are easy steps you can take to get yourself and your family prepared. Visit www.ready.gov to learn more.”

How Computation Can Predict Group Conflict: Fighting Among Captive Pigtailed Macaques Provides Clues (Science Daily)

ScienceDaily (Aug. 13, 2012) — When conflict breaks out in social groups, individuals make strategic decisions about how to behave based on their understanding of alliances and feuds in the group.

Researchers studied fighting among captive pigtailed macaques for clues about behavior and group conflict. (Credit: iStockphoto/Natthaphong Phanthumchinda)

But it has been challenging to quantify the underlying trends that dictate how individuals make such predictions, given that they may have seen only a small number of fights or may have limited memory.

In a new study, scientists at the Wisconsin Institute for Discovery (WID) at UW-Madison develop a computational approach to determine whether individuals behave predictably. With data from previous fights, the team looked at how much memory individuals in the group would need to make predictions themselves. The analysis proposes a novel estimate of “cognitive burden,” or the minimal amount of information an organism needs to remember to make a prediction.

The research draws on a concept called “sparse coding,” the brain’s tendency to use fewer visual details and a small number of neurons to store an image or scene. Previous studies support the idea that neurons in the brain react to a few large details, such as the lines, edges and orientations within images, rather than to many smaller details.

“So what you get is a model where you have to remember fewer things but you still get very high predictive power — that’s what we’re interested in,” says Bryan Daniels, a WID researcher who led the study. “What is the trade-off? What’s the minimum amount of ‘stuff’ an individual has to remember to make good inferences about future events?”

To find out, Daniels — along with WID co-authors Jessica Flack and David Krakauer — drew comparisons from how brains and computers encode information. The results contribute to ongoing discussions about conflict in biological systems and how cognitive organisms understand their environments.

The study, published in the Aug. 13 edition of the Proceedings of the National Academy of Sciences, examined observed bouts of natural fighting in a group of 84 captive pigtailed macaques at the Yerkes National Primate Research Center. By recording individuals’ involvement — or lack thereof — in fights, the group created models that mapped the likelihood any number of individuals would engage in conflict in hypothetical situations.

To confirm the predictive power of the models, the group plugged in other data from the monkey group that was not used to create the models. Then, researchers compared these simulations with what actually happened in the group. One model looked at conflict as combinations of pairs, while another represented fights as sparse combinations of clusters, which proved to be a better tool for predicting fights. From there, by removing information until predictions became worse, Daniels and colleagues calculated the amount of information each individual needed to remember to make the most informed decision whether to fight or flee.

“We know the monkeys are making predictions, but we don’t know how good they are,” says Daniels. “But given this data, we found that the most memory it would take to figure out the regularities is about 1,000 bits of information.”
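As a back-of-envelope illustration of why a sparse code is cheaper than tracking every relationship (the choice of k = 8 clusters and the one-bit-per-pair encoding are invented here, not taken from the paper), compare the storage the two representations would need for a group of 84 individuals; the roughly 1,000-bit estimate quoted above sits between these extremes:

```python
from math import comb, log2

n = 84  # individuals in the study group

# Dense representation: one bit for each pairwise alliance/feud.
pairwise_bits = comb(n, 2)
print(pairwise_bits)        # 3486 pairs to track

# Sparse representation: assign each individual to one of k recurring clusters.
k = 8                       # hypothetical number of clusters
cluster_bits = n * log2(k)
print(cluster_bits)         # 252.0 bits
```

The point of the comparison is only that memory requirements scale quadratically when every pair matters but roughly linearly when conflict is organized into a handful of recurring groupings.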

Sparse coding appears to be a strong candidate for explaining the mechanism at play in the monkey group, but the team points out that it is only one possible way to encode conflict.

Because the statistical modeling and computation frameworks can be applied to different natural datasets, the research has the potential to influence other fields of study, including behavioral science, cognition, computation, game theory and machine learning. Such models might also be useful in studying collective behaviors in other complex systems, ranging from neurons to bird flocks.

Future research will seek to find out how individuals’ knowledge of alliances and feuds fine-tunes their own decisions and changes the group’s collective pattern of conflict.

The research was supported by the National Science Foundation, the John Templeton Foundation through the Santa Fe Institute, and UW-Madison.

Why Are People Overconfident So Often? It’s All About Social Status (Science Daily)

ScienceDaily (Aug. 13, 2012) — Researchers have long known that people are very frequently overconfident — that they tend to believe they are more physically talented, socially adept, and skilled at their job than they actually are. For example, 94% of college professors think they do above-average work (which is nearly impossible, statistically speaking). But overconfidence can also have detrimental effects on performance and decision-making. So why, in light of these negative consequences, is overconfidence still so pervasive?

The lure of social status promotes overconfidence, explains Haas School Associate Professor Cameron Anderson. He co-authored a new study, “A Status-Enhancement Account of Overconfidence,” with Sebastien Brion, assistant professor of managing people in organizations at the IESE Business School, University of Navarra, and Haas School colleagues Don Moore, associate professor of management, and Jessica A. Kennedy, now a post-doctoral fellow at the Wharton School of Business. The study will be published in the Journal of Personality and Social Psychology (forthcoming).

“Our studies found that overconfidence helped people attain social status. People who believed they were better than others, even when they weren’t, were given a higher place in the social ladder. And the motive to attain higher social status thus spurred overconfidence,” says Anderson, the Lorraine Tyson Mitchell Chair in Leadership and Communication II at the Haas School.

Social status is the respect, prominence, and influence individuals enjoy in the eyes of others. Within work groups, for example, higher status individuals tend to be more admired, listened to, and have more sway over the group’s discussions and decisions. These “alphas” of the group have more clout and prestige than other members. Anderson says these research findings are important because they help shed light on a longstanding puzzle: why overconfidence is so common, in spite of its risks. His findings suggest that falsely believing one is better than others has profound social benefits for the individual.

Moreover, these findings suggest one reason why in organizational settings, incompetent people are so often promoted over their more competent peers. “In organizations, people are very easily swayed by others’ confidence even when that confidence is unjustified,” says Anderson. “Displays of confidence are given an inordinate amount of weight.”

The studies suggest that organizations would benefit from taking individuals’ confidence with a grain of salt. Yes, confidence can be a sign of a person’s actual abilities, but it is often not a very good sign. Many individuals are confident in their abilities even though they lack true skills or competence.

The authors conducted six experiments to examine why people become overconfident and how overconfidence translates into higher social status. For example:

In Study 2, the researchers examined 242 MBA students in their project teams and asked them to look over a list of historical names, historical events, and books and poems, and then to identify which ones they knew or recognized. Terms included Maximilien Robespierre, Lusitania, Wounded Knee, Pygmalion, and Doctor Faustus. Unbeknownst to the participants, some of the names were made up. These so-called “foils” included Bonnie Prince Lorenzo, Queen Shaddock, Galileo Lovano, Murphy’s Last Ride, and Windemere Wild. The researchers deemed those who picked the most foils the most overly confident because they believed they were more knowledgeable than they actually were. In a survey at the end of the semester, those same overly confident individuals (who said they had recognized the most foils) achieved the highest social status within their groups.
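
The foil-based measure lends itself to a simple scoring scheme. As an illustrative sketch only (the scoring function below is hypothetical, not the study's actual instrument; the item names follow the article), overclaiming can be quantified as the share of nonexistent items a participant claims to recognize:

```python
# Illustrative overclaiming score: the fraction of made-up "foil" items a
# participant claims to recognize. The item lists follow the article; the
# scoring scheme itself is a simplified, hypothetical sketch.

REAL_ITEMS = {"Maximilien Robespierre", "Lusitania", "Wounded Knee",
              "Pygmalion", "Doctor Faustus"}
FOILS = {"Bonnie Prince Lorenzo", "Queen Shaddock", "Galileo Lovano",
         "Murphy's Last Ride", "Windemere Wild"}

def overclaiming_score(claimed_known: set) -> float:
    """Return the share of foils the participant claims to know (0.0 to 1.0)."""
    return len(claimed_known & FOILS) / len(FOILS)

# A participant who "recognizes" two invented items scores 0.4:
participant = {"Lusitania", "Pygmalion", "Queen Shaddock", "Windemere Wild"}
print(overclaiming_score(participant))  # 0.4
```

Because real historical items carry no penalty, only the intersection with the foil set matters: someone genuinely knowledgeable scores zero no matter how many real items they recognize.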

It is important to note that group members did not think of their high status peers as overconfident, but simply that they were terrific. “This overconfidence did not come across as narcissistic,” explains Anderson. “The most overconfident people were considered the most beloved.”

Study 4 sought to discover the types of behaviors that make overconfident people appear so wonderful (even when they are not). Behaviors such as body language, vocal tone, and rate of participation were captured on video as groups worked together in a laboratory setting. These videos revealed that overconfident individuals spoke more often, spoke in a confident vocal tone, provided more information and answers, and appeared calm and relaxed as they worked with their peers. In fact, overconfident individuals were more convincing in their displays of ability than individuals who were actually highly competent.

“These big participators were not obnoxious, they didn’t say, ‘I’m really good at this.’ Instead, their behavior was much more subtle. They simply participated more and exhibited more comfort with the task — even though they were no more competent than anyone else,” says Anderson.

Two final studies found that it is the “desire” for status that encourages people to be more overconfident. For example, in Study 6, participants read one of two stories and were asked to imagine themselves as the protagonist in the story. The first story was a simple, bland narrative of losing then finding one’s keys. The second story asked the reader to imagine him/herself getting a new job with a prestigious company. The job had many opportunities to obtain higher status, including a promotion, a bonus, and a fast track to the top. Those participants who read the new job scenario rated their desire for status much higher than those who read the story of the lost keys.

After they were finished reading, participants were asked to rate themselves on a number of competencies such as critical thinking skills, intelligence, and the ability to work in teams. Those who had read the new job story (which stimulated their desire for status) rated their skills and talent much higher than did the first group. Their desire for status amplified their overconfidence.

De-emphasizing the natural tendency toward overconfidence may prove difficult but Prof. Anderson hopes this research will give people the incentive to look for more objective indices of ability and merit in others, instead of overvaluing unsubstantiated confidence.

Should Doctors Treat Lack of Exercise as a Medical Condition? Expert Says ‘Yes’ (Science Daily)

ScienceDaily (Aug. 13, 2012) — A sedentary lifestyle is a common cause of obesity, and excessive body weight and fat in turn are considered catalysts for diabetes, high blood pressure, joint damage and other serious health problems. But what if lack of exercise itself were treated as a medical condition? Mayo Clinic physiologist Michael Joyner, M.D., argues that it should be. His commentary is published this month in The Journal of Physiology.

Physical inactivity affects the health not only of many obese patients, but also people of normal weight, such as workers with desk jobs, patients immobilized for long periods after injuries or surgery, and women on extended bed rest during pregnancies, among others, Dr. Joyner says. Prolonged lack of exercise can cause the body to become deconditioned, with wide-ranging structural and metabolic changes: the heart rate may rise excessively during physical activity, bones and muscles atrophy, physical endurance wane, and blood volume decline.

When deconditioned people try to exercise, they may tire quickly and experience dizziness or other discomfort, then give up trying to exercise and find the problem gets worse rather than better.

“I would argue that physical inactivity is the root cause of many of the common problems that we have,” Dr. Joyner says. “If we were to medicalize it, we could then develop a way, just like we’ve done for addiction, cigarettes and other things, to give people treatments, and lifelong treatments, that focus on behavioral modifications and physical activity. And then we can take public health measures, like we did for smoking, drunken driving and other things, to limit physical inactivity and promote physical activity.”

Several chronic medical conditions are associated with poor capacity to exercise, including fibromyalgia, chronic fatigue syndrome and postural orthostatic tachycardia syndrome, better known as POTS, a syndrome marked by an excessive heart rate and flu-like symptoms when standing or at a given level of exercise. Too often, medication rather than progressive exercise is prescribed, Dr. Joyner says.

Texas Health Presbyterian Hospital Dallas and University of Texas Southwestern Medical Center researchers found that three months of exercise training can reverse or improve many POTS symptoms, Dr. Joyner notes. That study offers hope for such patients and shows that physicians should consider prescribing carefully monitored exercise before medication, he says.

If physical inactivity were treated as a medical condition itself rather than simply a cause or byproduct of other medical conditions, physicians may become more aware of the value of prescribing supported exercise, and more formal rehabilitation programs that include cognitive and behavioral therapy would develop, Dr. Joyner says.

For those who have been sedentary and are trying to get into exercise, Dr. Joyner advises doing it slowly and progressively.

“You just don’t jump right back into it and try to train for a marathon,” he says. “Start off with achievable goals and do it in small bites.”

There’s no need to join a gym or get a personal trainer: build as much activity as possible into daily life. Even walking just 10 minutes three times a day can go a long way toward working up to the 150 minutes a week of moderate physical activity the typical adult needs, Dr. Joyner says.
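
As a quick back-of-the-envelope check of the guideline quoted above, three 10-minute walks a day already exceeds the weekly target:

```python
# Weekly activity from three 10-minute walks per day, compared with the
# commonly cited target of 150 minutes of moderate activity per week.
minutes_per_walk = 10
walks_per_day = 3
days_per_week = 7

weekly_minutes = minutes_per_walk * walks_per_day * days_per_week
print(weekly_minutes)         # 210
print(weekly_minutes >= 150)  # True
```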

Deeply Held Religious Beliefs Prompting Sick Kids to Be Given ‘Futile’ Treatment (Science Daily)

ScienceDaily (Aug. 13, 2012) — Parental hopes of a “miraculous intervention,” prompted by deeply held religious beliefs, are leading to very sick children being subjected to futile care and needless suffering, suggests a small study in the Journal of Medical Ethics.

The authors, who comprise children’s intensive care doctors and a hospital chaplain, emphasise that religious beliefs provide vital support to many parents whose children are seriously ill, as well as to the staff who care for them.

But they have become concerned that deeply held beliefs are increasingly leading parents to insist on the continuation of aggressive treatment that ultimately is not in the best interests of the sick child.

It is time to review the current ethics and legality of these cases, they say.

They base their conclusions on a review of 203 cases which involved end of life decisions over a three year period.

In 186 of these cases, agreement was reached between the parents and healthcare professionals about withdrawing aggressive, but ultimately futile, treatment.

But in the remaining 17 cases, extended discussions with the medical team and local support had failed to resolve differences of opinion with the parents over the best way to continue to care for the very sick child in question.

The parents had insisted on continuing full active medical treatment, while doctors had advocated withdrawing or withholding further intensive care on the basis of the overwhelming medical evidence.

The cases in which withdrawal or withholding of intensive care was considered to be in the child’s best interests were consistent with the Royal College of Paediatrics and Child Health guidance.

Eleven of these cases (65%) involved directly expressed religious claims that intensive care should not be stopped because of the expectation of divine intervention and a complete cure, together with the conviction that the opinion of the medical team was overly pessimistic and wrong.

Various different faiths were represented among the parents, including Christian fundamentalism, Islam, Judaism, and Roman Catholicism.

Five of the 11 cases were resolved after meeting with the relevant religious leaders outside the hospital, and intensive care was withdrawn in a further case after a High Court order.

But five cases were not resolved, so intensive care was continued. Four of these children eventually died; one survived with profound neurological disability.

The six of the 17 cases in which religious belief was not a cited factor were all resolved without further recourse to legal, ethical, or socio-religious support. Intensive care was withdrawn in all six children; five died, and one survived with profound neurological disability.

The authors emphasise that parental reluctance to allow treatment to be withdrawn is “completely understandable as [they] are defenders of their children’s rights, and indeed life.”

But they argue that when children are too young to be able to actively subscribe to their parents’ religious beliefs, a default position in which parental religion is not the determining factor might be more appropriate.

They cite Article 3 of the Human Rights Act, which aims to ensure that no one is subjected to torture or inhumane or degrading treatment or punishment.

“Spending a lifetime attached to a mechanical ventilator, having every bodily function supervised and sanitised by a carer or relative, leaving no dignity or privacy to the child and then adult, has been argued as inhumane,” they argue.

And they conclude: “We suggest it is time to reconsider current ethical and legal structures and facilitate rapid default access to courts in such situations when the best interests of the child are compromised in expectation of the miraculous.”

In an accompanying commentary, the journal’s editor, Professor Julian Savulescu, advocates: “Treatment limitation decisions are best made, not in the alleged interests of patients, but on distributive justice grounds.”

In a publicly funded system with limited resources, these should be given to those whose lives could be saved rather than to those who are very unlikely to survive, he argues.

“Faced with the choice between providing an intensive care bed to a [severely brain damaged] child and one who has been at school and was hit by a cricket ball and will return to normal life, we should provide the bed to the child hit by the cricket ball,” he writes.

In further commentaries, Dr Steve Clarke of the Institute for Science and Ethics maintains that doctors should engage with devout parents on their own terms.

“Devout parents, who are hoping for a miracle, may be able to be persuaded, by the lights of their own personal…religious beliefs, that waiting indefinite periods of time for a miracle to occur while a child is suffering, and while scarce medical equipment is being denied to other children, is not the right thing to do,” he writes.

Leading ethicist, Dr Mark Sheehan, argues that these ethical dilemmas are not confined to fervent religious belief, and to polarise the issue as medicine versus religion is unproductive, and something of a “red herring.”

Referring to the title of the paper, Charles Foster, of the University of Oxford, suggests that the authors have asked the wrong question. “The legal and ethical orthodoxy is that no beliefs, religious or secular, should be allowed to stonewall the best interests of the child,” he writes.

How Do They Do It? Predictions Are in for Arctic Sea Ice Low Point (Science Daily)

ScienceDaily (Aug. 14, 2012) — It’s become a sport of sorts, predicting the low point of Arctic sea ice each year. Expert scientists with decades of experience do it but so do enthusiasts, whose guesses are gamely included in a monthly predictions roundup collected by Sea Ice Outlook, an effort supported by the U.S. government.

Arctic sea ice, as seen from an ice breaker. (Credit: Bonnie Light, UW)

When averaged, the predictions have come in remarkably close to the mark in the past two years. But the low and high predictions are off by hundreds of thousands of square kilometers.

Researchers are working hard to improve their ability to more accurately predict how much Arctic sea ice will remain at the end of summer. It’s an important exercise because knowing why sea ice declines could help scientists better understand climate change and how sea ice is evolving.

This year, researchers from the University of Washington’s Polar Science Center are the first to include new NASA sea ice thickness data collected by airplane in a prediction.

They expect 4.4 million square kilometers of remaining ice (about 1.7 million square miles), just barely more than the 4.3 million square kilometers in 2007, the lowest year on record for Arctic sea ice. The median of 23 predictions collected by the Sea Ice Outlook and released on Aug. 13 is 4.3 million.

“One drawback to making predictions is historically we’ve had very little information about the thickness of the ice in the current year,” said Ron Lindsay, a climatologist at the Polar Science Center, a department in the UW’s Applied Physics Laboratory.

To make their prediction, Lindsay and Jinlun Zhang, an oceanographer in the Polar Science Center, start with a widely used model pioneered by Zhang and known as the Pan-Arctic Ice Ocean Modeling and Assimilation System. That system combines available observations with a model to track sea ice volume, which includes both ice thickness and extent.
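
The core assimilation idea is to blend a model estimate with an observation, weighting each by how uncertain it is. The following is a toy, one-variable illustration of that idea under invented numbers; PIOMAS itself is a full numerical ice-ocean model, not this formula:

```python
# Toy illustration of data assimilation: blend a model estimate with an
# observation, weighting each by the inverse of its error variance.
# All numbers are invented for illustration; this is not PIOMAS.

def assimilate(model_value, model_var, obs_value, obs_var):
    """Variance-weighted blend of a model estimate and an observation."""
    gain = model_var / (model_var + obs_var)           # weight on the observation
    analysis = model_value + gain * (obs_value - model_value)
    analysis_var = (1 - gain) * model_var              # blended estimate is less uncertain
    return analysis, analysis_var

# Model says 2.1 m mean ice thickness (variance 0.09);
# an aircraft observation says 1.8 m (variance 0.03).
thickness, var = assimilate(2.1, 0.09, 1.8, 0.03)
print(round(thickness, 3))  # 1.875 -- pulled strongly toward the more certain observation
```

The more precise the observation (smaller variance), the harder the blended estimate is pulled toward it, which is why timely aircraft thickness data matters for the prediction.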

But obtaining observations about current-year ice thickness in order to build their short-term prediction is tough. NASA is currently in the process of designing a new satellite that will replace one that used to deliver ice thickness data but has since failed. In the meantime, NASA is running a program called Operation IceBridge that uses airplanes to survey sea ice as well as Arctic ice sheets.

“This is the first year they made a concerted effort to get the data from the aircraft, process it and get it into hands of scientists in a timely manner,” Lindsay said. “In the past, we’ve gotten data from submarines, moorings or satellites but none of that data was available in a timely manner. It took months or even years.”

There’s a shortcoming to the IceBridge data, however: It’s only available through March. The radar used to measure snow depth on the surface of the ice, an important element in the observation system, has trouble accurately gauging the depth once it has melted and so the data is only collected through the early spring before the thaw.

The UW scientists have developed a method for informing their prediction that is starting to be used by others. Researchers have struggled with how best to forecast the weather in the Arctic, which affects ice melt and distribution.

“Jinlun came up with the idea of using the last seven summers. Because the climate is changing so fast, only the recent summers are probably relevant,” Lindsay said.

The result is seven different possibilities of what might happen. “The average of those is our best guess,” Lindsay said.
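
The approach Lindsay describes is essentially an analog ensemble: the model is run forward once under each of the last seven summers' weather, and the member forecasts are combined. A minimal sketch of that final averaging step follows; the member values are invented for illustration:

```python
# Analog-ensemble style estimate: one forecast per recent summer's weather
# forcing, combined into a single best guess by averaging. The member
# values below are invented for illustration (millions of square km).

member_forecasts = [4.1, 4.3, 4.6, 4.2, 4.7, 4.5, 4.4]  # seven recent summers

best_guess = sum(member_forecasts) / len(member_forecasts)  # ensemble mean
spread = max(member_forecasts) - min(member_forecasts)      # rough uncertainty

print(round(best_guess, 2))  # the "best guess"
print(round(spread, 2))      # wider spread = less confident forecast
```

The spread across members also gives a crude measure of confidence: if the seven weather scenarios produce very different outcomes, the average carries more uncertainty.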

Despite the progress in making predictions, the researchers say their abilities to foretell the future will always be limited. Because they can’t forecast the weather very far in advance and because the ice is strongly affected by winds, they have little confidence beyond what the long-term trend tells us in predictions that are made far in advance.

“The accuracy of our prediction really depends on time,” Zhang said. “Our June 1 prediction for the Sept. 15 low point has high uncertainty but as we approach the end of June or July, the uncertainty goes down and the accuracy goes up.”

In hindsight, that’s true historically for the average predictions collected by Study of Environmental Arctic Change’s Sea Ice Outlook, a project funded by the National Science Foundation and the National Oceanic and Atmospheric Administration.

While the competitive aspect of the predictions is fun, the researchers aren’t in it to win it.

“Essentially it’s not for prediction but for understanding,” Zhang said. “We do it to improve our understanding of sea ice processes, in terms of how dynamic processes affect the seasonal evolution of sea ice.”

That may not be entirely the same for the enthusiasts who contribute a prediction. One climate blog polls readers in the summer for their best estimate of the sea ice low point. It’s included among the predictions collected by the Sea Ice Outlook, with an asterisk noting it as a “public outlook.”

The National Science Foundation and NASA fund the UW research into the Arctic sea ice low point.

New legislation will give natural disaster prevention a scientific basis, experts say (Fapesp)

A law signed in April will require municipalities to produce geotechnical charts, a multidisciplinary instrument that will guide the deployment of warning systems and master plans (Valter Campanato/ABr)

08/08/2012

By Fábio de Castro

Agência FAPESP – In January 2011, floods and landslides left about a thousand people dead and 500 missing in the Região Serrana (mountain region) of Rio de Janeiro state. The tragedy exposed the precariousness of Brazil's warning systems and was regarded by specialists as definitive proof that investment in disaster prevention was needed.

The most important outcome of that assessment was Law 12,608, signed in April, which establishes the National Policy for Protection and Civil Defense and creates a disaster information and monitoring system, according to specialists gathered at the seminar “Caminhos da política nacional de defesa de áreas de risco” (Paths of the national policy for the defense of risk areas), held by the Polytechnic School of the University of São Paulo (USP) on August 6.

The new law requires city governments to invest in urban planning aimed at preventing disasters such as floods and landslides. According to the specialists, for the first time disaster prevention can rest on a solid technical and scientific foundation, since the law stipulates that, to carry out this planning, every municipality will have to produce a geotechnical chart.

Katia Canil, a researcher at the Environmental Risk Laboratory of the Institute for Technological Research (IPT), said city governments will have two years to produce the geotechnical charts that will underpin their master plans, which must include disaster prevention and mitigation measures. Municipalities that fail to present this planning will not receive federal funds for prevention and mitigation works.

“Geotechnical charts are cartographic documents that bring together information on a municipality's geological and geomorphological characteristics, identifying geological hazards and making it easier to set rules for urban occupation. With this instrument made mandatory by law, we will be able to design disaster prevention strategies grounded in technical and scientific knowledge,” Canil told Agência FAPESP.

Brazil's first geotechnical chart was produced in 1979, for the municipality of Santos (SP), but even so the instrument has remained little used in the country. According to Canil, institutionalizing the tool will be an important factor in aligning master plans with the geotechnical characteristics of the terrain.

“Few municipalities have a geotechnical chart, because it was not a mandatory instrument. Now that picture should change. But the legislation will generate great demand for specialists in several fields, because geotechnical charts integrate a range of interdisciplinary data,” said the IPT researcher.

Geotechnical charts bring together documents resulting from geological and geotechnical field surveys, as well as laboratory analyses, with the aim of synthesizing all the available knowledge about the physical environment and its relation to the geological and human processes present at the site. “And all of this needs to be expressed in language that managers can understand,” said Canil.

Cities will have to organize themselves to produce geotechnical charts, and the technical capacity required is not trivial. “It is not just a matter of overlaying maps. You need experience combined with training in fields such as geology, engineering, geotechnical engineering, cartography, geography, architecture and urban planning,” said Canil. IPT already offers a training course in geotechnical chart production.

A major difficulty in producing the charts will be the lack of basic geological mapping in Brazilian municipalities. “Most municipalities lack primary data, such as geomorphological, pedological and geological mapping,” said Canil.

National prevention plan

The January 2011 tragedy in Rio de Janeiro's mountain region was a milestone that changed the course of the discussion on disasters, definitively placing prevention at its center, according to Carlos Nobre, Secretary of Research and Development Policies and Programs at the Ministry of Science, Technology and Innovation (MCTI).

“That episode was a jolt that shook Brazil's perception of major disasters. It became obvious to managers and to the public that prevention must be emphasized. It was a milestone that changed our perspective forever: prevention is fundamental,” he said at the event.

According to Nobre, who is also a researcher at the National Institute for Space Research (Inpe) and a member of the coordination of the FAPESP Research Program on Global Climate Change, international experience shows that prevention can reduce fatalities in natural disasters by up to 90% and material damage by about 35%. “Besides saving lives, the savings on material losses more than pay for all the investment in prevention,” he said.

According to Nobre, engineering will play an increasingly important role in prevention as natural disasters become more extreme as a consequence of climate change.

“The 21st-century engineer will need to be trained in sustainability engineering, a cross-cutting field that will gain ever more ground. Engineering, done well, is central to solving some of today's main problems,” he said.

According to Nobre, besides the new legislation, which will require planning based on municipal geotechnical charts, Brazil has several other disaster prevention initiatives under way. One of them will be announced this Wednesday (August 8): the National Plan for Natural Disaster Prevention, which emphasizes works aimed at installing warning systems.

“Large-scale works are needed in Brazil, especially with regard to warning systems. One of the important elements of the new plan is early warning. International experience shows that a warning issued up to two hours before a landslide can save lives,” he said.

According to Nobre, the plan's initiatives will be consistent with the new legislation. The federal government is expected to invest R$ 4.6 billion over the coming months in disaster prevention initiatives in the states of Rio de Janeiro, Minas Gerais and Santa Catarina.

But to apply for federal funds, a municipality will have to meet a series of requirements, such as incorporating protection and civil defense actions into municipal planning, identifying and mapping natural disaster risk areas, preventing new occupations and inspecting buildings in those areas.

According to Nobre, another prevention measure was the establishment of the National Center for Monitoring and Early Warning of Natural Disasters (Cemaden), under the MCTI, which began operating in December 2011 on the Inpe campus in Cachoeira Paulista (SP).

“This center already played an important role in weather forecasting, but it was restructured and hired 35 professionals. Cemaden is emblematic of the new warning systems: a design that brings together geologists, meteorologists and natural disaster specialists to identify vulnerabilities, something rare in the world,” he said.

According to him, this new structure already has a warning system in operation. “It is a system that will still need to be evaluated over time. But so far, since December 2011, more than 100 warnings have been issued. It will take the country several years to reduce fatalities to the level of countries with good prevention systems. But we are on the right track,” Nobre said.

Heatwave turns America’s waterways into rivers of death (The Independent)

Falling water levels are killing fish and harming exports

DAVID USBORNE

SUNDAY 05 AUGUST 2012

The cruel summer heatwave that continues to scorch agricultural crops across much of the United States, prompting comparisons with the severe droughts of the 1930s and 1950s, is also leading to record-breaking water temperatures in rivers and streams, including the Mississippi, as well as fast-falling navigation levels.

While in the northern reaches of the Mississippi, near Moline in Illinois, the temperature touched 90 degrees last week – warmer than the Gulf of Mexico around the Florida Keys – towards the river’s southern reaches the US Army Corps of Engineers is dredging around the clock to try to keep barges from grounding as water levels dive.

For scientists the impact of a long, hot summer that has plunged more than two-thirds of the country into drought conditions – sometimes extreme – has been particularly striking in the Great Lakes. According to the Great Lakes Environmental Research Laboratory, all are experiencing unusual spikes in water temperature this year. It is especially the case for Lake Superior, the northernmost, the deepest, and therefore the coolest.

“It’s pretty safe to say that what we’re seeing here is the warmest that we’ve seen in Lake Superior in a century,” said Jay Austin, a professor at the University of Minnesota at Duluth. The average temperature recorded for the lake last week was 68F (20C). That compares with 56F (13C) at this time last year.

It is a boon to shoreline residents who are finding normally chilly waters suddenly inviting for a dip. But the warming of the rivers, in particular, is taking a harsh toll on fish, which are dying in increasingly large numbers. Significant tolls of fresh-water species, from pike to trout, have been reported, most frequently in the Midwest.

“Most problems occur in ponds that are not deep enough for fish to retreat to cooler and more oxygen-rich water,” said Jake Allman of the Missouri Department of Conservation. “Hot water holds less oxygen than cool water. Shallow ponds get warmer than deeper ponds, and with little rain, area ponds are becoming shallower by the day. Evaporation rates are up to 11 inches per month in these conditions.”

In some instances, fish are simply left high and dry as rivers dry up entirely. That is the case with the normally rushing Platte River, which has simply petered out over a 100-mile stretch in Nebraska, large parts of which are now federal disaster areas contending with so-called “exceptional drought” conditions.

“This is the worst I’ve ever seen it, and I’ve been on the river since I was a pup,” Dan Kneifel, owner of Geno’s Bait and Tackle Shop, told TheOmahaChannel.com. “The river was full of fish, and to see them all die is a travesty.”

As water levels in the Mississippi ebb, so barge operators are forced to offload cargo to keep their vessels moving. About 60 per cent of exported US corn is conveyed by the Mississippi, which is now 12ft below normal levels in some stretches. Navigation on the Mississippi has not been so severely threatened since the 1988 drought in the US. Few forget, meanwhile, that last summer towns up and down the Mississippi were battling flooding.

One welcome side-effect, however, is data showing that the so-called “dead zone” in the Gulf of Mexico around the Mississippi estuary is far less extensive this summer because the lack of rain and the slow running of the water has led to much less nitrate being washed off farmland and into the system than in normal years. The phenomenon occurs because the nitrates feed blooms of algae in Gulf waters which then decompose, stripping the water of oxygen.

Chronic 2000-04 drought, worst in 800 years, may be the ‘new normal’ (Oregon State Univ)

Public release date: 29-Jul-2012

By Beverly Law

Oregon State University

CORVALLIS, Ore. – The chronic drought that hit western North America from 2000 to 2004 left dying forests and depleted river basins in its wake and was the strongest in 800 years, scientists have concluded, but they say those conditions will become the “new normal” for most of the coming century.

Such climatic extremes have increased as a result of global warming, a group of 10 researchers reported today in Nature Geoscience. And as bad as conditions were during the 2000-04 drought, they may eventually be seen as the good old days.

Climate models and precipitation projections indicate this period will actually be closer to the “wet end” of a drier hydroclimate during the last half of the 21st century, scientists said.

Aside from its impact on forests, crops, rivers and water tables, the drought also cut carbon sequestration by an average of 51 percent in a massive region of the western United States, Canada and Mexico, although some areas were hit much harder than others. As vegetation withered, this released more carbon dioxide into the atmosphere, with the effect of amplifying global warming.

“Climatic extremes such as this will cause more large-scale droughts and forest mortality, and the ability of vegetation to sequester carbon is going to decline,” said Beverly Law, a co-author of the study, professor of global change biology and terrestrial systems science at Oregon State University, and former science director of AmeriFlux, an ecosystem observation network.

“During this drought, carbon sequestration from this region was reduced by half,” Law said. “That’s a huge drop. And if global carbon emissions don’t come down, the future will be even worse.”

This research was supported by the National Science Foundation, NASA, U.S. Department of Energy, and other agencies. The lead author was Christopher Schwalm at Northern Arizona University. Other collaborators were from the University of Colorado, University of California at Berkeley, University of British Columbia, San Diego State University, and other institutions.

It’s not clear whether or not the current drought in the Midwest, now being called one of the worst since the Dust Bowl, is related to these same forces, Law said. This study did not address that, and there are some climate mechanisms in western North America that affect that region more than other parts of the country.

But in the West, this multi-year drought was unlike anything seen in many centuries, based on tree ring data. The last two periods with drought events of similar severity were in the Middle Ages, from 977-981 and 1146-1151. The 2000-04 drought affected precipitation, soil moisture, river levels, crops, forests and grasslands.

Ordinarily, Law said, the land sink in North America is able to sequester the equivalent of about 30 percent of the carbon emitted into the atmosphere by the use of fossil fuels in the same region. However, based on projected changes in precipitation and drought severity, scientists said that this carbon sink, at least in western North America, could disappear by the end of the century.

“Areas that are already dry in the West are expected to get drier,” Law said. “We expect more extremes. And it’s these extreme periods that can really cause ecosystem damage, lead to climate-induced mortality of forests, and may cause some areas to convert from forest into shrublands or grassland.”

During the 2000-04 drought, runoff in the upper Colorado River basin was cut in half. Crop productivity in much of the West fell 5 percent. The productivity of forests and grasslands declined, along with snowpacks. Evapotranspiration decreased the most in evergreen needleleaf forests, about 33 percent.

The effects are driven by human-caused increases in temperature, with associated lower soil moisture and decreased runoff in all major water basins of the western U.S., researchers said in the study.

Although regional precipitation patterns are difficult to forecast, the researchers said that climate models are underestimating the extent and severity of drought compared with actual observations. They say the situation will continue to worsen, and that 80 of the 95 years from 2006 to 2100 will have precipitation levels as low as, or lower than, the “turn of the century” drought of 2000-04.

“Towards the latter half of the 21st century the precipitation regime associated with the turn of the century drought will represent an outlier of extreme wetness,” the scientists wrote in this study.

These long-term trends are consistent with a 21st century “megadrought,” they said.

Under the Guarani Sky (Jornal da Ciência)

JC e-mail 4555, August 6, 2012

A book launched at the 64th SBPC Meeting recovers techniques of indigenous astronomy in Mato Grosso do Sul.

Clarissa Vasconcellos – Jornal da Ciência

Launched at the 64th Annual Meeting of the Brazilian Society for the Advancement of Science (SBPC), in São Luís, the book ‘O Céu dos Índios de Dourados – Mato Grosso do Sul’ (Editora UEMS), by Germano Bruno Afonso and Paulo Souza da Silva, written in Guarani and Portuguese, was born of the idea of recovering the indigenous tradition of observing the sky. It is a publication aimed at students of indigenous culture (though not exclusively at them), used by Guarani teachers as a reference to show how these peoples sought to make the best use of natural resources.

The publication grew out of the project ‘Etnoastronomia dos Índios Guarani da Região da Grande Dourados – MS’, whose goal was to rebuild three solar observatories in Dourados, two of them in schools. “They were a kind of clock that the Guarani used for various purposes, such as festivities or marking the seasons, and with them they could make predictions and draw up schedules, even for the conception of babies,” Paulo Souza da Silva, a professor in the Physics program at the State University of Mato Grosso do Sul (UEMS), tells Jornal da Ciência.

The Indians’ techniques also help explain the tides and the behavior of fauna and flora (useful for hunting and farming), among other phenomena, showing that their astronomical system goes far beyond the mere observation of celestial bodies. This ends up attracting the interest even of non-Indians.

That is what Germano Bruno Afonso, an astronomer at the Museu da Amazônia, found at the talk he gave on the subject at the SBPC Meeting. “More people came than we expected. The reception in São Luís caught my attention, even though I spoke mostly about the Tupinambá of Maranhão,” the researcher observes. The Tupinambá, like the Tembé and the Guarani, belong to the Tupi-Guarani linguistic family, the largest in number of speakers and geographic extent within the Tupi linguistic trunk.

Differences and similarities – The Tupinambá of Maranhão, an ethnic group now extinct, are not the book’s main subject, but they appear in it because they had much in common with the southern Guarani when it comes to observing the sky. Germano says the Tupinambá and the Guarani had very similar techniques, drawing on the work of Claude d’Abbeville, a Capuchin friar who spent four months in Maranhão in 1612. His book ‘Histoire de la mission des Pères capucins en l’Isle de Maragnan et terres circonvoisines’ is considered one of the most important sources on the ethnography of the Tupi.

“It is interesting to find the same knowledge at a distance of more than three thousand kilometers and 400 years apart, which makes sense, since the Guarani and the Tupinambá belong to the same linguistic trunk,” Germano points out, noting that the similarity of the languages made it easier for this knowledge to be passed on. Germano is of indigenous descent and lived in a Guarani village until he was 17.

The book, originally a primer, can be read as a companion to ‘O Céu dos Índios Tembé’, which earned Germano the Jabuti Prize in 2000. “The Tembé are remnants of the Tupinambá, near the border between Pará and Maranhão, and they too maintain this same astronomical system,” he says. After the Tembé book, he and Paulo Souza da Silva received a CNPq research grant to work with the Guarani of Dourados, in the project mentioned above.

“But we know this work can be adapted to all the groups of the Tupi-Guarani family. That is why we wrote a general book for teachers, so that they can apply it and modify it according to the local culture. A Guarani from Rio Grande do Sul does not see the sky the same way as one from Espírito Santo. The basis is the same, but the sky is different,” Germano explains.

“You have to awaken the interest of the leadership, recover this culture,” says Silva of the importance of the book and the project. He notes that indigenous people are marginalized in cities like Dourados, where the culture is being lost among young Indians. “Many do not even speak Guarani,” he laments.

Exchange with astronomy – The investigation of this knowledge among ethnic or cultural groups that do not use so-called ‘Western’ (or official) astronomy, as is the case of Brazil’s indigenous peoples, gave rise to the discipline of ethnoastronomy, or anthropological astronomy. It requires specialists in fields such as astronomy, anthropology, biology and history. Germano says he sees little collaboration between ethnoastronomy and astronomy.

“I see no exchange at all, precisely because of prejudice and lack of information on the part of ‘official’ astronomy, because of ignorance about the indigenous peoples of Brazil itself. We know the culture of the Maya, the Aztecs and even the Australian Aborigines, but here there is a great deal of ignorance,” he laments, adding that he seeks acceptance not only from academia but also from the lay public. He would like recognition to come about as it did in botany and pharmacy, disciplines that have drawn heavily on the traditional knowledge of these peoples. For Silva, prejudice has diminished somewhat, although there are those who say that ethnoastronomy “is culture, not science.” “As scientists, we have to be open to what others have to offer,” the physicist says.

Germano is currently in Manaus and plans to spend six months in São Gabriel da Cachoeira, in northwestern Amazonas, “where 95% of the population is indigenous, spanning 27 ethnic groups.” The idea is to produce another, similar book, taking regional differences into account. “While in the South it is temperature that governs the climate, there it is the rain. We will observe the rainy periods and the flooding of the rivers, the climatic aspects that govern the fauna and flora,” he explains. Silva, for his part, plans to write a book about indigenous myths of the sky, addressing questions such as the formation of the world.

The authors want this knowledge to reach classrooms across the country, not only in schools that teach indigenous culture. “Indigenous mythology, compared with the Greco-Roman mythology [used in astronomy], is much easier to visualize in the sky,” Silva notes. “We explain, in an empirical way, just as the Indians do, the seasons of the year, the cardinal points, the phases of the moon, the tides and the eclipses, solely through the observation of nature. Any child can begin to understand this without the mathematical complications, so it is an alternative and enjoyable way to teach non-Indians as well, before formal science is introduced,” Germano concludes.

Father of twins, one black and the other white (Extra)

Bruno Cunha

Source: Extra

They have finally been recognized at soccer practice. One is a defender, has curly hair and loves sweets; the other is a forward, has blond hair and prefers savory food. Given the differences, it was hard to tell that David Evangelista de Oliveira, the white one, and Nícolas, the black one, are twin brothers.

— The parents of their soccer teammates thought that only one was my son and that the other was a little friend of his. And mind you, the two have been training there for a year and a half. Only now have they found out that they are twin brothers — says Luis Carlos de Oliveira Silva, 42, a laboratory parts assembler and the children’s father.

Fame in the neighborhood

A resident of Campo Grande, Luis was startled when he learned that his wife, Audicelia Evangelista, 45, was pregnant with twins. And startled again after the children were born, one black, like his father, and the other white, like his mother.

— At the time, my friends would joke: “hey, that one’s not your son!” Once I walked into a maternity ward and David called me dad. The security guard whispered: “that’s not his son.” But I think: the two of them take after both their father and their mother — says Luis.

On the bedroom door, the phrase “twins in action” (Photo: Nina Lima / Extra)

Famous in the Santa Rosa sub-neighborhood, Nícolas and David, now 9, are already starting to reap the fruits of a fame that took them onto a TV program when they were still newborns. Just the other day they were followed by two girls who had found out where they lived.

— I got home from work around 7:30 pm and caught Nícolas putting gel in his hair and David getting dressed up. Right afterwards, two girls called out their names here at the gate. They were working up the courage to ask them out — explains the father, who is amused to see that his sons are already taking an interest in girls.

The twins (Photo: personal archive / handout)

Estimate: less than a 1% chance of occurrence

The birth of twin brothers, one black and the other white, still causes surprise. In 2006, for example, EXTRA covered the case of brothers Pedro and Nathan Henrique Rodrigues, which intrigued the Costa Barros neighborhood.

A year later the father, hairdresser Carlos Henrique Fonseca, then 26, said that many people still found it strange to see Pedro, black like him, next to Nathan, white like his mother, Valéria Gomes, then a 22-year-old gas station attendant.

Different, but they root for the same team (Photo: Nina Lima / Extra)

Miscegenation

The stork was equally generous in Botafogo, home to twins Beatriz and Maria Gaia Gerstner, now 8. One is dark-skinned like her mother and the other is white like her father, a German.

— When I’m with the white one, people don’t think she’s my daughter. And when their father is with the dark-skinned one, it’s the same thing — says the mother, Janaína Gaia, 35, now separated from the girls’ father.

Maria Cecília Erthal, director of the Centro Vida — Reprodução Humana Assistida clinic in Barra, in the West Zone, estimates that there is less than a 1% chance of twins being born this different from each other.

— It is miscegenation that brings the genes of black and white parents together — she explains.

*   *   *

Jemima Pompeu sent in the following comment:

Twins with different skin colors surprise their parents, but not scientists. See some cases at the link below:

Renaissance Women Fought Men, and Won (Science Daily)

ScienceDaily (Aug. 14, 2012) — A three-year study into a set of manuscripts compiled and written by one of Britain’s earliest feminist figures has revealed new insights into how women challenged male authority in the 17th century.

Dr Jessica Malay has painstakingly transcribed Lady Anne Clifford’s 600,000-word Great Books of Record, which documents the trials and triumphs of the female aristocrat’s family dynasty over six centuries and her bitter battle to inherit castles and villages across northern England.

Lady Anne, who lived from 1590 to 1676, was, in her childhood, a favourite of Queen Elizabeth I. Her father died when she was 15, but contrary to an agreement that stretched back to the time of Edward II — that the Cliffords’ vast estates in Cumbria and Yorkshire should pass to the eldest heir, whether male or female — the lands were handed over to her uncle.

Following an epic legal struggle in which she defied her father, both her husbands, King James I and Oliver Cromwell, Lady Anne finally took possession of the estates at the age of 53. They included the five castles of Skipton, where she was born, Brougham, Brough, Pendragon and Appleby.

Malay, a Reader in English Literature at the University of Huddersfield, is set to publish a new, complete edition of Lady Anne’s Great Books of Record, which contains rich narrative evidence of how women circumvented male authority in order to participate more fully in society.

Malay said: “Lady Anne’s Great Books of Record challenge the notion that women in the 16th and 17th centuries lacked any power or control over their own lives.

“There is this misplaced idea that the feminist movement is predominantly a 1960s invention but debates and campaigns over women’s rights and equality stretch back to the Middle Ages.”

The Great Books of Record comprise three volumes, the last of which came up for auction in 2003. The Cumbria Archives bought the third set and now house all three. In 2010, Malay secured a £158,000 grant from the Leverhulme Trust to study the texts.

Malay said: “Virginia Woolf argued that a woman with Shakespeare’s gifts during the Renaissance Period would have been denied the opportunity to develop her talents due to the social barriers restricting women.

“But Lady Anne is regarded as a literary figure in her own right and when I started studying the Great Books of Record I realised there is a lot more to her writing than we were led to believe.

“I was struck by how much they revealed about the role of women, the importance of family networks and the interaction between lords and tenants over 500 years of social and political life in Britain.”

In her Great Books of Record, Lady Anne presents the case for women to be accepted as inheritors of wealth, by drawing on both documentary evidence and biographies of her female ancestors to reveal that the Clifford lands of the North were brought to them through marriage.

She argued that since many men in the 16th and 17th centuries had inherited their titles of honour from their mothers or grandmothers, it was only right that titles of honour could be passed down to female heirs.

She also contended that women were well suited to the title of Baron since a key duty of office was to provide counsel in Parliament, where women were not allowed. While men were better at fighting wars, women excelled in giving measured advice, she wrote.

Malay said: “Lady Anne appropriates historical texts, arranging and intervening in these in such a way as to prove her inevitable and just rights as heir.

“Her foregrounding of the key contributions of the female to the success of the Clifford dynasty work to support both her own claims to the lands of her inheritance and her decision to resist cultural imperatives that demanded female subservience to male authority.

“Elizabeth I was a strong role model for Lady Anne in her youth. While she was monarch, women had a level of access to the royal court that men could only dream of, which spawned a new sense of confidence among aristocratic women.”

Malay’s research into the Great Books of Record, which contain material from the early 12th century to the early 18th century, reveals the importance of family alliances in forming influential political networks.

It shows that women were integral to the construction of these networks, both regionally and nationally.

Malay said: “The Great Books explain the legal avenues open to women. Married women could call on male friends to act on their behalf. As part of marriage settlements many women had trusts set up to allow them access to their own money which they could in turn use in a variety of business enterprises or to help develop a wide network of social contacts.

“Men would often rely on their wives to access wider familial networks, leading to wives gaining higher prestige in the family.”

Lady Anne was married twice and widowed twice. After her second husband died she moved back to the North and, as hereditary High Sheriff of Westmorland, set about restoring dilapidated castles, almshouses and churches.

Malay said: “Widows enjoyed the same legal rights as men. While the husband was alive then the wife would require his permission to do anything. Widows were free to act on their own without any male guardianship.”

The Great Books also provide a valuable insight into Medieval and Renaissance society, with one document describing a six-year-old girl from the Clifford family being carried to the chapel at Skipton on her wedding day.

Lady Anne also recounted her father’s voyages to the Caribbean and she kept a diary of her own life, which includes summaries of each year from her birth until her death at the age of 86 in 1676.

Malay said: “The books are full of all sorts of life over 600 years, which is what is so exciting about them.”

Malay’s Anne Clifford Project was the catalyst for an exhibition in which the Great Books of Record are, for the first time, being shown in public alongside The Great Picture at the Abbot Hall Art Gallery in Kendal.

The Great Picture, a triptych marking Lady Anne’s succession to her inheritance, is so large that a window of the gallery had to be removed to bring it inside.

The left panel depicts Lady Anne at 15, when she was disinherited. The right panel shows Lady Anne in middle age when she finally regained the Clifford estates. The central panel depicts Lady Anne’s parents with her older brothers shortly after Lady Anne had been conceived.