Tag archive: Technological mediation

Rise of the internet has reduced voter turnout (Science Daily)

Date:
September 16, 2016
Source:
University of Bristol
Summary:
During the initial phase of the internet, a “crowding-out” of political information occurred, which has affected voter turnout, new research shows.

The internet has transformed the way in which voters access and receive political information. It has allowed politicians to directly communicate their message to voters, circumventing the mainstream media which would traditionally filter information.

Writing in IZA World of Labor, Dr Heblich from the Department of Economics presents research from a number of countries comparing the voting behaviour of municipalities with internet access to those without in the early 2000s. It shows that municipalities with broadband internet access saw a decrease in voter turnout, as voters suddenly faced an overwhelmingly large pool of information and did not know how to filter relevant knowledge efficiently. At the same time, the internet appears to have crowded out other media, at the expense of information quality.

However, the introduction of interactive social media and “user-defined” content appears to have reversed this. It helped voters to collect information more efficiently. Barack Obama’s successful election campaign in 2008 paved the way for this development. In the so-called “Facebook election,” Obama employed Chris Hughes, a Facebook co-founder, to lead his highly effective election campaign.

Using a combination of social networks, podcasts, and mobile messages, Obama connected directly with (young) American voters. In doing so, he gained nearly 70 per cent of the votes among Americans under the age of 25.

But there is a downside: voters can now be personally identified and strategically influenced by targeted information. What if politicians use this information in election campaigns to target voters that are easy to mobilize?

Dr Heblich’s research shows there is a thin line between desirable benefits of more efficient information dissemination and undesirable possibilities of voter manipulation. Therefore, policymakers need to consider introducing measures to educate voters to become more discriminating in their use of the internet.

Dr Heblich said: “To the extent that online consumption replaces the consumption of other media (newspapers, radio, or television) with a higher information content, there may be no information gains for the average voter and, in the worst case, even a crowding-out of information.

“One potential risk relates to the increasing possibilities to collect personal information known as ‘big data’. This development could result in situations in which individual rights are violated, since the personal information could be used, for example, to selectively disseminate information in election campaigns and to influence voters strategically.”

See the report at: http://wol.iza.org/articles/effect-of-internet-on-voting-behavior

Drone that ‘seeds’ clouds to trigger rain undergoes first test (El Mundo)

The Savant drone weighs 24 kilos and has a 3-meter wingspan. KEVIN CLIFFORD

The experimental flight took place in Nevada (USA), which has been battered by drought

EUROPA PRESS

25/05/2016 19:46

An unmanned aircraft has, for the first time, successfully tested the technique known as cloud ‘seeding’, with which scientists aim to trigger rain in times of drought. The experimental flight, by the Desert Research Institute (DRI), was carried out in Nevada (United States).

The drone, known as Savant, reached an altitude of more than 120 meters and flew for approximately 18 minutes. “This is a great achievement,” said the project’s lead scientist, Adam Watts, an expert in ecological and natural-resource applications.

This first-of-its-kind project is helping the State of Nevada address the ongoing impacts of drought and explore innovative solutions to fight the scarcity of resources, such as increasing regional water supplies.

The research team has more than 30 years of research and experience in weather modification, with proven expertise in aerospace manufacturing and unmanned-aircraft flight operations, according to the DRI website.

“We have reached another important milestone in our effort to reduce the risks and costs of the cloud-seeding industry and to help mitigate natural disasters caused by drought, hail and extreme fog,” said Mike Richards, CEO of America’s unmanned-aircraft association.

“With a wingspan of 3 meters and a weight of some 24 kilos, Savant is the perfect vehicle for carrying out this type of operation, thanks to its superior flight profile, the time it can remain in the air, and its resistance to wind and other adverse weather conditions,” Richards added.


Who is dissolving the clouds over Andalusia?

Miguel del Pino, of Asaja Granada, shows a photo of one of the light aircraft. M. RODRÍGUEZ

The agricultural employers’ association Asaja denounces the ‘seeding’ of silver iodide

It asks that the activity be regulated by law to prevent damage

RAMÓN RAMOS, Granada

07/04/2016 19:31

It is neither urban legend nor science fiction: cloud-busting light aircraft exist, and their activity is harmful to crops in the areas where they operate. The latest episode has a date and a time: it was detected last Monday, the 4th, at 15:50 in the Granada district of El Marquesado. That day the weather forecast announced rainfall of up to 30 liters per square meter, and the black clouds filling the sky seemed to confirm the prediction. At the hour in question a light aircraft appeared from the north, flew over the district from east to west and disappeared. The clouds changed color, from white to black, and their rainfall came to just six liters per square meter, says Luis Ramírez, a farmer from Huéneja affected by the activity of these ‘ghost’ flights.

The color change in the clouds and the consequent drop in the long-awaited rainfall have an explanation, according to the farmers: the ‘seeding’ of the clouds with silver iodide, a chemical that acts by crystallizing the water condensed inside them.

Asaja, the agricultural employers’ organization, has spoken out strongly against this practice, which is not exclusive to the province of Granada and which it links to the possible interests of solar-energy companies and large agricultural estates, typically located in the areas where the aircraft operate: the Spanish Levante and also Soria.

The organization has started a signature drive that aims to gather the 500,000 names needed to launch a legislative initiative that would ban by law these cloud-busting interventions, which alter hydrological cycles, aggravating drought and damaging crops.

Livestock pastures affected

Along these lines, the Platform for the Defense of the Environment and Nature of the Marquesado and Río Nacimiento Districts was created last year; in that area the action of the cloud-busting aircraft is affecting cereal and almond crops and is also harming the growth of the pastures that feed livestock.

Asaja warns that possible intervention in the atmospheric phase of the integral water cycle is covered by the Water Law and the Regulation of the Public Hydraulic Domain, for the purpose of preventing damaging precipitation in the form of hail.

On the plains of the Marquesado and in other neighboring areas such as Guadix, Gor, Los Montes Orientales and Río Nacimiento (Almería), a stretch of farmland covering more than 30,000 hectares, people are used to the noise of low-flying aircraft hidden in the clouds whenever a storm warning is issued, “and it is a fact that for the past five years hardly any rain has fallen there,” says Asaja’s provincial president, Manuel del Pino.

In that area, cereal growing has disappeared because harvests came to nothing, and farmers are trying to save agricultural activity by converting the fallow hectares to almond trees, which are hardier and offer better technical prospects for production; extensive livestock farming is also suffering from the lack of pastures. These are arid lands, but with the artificial intervention in the rainfall regime being practiced on them, “legal or not,” they are becoming even more desertified.

Flights by the cloud-busting aircraft were first detected in the north of the province of Granada in the mid-1990s, at the height of a drought. Their activity has resumed over the past five years. Farmers’ complaints to the Guardia Civil have come to nothing, because flights below 3,000 meters need not be declared, and the practice is, moreover, permitted and regulated under Spanish law for the purpose of preventing damaging precipitation in the form of hail.

Asaja says governments are aware of this practice but “do not clarify certain questions, such as where the aircraft come from, who is behind them and what interests are being served, be they insurance companies seeking to avoid payouts, large corporations wanting to protect their crops, solar-energy companies, the pharmaceutical industry or even security matters.”

Curtailing global warming with bioengineering? Iron fertilization won’t work in much of Pacific (Science Daily)

Earth’s own experiments during ice ages showed little effect

Date:
May 16, 2016
Source:
The Earth Institute at Columbia University
Summary:
Over the past half-million years, the equatorial Pacific Ocean has seen five spikes in the amount of iron-laden dust blown in from the continents. In theory, those bursts should have turbo-charged the growth of the ocean’s carbon-capturing algae — algae need iron to grow — but a new study shows that the excess iron had little to no effect.

With the right mix of nutrients, phytoplankton grow quickly, creating blooms visible from space. This image, created from MODIS data, shows a phytoplankton bloom off New Zealand. Credit: Robert Simmon and Jesse Allen/NASA

Over the past half-million years, the equatorial Pacific Ocean has seen five spikes in the amount of iron-laden dust blown in from the continents. In theory, those bursts should have turbo-charged the growth of the ocean’s carbon-capturing algae — algae need iron to grow — but a new study shows that the excess iron had little to no effect.

The results are important today, because as groups search for ways to combat climate change, some are exploring fertilizing the oceans with iron as a solution.

Algae absorb carbon dioxide (CO2), a greenhouse gas that contributes to global warming. Proponents of iron fertilization argue that adding iron to the oceans would fuel the growth of algae, which would absorb more CO2 and sink it to the ocean floor. The most promising ocean regions are those high in nutrients but low in chlorophyll, a sign that algae aren’t as productive as they could be. The Southern Ocean, the North Pacific, and the equatorial Pacific all fit that description. What’s missing, proponents say, is enough iron.

The new study, published this week in the Proceedings of the National Academy of Sciences, adds to growing evidence, however, that iron fertilization might not work in the equatorial Pacific as suggested.

Essentially, Earth has already run its own large-scale iron fertilization experiments. During the ice ages, nearly three times more airborne iron blew into the equatorial Pacific than during non-glacial periods, but the new study shows that the increase didn’t affect biological productivity. At some points, as levels of iron-bearing dust increased, productivity actually decreased.

What matters instead in the equatorial Pacific is how iron and other nutrients are stirred up from below by upwelling fueled by ocean circulation, said lead author Gisela Winckler, a geochemist at Columbia University’s Lamont-Doherty Earth Observatory. The study found seven to 100 times more iron was supplied from the equatorial undercurrent than from airborne dust at sites spread across the equatorial Pacific. The authors write that although all of the nutrients might not be used immediately, they are used up over time, so the biological pump is already operating at full efficiency.

“Capturing carbon dioxide is what it’s all about: does iron raining in with airborne dust drive the capture of atmospheric CO2? We found that it doesn’t, at least not in the equatorial Pacific,” Winckler said.

The new findings don’t rule out iron fertilization elsewhere. Winckler and coauthor Robert Anderson of Lamont-Doherty Earth Observatory are involved in ongoing research that is exploring the effects of iron from dust on the Southern Ocean, where airborne dust supplies a larger share of the iron reaching the surface.

The PNAS paper follows another paper Winckler and Anderson coauthored earlier this year in Nature with Lamont graduate student Kassandra Costa, looking at the biological response to iron in the equatorial Pacific during just the last glacial maximum, some 20,000 years ago. The new paper expands that study from a snapshot in time to a time series across the past 500,000 years. It confirms that Costa’s finding that iron fertilization had no effect then fits a pattern extending across the past five glacial periods.

To gauge how productive the algae were, the scientists in the PNAS paper used deep-sea sediment cores from three locations in the equatorial Pacific that captured 500,000 years of ocean history. They tested along those cores for barium, a measure of how much organic matter is exported to the sea floor at each point in time, and for opal, a silicate mineral that comes from diatoms. Measures of thorium-232 reflected the amount of dust that blew in from land at each point in time.

“Neither natural variability of iron sources in the past nor purposeful addition of iron to equatorial Pacific surface water today, proposed as a mechanism for mitigating the anthropogenic increase in atmospheric CO2 inventory, would have a significant impact,” the authors concluded.

Past experiments with iron fertilization have had mixed results. The European Iron Fertilization Experiment (EIFEX) in 2004, for example, added iron in the Southern Ocean and was able to produce a burst of diatoms, which captured CO2 in their organic tissue and sank to the ocean floor. However, the German-Indian LOHAFEX project in 2009 experimented in a nearby location in the South Atlantic and found few diatoms. Instead, most of its algae were eaten up by tiny marine creatures, passing CO2 into the food chain rather than sinking it. In the LOHAFEX case, the scientists determined that another nutrient that diatoms need — silicic acid — was lacking.

The Intergovernmental Panel on Climate Change (IPCC) cautiously discusses iron fertilization in its latest report on climate change mitigation. It warns of potential risks, including the impact that higher productivity in one area may have on nutrients needed by marine life downstream, and the potential for expanding low-oxygen zones, increasing acidification of the deep ocean, and increasing nitrous oxide, a greenhouse gas more potent than CO2.

“While it is well recognized that atmospheric dust plays a significant role in the climate system by changing planetary albedo, the study by Winckler et al. convincingly shows that dust and its associated iron content is not a key player in regulating the oceanic sequestration of CO2 in the equatorial Pacific on large spatial and temporal scales,” said Stephanie Kienast, a marine geologist and paleoceanographer at Dalhousie University who was not involved in the study. “The classic paradigm of ocean fertilization by iron during dustier glacials can thus be rejected for the equatorial Pacific, similar to the Northwest Pacific.”


Journal Reference:

  1. Gisela Winckler, Robert F. Anderson, Samuel L. Jaccard, and Franco Marcantonio. Ocean dynamics, not dust, have controlled equatorial Pacific productivity over the past 500,000 years. PNAS, May 16, 2016. DOI: 10.1073/pnas.1600616113

Artificial intelligence replaces physicists (Science Daily)

Date:
May 16, 2016
Source:
Australian National University
Summary:
Physicists are putting themselves out of a job, using artificial intelligence to run a complex experiment. The experiment created an extremely cold gas trapped in a laser beam, known as a Bose-Einstein condensate, replicating the experiment that won the 2001 Nobel Prize.

The experiment, featuring the small red glow of a BEC trapped in infrared laser beams. Credit: Stuart Hay, ANU

Physicists are putting themselves out of a job, using artificial intelligence to run a complex experiment.

The experiment, developed by physicists from The Australian National University (ANU) and UNSW ADFA, created an extremely cold gas trapped in a laser beam, known as a Bose-Einstein condensate, replicating the experiment that won the 2001 Nobel Prize.

“I didn’t expect the machine could learn to do the experiment itself, from scratch, in under an hour,” said co-lead researcher Paul Wigley from the ANU Research School of Physics and Engineering.

“A simple computer program would have taken longer than the age of the Universe to run through all the combinations and work this out.”

Bose-Einstein condensates are some of the coldest places in the Universe, far colder than outer space, typically less than a billionth of a degree above absolute zero.

They could be used for mineral exploration or navigation systems, as they are extremely sensitive to external disturbances, which allows them to make very precise measurements, such as of tiny changes in Earth’s magnetic field or gravity.

The artificial intelligence system’s ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA.

“You could make a working device to measure gravity that you could take in the back of a car, and the artificial intelligence would recalibrate and fix itself no matter what,” he said.

“It’s cheaper than taking a physicist everywhere with you.”

The team cooled the gas to around 1 microkelvin, and then handed control of the three laser beams over to the artificial intelligence to cool the trapped gas down to nanokelvin.
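
The paper describes a machine-learning loop that proposes new ramp settings, runs the experiment, and learns from each measured outcome. The sketch below is not the authors’ algorithm (they used a more sophisticated online optimizer); it is a minimal stand-in illustrating the closed loop, with a hypothetical run_experiment function in place of the real apparatus and three assumed ramp parameters:

```python
import random

# Minimal sketch of closed-loop online optimization of an experiment.
# NOT the ANU team's actual algorithm or code: `run_experiment` is a
# hypothetical stand-in; in the real setup each call would execute one
# experimental shot with the proposed laser-ramp settings and return a
# measured condensate quality.

N_PARAMS = 3  # e.g. final powers of the three trapping lasers (assumed)

def run_experiment(params):
    # Toy objective with an unknown optimum, standing in for the apparatus.
    optimum = [0.3, 0.6, 0.2]
    return -sum((p - o) ** 2 for p, o in zip(params, optimum))

best = [random.random() for _ in range(N_PARAMS)]
best_score = run_experiment(best)

for shot in range(200):  # each iteration = one run of the experiment
    # Propose a small perturbation of the best settings found so far,
    # clipped to the allowed control range [0, 1].
    candidate = [min(1.0, max(0.0, p + random.gauss(0, 0.05))) for p in best]
    score = run_experiment(candidate)
    if score > best_score:  # keep settings that improve the measurement
        best, best_score = candidate, score

print("best ramp parameters found:", [round(p, 3) for p in best])
```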

Researchers were surprised by the methods the system came up with to ramp down the power of the lasers.

“It did things a person wouldn’t guess, such as changing one laser’s power up and down, and compensating with another,” said Mr Wigley.

“It may be able to come up with complicated ways humans haven’t thought of to get experiments colder and make measurements more precise.”

The new technique will lead to bigger and better experiments, said Dr Hush.

“Next we plan to employ the artificial intelligence to build an even larger Bose-Einstein condensate faster than we’ve seen ever before,” he said.

The research is published in the Nature group journal Scientific Reports.


Journal Reference:

  1. P. B. Wigley, P. J. Everitt, A. van den Hengel, J. W. Bastian, M. A. Sooriyabandara, G. D. McDonald, K. S. Hardman, C. D. Quinlivan, P. Manju, C. C. N. Kuhn, I. R. Petersen, A. N. Luiten, J. J. Hope, N. P. Robins, M. R. Hush. Fast machine-learning online optimization of ultra-cold-atom experiments. Scientific Reports, 2016; 6: 25890. DOI: 10.1038/srep25890

Is there a limit to technological progress? (OESP)

May 16, 2016 | 03:00

The idea that the stagnation of the world economy is due to the end of the “golden century” of scientific and technological innovation is becoming popular among politicians and governments. This “golden century” is usually defined as the period from 1870 to 1970, during which the foundations of the technological era we live in were laid.

Indeed, that period saw great advances in our knowledge, ranging from Darwin’s theory of evolution to the discovery of the laws of electromagnetism, which led to the large-scale production of electricity and to telecommunications, including radio and television, with the resulting benefits for the well-being of populations. Other advances, in medicine, such as vaccines and antibiotics, extended the average human lifespan. The discovery and use of oil and natural gas also fall within this period.

Many argue that in no other one-century span, across the 10,000 years of human history, was so much progress achieved. This view of history, however, can be and has been questioned. In the preceding century, from 1770 to 1870, there was also great progress, driven by the development of coal-burning engines, which made locomotives possible and set off the Industrial Revolution.

Even so, the nostalgic believe that the “golden period” of innovation has run its course, and governments consequently adopt measures of a purely economic character to revive “progress”: subsidies for specific sectors, tax cuts and social policies to reduce inequality, among others, while neglecting support for science and technology.

Some of these policies might help, but they do not touch the fundamental aspect of the problem, which is keeping alive the advance of science and technology, which solved problems in the past and can help solve problems in the future.

To analyze the question properly, one must remember that it is not the number of new discoveries that guarantees their relevance. The advance of technology somewhat resembles what sometimes happens in the natural selection of living beings: some species are so well adapted to their environment that they stop “evolving”. That is the case of the beetles that existed at the height of ancient Egypt, 5,000 years ago, and are still there today, or of “fossil” fish species that have evolved little in millions of years.

Other examples are products of modern technology, such as the magnificent DC-3 aircraft, produced more than 50 years ago and still responsible for an important share of world air traffic.

Even in more sophisticated areas, such as information technology, this seems to be happening. The basis of progress in this field was the “miniaturization” of the electronic chips that carry the transistors. In 1971, the chips produced by Intel (the leading company in the field) had 2,300 transistors on a 12-square-millimeter die. Today’s chips are only slightly larger but hold 5 billion transistors. This is what made personal computers, mobile phones and countless other products possible. And it is why fixed-line telephony is being abandoned and communication via Skype is practically free and has revolutionized the world of communications.
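
As a back-of-the-envelope check on the pace of miniaturization implied by those two figures (a sketch: the transistor counts come from the paragraph above, while the roughly 45-year span is an assumption):

```python
import math

t_1971 = 2_300           # transistors on Intel's 1971 chip (from the text)
t_today = 5_000_000_000  # transistors on a modern chip (from the text)
years = 2016 - 1971      # assumed span between the two chips

doublings = math.log2(t_today / t_1971)
print(f"{doublings:.1f} doublings, one every {years / doublings:.1f} years")
# ~21.1 doublings, i.e. one roughly every 2.1 years, in line with the
# classic statement of Moore's law (a doubling about every two years)
```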

There are now indications that this miniaturization has reached its limits, which is causing a certain gloom among the “high priests” of the sector. That is a mistaken view. The level of success has been such that further progress in that direction is genuinely unnecessary, which is what happened with countless living beings in the past.

What does look like the solution to the problems of long-term economic growth is the advance of technology in other areas that have not received the attention they need: new materials, artificial intelligence, industrial robots, genetic engineering, disease prevention and, above all, understanding the human brain, the most sophisticated product of the evolution of life on Earth.

Understanding how a combination of atoms and molecules can give rise to an organ as creative as the brain, capable of consciousness and of the creativity to compose symphonies like Beethoven’s, and at the same time of promoting the extermination of millions of human beings, will probably be the most extraordinary advance Homo sapiens can achieve.

Advances in these areas could create a wave of innovation and material progress greater in quantity and quality than what the “golden century” produced. What is more, we face today a new, global problem: environmental degradation, resulting in part from the very success of 20th-century technology. The task of reducing emissions of the gases that cause global warming (produced by burning fossil fuels) will by itself be herculean.

Before that, and on a much more pedestrian level, the advances being made in the efficiency of natural-resource use are extraordinary and have not received the credit and recognition they deserve.

To give just one example, in 1950 Americans spent, on average, 30% of their income on food. By 2013 that share had fallen to 10%. Spending on energy has also fallen, thanks to greater efficiency in automobiles and in other uses, such as lighting and heating, which, incidentally, explains why the price of a barrel of oil fell from US$150 to less than US$30. There is simply too much oil in the world, just as there is idle capacity in steel and cement.

An example of a country following this path is Japan, whose economy is not growing much but whose population enjoys a high standard of living and continues to benefit gradually from the advances of modern technology.

*José Goldemberg is professor emeritus at the University of São Paulo (USP) and president of the São Paulo Research Foundation (Fapesp)

Theoretical tiger chases statistical sheep to probe immune system behavior (Science Daily)

Physicists update predator-prey model for more clues on how bacteria evade attack from killer cells

Date:
April 29, 2016
Source:
IOP Publishing
Summary:
Studying the way that solitary hunters such as tigers, bears or sea turtles chase down their prey turns out to be very useful in understanding the interaction between individual white blood cells and colonies of bacteria. Researchers have created a numerical model that explores this behavior in more detail.

Studying the way that solitary hunters such as tigers, bears or sea turtles chase down their prey turns out to be very useful in understanding the interaction between individual white blood cells and colonies of bacteria. Reporting their results in the Journal of Physics A: Mathematical and Theoretical, researchers in Europe have created a numerical model that explores this behaviour in more detail.

Using mathematical expressions, the group can examine the dynamics of a single predator hunting a herd of prey. The routine splits the hunter’s motion into a diffusive part and a ballistic part, which represent the search for prey and then the direct chase that follows.

“We would expect this to be a fairly good approximation for many animals,” explained Ralf Metzler, who led the work and is based at the University of Potsdam in Germany.

Obstructions included

To further improve its analysis, the group, which includes scientists from the National Institute of Chemistry in Slovenia, and Sorbonne University in France, has incorporated volume effects into the latest version of its model. The addition means that prey can now inadvertently get in each other’s way and endanger their survival by blocking potential escape routes.

Thanks to this update, the team can study not just animal behaviour, but also gain greater insight into the way that killer cells such as macrophages (large white blood cells patrolling the body) attack colonies of bacteria.

One of the key parameters determining the life expectancy of the prey is the so-called ‘sighting range’ — the distance at which the prey is able to spot the predator. Examining this in more detail, the researchers found that the hunter profits more from the poor eyesight of the prey than from the strength of its own vision.
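
To make the setup concrete, here is a toy two-dimensional version of that predator-prey picture. It is not the authors’ model (in particular it omits their volume-exclusion effects, and all parameter values are invented): the hunter diffuses while searching, switches to ballistic pursuit once a prey comes within its detection distance, and prey flee only once the predator enters their sighting range.

```python
import math
import random

PRED_SIGHT = 3.0   # predator detection distance (assumed)
PREY_SIGHT = 1.5   # prey sighting range (assumed)
PRED_SPEED = 0.5   # ballistic chase speed (assumed)
PREY_SPEED = 0.4   # prey escape speed (assumed)
DIFFUSION = 0.2    # step size of the diffusive search (assumed)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def step_towards(src, dst, speed):
    d = dist(src, dst) or 1e-9
    return (src[0] + speed * (dst[0] - src[0]) / d,
            src[1] + speed * (dst[1] - src[1]) / d)

def hunt(n_prey=10, arena=20.0, max_steps=100_000):
    predator = (0.0, 0.0)
    prey = [(random.uniform(-arena, arena), random.uniform(-arena, arena))
            for _ in range(n_prey)]
    for t in range(max_steps):
        visible = [p for p in prey if dist(predator, p) < PRED_SIGHT]
        if visible:  # ballistic part: chase the nearest visible prey
            target = min(visible, key=lambda p: dist(predator, p))
            predator = step_towards(predator, target, PRED_SPEED)
        else:        # diffusive part: random search
            predator = (predator[0] + random.gauss(0, DIFFUSION),
                        predator[1] + random.gauss(0, DIFFUSION))
        # prey flee directly away, but only once they spot the predator
        prey = [step_towards(p, (2 * p[0] - predator[0],
                                 2 * p[1] - predator[1]), PREY_SPEED)
                if dist(p, predator) < PREY_SIGHT else p
                for p in prey]
        prey = [p for p in prey if dist(p, predator) > 0.1]  # capture radius
        if not prey:
            return t  # steps needed to consume the herd
    return max_steps

print("steps until the herd is gone:", hunt())
```

Shrinking PREY_SIGHT in this toy model shortens the herd’s survival time, which is the qualitative effect the researchers report: the hunter profits more from the prey’s poor eyesight than from the strength of its own vision.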

Long tradition with a new dimension

The analysis of predator-prey systems has a long tradition in statistical physics and today offers many opportunities for cooperative research, particularly in fields such as biology, biochemistry and movement ecology.

“With the ever more detailed experimental study of systems ranging from molecular processes in living biological cells to the motion patterns of animal herds and humans, the need for cross-fertilisation between the life sciences and the quantitative mathematical approaches of the physical sciences has reached a new dimension,” Metzler comments.

To help support this cross-fertilisation, he heads up a new section of the Journal of Physics A: Mathematical and Theoretical that is dedicated to biological modelling and examines the use of numerical techniques to study problems in the interdisciplinary field connecting biology, biochemistry and physics.


Journal Reference:

  1. Maria Schwarzl, Aljaz Godec, Gleb Oshanin, Ralf Metzler. A single predator charging a herd of prey: effects of self volume and predator–prey decision-making. Journal of Physics A: Mathematical and Theoretical, 2016; 49 (22): 225601. DOI: 10.1088/1751-8113/49/22/225601

Weasel Apparently Shuts Down World’s Most Powerful Particle Collider (NPR)

April 29, 2016 11:04 AM ET

GEOFF BRUMFIEL

The Large Hadron Collider uses superconducting magnets to smash sub-atomic particles together at enormous energies. CERN

A small mammal has sabotaged the world’s most powerful scientific instrument.

The Large Hadron Collider, a 17-mile superconducting machine designed to smash protons together at close to the speed of light, went offline overnight. Engineers investigating the mishap found the charred remains of a furry creature near a gnawed-through power cable.

A small mammal, possibly a weasel, gnawed through a power cable at the Large Hadron Collider. Ashley Buttle/Flickr

“We had electrical problems, and we are pretty sure this was caused by a small animal,” says Arnaud Marsollier, head of press for CERN, the organization that runs the $7 billion particle collider in Switzerland. Although they had not conducted a thorough analysis of the remains, Marsollier says they believe the creature was “a weasel, probably.” (Update: An official briefing document from CERN indicates the creature may have been a marten.)

The shutdown comes as the LHC was preparing to collect new data on the Higgs Boson, a fundamental particle it discovered in 2012. The Higgs is believed to endow other particles with mass, and it is considered to be a cornerstone of the modern theory of particle physics.

Researchers have seen some hints in recent data that other, yet-undiscovered particles might also be generated inside the LHC. If those other particles exist, they could revolutionize researchers’ understanding of everything from the laws of gravity to quantum mechanics.

Unfortunately, Marsollier says, scientists will have to wait while workers bring the machine back online. Repairs will take a few days, but getting the machine fully ready to smash might take another week or two. “It may be mid-May,” he says.

These sorts of mishaps are not unheard of, says Marsollier. The LHC is located outside of Geneva. “We are in the countryside, and of course we have wild animals everywhere.” There have been previous incidents, including one in 2009, when a bird is believed to have dropped a baguette onto critical electrical systems.

Nor are the problems exclusive to the LHC: In 2006, raccoons conducted a “coordinated” attack on a particle accelerator in Illinois.

It is unclear whether the animals are trying to stop humanity from unlocking the secrets of the universe.

Of course, small mammals cause problems in all sorts of organizations. Yesterday, a group of children took National Public Radio off the air for over a minute before engineers could restore the broadcast.

Reptiles show brain activity typical of human dreams, study reveals (Folha de S.Paulo)

Sleeping dragon (Pogona vitticeps). [Credit: Dr. Stephan Junek, Max Planck Institute for Brain Research]

Study shows that lizards reach a sleep stage which, in humans, allows dreams to emerge

REINALDO JOSÉ LOPES
CONTRIBUTING REPORTER FOR FOLHA

28/04/2016 14:56

Do lizards dream of scaly sheep? No one has yet been able to see in enough detail what happens in these animals’ brains to answer that question, but a new study reveals that the pattern of brain activity typical of human dreams also appears in these reptiles when they sleep.

This is so-called REM sleep (for “rapid eye movement”), which previously seemed to be exclusive to mammals like us and to birds. However, analysis of the brain activity of an Australian lizard, the bearded dragon (Pogona vitticeps), indicates that over the course of the night the animal’s brain alternates between REM sleep and slow-wave sleep (roughly speaking, deep, dreamless sleep), in a pattern similar, though not identical, to the one observed in humans.

Led by Gilles Laurent of the Max Planck Institute for Brain Research, in Germany, the study is appearing in the journal Science. “Laurent doesn’t fool around,” says Sidarta Ribeiro, a researcher at UFRN (Federal University of Rio Grande do Norte) and one of the world’s leading specialists in the neurobiology of sleep and dreams. “It is a very clear demonstration of the phenomenon.”

The methodology used to check what was happening in the reptilian brain was not exactly rocket science. Five specimens of the species received electrode implants in the brain and, at bedtime, their behavior was monitored with infrared cameras, ideal for “seeing in the dark”. The animals usually slept between six and ten hours a night, in a cycle that could be more or less controlled by the Max Planck scientists, since it was they who switched the lights on and off and regulated the temperature of the enclosure.

What the researchers were measuring was the variation in electrical activity in the bearded dragons’ brains during the night. It is these oscillations that produce the wave patterns already familiar from the sleep of humans and other mammals, for example.

The findings reported in the new study were only possible because of its level of detail, says Suzana Herculano-Houzel, a neuroscientist at UFRJ (Federal University of Rio de Janeiro) and a Folha columnist. “Earlier, less fine-grained studies had no way of detecting REM sleep because, in these animals, the alternation between the two types of sleep is extremely fast, every 80 seconds,” explains Herculano-Houzel, who had already seen Laurent present the data at a scientific conference. In humans, the cycles are much slower, lasting 90 minutes on average.

Besides the similarity in the pattern of brain activity, the reptiles’ REM sleep also correlates clearly with the eye movements that give it its name (movements which vaguely resemble the way an awake person moves their eyes), as the infrared images showed.

TO SLEEP, PERCHANCE TO DREAM

The first implication of the findings is evolutionary. Although sleeping appears to be universal behavior in the animal kingdom, REM sleep (and perhaps dreams) seemed to be the preserve of species with supposedly more complex brains. “For anyone studying the mechanisms of sleep, it is a fundamental study,” says Herculano-Houzel.

Both mammals and birds descend from primitive groups associated with reptiles, but at very different moments in the planet’s history: mammals had already been walking the Earth for tens of millions of years when a group of small carnivorous dinosaurs gave rise to birds. In theory, then, mammals and birds would have had to “learn to dream” entirely independently. The finding “resolves this paradox,” says Ribeiro: REM sleep would already have been present in the common ancestor of all these vertebrates.

The work of the Brazilian researcher and of other specialists around the world has shown that both types of sleep are fundamental for “sculpting” memories in the brain, simultaneously strengthening what is relevant and discarding what is not important. Without the alternating cycles of brain activity, the learning capacity of animals and humans would be seriously impaired.

Both Ribeiro and Herculano-Houzel, however, say it is still too early to state that lizards or other animals dream as we do. “Maybe one day someone will run magnetic resonance imaging on sleeping lizards and see whether they show the same reactivation of sensory areas seen in humans during REM sleep,” she says. “Of course dog owners are certain that their pets dream, but the ideal would be to decode the neural signal,” a technique that makes it possible to know what a person imagines seeing while dreaming and that has already been applied successfully by Japanese scientists.

Brazil and Japan sign agreement to improve natural-disaster prevention system (MCTI)

JC 5374, March 15, 2016

The goal is to produce more accurate alerts and reduce response times in risk situations. Pilot projects will be implemented in the cities of Blumenau (SC), Nova Friburgo (RJ) and Petrópolis (RJ)

Brazil and Japan signed on Monday (14) a cooperation agreement on natural-disaster prevention to improve the accuracy of alerts and reduce the time spent on responses. The document validates procedures and protocols defined by technical staff from the two countries for the installation of pilot projects in the cities of Blumenau (SC), Nova Friburgo (RJ) and Petrópolis (RJ), all of which have suffered landslides in recent years. The National Center for Monitoring and Early Warning of Natural Disasters (Cemaden/MCTI) is taking part in the initiative, which is part of the Project for Strengthening the National Strategy for Integrated Natural Disaster Risk Management (Gides).

“This will be a new experiment in how information is collected and how that information is made available quickly and in an integrated way across several government agencies,” explained Jailson de Andrade, the MCTI’s secretary for Research and Development Policies and Programs.

According to Angelo Consoni, a researcher in Cemaden’s geodynamics area, improving the alert protocol is essential if alerts are to be issued to the population more efficiently. The more accurate and the faster they are, the lower the risk of calamity.

“The purpose of the pilot is, above all, the accuracy of the alerts and the time spent on this activity. By optimizing the workflows for drafting and issuing alerts, together with the municipalities and the states, we can significantly improve the quality of the alerts we make available to the population in risk situations,” he said.

The cooperation agreement was also signed by the Ministry of Cities, the Ministry of National Integration, the Brazilian Cooperation Agency (ABC) and the Japan International Cooperation Agency (JICA).

Partnership

The partnership between Brazil and Japan is based on the exchange of expertise between the two nations’ personnel. Since 2014, two cohorts of Brazilians have received training from Japanese specialists. The Japanese, in turn, also come to Brazil to exchange information on natural-disaster prevention.

“Japan is a reference point. And this cooperation has been very good for us in terms of training people,” Consoni noted.

MCTI

Study suggests different written languages are equally efficient at conveying meaning (Eureka/University of Southampton)

PUBLIC RELEASE: 1-FEB-2016

UNIVERSITY OF SOUTHAMPTON

Credit: University of Southampton

A study led by the University of Southampton has found there is no difference in the time it takes people from different countries to read and process different languages.

The research, published in the journal Cognition, finds that the same amount of time is needed for a person from, for example, China to read and understand a text in Mandarin as it takes a person from Britain to read and understand a text in English, assuming both are reading their native language.

Professor of Experimental Psychology at Southampton, Simon Liversedge, says: “It has long been argued by some linguists that all languages have common or universal underlying principles, but it has been hard to find robust experimental evidence to support this claim. Our study goes at least part way to addressing this – by showing there is universality in the way we process language during the act of reading. It suggests no one form of written language is more efficient in conveying meaning than another.”

The study, carried out by the University of Southampton (UK), Tianjin Normal University (China) and the University of Turku (Finland), compared the way three groups of people in the UK, China and Finland read their own languages.

The 25 participants in each group – one group for each country – were given eight short texts to read which had been carefully translated into the three different languages. A rigorous translation process was used to make the texts as closely comparable across languages as possible. English, Finnish and Mandarin were chosen because of the stark differences they display in their written form – with great variation in visual presentation of words, for example alphabetic vs. logographic(1), spaced vs. unspaced, agglutinative(2) vs. non-agglutinative.

The researchers used sophisticated eye-tracking equipment to assess the cognitive processes of the participants in each group as they read. The equipment was set up identically in each country to measure eye movement patterns of the individual readers – recording how long they spent looking at each word, sentence or paragraph.

The results of the study showed significant and substantial differences between the three language groups in relation to the nature of eye movements of the readers and how long participants spent reading each individual word or phrase. For example, the Finnish participants spent longer concentrating on some words compared to the English readers. However, most importantly and despite these differences, the time it took for the readers of each language to read each complete sentence or paragraph was the same.

Professor Liversedge says: “This finding suggests that despite very substantial differences in the written form of different languages, at a basic propositional level, it takes humans the same amount of time to process the same information regardless of the language it is written in.

“We have shown it doesn’t matter whether a native Chinese reader is processing Chinese, or a Finnish native reader is reading Finnish, or an English native reader is processing English, in terms of comprehending the basic propositional content of the language, one language is as good as another.”

The study authors believe more research would be needed to fully understand if true universality of language exists, but that their study represents a good first step towards demonstrating that there is universality in the process of reading.

###

Notes for editors:

1) Logographic language systems use signs or characters to represent words or phrases.

2) Agglutinative language tends to express concepts in complex words consisting of many sub-units that are strung together.

3) The paper Universality in eye movements and reading: A trilingual investigation (Simon P. Liversedge, Denis Drieghe, Xin Li, Guoli Yan, Xuejun Bai, Jukka Hyönä) is published in the journal Cognition and can also be found at: http://eprints.soton.ac.uk/382899/1/Liversedge,%20Drieghe,%20Li,%20Yan,%20Bai,%20%26%20Hyona%20(in%20press)%20copy.pdf

 

Semantically speaking: Does meaning structure unite languages? (Eureka/Santa Fe Institute)

1-FEB-2016

Humans’ common cognitive abilities and language dependence may provide an underlying semantic order to the world’s languages

SANTA FE INSTITUTE

We create words to label people, places, actions, thoughts, and more so we can express ourselves meaningfully to others. Do humans’ shared cognitive abilities and dependence on languages naturally provide a universal means of organizing certain concepts? Or do environment and culture influence each language uniquely?

Using a new methodology that measures how closely words’ meanings are related within and between languages, an international team of researchers has revealed that for many universal concepts, the world’s languages feature a common structure of semantic relatedness.

“Before this work, little was known about how to measure [a culture’s sense of] the semantic nearness between concepts,” says co-author and Santa Fe Institute Professor Tanmoy Bhattacharya. “For example, are the concepts of sun and moon close to each other, as they are both bright blobs in the sky? How about sand and sea, as they occur close by? Which of these pairs is the closer? How do we know?”

Translation, the mapping of relative word meanings across languages, would provide clues. But examining the problem with scientific rigor called for an empirical means to denote the degree of semantic relatedness between concepts.

To get reliable answers, Bhattacharya needed to fully quantify a comparative method that is commonly used to infer linguistic history qualitatively. (He and collaborators had previously developed this quantitative method to study changes in sounds of words as languages evolve.)

“Translation uncovers a disagreement between two languages on how concepts are grouped under a single word,” says co-author and Santa Fe Institute and Oxford researcher Hyejin Youn. “Spanish, for example, groups ‘fire’ and ‘passion’ under ‘incendio,’ whereas Swahili groups ‘fire’ with ‘anger’ (but not ‘passion’).”

To quantify the problem, the researchers chose a few basic concepts that we see in nature (sun, moon, mountain, fire, and so on). Each concept was translated from English into 81 diverse languages, then back into English. Based on these translations, a weighted network was created. The structure of the network was used to compare languages’ ways of partitioning concepts.

The team found that the translated concepts consistently formed three theme clusters in a network, densely connected within themselves and weakly to one another: water, solid natural materials, and earth and sky.
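
The paper’s actual network construction is more elaborate, but the core idea can be sketched with invented toy data: concepts become nodes, an edge is weighted by how many languages group two concepts under a single word, and clusters are read off the thresholded graph (the Spanish and Swahili pairings below come from the quote above; everything else is a hypothetical placeholder, not the study’s 81-language sample):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical back-translation data: for each language, the sets of
# concepts that end up grouped under a single word.
polysemy = {
    "spanish": [{"fire", "passion"}],
    "swahili": [{"fire", "anger"}],
    "lang_a":  [{"sun", "moon"}, {"sea", "sand"}],
    "lang_b":  [{"sun", "moon"}],
}

# Edge weight = number of languages whose words link the two concepts.
weights = defaultdict(int)
for groups in polysemy.values():
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            weights[(a, b)] += 1

# Keep links attested in at least `min_langs` languages, then read off
# clusters as connected components of the thresholded graph.
min_langs = 2
adj = defaultdict(set)
for (a, b), w in weights.items():
    if w >= min_langs:
        adj[a].add(b)
        adj[b].add(a)

def components(graph):
    seen, clusters = set(), []
    for node in list(graph):
        if node in seen:
            continue
        stack, cluster = [node], set()
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                cluster.add(n)
                stack.extend(graph[n] - seen)
        clusters.append(cluster)
    return clusters

print(components(adj))  # with this toy data: [{'moon', 'sun'}]
```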

“For the first time, we now have a method to quantify how universal these relations are,” says Bhattacharya. “What is universal – and what is not – about how we group clusters of meanings teaches us a lot about psycholinguistics, the conceptual structures that underlie language use.”

The researchers hope to expand this study’s domain, adding more concepts, then investigating how the universal structure they reveal underlies meaning shift.

Their research was published today in PNAS.

Impact of human activity on local climate mapped (Science Daily)

Date: January 20, 2016

Source: Concordia University

Summary: A new study pinpoints the temperature increases caused by carbon dioxide emissions in different regions around the world.


This is a map of climate change. Credit: Nature Climate Change

Earth’s temperature has increased by 1°C over the past century, and most of this warming has been caused by carbon dioxide emissions. But what does that mean locally?

A new study published in Nature Climate Change pinpoints the temperature increases caused by CO2 emissions in different regions around the world.

Using simulation results from 12 global climate models, Damon Matthews, a professor in Concordia’s Department of Geography, Planning and Environment, along with post-doctoral researcher Martin Leduc, produced a map that shows how the climate changes in response to cumulative carbon emissions around the world.

They found that temperature increases in most parts of the world respond linearly to cumulative emissions.

“This provides a simple and powerful link between total global emissions of carbon dioxide and local climate warming,” says Matthews. “This approach can be used to show how much human emissions are to blame for local changes.”

Leduc and Matthews, along with co-author Ramon de Elia from Ouranos, a Montreal-based consortium on regional climatology, analyzed the results of simulations in which CO2 emissions caused the concentration of CO2 in the atmosphere to increase by 1 per cent each year until it reached four times the levels recorded prior to the Industrial Revolution.

Globally, the researchers saw an average temperature increase of 1.7 ± 0.4°C per trillion tonnes of carbon in CO2 emissions (TtC), which is consistent with reports from the Intergovernmental Panel on Climate Change.

But the scientists went beyond these globally averaged temperature rises, to calculate climate change at a local scale.

At a glance, here are the average increases per trillion tonnes of carbon that we emit, separated geographically:

  • Western North America 2.4 ± 0.6°C
  • Central North America 2.3 ± 0.4°C
  • Eastern North America 2.4 ± 0.5°C
  • Alaska 3.6 ± 1.4°C
  • Greenland and Northern Canada 3.1 ± 0.9°C
  • North Asia 3.1 ± 0.9°C
  • Southeast Asia 1.5 ± 0.3°C
  • Central America 1.8 ± 0.4°C
  • Eastern Africa 1.9 ± 0.4°C

“As these numbers show, equatorial regions warm the slowest, while the Arctic warms the fastest. Of course, this is what we’ve already seen happen — rapid changes in the Arctic are outpacing the rest of the planet,” says Matthews.

There are also marked differences between land and ocean, with the temperature increase for the oceans averaging 1.4 ± 0.3°C per TtC, compared to 2.2 ± 0.5°C per TtC for land areas.

“To date, humans have emitted almost 600 billion tonnes of carbon,” says Matthews. “This means that land areas on average have already warmed by 1.3°C because of these emissions. At current emission rates, we will have emitted enough CO2 to warm land areas by 2°C within 3 decades.”
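
The linearity claim makes these figures easy to check with simple arithmetic. A minimal sketch (the coefficients and the ~600-billion-tonne cumulative total come from the article; the regional subset shown is an arbitrary selection from the list above):

```python
# °C of warming per trillion tonnes of cumulative carbon (TtC),
# as quoted in the article.
coeff = {
    "global":         1.7,
    "land":           2.2,
    "ocean":          1.4,
    "Alaska":         3.6,
    "Southeast Asia": 1.5,
}

emitted_ttc = 0.6  # ~600 billion tonnes of carbon emitted to date

for region, k in coeff.items():
    print(f"{region:>14}: {k * emitted_ttc:.2f} °C so far")

# Land: 2.2 * 0.6 = 1.32 °C, matching the ~1.3 °C quoted above.
# Reaching 2 °C over land needs 2.0 / 2.2 ≈ 0.91 TtC of cumulative
# emissions, i.e. ~0.3 TtC more than emitted to date.
```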


Journal Reference:

  1. Martin Leduc, H. Damon Matthews, Ramón de Elía. Regional estimates of the transient climate response to cumulative CO2 emissions. Nature Climate Change, 2016. DOI: 10.1038/nclimate2913

Social media technology, rather than anonymity, is the problem (Science Daily)

Date: January 20, 2016

Source: University of Kent

Summary: Problems of anti-social behavior, privacy, and free speech on social media are not caused by anonymity but instead result from the way technology changes our presence. That’s the startling conclusion of a new book by an expert on the information society and developing media.


Problems of anti-social behaviour, privacy, and free speech on social media are not caused by anonymity but instead result from the way technology changes our presence.

That’s the startling conclusion of a new book by Dr Vincent Miller, a sociologist at the University of Kent and an expert on the information society and developing media.

In contending that the cause of issues such as online anti-social behaviour is the design/software of social media itself, Dr Miller suggests that social media architecture needs to be managed and planned in the same way as physical architecture. In the book, entitled The Crisis of Presence in Contemporary Culture: Ethics, Privacy and Speech in Mediated Social Life, Dr Miller examines the relationship between the freedom provided by the contemporary online world and the control, surveillance and censorship that operate in this environment.

The book questions the origins and sincerity of moral panics about use — and abuse — in the contemporary online environment and offers an analysis of ethics, privacy and free speech in this setting.

Investigating the ethical challenges that confront our increasingly digital culture, Dr Miller suggests a number of revisions to our ethical, legal and technological regimes to meet these challenges.

These include changing what he describes as ‘dehumanizing’ social media software, expanding the notion of our ‘selves’ or ‘bodies’ to include our digital traces, and the re-introduction of ‘time’ into social media through the creation of ‘expiry dates’ on social media communications.

Dr Miller is a Senior Lecturer in Sociology and Cultural Studies within the University’s School of Social Research, Sociology and Social Policy. The Crisis of Presence in Contemporary Culture: Ethics, Privacy and Speech in Mediated Social Life, is published by Sage.

More information can be found at: https://uk.sagepub.com/en-gb/eur/the-crisis-of-presence-in-contemporary-culture/book244328

‘Na África, indaguei rei da minha etnia por que nos venderam como escravos’ (BBC Brasil)

14 janeiro 2016

Zulu Araújo | Foto: Divulgação

Image captionA convite de produtora, arquiteto fez exame genético e foi até Camarões para conhecer seus ancestrais

“Somos o único grupo populacional no Brasil que não sabe de onde vem”, queixa-se o arquiteto baiano Zulu Araújo, de 63 anos, em referência à população negra descendente dos 4,8 milhões de africanos escravizados recebidos pelo país entre os séculos 16 e 19.

Araújo foi um dos 150 brasileiros convidados pela produtora Cine Group para fazer um exame de DNA e identificar suas origens africanas.

Ele descobriu ser descendente do povo tikar, de Camarões, e, como parte da série televisiva Brasil: DNA África, visitou o local para conhecer a terra de seus antepassados.

“A viagem me completou enquanto cidadão”, diz Araújo. Leia, abaixo, seu depoimento à BBC Brasil:

“Sempre tive a consciência de que um dos maiores crimes contra a população negra não foi nem a tortura, nem a violência: foi retirar a possibilidade de que conhecêssemos nossas origens. Somos o único grupo populacional no Brasil que não sabe de onde vem.

Meu sobrenome, Mendes de Araújo, é português. Carrego o nome da família que escravizou meus ancestrais, pois o ‘de’ indica posse. Também carrego o nome de um povo africano, Zulu.

 

Momento em que o Zulu confronta o rei tikar sobre a venda de seus antepassados

Ganhei o apelido porque meus amigos me acharam parecido com um rei zulu retratado num documentário. Virou meu nome.

Nasci no Solar do Unhão, uma colônia de pescadores no centro de Salvador, local de desembarque e leilão de escravos até o final do século 19. Comecei a trabalhar clandestinamente aos 9 anos numa gráfica da Igreja Católica. Trabalhava de forma profana para produzir livros sagrados.

Bom aluno, consegui passar no vestibular para arquitetura. Éramos dois negros numa turma de 600 estudantes – isso numa cidade onde 85% da população tem origem africana. Salvador é uma das cidades mais racistas que eu conheço no mundo.

Ao participar do projeto Brasil: DNA África e descobrir que era do grupo étnico tikar, fiquei surpreso. Na Bahia, todos nós especulamos que temos ou origem angolana ou iorubá. Eu imaginava que era iorubano. Mas os exames de DNA mostram que vieram ao Brasil muito mais etnias do que sabemos.

Zulu Araújo | Foto: Divulgação

“Era como se eu estivesse no meu bairro, na Bahia, e ao mesmo tempo tivesse voltado 500 anos no tempo”, diz Zulu sobre chegada a Camarões

Zulu Araújo | Foto: Divulgação

Pergunta sobre escravidão a rei camaronense foi tratada como “assunto delicado” e foi respondida apenas no dia seguinte

When I arrived at the center of the Tikar kingdom, the electricity had gone out, and people were using oil lamps and car headlights for lighting. More than 2,000 people were waiting for me. What I felt at that moment cannot be described, it was so shocking and singular.

People were shouting. I did not understand a word of what they said, yet I understood everything. It was as if I were in my neighborhood in Bahia and at the same time had gone 500 years back in time.

The crowd regarded me as a novelty: I was the first Brazilian of Tikar origin to set foot there. But I was also shocked by the poverty. People made countless requests of me in the streets, from football shirts to help recording an album. Not by chance, the fundamentalist group Boko Haram (originally from neighboring Nigeria) has one of its bases nearby and enjoys broad popular support.

In the morning, I went to meet the king, a tall, strong man of 56, married to 20 women and father of more than 40 children. He dressed like a desert Muslim, in a tunic of beautiful prints and fabrics.

After breakfast, I had an audience with him in one of the palace rooms. He was moved and curious, for he knew that many Tikar people had gone to the Americas, but not to Brazil.

I asked a question that had been tormenting me: why had they allowed, or taken part in, the sale of my ancestors to Brazil? The translator checked twice whether I really wanted to ask that question and said the subject was very sensitive. I insisted.

There was total silence in the room. Then the king whispered in the ear of an adviser, who told me that he apologized, but the subject was very delicate and he could only answer me the next day. Slavery is a taboo subject on the African continent, because it is evident that there was collusion between the African and European elites for the process to last so long and reach so many people.

The next day, the king finally answered me. He apologized and said it was better that they had sold us, for otherwise we would all have been killed. And he said that, because we survived, we of the diaspora could now help them. He also said he would adopt me as his first son, which would give me the right to privileges and access to material goods.

It was a political answer, but I think it was sincere. I know they never imagined that slavery would take on the dimension it did, nor that Europe would turn it into the biggest business of all time. There came a moment when Africans lost control.

Zulu Araújo | Photo: Divulgação

"If anyone asks me where I am from, now I know how to answer. Only those who are black can understand the dimension this has."

A Senegalese intellectual told me that until we overcome slavery we will have no peace, neither the enslaved nor the enslavers. It is the plain truth. You cannot treat a 500-year-old question with a feeling of hatred or revenge.

The trip made me complete as a citizen. If anyone asks me where I am from, now I know how to answer. Only those who are black can understand the dimension this has.

I think DNA testing should be recognized by the government and by Brazilian academic institutions as a way for us to remake and retell the history of the 52% of Brazilians who have African roots. Only by knowing our origins can we understand who we truly are."

God of Thunder (NPR)

October 17, 201411:09 AM ET

In 1904, Charles Hatfield claimed he could turn around the Southern California drought. Little did he know, he was going to get much, much more water than he bargained for.

GLYNN WASHINGTON, HOST:

From PRX and NPR, welcome back to SNAP JUDGMENT, the Presto episode. Today we're calling on mysterious forces, and we're going to strap on the SNAP JUDGMENT time machine. Our own Eliza Smith takes the controls and spins the dial back 100 years into the past.

ELIZA SMITH, BYLINE: California, 1904. In the fields, oranges dry in their rinds. In the ‘burbs, lawns yellow. Poppies wilt on the hillsides. Meanwhile, Charles Hatfield sits at a desk in his father’s Los Angeles sewing machine business. His dad wants him to take over someday, but Charlie doesn’t want to spend the rest of his life knocking on doors and convincing housewives to buy his bobbins and thread. Charlie doesn’t look like the kind of guy who changes the world. He’s impossibly thin with a vanishing patch of mousy hair. He always wears the same drab tweed suit. But he thinks to himself: just maybe he can quench the Southland’s thirst. So when he punches out his timecard, he doesn’t go home for dinner. Instead, he sneaks off to the Los Angeles Public Library and pores over stacks of books. He reads about shamans who believed that fumes from a pyre of herbs and alcohols could force rain from the sky. He reads modern texts too, about the pseudoscience of pluviculture – rainmaking, the theory that explosives and pyrotechnics could crack the clouds. Charlie conducts his first weather experiment on his family ranch, just northeast of Los Angeles in the city of Pasadena. One night he pulls his youngest brother, Paul, out of bed to keep watch with a shotgun as he climbs atop a windmill, pours a cocktail of chemicals into a shallow pan and then waits.

He doesn’t have a burner or a fan or some hybrid, no – he just waits for the chemicals to evaporate into the clouds. Paul slumped into a slumber long ago and is now leaning against the foundation of the windmill, when the first droplet hits Charlie’s cheek. Then another. And another.

Charlie pulls out his rain gauge and measures .65 inches. It’s enough to convince him he can make rain.

That’s right, Charlie has the power. Word spreads in local papers and one by one, small towns – Hemet, Volta, Gustine, Newman, Crows Landing, Patterson – come to him begging for rain. And wherever Charlie goes, rain seems to follow. After he gives their town seven more inches of water than his contract stipulated, the Hemet News raves, “Mr. Hatfield is proving beyond doubt that rain can be produced.”

Within weeks he’s signing contracts with towns from the Pacific Coast to the Mississippi. Of course, there are doubters who claim that he tracks the weather, who claim he’s a fool chasing his luck.

But then Charlie gets an invitation to prove himself. San Diego, a major city, is starting to talk water rations, and they call on him. Of course, most of the city councilmen are dubious of Charlie’s charlatan claims. But still, cows are keeling over in their pastures and farmers are worrying over dying crops. It won’t hurt to hire him. They reason that if Charlie Hatfield can fill San Diego’s biggest reservoir, Morena Dam, with 10 billion gallons of water, he’ll earn himself $10,000. If he can’t, well then he’ll just walk away and the city will laugh the whole thing off.

One councilman jokes…

UNIDENTIFIED MAN #1: It’s heads – the city wins. Tails – Hatfield loses.

SMITH: Charlie and Paul set up camp in the remote hills surrounding the Morena Reservoir. This time they work for weeks building several towers. This is to be Charlie’s biggest rain yet. When visitors come to observe his experiments, Charlie turns his back to them, hiding his notebooks and chemicals and Paul fingers the trigger on his trusty rifle. And soon enough it’s pouring. Winds reach record speeds of over 60 miles per hour. But that isn’t good enough – Charlie needs the legitimacy a satisfied San Diego can grant him. And so he works non-stop dodging lightning bolts, relishing thunderclaps. He doesn’t care that he’s soaked to the bone – he can wield weather. The water downs power lines, floods streets, rips up rail tracks.

A Mission Valley man who had to be rescued by a row boat as he clung to a scrap of lumber wraps himself in a towel and shivers as he suggests…

UNIDENTIFIED MAN #2: Let’s pay Hatfield $100,000 to quit.

SMITH: But Charlie isn’t quitting. The rain comes down harder and harder. Dams and reservoirs across the county explode and the flood devastates every farm, every house in its wake. One winemaker is surfacing from the protection of his cellar when he spies a wave twice the height of a telephone pole tearing down his street. He grabs his wife and they run as fast as they can, only to turn and watch their house washed downstream.

And yet, Charlie smiles as he surveys his success. The Morena Reservoir is full. He grabs Paul and the two leave their camp to march the 50-odd miles to City Hall. He expects the indebted populace to kiss his mud-covered shoes. Instead, he’s met with glares and threats. By the time Charlie and Paul reach San Diego’s city center, they’ve stopped answering to the name Hatfield. They call themselves Benson to avoid bodily harm.

Still, when he stands before the city councilmen, Charlie declares his operations successful and demands his payment. The men glower at him.

San Diego is in ruins and worst of all – they’ve got blood on their hands. The flood drowned more than 50 people. It also destroyed homes, farms, telephone lines, railroads, streets, highways and bridges. San Diegans file millions of dollars in claims but Charlie doesn’t budge. He folds his arms across his chest, holds his head high and proclaims, “The time is coming when drought will overtake this portion of the state. It will be then that you call for my services again.”

So the city councilmen tell Charlie that if he’s sure he made it rain, they’ll give him his $10,000 – he’ll just have to take full responsibility for the flood. Charlie grits his teeth and tells them, “It was coincidence. It rained because Mother Nature made it so. I am no rainmaker.”

And then Charlie disappears. He goes on selling sewing machines and keeping quiet.

WASHINGTON: I’ll tell you what, California these days could use a little Charlie Hatfield. Big thanks to Eliza Smith for sharing that story and thanks as well to Leon Morimoto for sound design. Mischief managed – you’ve just gotten to the other side by means of other ways.

If you missed any part of this show, no need for a rampage – head on over to snapjudgment.org. There you’ll find the award-winning podcast – Mark, what award did we win? Movies, pictures, stuff. Amazing stories await. Get in on the conversation. SNAP JUDGMENT’s on Facebook, Twitter @snapjudgment.

Did you ever wind up in the slithering sitting room when you’re supposed to be in Gryffindor’s parlor? Well, me neither, but I’m sure it’s nothing like wandering the halls of the Corporation for Public Broadcasting. Completely different, but many thanks to them. PRX, Public Radio Exchange, hosts a similar annual Quidditch championship, but instead of brooms they ride radios. Not quite the same visual effect, but it’s good clean fun all the same – prx.org.

WBEZ in Chicago has tricks up their sleeve and you may have reckoned that this is not the news. No way is this the news. In fact, if you’d just thrown that book with Voldemort trapped in it, thrown it in the fire, been done with the nonsense – and you would still not be as far away from the news as this is. But this is NPR.

Hito Steyerl | Politics of Post-Representation (Dis Blog)

[Accessed Nov 23, 2015]

In conversation with Marvin Jordan

From the militarization of social media to the corporatization of the art world, Hito Steyerl’s writings represent some of the most influential bodies of work in contemporary cultural criticism today. As a documentary filmmaker, she has created multiple works addressing the widespread proliferation of images in contemporary media, deepening her engagement with the technological conditions of globalization. Steyerl’s work has been exhibited in numerous solo and group exhibitions including documenta 12, Taipei Biennial 2010, and 7th Shanghai Biennial. She currently teaches New Media Art at Berlin University of the Arts.

Hito Steyerl, How Not To Be Seen: A Fucking Didactic Educational .MOV File (2013)

Hito Steyerl, How Not To Be Seen: A Fucking Didactic Educational .MOV File (2013)

Marvin Jordan I’d like to open our dialogue by acknowledging the central theme for which your work is well known — broadly speaking, the socio-technological conditions of visual culture — and move toward specific concepts that underlie your research (representation, identification, the relationship between art and capital, etc). In your essay titled “Is a Museum a Factory?” you describe a kind of ‘political economy’ of seeing that is structured in contemporary art spaces, and you emphasize that a social imbalance — an exploitation of affective labor — takes place between the projection of cinematic art and its audience. This analysis leads you to coin the term “post-representational” in service of experimenting with new modes of politics and aesthetics. What are the shortcomings of thinking in “representational” terms today, and what can we hope to gain from transitioning to a “post-representational” paradigm of art practices, if we haven’t arrived there already?

Hito Steyerl Let me give you one example. A while ago I met an extremely interesting developer in Holland. He was working on smart phone camera technology. A representational mode of thinking photography is: there is something out there and it will be represented by means of optical technology ideally via indexical link. But the technology for the phone camera is quite different. As the lenses are tiny and basically crap, about half of the data captured by the sensor are noise. The trick is to create the algorithm to clean the picture from the noise, or rather to define the picture from within noise. But how does the camera know this? Very simple. It scans all other pictures stored on the phone or on your social media networks and sifts through your contacts. It looks through the pictures you already made, or those that are networked to you and tries to match faces and shapes. In short: it creates the picture based on earlier pictures, on your/its memory. It does not only know what you saw but also what you might like to see based on your previous choices. In other words, it speculates on your preferences and offers an interpretation of data based on affinities to other data. The link to the thing in front of the lens is still there, but there are also links to past pictures that help create the picture. You don’t really photograph the present, as the past is woven into it.

The result might be a picture that never existed in reality, but that the phone thinks you might like to see. It is a bet, a gamble, some combination between repeating those things you have already seen and coming up with new versions of these, a mixture of conservatism and fabulation. The paradigm of representation stands to the present condition as traditional lens-based photography does to an algorithmic, networked photography that works with probabilities and bets on inertia. Consequently, it makes seeing unforeseen things more difficult. The noise will increase and random interpretation too. We might think that the phone sees what we want, but actually we will see what the phone thinks it knows about us. A complicated relationship — like a very neurotic marriage. I haven’t even mentioned external interference with what your phone is recording. All sorts of applications are able to remotely switch your camera on or off: companies, governments, the military. It could be disabled for whole regions. One could, for example, disable recording functions close to military installations, or conversely, live broadcast whatever you are up to. Similarly, the phone might be programmed to auto-pixellate secret or sexual content. It might be fitted with a so-called dick algorithm to screen out NSFW content or auto-modify pubic hair, stretch or omit bodies, exchange or collage context or insert AR advertisement and pop-up windows or live feeds.

Now let’s apply this shift to the question of representative politics or democracy. The representational paradigm assumes that you vote for someone who will represent you. Thus the interests of the population will be proportionally represented. But current democracies work rather like smartphone photography, algorithmically clearing the noise and boosting some data over others. It is a system in which the unforeseen has a hard time happening because it is not yet in the database. It is about what to define as noise — something Jacques Rancière has defined as the crucial act in separating political subjects from domestic slaves, women and workers. Now this act is hardwired into technology, but instead of the traditional division of people and rabble, the results are post-representative militias, brands, customer loyalty schemes, open source insurgents and tumblrs.

Additionally, Rancière’s democratic solution (there is no noise, it is all speech; everyone has to be seen and heard) has been realized online as a sort of meta-noise in which everyone is monologuing incessantly and no one is listening. Aesthetically, one might describe this condition as opacity in broad daylight: you could see anything, but what exactly and why is quite unclear. There are a lot of brightly lit glossy surfaces, yet they don’t reveal anything but themselves as surface. Whatever there is — it’s all there to see but in the form of an incomprehensible, Kafkaesque glossiness, written in extraterrestrial code, perhaps subject to secret legislation. It certainly expresses something: a format, a protocol or executive order, but effectively obfuscates its meaning. This is a far cry from a situation in which something — an image, a person, a notion — stood in for another and presumably acted in its interest. Today it stands in, but its relation to whatever it stands in for is cryptic, shiny, unstable; the link flickers on and off. Art could revel in this shiny instability — it does already. It could also be less baffled and mesmerised and see it as what the gloss mostly is about: the not-so-discreet consumer-friendly veneer of new and old oligarchies, and plutotechnocracies.
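Steyerl’s camera example is, at bottom, a describable computation: blend what the sensor reports with what past pictures suggest. The following toy sketch illustrates that prior-driven logic in Python; it is an illustration of the idea, not any vendor’s actual pipeline, and every name and number in it is invented.

```python
# Toy sketch of prior-driven denoising: the "photograph" is a blend of
# the noisy capture and a prior built from pictures already on the phone.
import numpy as np

def denoise_with_prior(noisy, past_photos, noise_var=0.05, prior_var=0.02):
    """Bayesian blend of a noisy capture with the mean of past photos.

    With these (invented) variances the prior dominates, so the output
    leans toward what the camera "expects" rather than what it saw.
    """
    prior = past_photos.mean(axis=0)         # the camera's memory of past images
    k = prior_var / (prior_var + noise_var)  # weight given to the fresh capture
    return k * noisy + (1 - k) * prior

rng = np.random.default_rng(0)
past = rng.random((20, 64, 64))              # stand-ins for stored pictures
scene = rng.random((64, 64))                 # the thing in front of the lens
capture = scene + rng.normal(0.0, 0.05**0.5, scene.shape)  # noisy sensor read
picture = denoise_with_prior(capture, past)  # the past woven into the present
```

With these numbers roughly 70 percent of each output pixel comes from the archive, which is the point of the example: the past is woven into the present picture.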

MJ In your insightful essay, “The Spam of the Earth: Withdrawal from Representation”, you extend your critique of representation by focusing on an irreducible excess at the core of image spam, a residue of unattainability, or the “dark matter” of which it’s composed. It seems as though an unintelligible horizon circumscribes image spam by image spam itself, a force of un-identifiability, which you detect by saying that it is “an accurate portrayal of what humanity is actually not… a negative image.” Do you think this vacuous core of image spam — a distinctly negative property — serves as an adequate ground for a general theory of representation today? How do you see today’s visual culture affecting people’s behavior toward identification with images?

HS Think of Twitter bots, for example. Bots are entities supposed to be mistaken for humans on social media websites. But they have become formidable political armies too — in brilliant examples of how representative politics have mutated nowadays. Bot armies distort discussion on Twitter hashtags by spamming them with advertisement, tourist pictures or whatever. Bot armies have been active in Mexico, Syria, Russia and Turkey, where most political parties, above all the ruling AKP, are said to control 18,000 fake Twitter accounts using photos of Robbie Williams, Megan Fox and gay porn stars. A recent article revealed that, “in order to appear authentic, the accounts don’t just tweet out AKP hashtags; they also quote philosophers such as Thomas Hobbes and movies like PS: I Love You.” It is ever more difficult to identify bots – partly because humans are being paid to enter CAPTCHAs on their behalf (1,000 CAPTCHAs earn 50 US cents). So what is a bot army? And how, and whom, does it represent, if anyone? Who is an AKP bot that wears the face of a gay porn star and quotes Hobbes’ Leviathan — extolling the need to transform the rule of militias into statehood in order to escape the war of everyone against everyone else? Bot armies are a contemporary vox pop, the voice of the people, the voice of what the people are today. It can be a Facebook militia, your low-cost personalized mob, your digital mercenaries. Imagine your photo being used for one of these bots. It is the moment when your picture becomes quite autonomous, active, even militant. Bot armies are celebrity militias, wildly jump-cutting between glamour, sectarianism, porn, corruption and post-Baath Party ideology. Think of the meaning of the phrase “affirmative action” after Twitter bots and like farms! What does it represent?

MJ You have provided a compelling account of the depersonalization of the status of the image: a new process of de-identification that favors materialist participation in the circulation of images today.  Within the contemporary technological landscape, you write that “if identification is to go anywhere, it has to be with this material aspect of the image, with the image as thing, not as representation. And then it perhaps ceases to be identification, and instead becomes participation.” How does this shift from personal identification to material circulation — that is, to cybernetic participation — affect your notion of representation? If an image is merely “a thing like you and me,” does this amount to saying that identity is no more, no less than a .jpeg file?

HS Social media makes the shift from representation to participation very clear: people participate in the launch and life span of images, and indeed their life span, spread and potential is defined by participation. Think of the image not as surface but as all the tiny light impulses running through fiber at any one point in time. Some images will look like deep sea swarms, some like cities from space, some are utter darkness. We could see the energy imparted to images by capital or quantified participation very literally, we could probably measure its popular energy in lumen. By partaking in circulation, people participate in this energy and create it.
What this means is a different question though — by now this type of circulation seems a little like the petting zoo of plutotechnocracies. It’s where kids are allowed to make a mess — but just a little one — and if anyone organizes serious dissent, the seemingly anarchic sphere of circulation quickly reveals itself as a pedantic police apparatus aggregating relational metadata. It turns out to be an almost Althusserian ISA (Internet State Apparatus), hardwired behind a surface of ‘kawaii’ apps and online malls. As to identity, Heartbleed and more deliberate governmental hacking exploits certainly showed that identity goes far beyond a relationship with images: it entails a set of private keys, passwords, etc., that can be expropriated and detourned. More generally, identity is the name of the battlefield over your code — be it genetic, informational, pictorial. It is also an option that might provide protection if you fall beyond any sort of modernist infrastructure. It might offer sustenance, food banks, medical service, where common services either fail or don’t exist. If the Hezbollah paradigm is so successful it is because it provides an infrastructure to go with the Twitter handle, and as long as there is no alternative many people need this kind of container for material survival. Huge religious and quasi-religious structures have sprung up in recent decades to take up the tasks abandoned by states, providing protection and survival in a reversal of the move described in Leviathan. Identity happens when the Leviathan falls apart and nothing is left of the commons but a set of policed relational metadata, Emoji and hijacked hashtags. This is the reason why the gay AKP pornstar bots are desperately quoting Hobbes’ book: they are already sick of the war of Robbie Williams (Israel Defense Forces) against Robbie Williams (Electronic Syrian Army) against Robbie Williams (PRI/AAP) and are hoping for just any entity to organize day care and affordable dentistry.

heartbleed

But beyond all the portentous vocabulary relating to identity, I believe that a widespread standard of the contemporary condition is exhaustion. The interesting thing about Heartbleed — to come back to one of the current threats to identity (as privacy) — is that it is produced by exhaustion and not effort. It is a bug introduced by open source developers not being paid for something that is used by software giants worldwide. Nor were there apparently enough resources to audit the code in the big corporations that just copy-pasted it into their applications and passed on the bug, fully relying on free volunteer labour to produce their proprietary products. Heartbleed records exhaustion by trying to stay true to an ethics of commonality and exchange that has long since been exploited and privatized. So, that exhaustion found its way back into systems. For many people and for many reasons — and on many levels — identity is just that: shared exhaustion.

MJ This is an opportune moment to address the labor conditions of social media practice in the context of the art space. You write that “an art space is a factory, which is simultaneously a supermarket — a casino and a place of worship whose reproductive work is performed by cleaning ladies and cellphone-video bloggers alike.” Incidentally, DIS launched a website called ArtSelfie just over a year ago, which encourages social media users to participate quite literally in “cellphone-video blogging” by aggregating their Instagram #artselfies in a separately integrated web archive. Given our uncanny coincidence, how can we grasp the relationship between social media blogging and the possibility of participatory co-curating on equal terms? Is there an irreconcilable antagonism between exploited affective labor and a genuinely networked art practice? Or can we move beyond — to use a phrase of yours — a museum crowd “struggling between passivity and overstimulation?”

HS I wrote this in relation to something my friend Carles Guerra noticed already around early 2009; big museums like the Tate were actively expanding their online marketing tools, encouraging people to basically build the museum experience for them by sharing, etc. It was clear to us that audience participation on this level was a tool of extraction and outsourcing, following a logic that has turned online consumers into involuntary data providers overall. Like in the previous example – Heartbleed – the paradigm of participation and generous contribution towards a commons tilts quickly into an asymmetrical relation, where only a minority of participants benefits from everyone’s input, the digital 1 percent reaping the attention value generated by the 99 percent rest.

Brian Kuan Wood put it very beautifully recently: Love is debt, an economy of love and sharing is what you end up with when left to your own devices. However, an economy based on love ends up being an economy of exhaustion – after all, love is utterly exhausting — of deregulation, extraction and lawlessness. And I don’t even want to mention likes, notes and shares, which are the child-friendly, sanitized versions of affect as currency.
All is fair in love and war. It doesn’t mean that love isn’t true or passionate, but just that love is usually uneven, utterly unfair and asymmetric, just as capital tends to be distributed nowadays. It would be great to have a little bit less love, a little more infrastructure.

MJ Long before Edward Snowden’s NSA revelations reshaped our discussions of mass surveillance, you wrote that “social media and cell-phone cameras have created a zone of mutual mass-surveillance, which adds to the ubiquitous urban networks of control,” underscoring the voluntary, localized, and bottom-up mutuality intrinsic to contemporary systems of control. You go on to say that “hegemony is increasingly internalized, along with the pressure to conform and perform, as is the pressure to represent and be represented.” But now mass government surveillance is common knowledge on a global scale — ‘externalized’, if you will — while social media representation practices remain as revealing as they were before. Do these recent developments, as well as the lack of change in social media behavior, contradict or reinforce your previous statements? In other words, how do you react to the irony that, in the same year as the unprecedented NSA revelations, “selfie” was deemed word of the year by Oxford Dictionaries?

HS Haha — good question!

Essentially I think it makes sense to compare our moment with the end of the twenties in the Soviet Union, when euphoria about electrification, NEP (New Economic Policy), and montage gives way to bureaucracy, secret directives and paranoia. Today this corresponds to the sheer exhilaration of having a World Wide Web being replaced by the drudgery of corporate apps, waterboarding, and “normcore”. I am not trying to say that Stalinism might happen again – this would be plain silly – but trying to acknowledge emerging authoritarian paradigms, some forms of algorithmic consensual governance techniques developed within neoliberal authoritarianism, heavily relying on conformism, “family” values and positive feedback, and backed up by all-out torture and secret legislation if necessary. On the other hand things are also falling apart into uncontrollable love. One also has to remember that people did really love Stalin. People love algorithmic governance too, if it comes with watching unlimited amounts of Game of Thrones. But anyone slightly interested in digital politics and technology is by now acquiring at least basic skills in disappearance and subterfuge.

Hito Steyerl, How Not To Be Seen: A Fucking Didactic Educational .MOV File (2013)

Hito Steyerl, How Not To Be Seen: A Fucking Didactic Educational .MOV File (2013)

MJ In “Politics of Art: Contemporary Art and the Transition to Post-Democracy,” you point out that the contemporary art industry “sustains itself on the time and energy of unpaid interns and self-exploiting actors on pretty much every level and in almost every function,” while maintaining that “we have to face up to the fact that there is no automatically available road to resistance and organization for artistic labor.” Bourdieu theorized qualitatively different dynamics in the composition of cultural capital vs. that of economic capital, arguing that the former is constituted by the struggle for distinction, whose value is irreducible to financial compensation. This basically translates to: everyone wants a piece of the art-historical pie, and is willing to go through economic self-humiliation in the process. If striving for distinction is antithetical to solidarity, do you see a possibility of reconciling it with collective political empowerment on behalf of those economically exploited by the contemporary art industry?

HS In Art and Money, William Goetzmann, Luc Renneboog, and Christophe Spaenjers conclude that income inequality correlates with art prices. The bigger the difference between top income and no income, the higher the prices paid for some art works. This means that the art market will benefit not only if fewer people have more money but also if more people have no money. This also means that increasing the number of zero incomes is likely, especially under current circumstances, to raise the price of some art works. The poorer many people are (and the richer a few), the better the art market does; the more unpaid interns, the more expensive the art. But the art market itself may be following a similar pattern of inequality, basically creating a divide between the 0.01 percent (if not fewer) of artworks that are able to concentrate the bulk of sales and the 99.99 percent rest. There is no short-term solution for this feedback loop, except of course not to accept this situation, individually or preferably collectively, on all levels of the industry. This also applies to employers. There is a long-term benefit in this, not only to interns and artists but to everyone. Cultural industries that are too exclusively profit-oriented lose their appeal. If you want exciting things to happen you need a bunch of young and inspiring people creating a dynamic by doing risky, messy and confusing things. If they cannot afford to do this, they will eventually do it somewhere else. There needs to be space and resources for experimentation, even failure, otherwise things go stale. If these people move on to more accommodating sectors the art sector will mentally shut down even more and become somewhat North Korean in its outlook — just like contemporary blockbuster CGI industries. Let me explain: there is a managerial sleekness and awe-inspiring military perfection to every pixel in these productions, like in North Korean pixel parades, where thousands of soldiers wave color posters to form ever new pixel patterns. The result is quite something, but this something is definitely neither inspiring nor exciting. If the art world keeps going down the road of raising art prices via starvation of its workers – and there is no reason to believe it will not continue to do this – it will become the Disney version of Kim Jong Un’s pixel parades. 12K starving interns waving pixels for giant CGI renderings of Marina Abramovic! Imagine the price it will fetch!


Preventing famine with mobile phones (Science Daily)

Date: November 19, 2015

Source: Vienna University of Technology, TU Vienna

Summary: With a mobile data collection app and satellite data, scientists will be able to predict whether a certain region is vulnerable to food shortages and malnutrition, say experts. By scanning Earth’s surface with microwave beams, researchers can measure the water content in soil. Comparing these measurements with extensive data sets obtained over the last few decades, it is possible to calculate whether the soil is sufficiently moist or whether there is danger of droughts. The method has now been tested in the Central African Republic.


Does drought lead to famine? A mobile app helps to collect information. Credit: Image courtesy of Vienna University of Technology, TU Vienna

With a mobile data collection app and satellite data, scientists will be able to predict whether a certain region is vulnerable to food shortages and malnutrition. The method has now been tested in the Central African Republic.

There are different possible causes for famine and malnutrition — not all of which are easy to foresee. Drought and crop failure can often be predicted by monitoring the weather and measuring soil moisture. But other risk factors, such as socio-economic problems or violent conflicts, can endanger food security too. For organizations such as Doctors without Borders / Médecins Sans Frontières (MSF), it is crucial to obtain information about vulnerable regions as soon as possible, so that they have a chance to provide help before it is too late.

Scientists from TU Wien in Vienna, Austria and the International Institute for Applied Systems Analysis (IIASA) in Laxenburg, Austria have now developed a way to monitor food security using a smartphone app, which combines weather and soil moisture data from satellites with crowd-sourced data on the vulnerability of the population, e.g. malnutrition and other relevant socioeconomic data. Tests in the Central African Republic have yielded promising results, which have now been published in the journal PLOS ONE.

Step One: Satellite Data

“For years, we have been working on methods of measuring soil moisture using satellite data,” says Markus Enenkel (TU Wien). By scanning Earth’s surface with microwave beams, researchers can measure the water content in soil. Comparing these measurements with extensive data sets obtained over the last few decades, it is possible to calculate whether the soil is sufficiently moist or whether there is danger of droughts. “This method works well and it provides us with very important information, but information about soil moisture deficits is not enough to estimate the danger of malnutrition,” says IIASA researcher Linda See. “We also need information about other factors that can affect the local food supply.” For example, political unrest may prevent people from farming, even if weather conditions are fine. Such problems can of course not be monitored from satellites, so the researchers had to find a way of collecting data directly in the most vulnerable regions.
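The comparison step described above, scoring a new soil moisture reading against decades of measurements, can be pictured as a simple anomaly test. A minimal sketch follows, with invented numbers and an arbitrary threshold; it is not the actual TU Wien retrieval chain.

```python
# Drought indicator as a z-score against a multi-decade baseline.
import numpy as np

def moisture_anomaly(current, history):
    """How many standard deviations today's soil moisture sits from normal."""
    return (current - np.mean(history)) / np.std(history)

rng = np.random.default_rng(1)
history = rng.normal(0.30, 0.05, 30 * 12)   # fake monthly record, 30 years
z = moisture_anomaly(0.18, history)
if z < -1.5:                                # threshold chosen for illustration
    print(f"anomaly {z:.1f} sigma: elevated drought risk")
```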

“Today, smartphones are available even in developing countries, and so we decided to develop an app, which we called SATIDA COLLECT, to help us collect the necessary data,” says IIASA-based app developer Mathias Karner. For a first test, the researchers chose the Central African Republic, one of the world’s most vulnerable countries, suffering from chronic poverty, violent conflicts, and weak disaster resilience. Local MSF staff was trained for a day and collected data, conducting hundreds of interviews.

“How often do people eat? What are the current rates of malnutrition? Have any family members left the region recently, has anybody died? — We use the answers to these questions to statistically determine whether the region is in danger,” says Candela Lanusse, nutrition advisor from Doctors without Borders. “Sometimes all that people have left to eat is unripe fruit or the seeds they had stored for next year. Sometimes they have to sell their cattle, which may increase the chance of nutritional problems. This kind of behavior may indicate future problems, months before a large-scale crisis breaks out.”

A Map of Malnutrition Danger

The digital questionnaire of SATIDA COLLECT can be adapted to local eating habits. The answers and the GPS coordinates of every assessment are stored locally on the phone. When an internet connection is available, the collected data are uploaded to a server and can be analyzed along with satellite-derived information about drought risk. In the end a map can be created, highlighting areas where the danger of malnutrition is high. For Doctors without Borders, such maps are extremely valuable. They help to plan future activities and provide help as soon as it is needed.
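The flow in this paragraph, recording each assessment locally with its GPS coordinates and uploading when a connection appears, is a standard offline-first pattern. The sketch below shows one way it could look; the field names, file format, and endpoint are invented, since the article does not describe SATIDA COLLECT’s internals.

```python
# Offline-first collection: append assessments to a local queue, then
# push the queue to a server once the phone is online.
import json, os, urllib.request

QUEUE = "assessments.jsonl"  # local store standing in for the phone's storage

def save_assessment(answers, lat, lon):
    with open(QUEUE, "a") as f:
        f.write(json.dumps({"answers": answers, "lat": lat, "lon": lon}) + "\n")

def upload_queue(server="https://example.org/satida"):  # invented endpoint
    if not os.path.exists(QUEUE):
        return
    with open(QUEUE, "rb") as f:
        req = urllib.request.Request(server, data=f.read(),
                                     headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # raises when offline, so the queue survives
    os.remove(QUEUE)             # only reached after a successful upload

save_assessment({"meals_per_day": 1, "left_region": False}, 4.36, 18.56)
```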

“Testing this tool in the Central African Republic was not easy,” says Markus Enenkel. “The political situation there is complicated. However, even under these circumstances we could show that our technology works. We were able to gather valuable information.” SATIDA COLLECT has the potential to become a powerful early warning tool. It may not be able to prevent crises, but it will at least help NGOs to mitigate their impacts via early intervention.


Story Source:

The above post is reprinted from materials provided by Vienna University of Technology, TU Vienna. Note: Materials may be edited for content and length.


Journal Reference:

  1. Markus Enenkel, Linda See, Mathias Karner, Mònica Álvarez, Edith Rogenhofer, Carme Baraldès-Vallverdú, Candela Lanusse, Núria Salse. Food Security Monitoring via Mobile Data Collection and Remote Sensing: Results from the Central African Republic. PLOS ONE, 2015; 10 (11): e0142030. DOI: 10.1371/journal.pone.0142030

81-year-old Indigenous woman learns to use a computer and creates a dictionary to save her language from extinction (QGA)

Marie Wilcox is the last person in the world fluent in the Wukchumni language

Meet Marie Wilcox, an 81-year-old great-grandmother and the last person in the world fluent in the Wukchumni language. The Wukchumni people numbered some 50,000 before contact with the colonizers; today only about 200 remain, living in the San Joaquin Valley, California. Their language died away a little more with each new generation, but Marie committed herself to the task of reviving it, learning to use a computer so she could begin writing the first Wukchumni dictionary. The process took seven years, and now that it is finished she does not intend to stop her work of immortalizing her native language.

The documentary “Marie’s Dictionary”, available on YouTube, shows Marie’s motivation and her hard work to bring back and record a language that was almost completely erased by colonization, institutionalized racism, and oppression.

In the video, Marie admits to having doubts about the gigantic task she has taken on: “I have doubts about my language, and about who wants to keep it alive. No one seems to want to learn. It is strange that I am the last one… It is all going to be lost one of these days, I don’t know.”

But with luck, that day is still far off. Marie and her daughter Jennifer now teach classes to members of the tribe, and they are working on an audio dictionary to accompany the written dictionary she has already created.

Watch the video (in English).

(QGA)

No escaping the Blue Marble (The Conversation)

August 20, 2015 6.46pm EDT

The Earth seen from Apollo, a photo now known as the “Blue Marble”. NASA

It is often said that the first full image of the Earth, “Blue Marble”, taken by the Apollo 17 space mission in December 1972, revealed Earth to be precious, fragile and protected only by a wafer-thin atmospheric layer. It reinforced the imperative for better stewardship of our “only home”.

But there was another way of seeing the Earth revealed by those photographs. For some the image showed the Earth as a total object, a knowable system, and validated the belief that the planet is there to be used for our own ends.

In this way, the “Blue Marble” image was not a break from technological thinking but its affirmation. A few years earlier, reflecting on the spiritual consequences of space flight, the theologian Paul Tillich wrote of how the possibility of looking down at the Earth gives rise to “a kind of estrangement between man and earth” so that the Earth is seen as a totally calculable material body.

For some, by objectifying the planet this way the Apollo 17 photograph legitimised the Earth as a domain of technological manipulation, a domain from which any unknowable and unanalysable element has been banished. It prompts the idea that the Earth as a whole could be subject to regulation.

This metaphysical possibility is today a physical reality in work now being carried out on geoengineering – technologies aimed at deliberate, large-scale intervention in the climate system designed to counter global warming or offset some of its effects.

While some proposed schemes are modest and relatively benign, the more ambitious ones – each now with a substantial scientific-commercial constituency – would see humanity mobilising its technological power to seize control of the climate system. And because the climate system cannot be separated from the rest of the Earth System, that means regulating the planet, probably in perpetuity.

Dreams of escape

Geoengineering is often referred to as Plan B, one we should be ready to deploy because Plan A, cutting global greenhouse gas emissions, seems unlikely to be implemented in time. Others are now working on what might be called Plan C. It was announced last year in The Times:

British scientists and architects are working on plans for a “living spaceship” like an interstellar Noah’s Ark that will launch in 100 years’ time to carry humans away from a dying Earth.

This version of Plan C is known as Project Persephone, which is curious as Persephone in Greek mythology was the queen of the dead. The project’s goal is to build “prototype exovivaria – closed ecosystems inside satellites, to be maintained from Earth telebotically, and democratically governed by a global community.”

NASA and DARPA, the US Defense Department’s advanced technologies agency, are also developing a “worldship” designed to take a multi-generational community of humans beyond the solar system.

Paul Tillich noticed the intoxicating appeal that space travel holds for certain kinds of people. Those first space flights became symbols of a new ideal of human existence, “the image of the man who looks down at the earth, not from heaven, but from a cosmic sphere above the earth”. A more common reaction to Project Persephone is summed up by a reader of the Daily Mail: “Only the ‘elite’ will go. The rest of us will be left to die.”

Perhaps being left to die on the home planet would be a more welcome fate. Imagine being trapped on this “exovivarium”, a self-contained world in which exported nature becomes a tool for human survival; a world where there is no night and day; no seasons; no mountains, streams, oceans or bald eagles; no ice, storms or winds; no sky; no sunrise; a closed world whose occupants would work to keep alive by simulation the archetypal habits of life on Earth.

Into the endless void

What kind of person imagines himself or herself living in such a world? What kind of being, after some decades, would such a post-terrestrial realm create? What kind of children would be bred there?

According to Project Persephone’s sociologist, Steve Fuller: “If the Earth ends up a no-go zone for human beings [sic] due to climate change or nuclear or biological warfare, we have to preserve human civilisation.”

Why would we have to preserve human civilisation? What is the value of a civilisation if not to raise human beings to a higher level of intellectual sophistication and moral responsibility? What is a civilisation worth if it cannot protect the natural conditions that gave birth to it?

Those who blast off leaving behind a ruined Earth would carry into space a fallen civilisation. As the Earth receded into the all-consuming blackness those who looked back on it would be the beings who had shirked their most primordial responsibility, beings corroded by nostalgia and survivor guilt.

He’s now mostly forgotten, but in the 1950s and 1960s the Swedish poet Harry Martinson was famous for his haunting epic poem Aniara, which told the story of a spaceship carrying a community of several thousand humans out into space escaping an Earth devastated by nuclear conflagration. At the end of the epic the spaceship’s controller laments the failure to create a new Eden:

“I had meant to make them an Edenic place,

but since we left the one we had destroyed

our only home became the night of space

where no god heard us in the endless void.”

So from the cruel fantasy of Plan C we are obliged to return to Plan A, and do all we can to slow the geological clock that has ticked over into the Anthropocene. If, now that this Earthen beast has been provoked, a return to the halcyon days of an undisturbed climate is no longer possible, at least we can resolve to calm the agitations of “the wakened giant” and so make this new and unwanted epoch one in which humans can survive.

Laymert Garcia dos Santos: ‘Today, shamanism is a high technology of access’ (O Globo)

A Sorbonne PhD and scholar of the Yanomami, the São Paulo native who staged an opera built on Indigenous cosmology in Munich came to Rio to give a class at EAV Parque Lage

BY ARNALDO BLOCH

In the oca at Parque Lage, Laymert channels Yanomami energies Photo: Marcelo Carnaval / Agência O Globo

“I was born in a town I barely know, Itápolis, a mixture of stone (‘ita’), from Guarani, and city (‘polis’), from Greek: a bit of the Brazilian essence. I studied in Rio and spent decades in France. I taught for many years in Brazil, working on the relations between technology, culture, environment and art. I am married and have a son who is a pathologist”

Tell me something I don’t know.

Today, what we used to consider the Indigenous supernatural, shamanism, is a high technology of access to virtual worlds, with logics that are not Western but that, in the end, increasingly arrive at a kind of crossing with the rational technoscientific perspective.

Where does this crossing take place?

Science already knows that an announced apocalypse awaits the Amazon if the devastation persists. For a millennium the Yanomami have spoken of a mythic apocalypse: when there are no more shamans, the sky will fall… for it is they who hold up the sky, together with the humanimal auxiliary spirits.

The prophecies converge with cutting-edge ecology…

Yes. And in the opera we made, these two perspectives end up converging toward a catastrophic finale. In the Yanomami perspective, the white man is inhuman, a vector of destruction, and produces the xawara, a kind of cannibal smoke that devours forests, spreads diseases and epidemics, and contaminates rivers.

It must be complex to transpose such a cosmology to the stages of Munich…

We spent time in the semi-isolated Demini village and worked with the shamans, in partnership with the Goethe Institute, Sesc São Paulo, the ZKM (Europe’s largest center for art and technology), the Munich Music Theatre Biennale, and people from the scientific community.

How do you bring the spirit of the village to an opera stage?

After all the conceptual work in the village, we arrived at a staging of the conflict between the xawara and the shaman. The audience watched while circulating on the stage itself, a labyrinth. The shaman was played by the Swiss singer Christian Zehnder, who has worked in Africa and Asia and is one of the few people in the world to use the “voice over” technique, a guttural resource that allows two voices to be emitted at the same time. The xawara was played by a great singer already advanced in years, the Englishman Phil Minton, a jazz singer.

And the technology, was it just a supporting player in the tragedy?

Along a long space a sequence of screens was arranged, onto which images and lights were projected, translating the phenomena of the forest through algorithms. The audience was lost in the “forest,” with the shaman at one end and the xawara at the other, along with a politician, a missionary and a scientist.

Did the Yanomami see it?

Most did not. The opera was not made for them. Even so, chief Davi Kopenawa and two shamans went to Munich.

And how did they react?

First, they were pleased that an audience was learning, in a serious way, about their cosmology and their thought. Its strategic character. But they laughed at the performance itself: they think art is a children’s thing, not the deep meaning of the phenomenon. That the opera was child’s play next to shamanism. A philosophy professor saw a symmetry there: whites find the Indians childish because of their beliefs, and they find us childish because of our representations of their reality.

What did the experience bring you as a person?

I went there many times. In the early 2000s I chaired an NGO that fought for the defense and preservation of Yanomami territory. Being with them helps us understand not only what the other is, but what we are. It is a kind of privilege. It is a pity that few people have had, or have, a contact of pure positivity with that world which, for us, is almost always experienced in the sphere of the negative, given the education we receive and the historical tradition of how Brazilians treat the Indians. In Japan, they would be precious, sacred beings.

Stop burning fossil fuels now: there is no CO2 ‘technofix’, scientists warn (The Guardian)

Researchers have demonstrated that even if a geoengineering solution to CO2 emissions could be found, it wouldn’t be enough to save the oceans

“The chemical echo of this century’s CO2 pollution will reverberate for thousands of years,” said the report’s co-author, Hans Joachim Schellnhuber. Photograph: Doug Perrine/Design Pics/Corbis

German researchers have demonstrated once again that the best way to limit climate change is to stop burning fossil fuels now.

In a “thought experiment” they tried another option: the future dramatic removal of huge volumes of carbon dioxide from the atmosphere. This would, they concluded, return the atmosphere to the greenhouse gas concentrations that existed for most of human history – but it wouldn’t save the oceans.

That is, the oceans would stay warmer, and more acidic, for thousands of years, and the consequences for marine life could be catastrophic.

The research, published in Nature Climate Change today, delivers yet another demonstration that there is so far no feasible “technofix” that would allow humans to go on mining and drilling for coal, oil and gas (known as the “business as usual” scenario), and then geoengineer a solution when climate change becomes calamitous.

Sabine Mathesius (of the Helmholtz Centre for Ocean Research in Kiel and the Potsdam Institute for Climate Impact Research) and colleagues decided to model what could be done with an as-yet-unproven technology called carbon dioxide removal. One example would be to grow huge numbers of trees, burn them, trap the carbon dioxide, compress it and bury it somewhere. Nobody knows if this can be done, but Dr Mathesius and her fellow scientists didn’t worry about that.

They calculated that it might plausibly be possible to remove carbon dioxide from the atmosphere at the rate of 90 billion tons a year. This is twice what is spilled into the air from factory chimneys and motor exhausts right now.

The scientists hypothesised a world that went on burning fossil fuels at an accelerating rate – and then adopted an as-yet-unproven high technology carbon dioxide removal technique.

“Interestingly, it turns out that after ‘business as usual’ until 2150, even taking such enormous amounts of CO2 from the atmosphere wouldn’t help the deep ocean that much – after the acidified water has been transported by large-scale ocean circulation to great depths, it is out of reach for many centuries, no matter how much CO2 is removed from the atmosphere,” said a co-author, Ken Caldeira, who is normally based at the Carnegie Institution in the US.

The oceans cover 70% of the globe. By 2500, ocean surface temperatures would have increased by 5C (9F) and the chemistry of the ocean waters would have shifted towards levels of acidity that would make it difficult for fish and shellfish to flourish. Warmer waters hold less dissolved oxygen. Ocean currents, too, would probably change.

But while change happens in the atmosphere over tens of years, change in the ocean surface takes centuries, and in the deep oceans, millennia. So even if atmospheric temperatures were restored to pre-Industrial Revolution levels, the oceans would continue to experience climatic catastrophe.

“In the deep ocean, the chemical echo of this century’s CO2 pollution will reverberate for thousands of years,” said co-author Hans Joachim Schellnhuber, who directs the Potsdam Institute. “If we do not implement emissions reductions measures in line with the 2C (3.6F) target in time, we will not be able to preserve ocean life as we know it.”

New technique estimates crowd sizes by analyzing mobile phone activity (BBC Brasil)

3 June 2015

Crowd at an airport | Photo: Getty

Researchers are seeking more efficient ways of measuring crowd sizes without relying on images

A study by a British university has developed a new way of estimating crowds at protests and other mass events: by analyzing geographic data from mobile phones and Twitter.

Researchers at Warwick University, in England, analyzed the geolocation of mobile phones and of Twitter messages over a two-month period in Milan, Italy.

At two locations with known visitor numbers, a football stadium and an airport, activity on social networks and on mobile phones rose and fell in step with the flow of people.

The team said that, using this technique, it can take measurements at events such as protests.

Other researchers stressed that this type of data has limitations: for example, only part of the population uses smartphones and Twitter, and not all areas of a given space are well served by phone towers.

But the study’s authors say the results were “an excellent starting point” for more, and more precise, estimates of this kind in the future.

“These numbers are calibration examples we can build on,” said the study’s co-author, Tobias Preis.

“Obviously it would be better to have examples from other countries, other environments, other moments. Human behavior is not uniform across the world, but this is a very good basis for obtaining initial estimates.”

The study, published in the journal Royal Society Open Science, is part of an expanding field of research exploring what online activity can reveal about human behavior and other real-world phenomena.

Photo: F. Botta et al

Scientists compared official visitor figures for the airport and the stadium with Twitter and mobile phone activity

Federico Botta, the PhD student who led the analysis, said the phone-based methodology has important advantages over other methods of estimating crowd sizes, which tend to rely on on-site observation or on images.

“This method is very fast and does not depend on human judgment. It depends only on the data coming from mobile phones or from Twitter activity,” he told the BBC.

Margin of error

With two months of mobile phone data provided by Telecom Italia, Botta and his colleagues focused on Linate airport and the San Siro football stadium in Milan.

They compared the number of people known to be at those locations at any given time, based on flight schedules and ticket sales for the football matches, with three types of mobile phone activity: the number of calls made and text messages sent, the amount of internet data used, and the volume of tweets.

“What we saw is that these activities really did behave very similarly to the number of people at the location,” says Botta.

That may not sound so surprising, but, especially at the football stadium, the patterns the team observed were so reliable that they could even make predictions.

There were ten football matches during the period of the experiment. Based on the data from nine matches, it was possible to estimate how many people would be at the tenth using the phone data alone.

“Our mean absolute percentage error is about 13%. That means our estimates and the real number of people differ, in absolute terms, by about 13%,” says Botta.
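The leave-one-out check Botta describes can be reproduced in miniature: fit a calibration on nine matches, predict the tenth, and score the prediction with mean absolute percentage error. All the numbers below are invented; only the procedure follows the article.

```python
# Calibrate phone activity against known attendance, then predict.
import numpy as np

activity = np.array([12.1, 9.8, 14.3, 11.0, 13.5, 8.9, 10.7, 12.9, 9.5])  # nine matches
crowd = np.array([41, 33, 48, 37, 45, 30, 36, 43, 32]) * 1000             # known attendance

slope, intercept = np.polyfit(activity, crowd, 1)  # simple linear calibration
predicted = slope * 11.8 + intercept               # tenth match, phone data only
actual = 39_000
mape = abs(predicted - actual) / actual * 100
print(f"estimate {predicted:.0f} vs actual {actual}: error {mape:.1f}%")
```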

According to the researchers, this margin of error compares well with traditional techniques based on images and human judgment.

They gave the example of the demonstration in Washington known as the “Million Man March” in 1995, in which even the most careful analyses produced estimates with a 20% error, after initial measurements had ranged from 400,000 to two million people.

Crowd in an Italian stadium | Photo: Getty

The precision of the data collected at the football stadium surprised even the research team

According to Ed Manley, of the Centre for Advanced Spatial Analysis at University College London, the technique has potential, and people should feel “optimistic but cautious” about using mobile phone data for such estimates.

“We have these enormous datasets, and there is a lot that can be done with them… But we need to be careful about how much we demand of the data,” he said.

He also points out that such data does not reflect a population equally.

“There are important biases here. Who exactly are we measuring with these datasets?” Twitter, for example, says Manley, has a relatively young and upper-class user base.

Beyond these difficulties, the activities to be measured must be chosen carefully, because people use their phones differently in different places: more calls at the airport and more tweets at the football, for example.

Another important caveat is that the whole methodology of analysis advocated by Botta depends on phone and internet signal, which varies greatly from place to place, when it is available at all.

“If we are relying on this data to know where people are, what happens when we have a problem with the way the data is collected?” asks Manley.

How Facebook’s Algorithm Suppresses Content Diversity (Modestly) and How the Newsfeed Rules Your Clicks (The Message)

Zeynep Tufekci on May 7, 2015

Today, three researchers at Facebook published an article in Science on how Facebook's newsfeed algorithm suppresses the amount of "cross-cutting" (i.e. likely to cause disagreement) news articles a person sees. I read a lot of academic research, and usually the researchers are at pains to highlight their findings. This one buries them as deep as it could, using a mix of convoluted language and irrelevant comparisons. So, first order of business is spelling out what they found. Also, for another important evaluation – with some overlap to this one – go read this post by University of Michigan professor Christian Sandvig.

The most important finding, if you ask me, is buried in an appendix. Here’s the chart showing that the higher an item is in the newsfeed, the more likely it is clicked on.

Notice how steep the curve is. The higher the link, the more likely (a lot more likely) it is to be clicked on. You live and die by placement, determined by the newsfeed algorithm. (The effect, as Sean J. Taylor correctly notes, is a combination of placement, and the fact that the algorithm is guessing what you would like.) This was already known, mostly, but it's great to have it confirmed by Facebook researchers (the study was solely authored by Facebook employees).

The most important caveat that is buried is that this study is not about all Facebook users, despite language at the end that's quite misleading. The researchers end their paper with: "Finally, we conclusively establish that on average in the context of Facebook…" No. The research was conducted on a small, skewed subset of Facebook users who chose to self-identify their political affiliation on Facebook and regularly log on to Facebook, about 4% of the population available for the study. This is super important because this sampling confounds the dependent variable.

The gold standard of sampling is random, where every unit has equal chance of selection, which allows us to do amazing things like predict elections with tiny samples of thousands. Sometimes, researchers use convenience samples — whomever they can find easily — and those can be okay, or not, depending on how typical the sample ends up being compared to the universe. Sometimes, in cases like this, the sampling affects behavior: people who self-identify their politics are almost certainly going to behave quite differently, on average, than people who do not, when it comes to the behavior in question which is sharing and clicking through ideologically challenging content. So, everything in this study applies only to that small subsample of unusual people. (Here’s a post by the always excellent Eszter Hargittai unpacking the sampling issue further.) The study is still interesting, and important, but it is not a study that can generalize to Facebook users. Hopefully that can be a future study.
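
To see why this matters, here is a toy simulation (my illustration, not anything from the paper): suppose a single latent "political engagement" trait drives both the decision to list one's politics on a profile and the tendency to avoid cross-cutting content. The self-identified subsample then mechanically understates the average click rate on such content:

    # Toy selection-bias simulation; the trait, the link function and the
    # effect sizes are all invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    engagement = rng.normal(0, 1, n)                  # latent trait

    # Probability of self-identifying one's politics rises with engagement
    self_identifies = rng.random(n) < 1 / (1 + np.exp(-2 * engagement))

    # Share of cross-cutting content clicked falls with engagement
    click_rate = np.clip(0.5 - 0.1 * engagement, 0, 1)

    print(f"population mean click rate: {click_rate.mean():.3f}")
    print(f"self-identified subsample:  {click_rate[self_identifies].mean():.3f}")

The two printed numbers differ, which is the confound in miniature: the sampling rule is correlated with the very behaviour being measured.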

What does the study actually say?

  • Here’s the key finding: Facebook researchers conclusively show that Facebook’s newsfeed algorithm decreases ideologically diverse, cross-cutting content people see from their social networks on Facebook by a measurable amount. The researchers report that exposure to diverse content is suppressed by Facebook’s algorithm by 8% for self-identified liberals and by 5% for self-identified conservatives. Or, as Christian Sandvig puts it, “the algorithm filters out 1 in 20 cross-cutting hard news stories that a self-identified conservative sees (or 5%) and 1 in 13cross-cutting hard news stories that a self-identified liberal sees (8%).” You are seeing fewer news items that you’d disagree with which are shared by your friends because the algorithm is not showing them to you.
  • Now, here’s the part which will likely confuse everyone, but it should not. The researchers also report a separate finding that individual choice to limit exposure through clicking behavior results in exposure to 6% less diverse content for liberals and 17% less diverse content for conservatives.

Are you with me? One novel finding is that the newsfeed algorithm (modestly) suppresses diverse content, and another crucial and also novel finding is that placement in the feed (strongly) influences click-through rates.

Researchers then replicate and confirm a well-known, uncontested and long-established finding, which is that people have a tendency to avoid content that challenges their beliefs. Then, confusingly, the researchers compare whether the algorithmic suppression effect size is stronger than people choosing what to click, with a lot of language that leads Christian Sandvig to call this the "it's not our fault" study. I cannot remember a worse apples-to-oranges comparison, especially since these two dynamics, algorithmic suppression and individual choice, have cumulative effects.

Comparing the individual choice to algorithmic suppression is like asking about the amount of trans-fatty acids in french fries, a newly-added ingredient to the menu, and being told that hamburgers, which have long been on the menu, also have trans-fatty acids – an undisputed, scientifically uncontested and non-controversial fact. Individual self-selection in news sources long predates the Internet, and is a well-known, long-identified and well-studied phenomenon. Its scientific standing has never been in question. However, the role of Facebook's algorithm in this process is a new – and important – issue. Just as the medical profession would be concerned about the amount of trans-fatty acids in the new item, french fries, as well as in the existing hamburgers, researchers should obviously be interested in algorithmic effects in suppressing diversity, in addition to long-standing research on individual choice, since the effects are cumulative. An addition, not a comparison, is warranted.
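
The cumulative point is easy to make concrete. Using the paper's own figures, and treating the two filters as if they applied one after the other (my simplification, not the authors'):

    # Back-of-the-envelope compounding of the two reported effects
    for group, algo, choice in [("liberals", 0.08, 0.06),
                                ("conservatives", 0.05, 0.17)]:
        remaining = (1 - algo) * (1 - choice)  # cross-cutting items still seen
        print(f"{group}: about {100 * (1 - remaining):.0f}% filtered overall")

On that rough arithmetic, liberals lose about 14% of cross-cutting items overall and conservatives about 21% – which is exactly why addition, not comparison, is the right frame.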

Imagine this (imperfect) analogy where many people were complaining, say, that a washing machine has a faulty mechanism that sometimes destroys clothes. Now imagine a washing machine company research paper which finds this claim is correct for a small subsample of these washing machines, and quantifies that effect, but also looks into how many people throw out their clothes before they are totally worn out, a well-established, undisputed fact in the scientific literature. The correct headline would not be "people throwing out used clothes damages more dresses than the faulty washing machine mechanism." And if this subsample was drawn from one small factory, separate from all the other factories that manufacture the same brand, and produced only 4% of the devices, the headline would not refer to all washing machines, and the paper would not (should not) conclude with a claim about the average washing machine.

Also, in passing, the paper's conclusion appears misstated. Even though the comparison between personal choice and algorithmic effects is not very relevant, the result is mixed, rather than "conclusively establish[ing] that on average in the context of Facebook individual choices more than algorithms limit exposure to attitude-challenging content". For self-identified liberals, the algorithm was a stronger suppressor of diversity (8% vs. 6%), while for self-identified conservatives it was a weaker one (5% vs. 17%).

Also, as Christian Sandvig states in this post, and Nathan Jurgenson in this important post here, and David Lazer in the introduction to the piece in Science explore deeply, the Facebook researchers are not studying some neutral phenomenon that exists outside of Facebook's control. The algorithm is designed by Facebook, and is occasionally re-arranged, sometimes to the devastation of groups who cannot pay-to-play for that all-important positioning. I'm glad that Facebook is choosing to publish such findings, but I cannot help but shake my head at how the real findings are buried, and irrelevant comparisons take up the conclusion. Overall, from all aspects, this study confirms that for this slice of the politically-engaged sub-population, Facebook's algorithm is a modest suppressor of the diversity of content people see on Facebook, and that newsfeed placement is a profoundly powerful gatekeeper for click-through rates. This, not all the roundabout conversation about people's choices, is the news.

Late Addition: Contrary to some people’s impressions, I am not arguing against all uses of algorithms in making choices in what we see online. The questions that concern me are how these algorithms work, what their effects are, who controls them, and what are the values that go into the design choices. At a personal level, I’d love to have the choice to set my newsfeed algorithm to “please show me more content I’d likely disagree with” — something the researchers prove that Facebook is able to do.

Software tool allows scientists to correct climate ‘misinformation’ from major media outlets (ClimateWire)


Manon Verchot, E&E reporter
Published: Monday, April 13, 2015
After years of misinformation about climate change and climate science in the media, more than two dozen climate scientists are developing a Web browser plugin to right the wrongs in climate reporting.

The plugin, called Climate Feedback and developed by Hypothes.is, a nonprofit software developer, allows researchers to annotate articles in major media publications and correct errors made by journalists.
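
Hypothes.is exposes a public search API for annotations, which gives a feel for how a tool like this works under the hood. The sketch below fetches the public annotations attached to a given article URL; the endpoint follows Hypothes.is's published API, but the article URL is a placeholder and the response field names should be treated as an assumption:

    # Fetch public Hypothes.is annotations for one article URL (sketch).
    import json
    import urllib.parse
    import urllib.request

    article_url = "https://www.wsj.com/articles/example"   # placeholder URL
    query = urllib.parse.urlencode({"uri": article_url, "limit": 20})
    endpoint = f"https://api.hypothes.is/api/search?{query}"

    with urllib.request.urlopen(endpoint) as response:
        results = json.load(response)

    for annotation in results.get("rows", []):              # assumed field name
        print(annotation["user"], "->", annotation.get("text", "")[:80])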

“People’s views about climate science depend far too much on their politics and what their favorite politicians are saying,” said Aaron Huertas, science communications officer at the Union of Concerned Scientists. “Misinformation hurts our ability to make rational decisions. It’s up to journalists to tell the public what we really know, though it can be difficult to make time to do that, especially when covering breaking news.”

An analysis from the Union of Concerned Scientists found that levels of inaccuracy surrounding climate change vary dramatically depending on the news outlet. In 2013, 72 percent of climate-related coverage on Fox News contained misleading statements, compared to 30 percent on CNN and 8 percent on MSNBC.

Through Climate Feedback, researchers can comment on inaccurate statements and rate the credibility of articles. The group focuses on annotating articles from news outlets it considers influential — like The Wall Street Journal or The New York Times — rather than blogs.

“When you read an article it’s not just about it being wrong or right — it’s much more complicated than that,” said Emmanuel Vincent, a climate scientist at the University of California, Merced’s Center for Climate Communication, who developed the idea behind Climate Feedback. “People still get confused about the basics of climate change.”

‘It’s crucial in a democracy’

According to Vincent, one of the things journalists struggle with most is articulating the effect of climate change on extreme weather events. Though hurricanes or other major storms cannot be directly attributed to climate change, scientists expect warmer ocean temperatures and higher levels of water vapor in the atmosphere to make storms more intense. Factors like sea-level rise are expected to make hurricanes more devastating as higher sea levels allow storm surges to pass over existing infrastructure.

“Trying to connect a weather event with climate change is not the best approach,” Vincent said.

Climate Feedback hopes to clarify issues like these. The group's first task was annotating an article published in The Wall Street Journal in September 2014.

In the piece, the newspaper reported that sea-level rise experienced today is the same as sea-level rise experienced 70 years ago. But in the annotated version of the story, Vincent pointed to research from Columbia University that directly contradicted that idea.

“The rate of sea level rise has actually quadrupled since preindustrial times,” wrote Vincent in the margins.

Vincent hopes that tools like Climate Feedback can help journalists learn to better communicate climate research and can make members of the public confident that the information they are receiving is credible.

Researchers who want to contribute to Climate Feedback are required to have published at least one peer-reviewed climate-related article. Many say such tools are particularly important in the Internet era, when the sheer volume of articles and reports makes it difficult for the public to sort through them.

“There are big decisions that need to be made about climate change,” Vincent said. “It’s crucial in a democracy for people to know about these issues.”