Tag archive: Technology

Is there a limit to technological advances? (OESP)

16 May 2016 | 03:00

The idea that the stagnation of the world economy is due to the end of the “golden century” of scientific and technological innovation is becoming popular among politicians and governments. This “golden century” is usually defined as the period from 1870 to 1970, in which the foundations of the technological era we live in were laid.

Indeed, that period saw great advances in our knowledge, ranging from Darwin’s theory of evolution to the discovery of the laws of electromagnetism, which led to the large-scale production of electricity and to telecommunications, including radio and television, with the resulting benefits for the well-being of populations. Other advances, in the field of medicine, such as vaccines and antibiotics, extended average human life expectancy. The discovery and use of oil and natural gas also fall within this period.

Many argue that in no other one-century period – across the 10,000 years of human history – was so much progress achieved. That view of history, however, can be and has been questioned. In the preceding century, from 1770 to 1870, for example, there was also great progress, stemming from the development of coal-fired engines, which made it possible to build locomotives and set off the Industrial Revolution.

Even so, those nostalgic for the past believe that the “golden period” of innovation has run its course and, as a result, governments today adopt measures of a purely economic nature to revive “progress”: subsidies for specific sectors, tax cuts and social policies to reduce inequality, among others, while neglecting support for science and technology.

Some of these policies could help, but they do not touch the fundamental aspect of the problem, which is keeping alive the advance of science and technology, which solved problems in the past and may help solve problems in the future.

To analyze the question properly, it must be remembered that it is not the number of new discoveries that guarantees their relevance. The advance of technology somewhat resembles what sometimes happens with the natural selection of living beings: some species are so well adapted to the environment in which they live that they stop “evolving”. That is the case of the beetles that existed at the height of ancient Egypt, 5,000 years ago, and are still there today; or of “fossil” species of fish that have evolved little over millions of years.

Other examples are products of modern technology, such as the magnificent DC-3 airplanes, produced more than 50 years ago and still accounting for an important share of world air traffic.

Even in more sophisticated areas, such as information technology, this seems to be happening. The basis of progress in this field was the “miniaturization” of the electronic chips that carry the transistors. In 1971 the chips produced by Intel (the leading company in the field) had 2,300 transistors on a 12-square-millimeter die. Today’s chips are only slightly larger but contain 5 billion transistors. That is what made possible personal computers, mobile phones and countless other products. And it is for this reason that fixed telephony is being abandoned and communication via Skype is practically free and has revolutionized the world of communications.
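
As a rough back-of-the-envelope check of the growth described above (a minimal sketch in Python; treating “today” as 2016 is my assumption, since the column gives no exact year), the cited figures imply a doubling of transistor counts roughly every two years:

```python
import math

# Transistor counts cited in the column (assumption: "today" ~ 2016).
t0_year, t0_count = 1971, 2_300
t1_year, t1_count = 2016, 5_000_000_000

years = t1_year - t0_year
growth = t1_count / t0_count                 # total growth factor (~2.2 million x)
doublings = math.log2(growth)                # number of doublings in that span
doubling_time = years / doublings            # years per doubling

print(f"growth factor: {growth:,.0f}x over {years} years")
print(f"doublings: {doublings:.1f}, i.e. one doubling every {doubling_time:.1f} years")
# Roughly one doubling every ~2.1 years, close to the classic Moore's law cadence.
```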

There are now indications that this miniaturization has reached its limits, which causes a certain gloom among the “high priests” of the sector. That is a mistaken view. The level of success has been such that further progress in that direction is genuinely unnecessary, which is what happened to countless living beings in the past.

What seems to be the solution to the problems of long-term economic growth is the advance of technology in other areas that have not received the attention they need: new materials, artificial intelligence, industrial robots, genetic engineering, disease prevention and, above all, understanding the human brain, the most sophisticated product of the evolution of life on Earth.

Understanding how a combination of atoms and molecules can give rise to an organ as creative as the brain, capable of possessing consciousness and of the creativity to compose symphonies like Beethoven’s – and at the same time of promoting the extermination of millions of human beings – will probably be the most extraordinary achievement Homo sapiens can reach.

Advances in these areas could create a wave of innovation and material progress greater in quantity and quality than what was produced in the “golden century”. What is more, we face today a new, global problem: environmental degradation, resulting in part from the very success of the technological advances of the 20th century. The task of reducing emissions of the gases that cause global warming (the result of burning fossil fuels) will, by itself, be a herculean one.

Before that, and on a much more pedestrian level, the advances being made in the efficiency with which natural resources are used are extraordinary and have not received the credit and recognition they deserve.

To give just one example, in 1950 Americans spent, on average, 30% of their income on food. By 2013 that percentage had fallen to 10%. Spending on energy has also fallen, thanks to improvements in the efficiency of automobiles and of other uses such as lighting and heating, which, incidentally, explains why the price of a barrel of oil fell from US$150 to less than US$30. There is simply too much oil in the world, just as there is idle capacity in steel and cement.

An example of a country following this path is Japan, whose economy is not growing much, but whose population enjoys a high standard of living and continues to benefit gradually from the advances of modern technology.

*José Goldemberg is professor emeritus at the Universidade de São Paulo (USP) and president of the Fundação de Amparo à Pesquisa do Estado de São Paulo (Fapesp)

If The UAE Builds A Mountain Will It Actually Bring More Rain? (Vocativ)

You’re not the only one who thinks constructing a rain-inducing mountain in the desert is a bonkers idea

May 03, 2016 at 6:22 PM ET

Photo Illustration: R. A. Di ISO

The United Arab Emirates wants to build a mountain so the nation can control the weather—but some experts are skeptical about the effectiveness of this project, which may sound more like a James Bond villain’s diabolical plan than a solution to drought.

The actual construction of a mountain isn’t beyond the engineering prowess of the UAE. The small country on the Arabian Peninsula has pulled off grandiose environmental projects before, like the artificial Palm Islands off the coast of Dubai and an indoor ski hill in the Mall of the Emirates. But the scientific purpose of the mountain is questionable.

The UAE’s National Center for Meteorology and Seismology (NCMS) is currently collaborating with the U.S.-based University Corporation for Atmospheric Research (UCAR) for the first planning phase of the ambitious project, according to Arabian Business. The UAE government gave the two groups $400,000 in funding to determine whether they can bring more rain to the region by constructing a mountain that will foster better cloud-seeding.

Last week the NCMS revealed that the UAE spent $588,000 on cloud-seeding in 2015. Throughout the year, 186 flights dispersed potassium chloride, sodium chloride and magnesium into clouds—a process that can trigger precipitation. Now, the UAE is hoping they can enhance the chemical process by forcing air up around the artificial mountain, creating clouds that can be seeded more easily and efficiently.

“What we are looking at is basically evaluating the effects on weather through the type of mountain, how high it should be and how the slopes should be,” NCAR lead researcher Roelof Bruintjes told Arabian Business. “We will have a report of the first phase this summer as an initial step.”

But some scientists don’t expect NCAR’s research will lead to a rain-inducing alp. “I really doubt that it would work,” Raymond Pierrehumbert, a professor of physics at the University of Oxford, told Vocativ. “You’d need to build a long ridge, not just a cone, otherwise the air would just go around. Even if you could do that, mountains cause local enhanced rain on the upslope side, but not much persistent cloud downwind, and if you need cloud seeding to get even the upslope rain, it’s really unlikely to work as there is very little evidence that cloud seeding produces much rainfall.”

Pierrehumbert, who specializes in geophysics and climate change, believes the regional environment would make the project especially difficult. “UAE is a desert because of the wind patterns arising from global atmospheric circulations, and any mountain they build is not going to alter those,” he said. 

Pierrehumbert concedes that NCAR is a respectable organization that will be able to use the “small amount of money to research the problem.” He thinks some good scientific study will come of the effort—perhaps helping to determine why a hot, humid area bordered by the ocean receives so little rainfall.

But he believes the minimal sum should go into another project: “They’d be way better off putting the money into solar-powered desalination plants.”

If the project doesn’t work out, at least wealthy Emiratis have a 125,000-square-foot indoor snow park to look forward to in 2018.

Hito Steyerl | Politics of Post-Representation (Dis Blog)

[Accessed Nov 23, 2015]

In conversation with Marvin Jordan

From the militarization of social media to the corporatization of the art world, Hito Steyerl’s writings represent some of the most influential bodies of work in contemporary cultural criticism today. As a documentary filmmaker, she has created multiple works addressing the widespread proliferation of images in contemporary media, deepening her engagement with the technological conditions of globalization. Steyerl’s work has been exhibited in numerous solo and group exhibitions, including documenta 12, the 2010 Taipei Biennial, and the 7th Shanghai Biennale. She currently teaches New Media Art at the Berlin University of the Arts.

Hito Steyerl, How Not To Be Seen: A Fucking Didactic Educational .MOV File (2013)

Marvin Jordan I’d like to open our dialogue by acknowledging the central theme for which your work is well known — broadly speaking, the socio-technological conditions of visual culture — and move toward specific concepts that underlie your research (representation, identification, the relationship between art and capital, etc). In your essay titled “Is a Museum a Factory?” you describe a kind of ‘political economy’ of seeing that is structured in contemporary art spaces, and you emphasize that a social imbalance — an exploitation of affective labor — takes place between the projection of cinematic art and its audience. This analysis leads you to coin the term “post-representational” in service of experimenting with new modes of politics and aesthetics. What are the shortcomings of thinking in “representational” terms today, and what can we hope to gain from transitioning to a “post-representational” paradigm of art practices, if we haven’t arrived there already?

Hito Steyerl Let me give you one example. A while ago I met an extremely interesting developer in Holland. He was working on smartphone camera technology. A representational mode of thinking about photography is: there is something out there and it will be represented by means of optical technology, ideally via an indexical link. But the technology of the phone camera is quite different. As the lenses are tiny and basically crap, about half of the data captured by the sensor is noise. The trick is to create an algorithm that cleans the picture of the noise, or rather defines the picture from within the noise. But how does the camera know how to do this? Very simple. It scans all the other pictures stored on the phone or on your social media networks and sifts through your contacts. It looks through the pictures you have already made, or those that are networked to you, and tries to match faces and shapes. In short: it creates the picture based on earlier pictures, on your/its memory. It not only knows what you saw but also what you might like to see based on your previous choices. In other words, it speculates on your preferences and offers an interpretation of data based on affinities to other data. The link to the thing in front of the lens is still there, but there are also links to past pictures that help create the picture. You don’t really photograph the present, as the past is woven into it.
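
To make the mechanism Steyerl describes concrete, here is a toy sketch (Python with NumPy; purely illustrative, not any vendor’s actual pipeline) of a noisy capture being “defined from within noise” by leaning on an archive of earlier pictures:

```python
import numpy as np

# Toy prior-based denoiser: the "picture" is defined from within noise by
# leaning on earlier pictures, roughly in the spirit Steyerl describes.
rng = np.random.default_rng(0)

past_photos = rng.random((20, 64, 64))        # stand-in archive: 20 old 64x64 grayscale shots
prior = past_photos.mean(axis=0)              # what the device "expects" to see

scene = rng.random((64, 64))                  # the thing actually in front of the lens
noisy_capture = scene + rng.normal(0, 0.5, scene.shape)  # tiny sensor, lots of noise

# Trust the capture where it agrees with the prior, fall back to memory where
# it deviates wildly: the output is a bet mixing what was seen with what was
# seen before.
deviation = np.abs(noisy_capture - prior)
trust = np.exp(-deviation)                    # low trust in pixels far from the prior
picture = trust * noisy_capture + (1 - trust) * prior

print("mean absolute error, raw capture :", np.abs(noisy_capture - scene).mean())
print("mean absolute error, 'remembered':", np.abs(picture - scene).mean())
```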

The result might be a picture that never existed in reality, but that the phone thinks you might like to see. It is a bet, a gamble, some combination between repeating those things you have already seen and coming up with new versions of these, a mixture of conservatism and fabulation. The paradigm of representation stands to the present condition as traditional lens-based photography does to an algorithmic, networked photography that works with probabilities and bets on inertia. Consequently, it makes seeing unforeseen things more difficult. The noise will increase and random interpretation too. We might think that the phone sees what we want, but actually we will see what the phone thinks it knows about us. A complicated relationship — like a very neurotic marriage. I haven’t even mentioned external interference into what your phone is recording. All sorts of applications are able to remotely switch your camera on or off: companies, governments, the military. It could be disabled for whole regions. One could, for example, disable recording functions close to military installations, or conversely, live broadcast whatever you are up to. Similarly, the phone might be programmed to auto-pixellate secret or sexual content. It might be fitted with a so-called dick algorithm to screen out NSFW content or auto-modify pubic hair, stretch or omit bodies, exchange or collage context, or insert AR advertisements, pop-up windows or live feeds. Now let’s apply this shift to the question of representative politics or democracy. The representational paradigm assumes that you vote for someone who will represent you. Thus the interests of the population will be proportionally represented. But current democracies work rather like smartphone photography, algorithmically clearing the noise and boosting some data over other data. It is a system in which the unforeseen has a hard time happening because it is not yet in the database. It is about what to define as noise — something Jacques Rancière has defined as the crucial act in separating political subjects from domestic slaves, women and workers. Now this act is hardwired into technology, but instead of the traditional division of people and rabble, the results are post-representative militias, brands, customer loyalty schemes, open source insurgents and tumblrs.

Additionally, Rancière’s democratic solution (there is no noise, it is all speech; everyone has to be seen and heard) has been realized online as some sort of meta-noise in which everyone is monologuing incessantly and no one is listening. Aesthetically, one might describe this condition as opacity in broad daylight: you could see anything, but what exactly and why is quite unclear. There are a lot of brightly lit glossy surfaces, yet they don’t reveal anything but themselves as surface. Whatever there is — it’s all there to see but in the form of an incomprehensible, Kafkaesque glossiness, written in extraterrestrial code, perhaps subject to secret legislation. It certainly expresses something: a format, a protocol or executive order, but effectively obfuscates its meaning. This is a far cry from a situation in which something — an image, a person, a notion — stood in for another and presumably acted in its interest. Today it stands in, but its relation to whatever it stands in for is cryptic, shiny, unstable; the link flickers on and off. Art could relish this shiny instability — it does already. It could also be less baffled and mesmerised and see it for what the gloss mostly is about: the not-so-discreet, consumer-friendly veneer of new and old oligarchies and plutotechnocracies.

MJ In your insightful essay, “The Spam of the Earth: Withdrawal from Representation”, you extend your critique of representation by focusing on an irreducible excess at the core of image spam, a residue of unattainability, or the “dark matter” of which it’s composed. It seems as though an unintelligible horizon circumscribes image spam by image spam itself, a force of un-identifiability, which you detect by saying that it is “an accurate portrayal of what humanity is actually not… a negative image.” Do you think this vacuous core of image spam — a distinctly negative property — serves as an adequate ground for a general theory of representation today? How do you see today’s visual culture affecting people’s behavior toward identification with images?

HS Think of Twitter bots for example. Bots are entities supposed to be mistaken for humans on social media websites. But they have become formidable political armies too — in brilliant examples of how representative politics have mutated nowadays. Bot armies distort discussion on Twitter hashtags by spamming them with advertisements, tourist pictures or whatever. Bot armies have been active in Mexico, Syria, Russia and Turkey, where most political parties, above all the ruling AKP, are said to control 18,000 fake Twitter accounts using photos of Robbie Williams, Megan Fox and gay porn stars. A recent article revealed that, “in order to appear authentic, the accounts don’t just tweet out AKP hashtags; they also quote philosophers such as Thomas Hobbes and movies like PS: I Love You.” It is ever more difficult to identify bots – partly because humans are being paid to enter CAPTCHAs on their behalf (1,000 CAPTCHAs earn 50 US cents). So what is a bot army? And how and whom does it represent, if anyone? Who is an AKP bot that wears the face of a gay porn star and quotes Hobbes’ Leviathan — extolling the need to transform the rule of militias into statehood in order to escape the war of everyone against everyone else? Bot armies are a contemporary vox pop, the voice of the people, the voice of what the people are today. They can be a Facebook militia, your low-cost personalized mob, your digital mercenaries. Imagine your photo being used for one of these bots. It is the moment when your picture becomes quite autonomous, active, even militant. Bot armies are celebrity militias, wildly jump-cutting between glamour, sectarianism, porn, corruption and post-Baath Party ideology. Think of the meaning of the phrase “affirmative action” after Twitter bots and like farms! What does it represent?

MJ You have provided a compelling account of the depersonalization of the status of the image: a new process of de-identification that favors materialist participation in the circulation of images today.  Within the contemporary technological landscape, you write that “if identification is to go anywhere, it has to be with this material aspect of the image, with the image as thing, not as representation. And then it perhaps ceases to be identification, and instead becomes participation.” How does this shift from personal identification to material circulation — that is, to cybernetic participation — affect your notion of representation? If an image is merely “a thing like you and me,” does this amount to saying that identity is no more, no less than a .jpeg file?

HS Social media makes the shift from representation to participation very clear: people participate in the launch and life span of images, and indeed their life span, spread and potential are defined by participation. Think of the image not as a surface but as all the tiny light impulses running through fiber at any one point in time. Some images will look like deep-sea swarms, some like cities from space, some are utter darkness. We could see the energy imparted to images by capital or quantified participation very literally; we could probably measure its popular energy in lumens. By partaking in circulation, people participate in this energy and create it.
What this means is a different question though — by now this type of circulation seems a little like the petting zoo of plutotechnocracies. It’s where kids are allowed to make a mess — but just a little one — and if anyone organizes serious dissent, the seemingly anarchic sphere of circulation quickly reveals itself as a pedantic police apparatus aggregating relational metadata. It turns out to be an almost Althusserian ISA (Internet State Apparatus), hardwired behind a surface of ‘kawaii’ apps and online malls. As to identity, Heartbleed and more deliberate governmental hacking exploits certainly showed that identity goes far beyond a relationship with images: it entails a set of private keys, passwords, etc., that can be expropriated and detourned. More generally, identity is the name of the battlefield over your code — be it genetic, informational, pictorial. It is also an option that might provide protection if you fall beyond any sort of modernist infrastructure. It might offer sustenance, food banks, medical service, where common services either fail or don’t exist. If the Hezbollah paradigm is so successful it is because it provides an infrastructure to go with the Twitter handle, and as long as there is no alternative many people need this kind of container for material survival. Huge religious and quasi-religious structures have sprung up in recent decades to take up the tasks abandoned by states, providing protection and survival in a reversal of the move described in Leviathan. Identity happens when the Leviathan falls apart and nothing is left of the commons but a set of policed relational metadata, Emoji and hijacked hashtags. This is the reason why the gay AKP pornstar bots are desperately quoting Hobbes’ book: they are already sick of the war of Robbie Williams (Israel Defense Forces) against Robbie Williams (Electronic Syrian Army) against Robbie Williams (PRI/AAP) and are hoping for just any entity to organize day care and affordable dentistry.

But beyond all the portentous vocabulary relating to identity, I believe that a widespread standard of the contemporary condition is exhaustion. The interesting thing about Heartbleed — to come back to one of the current threats to identity (as privacy) — is that it is produced by exhaustion and not effort. It is a bug introduced by open source developers not being paid for something that is used by software giants worldwide. Nor were there apparently enough resources to audit the code in the big corporations that just copy-pasted it into their applications and passed on the bug, fully relying on free volunteer labour to produce their proprietary products. Heartbleed records exhaustion by trying to stay true to an ethics of commonality and exchange that has long since been exploited and privatized. So, that exhaustion found its way back into systems. For many people and for many reasons — and on many levels — identity is just that: shared exhaustion.

MJ This is an opportune moment to address the labor conditions of social media practice in the context of the art space. You write that “an art space is a factory, which is simultaneously a supermarket — a casino and a place of worship whose reproductive work is performed by cleaning ladies and cellphone-video bloggers alike.” Incidentally, DIS launched a website called ArtSelfie just over a year ago, which encourages social media users to participate quite literally in “cellphone-video blogging” by aggregating their Instagram #artselfies in a separately integrated web archive. Given our uncanny coincidence, how can we grasp the relationship between social media blogging and the possibility of participatory co-curating on equal terms? Is there an irreconcilable antagonism between exploited affective labor and a genuinely networked art practice? Or can we move beyond — to use a phrase of yours — a museum crowd “struggling between passivity and overstimulation?”

HS I wrote this in relation to something my friend Carles Guerra noticed back in early 2009: big museums like the Tate were actively expanding their online marketing tools, encouraging people to basically build the museum experience for them by sharing, etc. It was clear to us that audience participation on this level was a tool of extraction and outsourcing, following a logic that has turned online consumers into involuntary data providers overall. As in the previous example – Heartbleed – the paradigm of participation and generous contribution towards a commons tilts quickly into an asymmetrical relation, in which only a minority of participants benefits from everyone’s input, the digital 1 percent reaping the attention value generated by the 99 percent rest.

Brian Kuan Wood put it very beautifully recently: Love is debt, an economy of love and sharing is what you end up with when left to your own devices. However, an economy based on love ends up being an economy of exhaustion – after all, love is utterly exhausting — of deregulation, extraction and lawlessness. And I don’t even want to mention likes, notes and shares, which are the child-friendly, sanitized versions of affect as currency.
All is fair in love and war. It doesn’t mean that love isn’t true or passionate, but just that love is usually uneven, utterly unfair and asymmetric, just as capital tends to be distributed nowadays. It would be great to have a little bit less love, a little more infrastructure.

MJ Long before Edward Snowden’s NSA revelations reshaped our discussions of mass surveillance, you wrote that “social media and cell-phone cameras have created a zone of mutual mass-surveillance, which adds to the ubiquitous urban networks of control,” underscoring the voluntary, localized, and bottom-up mutuality intrinsic to contemporary systems of control. You go on to say that “hegemony is increasingly internalized, along with the pressure to conform and perform, as is the pressure to represent and be represented.” But now mass government surveillance is common knowledge on a global scale — ‘externalized’, if you will — while social media representation practices remain as revealing as they were before. Do these recent developments, as well as the lack of change in social media behavior, contradict or reinforce your previous statements? In other words, how do you react to the irony that, in the same year as the unprecedented NSA revelations, “selfie” was deemed word of the year by Oxford Dictionaries?

HS Haha — good question!

Essentially I think it makes sense to compare our moment with the end of the twenties in the Soviet Union, when euphoria about electrification, NEP (New Economic Policy), and montage gives way to bureaucracy, secret directives and paranoia. Today this corresponds to the sheer exhilaration of having a World Wide Web being replaced by the drudgery of corporate apps, waterboarding, and “normcore”. I am not trying to say that Stalinism might happen again – this would be plain silly – but trying to acknowledge emerging authoritarian paradigms, some forms of algorithmic consensual governance techniques developed within neoliberal authoritarianism, heavily relying on conformism, “family” values and positive feedback, and backed up by all-out torture and secret legislation if necessary. On the other hand things are also falling apart into uncontrollable love. One also has to remember that people did really love Stalin. People love algorithmic governance too, if it comes with watching unlimited amounts of Game of Thrones. But anyone slightly interested in digital politics and technology is by now acquiring at least basic skills in disappearance and subterfuge.

Hito Steyerl, How Not To Be Seen: A Fucking Didactic Educational .MOV File (2013)

MJ In “Politics of Art: Contemporary Art and the Transition to Post-Democracy,” you point out that the contemporary art industry “sustains itself on the time and energy of unpaid interns and self-exploiting actors on pretty much every level and in almost every function,” while maintaining that “we have to face up to the fact that there is no automatically available road to resistance and organization for artistic labor.” Bourdieu theorized qualitatively different dynamics in the composition of cultural capital vs. that of economic capital, arguing that the former is constituted by the struggle for distinction, whose value is irreducible to financial compensation. This basically translates to: everyone wants a piece of the art-historical pie, and is willing to go through economic self-humiliation in the process. If striving for distinction is antithetical to solidarity, do you see a possibility of reconciling it with collective political empowerment on behalf of those economically exploited by the contemporary art industry?

HS In Art and Money, William Goetzmann, Luc Renneboog, and Christophe Spaenjers conclude that income inequality correlates with art prices. The bigger the difference between top income and no income, the higher the prices paid for some art works. This means that the art market will benefit not only if fewer people have more money but also if more people have no money. This also means that increasing the number of zero incomes is likely, especially under current circumstances, to raise the price of some art works. The poorer many people are (and the richer a few), the better the art market does; the more unpaid interns, the more expensive the art. But the art market itself may be following a similar pattern of inequality, basically creating a divide between the 0.01 percent (if not less) of artworks that are able to concentrate the bulk of sales and the 99.99 percent rest. There is no short-term solution for this feedback loop, except of course not to accept this situation, individually or preferably collectively, on all levels of the industry, including from the point of view of employers. There is a long-term benefit to this, not only to interns and artists but to everyone. Cultural industries that are too exclusively profit-oriented lose their appeal. If you want exciting things to happen you need a bunch of young and inspiring people creating a dynamic by doing risky, messy and confusing things. If they cannot afford to do this, they will eventually do it somewhere else. There needs to be space and resources for experimentation, even failure, otherwise things go stale. If these people move on to more accommodating sectors the art sector will mentally shut down even more and become somewhat North Korean in its outlook — just like contemporary blockbuster CGI industries. Let me explain: there is a managerial sleekness and awe-inspiring military perfection to every pixel in these productions, as in North Korean pixel parades, where thousands of soldiers wave color posters to form ever new pixel patterns. The result is quite something, but this something is neither inspiring nor exciting. If the art world keeps going down the road of raising art prices via the starvation of its workers – and there is no reason to believe it will not continue to do so – it will become the Disney version of Kim Jong Un’s pixel parades. 12K starving interns waving pixels for giant CGI renderings of Marina Abramovic! Imagine the price it will fetch!

Hunter chimpanzees give clues about the first humans (El País)

Primates that use spears may provide evidence about the origin of human societies

 12 MAY 2015 – 18:14 BRT

An old chimpanzee drinks water from a pond in Fongoli, Senegal. / FRANS LANTING

The hot Senegalese savanna is home to the only group of chimpanzees known to use spears to hunt the animals they eat. The occasional group of chimpanzees has been seen carrying tools to capture small mammals, but these, in the Fongoli community, hunt regularly with sharpened branches. This way of obtaining food is an established cultural practice for this group of chimpanzees.

Besides this technological innovation, Fongoli also shows a social novelty that sets it apart from the other chimpanzees studied in Africa: there is more tolerance, greater parity between the sexes in hunting, and the most powerfully built males do not so often ride roughshod over the interests of the others by sheer force. For the researchers who have been observing this behavior for a decade, these practices could also offer clues about the evolution of human ancestors.

“They are the only known non-human population that systematically hunts vertebrates with tools, which makes them an important source for hypotheses about the behavior of the first hominids, by analogy,” explain the authors of the study in which they set out their conclusions after ten years of observing the Fongoli hunts. This group, led by anthropologist Jill Pruetz, considers these animals a good example of what the origin of the first primates to stand upright on two legs may have looked like.

The strongest males in this community respect the females during the hunt

In Fongoli society the females carry out exactly half of the spear hunts. Thanks to the technological innovation of turning branches into small spears, which they use to help them hunt galagos – small primates very common in this environment – the females gain a degree of food independence. In the Gombe community, which Jane Goodall studied for many years, the males account for about 90% of all prey; in Fongoli, only 70%. Moreover, in other chimpanzee groups the strongest males steal one in four prey animals caught by females (without tools); in Fongoli, only 5%.

A female chimpanzee picks up and examines a branch she will use to capture her prey. / J. PRUETZ

“In Fongoli, when a female or a low-ranking male captures prey, they are allowed to keep it and eat it. Elsewhere, the alpha male or another dominant male usually takes the prey away. So females gain little benefit from hunting if another chimpanzee takes their prey,” says Pruetz. In other words, the respect the Fongoli males show for prey obtained by their female companions would serve as an incentive for the females to decide to hunt more often than those of other communities. Over these years of observation, practically every chimpanzee in the group – about 30 individuals – has hunted with tools.

The dry climate means that the most accessible prey in Fongoli are the small galagos, not the red colobus monkeys – the favorites of chimpanzees elsewhere in Africa – which are larger and hard to capture for any but the fastest and strongest males. Nearly all of the spear-hunting episodes observed (some three hundred) took place in the wet months, when other food sources are scarce.

The Senegalese savanna, with its few trees, is an ecosystem that bears an important resemblance to the setting in which human ancestors evolved. Unlike other African communities, the Fongoli chimpanzees spend most of their time on the ground rather than in the branches. Fongoli’s exceptional form of hunting leads the researchers to suggest in their study that the first hominids probably intensified their use of technological tools to overcome environmental pressures, and that they were even “sophisticated enough to refine hunting tools”.

“We know that the environment has an important impact on chimpanzee behavior,” says primatologist Joseph Call, of the Max Planck Institute. “The distribution of the trees determines the type of hunting: where the vegetation is denser, the hunt is more cooperative than in other settings where it is easier to follow the prey and they are more individualistic,” Call notes.

However, Call doubts that these Fongoli practices can properly be considered spear hunting, since to him they are more reminiscent of capturing ants and termites with sticks, something more common among primates. “The definition of hunting that the researchers establish in their study is not very different from what chimpanzees do when they insert a twig into a hole to get insects to eat,” says Call. The Fongoli chimpanzees poke the galagos with sticks when the animals hide in tree cavities, to force them out, and once they are out, bite their heads off. “It is something in between one thing and the other,” he argues.

These anthropologists believe the finding suggests that the first upright hominids also used spears

Pruetz responds to this kind of criticism by saying that it is a strategy to keep the galago from biting them or escaping, a situation very different from inserting a branch into a hole to capture bugs. If it were the same thing, Pruetz and her colleagues argue, the question is “why chimpanzees in other groups do not hunt more”.

Beyond this particular case, not even the debate over whether chimpanzees should be considered models of what human ancestors were like is settled. “We have to bear in mind that the bonobo does none of this and is as close to us as the chimpanzee,” Call maintains. “We choose the chimpanzee because it suits us for pointing out certain shared influences. We have to be very careful not to study the species according to what we want to find,” he proposes.

On Reverse Engineering (Anthropology and Algorithms)

Nick Seaver

Looking for the cultural work of engineers

The Atlantic welcomed 2014 with a major feature on web behemoth Netflix. If you didn’t know, Netflix has developed a system for tagging movies and for assembling those tags into phrases that look like hyper-specific genre names: Visually-striking Foreign Nostalgic Dramas, Critically-acclaimed Emotional Underdog Movies, Romantic Chinese Crime Movies, and so on. The sometimes absurd specificity of these names (or “altgenres,” as Netflix calls them) is one of the peculiar pleasures of the contemporary web, recalling the early days of website directories and Usenet newsgroups, when it seemed like the internet would be a grand hotel, providing a room for any conceivable niche.

Netflix’s weird genres piqued the interest of Atlantic editor Alexis Madrigal, who set about scraping the whole list. Working from the US in late 2013, his scraper bot turned up a startling 76,897 genre names — clearly the emanations of some unseen algorithmic force. How were they produced? What was their generative logic? What made them so good—plausible, specific, with some inexpressible touch of the human? Pursuing these mysteries brought Madrigal to the world of corpus analysis software and eventually to Netflix’s Silicon Valley offices.
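
The scraping side of this is conceptually simple. A minimal sketch of the kind of bot Madrigal describes might look like the following (Python with the requests library; the URL pattern, ID range, and title-tag parsing are placeholders of my own, not the endpoints his scraper actually hit):

```python
import re
import time
import requests

# Hypothetical genre-page URL pattern; the real endpoints and ID range used by
# Madrigal's scraper are not public, so this is a placeholder.
GENRE_URL = "https://example.com/browse/genre/{genre_id}"

def scrape_altgenres(id_range):
    """Walk a range of numeric genre IDs and collect whatever names come back."""
    names = {}
    for genre_id in id_range:
        resp = requests.get(GENRE_URL.format(genre_id=genre_id), timeout=10)
        if resp.status_code != 200:
            continue
        match = re.search(r"<title>(.*?)</title>", resp.text)
        if match:
            names[genre_id] = match.group(1).strip()
        time.sleep(1)  # be polite; a real crawl of ~77,000 IDs takes a while
    return names

# Example: probe a tiny slice of the ID space.
# print(scrape_altgenres(range(1, 50)))
```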

The resulting article is an exemplary piece of contemporary web journalism — a collaboratively produced, tech-savvy 5,000-word “long read” that is both an exposé of one of the largest internet companies (by volume) and a reflection on what it is like to be human with machines. It is supported by a very entertaining altgenre-generating widget, built by professor and software carpenter Ian Bogost and illustrated by Twitter mystery darth. Madrigal pieces the story together with his signature curiosity and enthusiasm, and the result feels so now that future corpus analysts will be able to use it as a model to identify texts written in the United States from 2013–14. You really should read it.

A Māori eel trap. The design and construction of traps (or filters) like this are classic topics of interest for anthropologists of technology. cc-by-sa-3.0

As a cultural anthropologist in the middle of a long-term research project on algorithmic filtering systems, I am very interested in how people think about companies like Netflix, which take engineering practices and apply them to cultural materials. In the popular imagination, these do not go well together: engineering is about universalizable things like effectiveness, rationality, and algorithms, while culture is about subjective and particular things, like taste, creativity, and artistic expression. Technology and culture, we suppose, make an uneasy mix. When Felix Salmon, in his response to Madrigal’s feature, complains about “the systematization of the ineffable,” he is drawing on this common sense: engineers who try to wrangle with culture inevitably botch it up.

Yet, in spite of their reputations, we always seem to find technology and culture intertwined. The culturally-oriented engineering of companies like Netflix is a quite explicit case, but there are many others. Movies, for example, are a cultural form dependent on a complicated system of technical devices — cameras, editing equipment, distribution systems, and so on. Technologies that seem strictly practical — like the Māori eel trap pictured above—are influenced by ideas about effectiveness, desired outcomes, and interpretations of the natural world, all of which vary cross-culturally. We may talk about technology and culture as though they were independent domains, but in practice, they never stay where they belong. Technology’s straightforwardness and culture’s contingency bleed into each other.

This can make it hard to talk about what happens when engineers take on cultural objects. We might suppose that it is a kind of invasion: The rationalizers and quantifiers are over the ridge! They’re coming for our sensitive expressions of the human condition! But if technology and culture are already mixed up with each other, then this doesn’t make much sense. Aren’t the rationalizers expressing their own cultural ideas? Aren’t our sensitive expressions dependent on our tools? In the present moment, as companies like Netflix proliferate, stories trying to make sense of the relationship between culture and technology also proliferate. In my own research, I examine these stories, as told by people from a variety of positions relative to the technology in question. There are many such stories, and they can have far-reaching consequences for how technical systems are designed, built, evaluated, and understood.


The story Madrigal tells in The Atlantic is framed in terms of “reverse engineering.” The engineers of Netflix have not invaded cultural turf — they’ve reverse engineered it and figured out how it works. To report on this reverse engineering, Madrigal has done some of his own, trying to figure out the organizing principles behind the altgenre system. So, we have two uses of reverse engineering here: first, it is a way to describe what engineers do to cultural stuff; second, it is a way to figure out what engineers do.

So what does “reverse engineering” mean? What kind of things can be reverse engineered? What assumptions does reverse engineering make about its objects? Like any frame, reverse engineering constrains as well as enables the presentation of certain stories. I want to suggest here that, while reverse engineering might be a useful strategy for figuring out how an existing technology works, it is less useful for telling us how it came to work that way. Because reverse engineering starts from a finished technical object, it misses the accidents that happened along the way — the abandoned paths, the unusual stories behind features that made it to release, moments of interpretation, arbitrary choice, and failure. Decisions that seemed rather uncertain and subjective as they were being made come to appear necessary in retrospect. Engineering looks a lot different in reverse.

This is especially evident in the case of explicitly cultural technologies. Where “technology” brings to mind optimization, functionality, and necessity, “culture” seems to represent the opposite: variety, interpretation, and arbitrariness. Because it works from a narrowly technical view of what engineering entails, reverse engineering has a hard time telling us about the cultural work of engineers. It is telling that the word “culture” never appears in this piece about the contemporary state of the culture industry.

Inspired by Madrigal’s article, here are some notes on the consequences of reverse engineering for how we think about the cultural lives of engineers. As culture and technology continue to escape their designated places and intertwine, we need ways to talk about them that don’t assume they can be cleanly separated.


Ben Affleck, fact extractor.

There is a terrible movie about reverse engineering, based on a short story by Philip K. Dick. It is called Paycheck, stars Ben Affleck, and is not currently available for streaming on Netflix. In it, Affleck plays a professional reverse engineer (the “best in the business”), who is hired by companies to figure out the secrets of their competitors. After doing this, his memory of the experience is wiped and in return, he is compensated very well. Affleck is a sort of intellectual property conduit: he extracts secrets from devices, and having moved those secrets from one company to another, they are then extracted from him. As you might expect, things go wrong: Affleck wakes up one day to find that he has forfeited his payment in exchange for an envelope of apparently worthless trinkets and, even worse, his erstwhile employer now wants to kill him. The trinkets turn out to be important in unexpected ways as Affleck tries to recover the facts that have been stricken from his memory. The movie’s tagline is “Remember the Future”—you get the idea.

Paycheck illustrates a very popular way of thinking about engineering knowledge. To know about something is to know the facts about how it works. These facts are like physical objects — they can be hidden (inside of technologies, corporations, envelopes, or brains), and they can be retrieved and moved around. In this way of thinking about knowledge, facts that we don’t yet know are typically hidden on the other side of some barrier. To know through reverse engineering is to know by trying to pull those pre-existing facts out.

This is why reverse engineering is sometimes used as a metaphor in the sciences to talk about revealing the secrets of Nature. When biologists “reverse engineer” a cell, for example, they are trying to uncover its hidden functional principles. This kind of work is often described as “pulling back the curtain” on nature (or, in older times, as undressing a sexualized, female Nature — the kind of thing we in academia like to call “problematic”). Nature, if she were a person, holds the secrets her reverse engineers want.

In the more conventional sense of the term, reverse engineering is concerned with uncovering secrets held by engineers. Unlike its use in the natural sciences, here reverse engineering presupposes that someone already knows what we want to find out. Accessing this kind of information is often described as “pulling back the curtain” on a company. (This is likely the unfortunate naming logic behind Kimono, a new service for scraping websites and automatically generating APIs to access the scraped data.) Reverse engineering is not concerned with producing “new” knowledge, but with extracting facts from one place and relocating them to another.

Reverse engineering (and I guess this is obvious) is concerned with finished technologies, so it presumes that there is a straightforward fact of the matter to be worked out. Something happened to Ben Affleck before his memory was wiped, and eventually he will figure it out. This is not Rashomon, which suggests there might be multiple interpretations of the same event (although that isn’t available for streaming either). The problem is that this narrow scope doesn’t capture everything we might care about: why this technology and not another one? If a technology is constantly changing, like the algorithms and data structures under the hood at Netflix, then why is it changing as it does? Reverse engineering, at best, can only tell you the what, not the why or the how. But it even has some trouble with the what.


“Fantastic powers at his command / And I’m sure that he will understand / He’s the Wiz and he lives in Oz”

Netflix, like most companies today, is surrounded by a curtain of non-disclosure agreements and intellectual property protections. This curtain animates Madrigal’s piece, hiding the secrets that his reverse engineering is aimed at. For people inside the curtain, nothing in his article is news. What is newsworthy, Madrigal writes, is that “no one outside the company has ever assembled this data before.” The existence of the curtain shapes what we imagine knowledge about Netflix to be: something possessed by people on the inside and lacked by people on the outside.

So, when Madrigal’s reverse engineering runs out of steam, the climax of the story comes and the curtain is pulled back to reveal the “Wizard of Oz, the man who made the machine”: Netflix’s VP of Product Innovation Todd Yellin. Here is the guy who holds the secrets behind the altgenres, the guy with the knowledge about how Netflix has tried to bridge the world of engineering and the world of cultural production. According to the logic of reverse engineering, Yellin should be able to tell us everything we want to know.

From Yellin, Madrigal learns about the extensiveness of the tagging that happens behind the curtain. He learns some things that he can’t share publicly, and he learns of the existence of even more secrets — the contents of the training manual which dictate how movies are to be entered into the system. But when it comes to how that massive data and intelligence infrastructure was put together, he learns this:

“It’s a real combination: machine-learned, algorithms, algorithmic syntax,” Yellin said, “and also a bunch of geeks who love this stuff going deep.”

This sentence says little more than “we did it with computers,” and it illustrates a problem for the reverse engineer: there is always another curtain to get behind. Scraping altgenres will only get you so far, and even when you get “behind the curtain,” companies like Netflix are only willing to sketch out their technical infrastructure in broad strokes. In more technically oriented venues or the academic research community, you may learn more, but you will never get all the way to the bottom of things. The Wizard of Oz always holds on to his best secrets.

But not everything we want to know is a trade secret. While reverse engineers may be frustrated by the first part of Yellin’s sentence — the vagueness of “algorithms, algorithmic syntax” — it’s the second part that hides the encounter between culture and technology: What does it look like when “geeks who love this stuff go deep”? How do the people who make the algorithms understand the “deepness” of cultural stuff? How do the loves of geeks inform the work of geeks? The answers to these questions are not hidden away as proprietary technical information; they’re often evident in the ways engineers talk about and work with their objects. But because reverse engineering focuses narrowly on revealing technical secrets, it fails to piece together how engineers imagine and engage with culture. For those of us interested in the cultural ramifications of algorithmic filtering, these imaginings and engagements—not usually secret, but often hard to access — are more consequential than the specifics of implementation, which are kept secret and frequently change.


“My first goal was: tear apart content!”


While Yellin may not have told us enough about the technical secrets of Netflix to create a competitor, he has given us some interesting insights into the way he thinks about movies and how to understand them. If you’re familiar with research on algorithmic recommenders, you’ll recognize the system he describes as an example of content-based recommendation. Where “classic” recommender systems rely on patterns in ratings data and have little need for other information, content-based systems try to understand the material they recommend, through various forms of human or algorithmic analysis. These analyses are a lot of work, but over the past decade, with the increasing availability of data and analytical tools, content-based recommendation has become more popular. Most big recommender systems today (including Netflix’s) are hybrids, drawing on both user ratings and data about the content of recommended items.
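
For readers who have not seen one of these systems up close, here is a minimal content-based sketch (Python; the titles, tags, and scoring are invented for illustration and are not Netflix’s): items are described only by their tags, and recommendations come from tag-set similarity rather than from other users’ ratings.

```python
# Toy content-based recommender: items are described only by their tags
# (the "content"), and similarity between tag sets drives the recommendation.
# Titles and tags are invented for illustration.

CATALOG = {
    "Movie A": {"emotional", "underdog", "sports", "critically-acclaimed"},
    "Movie B": {"visually-striking", "foreign", "nostalgic", "drama"},
    "Movie C": {"emotional", "drama", "underdog"},
    "Movie D": {"romantic", "crime", "foreign"},
}

def jaccard(a, b):
    """Similarity of two tag sets: size of the overlap over size of the union."""
    return len(a & b) / len(a | b)

def recommend(liked_title, catalog, top_n=2):
    liked_tags = catalog[liked_title]
    scores = {
        title: jaccard(liked_tags, tags)
        for title, tags in catalog.items()
        if title != liked_title
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(recommend("Movie A", CATALOG))
# A "classic" collaborative filter would instead ignore the tags entirely and
# look for users whose ratings correlate with yours; a hybrid combines both.
```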

The “reverse engineering of Hollywood” is the content side of things: Netflix’s effort to parse movies into its database so that they can be recommended based on their content. By calling this parsing “reverse engineering,” Madrigal implies that there is a singular fact of the matter to be retrieved from these movies, and as a result, he focuses his description on Netflix’s thoroughness. What is tagged? “Everything. Everyone.” But the kind of parsing Yellin describes is not the only way to understand cultural objects; rather, it is a specific and recognizable mode of interpretation. It bears a strong resemblance to structuralism — a style of cultural analysis that had its heyday in the humanities and social sciences during the mid-20th century.


Structuralism, according to Roland Barthes, is a way of interpreting objects by decomposing them into parts and then recomposing those parts into new wholes. By breaking a text apart and putting it back together, the structuralist aims to understand its underlying structure: what order lurks under the surface of apparently idiosyncratic objects?

For example, the arch-structuralist anthropologist Claude Lévi-Strauss took such an approach in his study of myth. Take the Oedipus myth: there are many different ways to tell the same basic story, in which a baby is abandoned in the wilderness and then grows up to unknowingly kill his father, marry his mother, and blind himself when he finds out (among other things). But, across different tellings of the myth, there is a fairly persistent set of elements that make up the story. Lévi-Strauss called these elements “mythemes” (after linguistic “phonemes”). By breaking myths down into their constituent parts, you could see patterns that linked them together, not only across different tellings of the “same” myth, but even across apparently disparate myths from other cultures. Through decomposition and recomposition, structuralists sought what Barthes called the object’s “rules of functioning.” These rules, governing the combination of mythemes, were the object of Lévi-Strauss’s cultural analysis.

Todd Yellin is, by all appearances, a structuralist. He tells Madrigal that his goal was to “tear apart content” and create a “Netflix Quantum Theory,” under which movies could be broken down into their constituent parts — into “quanta” or the “little ‘packets of energy’ that compose each movie.” Those quanta eventually became “microtags,” which Madrigal tells us are used to describe everything in the movie. Large teams of human taggers are trained, using a 36-page secret manual, and they go to town, decomposing movies into microtags. Take those tags, recompose them, and you get the altgenres, a weird sort of structuralist production intended to help you find things in Netflix’s pool of movies. If Lévi-Strauss had lived to be 104 instead of just 100, he might have had some thoughts about this computerized structuralism: in his 1955 article on the structural study of myth, he suggested that further advances would require mathematicians and “I.B.M. equipment” to handle the complicated analysis. Structuralism and computers go way back.
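
As a purely illustrative sketch of that recomposition step (Python; the slot grammar and vocabulary below are my own guesses at the pattern visible in published altgenre names, not Netflix’s actual rules), a few small tag vocabularies recombine into a surprisingly large space of genre-like phrases:

```python
import itertools

# Invented microtag buckets, loosely mimicking the qualifier/region/mood/format
# slots visible in published altgenre names. Not Netflix's actual vocabulary.
QUALIFIERS = ["Critically-acclaimed", "Visually-striking"]
REGIONS    = ["", "Foreign", "Chinese"]
MOODS      = ["Emotional", "Nostalgic", "Romantic"]
FORMATS    = ["Dramas", "Crime Movies", "Underdog Movies"]

def compose_altgenres():
    """Recompose tag quanta into hyper-specific genre phrases."""
    for parts in itertools.product(QUALIFIERS, REGIONS, MOODS, FORMATS):
        yield " ".join(p for p in parts if p)

names = list(compose_altgenres())
print(len(names))        # a handful of small vocabularies already yields dozens
print(names[:3])         # e.g. "Critically-acclaimed Emotional Dramas"
```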


Although structuralism sounds like a fairly technical way to analyze cultural material, it is not, strictly speaking, objective. When you break an object down into its parts and put it back together again, you have not simply copied it — you’ve made something new. A movie’s set of microtags, no matter how fine-grained, is not the same thing as the movie. It is, as Barthes writes, a “directed, interested simulacrum” of the movie, a re-creation made with particular goals in mind. If you had different goals — different ideas about what the significant parts of movies were, different imagined use-cases — you might decompose differently. There is more than one way to tear apart content.

This does not jibe well with common-sense ideas about what engineering is like. Instead of the cold, rational pursuit of optimal solutions, we have something a little more creative. We have options, a variety of choices which are all potentially valid, depending on a range of contextual factors not exhausted by obviously “technical” concerns. Barthes suggested that composing a structuralist analysis was like composing a poem, and engineering is likewise expressive. Netflix’s altgenres are in no way the final statement on the movies. They are, rather, one statement among many — a cultural production in their own right, influenced by local assumptions about meaning, relevance, and taste. “Reverse engineering” seems a poor name for this creative practice, because it implies a singular right answer — a fact of the matter that merely needs to be retrieved from the insides of the movies. We might instead, more accurately, call this work “interpretation.”


So, where does this leave us with reverse engineering? There are two questions at issue here:

  1. Does “reverse engineering” as a term adequately describe the work that engineers like those employed at Netflix do when they interact with cultural objects?
  2. Is reverse engineering a useful strategy for figuring out what engineers do?

The answer to both of these questions, I think, is a measured “no,” and for the same reason: reverse engineering, as both a descriptor and a research strategy, misses the things engineers do that do not fit into conventional ideas about engineering. In the ongoing mixture of culture and technology, reverse engineering sticks too closely to the idealized vision of technical work. Because it assumes engineers care strictly about functionality and efficiency, it is not very good at telling stories about accidents, interpretations, and arbitrary choices. It assumes that cultural objects or practices (like movies or engineering) can be reduced to singular, universally intelligible logics. It takes corporate spokespeople at their word when they claim that there was a straight line from conception to execution.

As Nicholas Diakopoulos has written, reverse engineering can be a useful way to figure out what obscured technologies do, but it cannot get us answers to “the question of why.” As these obscured technologies — search engines, recommender systems, and other algorithmic filters — are constantly refined, we need better ways to talk about the whys and hows of engineering as a practice, not only the what of engineered objects, which are themselves constantly changing.

The risk of reverse engineering is that we come to imagine that the only things worth knowing about companies like Netflix are the technical details hidden behind the curtain. In my own research, I argue that the cultural lives and imaginations of the people behind the curtain are as important, if not more so, for understanding how these systems come to exist and function as they do. Moreover, these details are not generally considered corporate secrets, so they are accessible if we look for them. Not everything worth knowing has been actively hidden, and transparency can conceal as much as it reveals.

All engineering mixes culture and technology. Even Madrigal’s “reverse engineering” does not stay put in technical bounds: he supplements the work of his bot by talking with people, drawing on their interpretations and offering his own, reading the altgenres, populated with serendipitous algorithmic accidents, as “a window unto the American soul.” Engineers, reverse and otherwise, have cultural lives, and these lives inform their technical work. To see these effects, we need to get beyond the idea that the technical and the cultural are necessarily distinct. But if we want to understand the work of companies like Netflix, it is not enough to simply conclude that culture and technology — humans and computers — are mixed. The question we need to answer is how.

‘Technological Disobedience’: How Cubans Manipulate Everyday Technologies For Survival (WLRN)

12:05  PM

MON JULY 1, 2013

In Cuban Spanish, there is a word for overcoming great obstacles with minimal resources: resolver.

Literally, it means to resolve, but to many Cubans on the island and living in South Florida, resolviendo is an enlightened reality born of necessity.

When the Soviet Union collapsed in 1991, Cuba entered a “Special Period in Times of Peace,” which saw unprecedented shortages of everyday items. Previously, the Soviets had been Cuba’s principal trading partner, sending goods at low prices and buying staple export commodities like sugar at above-market prices.

Rationing goods had long been a normal part of life, but without Soviet support Cubans found themselves in dire straits. As the crisis got worse over time, people had to get ever more creative.

Verde Olivo, the publishing house for the Cuban Revolutionary Armed Forces, published a largely crowdsourced book shortly after the Special Period began. Titled Con Nuestros Propios Esfuerzos (With Our Own Efforts), the book detailed all the possible ways that household items could be manipulated and turned inside out in order to fulfill the needs of a starving population.

Included in the book is a famous recipe for turning grapefruit rind into makeshift beef steak (after heavy seasoning).

Cuban artist and designer Ernesto Oroza watched with amazement as new uses sprang from everyday items, and he soon began collecting these objects from this sad but ingeniously creative period of Cuban history.

A Cuban rikimbili – the word for bicycles that have been converted into motorcycles. The engine, of 100cc or less, is typically constructed out of motor-powered misting backpacks or Russian tank AC generators. Credit: rikimbili.com

“People think beyond the normal capacities of an object, and try to surpass the limitations that it imposes on itself,” Oroza explains in a recently published Motherboard documentary that originally aired in 2011.

Oroza coined the phrase “Technological Disobedience,” which he says summarizes how Cubans reacted to technology during this time.

After graduating from design school into an abysmal economy, Oroza and a friend began to travel the island and collect these unique items from every province.

These post-apocalyptic contraptions reflect a hunger for more, and a resilience to fatalism within the Cuban community.

“The same way a surgeon, after having opened so many bodies, becomes insensitive to blood, to the smell of blood and organs… It’s the same for a Cuban,” Oroza explains.

“Once he has opened a fan, he is used to seeing everything from the inside… All the symbols that unify an object, that make it a unique entity – for a Cuban those don’t exist.”

When Exponential Progress Becomes Reality (Medium)

Niv Dror

“I used to say that this is the most important graph in all the technology business. I’m now of the opinion that this is the most important graph ever graphed.”

Steve Jurvetson

Moore’s Law

The expectation that your iPhone keeps getting thinner and faster every two years. Happy 50th anniversary.

Components get cheaper, computers get smaller, and there are a lot of comparison tweets.

In 1965, Intel co-founder Gordon Moore made his original observation, noticing that over the history of computing hardware, the number of transistors in a dense integrated circuit had doubled approximately every two years. The prediction was specific to semiconductors and stretched out only a decade. Its demise has long been predicted, and it will eventually come to an end, but it continues to hold to this day.
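
As a back-of-the-envelope illustration, the observation can be written as a one-line function. This is a simplification that assumes a clean doubling every two years, which the real cadence only approximates.

    # Moore's observation as arithmetic, assuming an idealized fixed doubling
    # period of two years for transistor counts.
    def projected_count(start_count, years, doubling_period=2.0):
        return start_count * 2 ** (years / doubling_period)

    # Fifty years of biennial doubling multiplies the starting count by 2**25,
    # i.e. roughly 33.5 million times.
    print(projected_count(1, 50))  # 33554432.0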

Expanding beyond semiconductors, and reshaping all kinds of businesses, including those not traditionally thought of as tech.

Yes, Box co-founder Aaron Levie is the official spokesperson for Moore’s Law, and we’re all perfectly okay with that. His cloud computing company would not be around without it. He’s grateful. We’re all grateful. In conversations Moore’s Law constantly gets referenced.

It has become both a prediction and an abstraction.

Expanding far beyond its origin as a transistor-centric metric.

But Moore’s Law of integrated circuits is only the most recent paradigm in a much longer and even more profound technological trend.

Humanity’s capacity to compute has been compounding for as long as we could measure it.

5 Computing Paradigms: Electromechanical tabulating machines used for the 1890 U.S. Census → Alan Turing’s relay-based computer that cracked the Nazi Enigma → the vacuum-tube computer that predicted Eisenhower’s win in 1952 → transistor-based machines used in the first space launches → the integrated-circuit-based personal computer

The Law of Accelerating Returns

In his 1999 book The Age of Spiritual Machines, Google’s Director of Engineering, futurist, and author Ray Kurzweil proposed “The Law of Accelerating Returns,” according to which the rate of change in a wide variety of evolutionary systems tends to increase exponentially. A specific paradigm, a method or approach to solving a problem (e.g., shrinking transistors on an integrated circuit as an approach to making more powerful computers), provides exponential growth until that paradigm exhausts its potential. When this happens, a paradigm shift, a fundamental change in the technological approach, occurs, enabling the exponential growth to continue.

Kurzweil explains:

It is important to note that Moore’s Law of Integrated Circuits was not the first, but the fifth paradigm to provide accelerating price-performance. Computing devices have been consistently multiplying in power (per unit of time) from the mechanical calculating devices used in the 1890 U.S. Census, to Turing’s relay-based machine that cracked the Nazi enigma code, to the vacuum tube computer that predicted Eisenhower’s win in 1952, to the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer.

This graph, which venture capitalist Steve Jurvetson describes as the most important concept ever to be graphed, is Kurzweil’s 110-year version of Moore’s Law. It spans five paradigm shifts that have contributed to the exponential growth in computing.

Each dot represents the best computational price-performance device of its day, and when plotted on a logarithmic scale, the dots fit on the same double exponential curve that spans over a century. This is a very long-lasting and predictable trend. It enables us to plan for a time beyond Moore’s Law without knowing the specifics of the paradigm shift that’s ahead. The next paradigm will advance our ability to compute on such a massive scale that it will be beyond our current ability to comprehend.
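
A toy calculation (not Kurzweil's data; both series below are invented) shows what the logarithmic scale does: a plain exponential climbs by the same step each period on a log scale, so it plots as a straight line, while a double exponential keeps curving upward even on a log scale.

    import math

    # Invented series: one doubles at a fixed rate, one at an accelerating rate.
    simple = [2 ** t for t in range(10)]
    accelerating = [2 ** (t + 0.05 * t * t) for t in range(10)]

    for t in range(10):
        # log10 of the simple series grows by a constant increment per step;
        # log10 of the accelerating series grows by ever larger increments.
        print(t, round(math.log10(simple[t]), 2), round(math.log10(accelerating[t]), 2))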

The Power of Exponential Growth

Human perception is linear; technological progress is exponential. Our brains are hardwired to have linear expectations because that has always been the case. Technology today progresses so fast that the past no longer looks like the present, and the present is nowhere near the future ahead. Then, seemingly out of nowhere, we find ourselves in a reality quite different from what we would expect.

Kurzweil uses the overall growth of the internet as an example. The bottom chart is linear, which makes internet growth seem sudden and unexpected, whereas the top chart, with the same data graphed on a logarithmic scale, tells a very predictable story. On the logarithmic graph, internet growth doesn’t come out of nowhere; it’s just presented in a way that is more intuitive for us to comprehend.

We are still prone to underestimate the progress that is coming because it’s difficult to internalize the reality that we’re living in a world of exponential technological change. It is a fairly recent development. And it’s important to get a sense of the massive scale of advancements that the technologies of the future will enable, particularly now, as we’ve reached what Kurzweil calls the “Second Half of the Chessboard.”

(The reference is to the old legend of the inventor of chess, who asks an emperor to be paid in rice: one grain on the first square of the board, two on the second, four on the third, and so on, doubling with each square. In the end the emperor realizes that he’s been tricked by exponents and has the inventor beheaded. In another version of the story the inventor becomes the new emperor.)

It’s important to note that as the emperor and inventor went through the first half of the chessboard, things were fairly uneventful. The inventor was first given spoonfuls of rice, then bowls of rice, then barrels, and by the end of the first half of the chessboard the inventor had accumulated one large field’s worth — 4 billion grains — which is when the emperor started to take notice. It was only as they progressed through the second half of the chessboard that the situation quickly deteriorated.

# of Grains on 1st half: 4,294,967,295

# of Grains on 2nd half: 18,446,744,069,414,584,320
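
Both totals are just sums of powers of two (square k of the 64 holds 2 to the power k-1 grains), so they can be checked directly:

    # Square k of the chessboard holds 2**(k - 1) grains of rice.
    first_half = sum(2 ** (k - 1) for k in range(1, 33))    # squares 1-32
    second_half = sum(2 ** (k - 1) for k in range(33, 65))  # squares 33-64

    print(first_half)   # 4294967295 = 2**32 - 1
    print(second_half)  # 18446744069414584320 = 2**64 - 2**32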

Mind-bending nonlinear gains in computing are about to get a lot more realistic in our lifetime, as there have been slightly more than 32 doublings of performance since the first programmable computers were invented.

Kurzweil’s Predictions

Kurzweil is known for making mind-boggling predictions about the future. And his track record is pretty good.

“…Ray is the best person I know at predicting the future of artificial intelligence.” —Bill Gates

Ray’s predictions for the future may sound crazy (they do sound crazy), but it’s important to note that it’s not about the specific prediction or the exact year. What’s important to focus on is what they represent. These predictions are based on an understanding of Moore’s Law and Ray’s Law of Accelerating Returns, an awareness of the power of exponential growth, and an appreciation that information technology follows an exponential trend. They may sound crazy, but they are not pulled out of thin air.

And with that being said…

Second Half of the Chessboard Predictions

“By the 2020s, most diseases will go away as nanobots become smarter than current medical technology. Normal human eating can be replaced by nanosystems. The Turing test begins to be passable. Self-driving cars begin to take over the roads, and people won’t be allowed to drive on highways.”

“By the 2030s, virtual reality will begin to feel 100% real. We will be able to upload our mind/consciousness by the end of the decade.”

To expand image → https://twitter.com/nivo0o0/status/564309273480409088

Not quite there yet…

“By the 2040s, non-biological intelligence will be a billion times more capable than biological intelligence (a.k.a. us). Nanotech foglets will be able to make food out of thin air and create any object in physical world at a whim.”

These clones are cute.

“By 2045, we will multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud.”

Multiplying our intelligence a billionfold by linking our neocortex to a synthetic neocortex in the cloud — what does that actually mean?

In March 2014 Kurzweil gave an excellent talk at the TED Conference. It was appropriately called: Get ready for hybrid thinking.

Here is a summary:

To expand image → https://twitter.com/nivo0o0/status/568686671983570944

These are the highlights:

Nanobots will connect our neocortex to a synthetic neocortex in the cloud, providing an extension of our neocortex.

Our thinking then will be a hybrid of biological and non-biological thinking (the non-biological portion is subject to the Law of Accelerating Returns and it will grow exponentially).

The frontal cortex and neocortex are not really qualitatively different, so it’s a quantitative expansion of the neocortex (like adding processing power).

The last time we expanded our neocortex was about two million years ago. That additional quantity of thinking was the enabling factor for us to take a qualitative leap and advance language, science, art, technology, etc.

We’re going to expand our neocortex again, only this time it won’t be limited by a fixed architecture of enclosure. It will be expanded without limits, by connecting our brain directly to the cloud.

We already carry a supercomputer in our pocket. We have unlimited access to all the world’s knowledge at our fingertips. Keeping in mind that we are prone to underestimate technological advancements (and that 2045 is not a hard deadline) is it really that far of a stretch to imagine a future where we’re always connected directly from our brain?

Progress is underway. We’ll be able to reverse engineer the neocortex within five years. Kurzweil predicts that by 2030 we’ll be able to reverse engineer the entire brain. His latest book is called How to Create a Mind… This is the reason Google hired Kurzweil.

Hybrid Human Machines


“We’re going to become increasingly non-biological…”

“We’ll also have non-biological bodies…”

“If the biological part went away it wouldn’t make any difference…”

“They will be as realistic as real reality.”

Impact on Society

The technological singularity — “the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing civilization” — is beyond the scope of this article, but these advancements will absolutely have an impact on society. Which way is yet to be determined.

There may be some regret

Politicians will not know who/what to regulate.

Evolution may take an unexpected twist.

The rich-poor gap will expand.

The unimaginable will become reality and society will change.

What to expect from science in 2015 (Zero Hora)

We bet on five things likely to appear this year

19/01/2015 | 06h01

Photo: SpaceX/Youtube

In 2014, science managed to land on a comet, discovered it was wrong about the genetic evolution of birds, and revealed the largest fossils in history. Miguel Nicolelis presented his exoskeleton at the World Cup, the Brazilian satellite CBERS-4, built in partnership with China, was successfully launched into space, and a Brazilian brought home the top medal in mathematics.

But what will we see in 2015? We bet on five things that may appear this year.

Reusable rockets

If we want to colonize Mars, a one-way ticket is not enough. These rockets, capable of going and coming back, are the promise to transform the future of space travel. We will see whether SpaceX, which is already working on this, succeeds.

Robots at home

In February, Japan's Softbank will start selling a humanoid robot called Pepper. It uses artificial intelligence to recognize its owner's mood and speaks four languages. Although it is more of a helper than a doer, it will soon learn new functions.

Invisible universe

The Large Hadron Collider will start running again in March, with twice the power to smash particles. One possibility is that it will help discover new superparticles that might make up dark matter. It would be the first new state of matter discovered in a century.

A cure for Ebola

After the 2014 crisis, Ebola vaccines may begin to work and save many lives in Africa. The same goes for AIDS: HIV is cornered, and we hope science finally defeats it this year.

Climate discussions

2014 was one of the hottest years on record and, the way things are going, 2015 will follow the same path. In December, in Paris, the world will discuss an agreement to try to curb gas emissions, with measures to be implemented from 2020 onward. May our leaders be sensible.

Citizen science network produces accurate maps of atmospheric dust (Science Daily)

Date: October 27, 2014

Source: Leiden University

Summary: Measurements by thousands of citizen scientists in the Netherlands using their smartphones and the iSPEX add-on are delivering accurate data on dust particles in the atmosphere that add valuable information to professional measurements. The research team analyzed all measurements from three days in 2013 and combined them into unique maps of dust particles above the Netherlands. The results match and sometimes even exceed those of ground-based measurement networks and satellite instruments.

iSPEX map compiled from all iSPEX measurements performed in the Netherlands on July 8, 2013, between 14:00 and 21:00. Each blue dot represents one of the 6007 measurements that were submitted on that day. At each location on the map, the 50 nearest iSPEX measurements were averaged and converted to Aerosol Optical Thickness, a measure for the total amount of atmospheric particles. This map can be compared to the AOT data from the MODIS Aqua satellite, which flew over the Netherlands at 16:12 local time. The relatively high AOT values were caused by smoke clouds from forest fires in North America, which were blown over the Netherlands at an altitude of 2-4 km. In the course of the day, winds from the North brought clearer air to the northern provinces. Credit: Image courtesy of Universiteit Leiden

Measurements by thousands of citizen scientists in the Netherlands using their smartphones and the iSPEX add-on are delivering accurate data on dust particles in the atmosphere that add valuable information to professional measurements. The iSPEX team, led by Frans Snik of Leiden University, analyzed all measurements from three days in 2013 and combined them into unique maps of dust particles above the Netherlands. The results match and sometimes even exceed those of ground-based measurement networks and satellite instruments.

The iSPEX maps achieve a spatial resolution as small as 2 kilometers, whereas satellite data are much coarser. They also fill in blind spots of established ground-based atmospheric measurement networks. The scientific article that presents these first results of the iSPEX project is being published in Geophysical Research Letters.

The iSPEX team developed a new atmospheric measurement method in the form of a low-cost add-on for smartphone cameras. The iSPEX app instructs participants to scan the blue sky while the phone’s built-in camera takes pictures through the add-on. The photos record both the spectrum and the linear polarization of the sunlight that is scattered by suspended dust particles, and thus contain information about the properties of these particles. While such properties are difficult to measure, much better knowledge on atmospheric particles is needed to understand their effects on health, climate and air traffic.

Thousands of participants performed iSPEX measurements throughout the Netherlands on three cloud-free days in 2013. This large-scale citizen science experiment allowed the iSPEX team to verify the reliability of this new measurement method.

After a rigorous quality assessment of each submitted data point, measurements recorded in specific areas within a limited amount of time are averaged to obtain sufficient accuracy. Subsequently the data are converted to Aerosol Optical Thickness (AOT), which is a standardized quantity related to the total amount of atmospheric particles. The iSPEX AOT data match comparable data from satellites and the AERONET ground station at Cabauw, the Netherlands. In areas with sufficiently high measurement densities, the iSPEX maps can even discern smaller details than satellite data.
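
The spatial averaging step can be pictured with a small sketch. It is a stand-in rather than the iSPEX team's actual pipeline: the simplified distance formula, the choice of 50 neighbors, and the sample readings are illustrative assumptions.

    import math

    def nearest_average(lat, lon, readings, n=50):
        # readings: list of (lat, lon, value) tuples; distances use a simple
        # equirectangular approximation, adequate over a small country.
        def dist(r):
            rlat, rlon, _ = r
            dx = (rlon - lon) * math.cos(math.radians(lat))
            dy = rlat - lat
            return math.hypot(dx, dy)
        nearest = sorted(readings, key=dist)[:n]
        return sum(v for _, _, v in nearest) / len(nearest)

    # Toy usage with three invented readings (lat, lon, raw value):
    sample = [(52.0, 4.3, 0.21), (52.1, 4.4, 0.25), (51.9, 4.2, 0.23)]
    print(nearest_average(52.0, 4.35, sample, n=2))  # averages the two closest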

Team leader Snik: “This proves that our new measurement method works. But the great strength of iSPEX is the measurement philosophy: the establishment of a citizen science network of thousands of enthusiastic volunteers who actively carry out outdoor measurements. In this way, we can collect valuable information about atmospheric particles on locations and/or at times that are not covered by professional equipment. These results are even more accurate than we had hoped, and give rise to further research and development. We are currently investigating to what extent we can extract more information about atmospheric particles from the iSPEX data, like their sizes and compositions. And of course, we want to organize many more measurement days.”

With the help of a grant that supports public activities in Europe during the International Year of Light 2015, the iSPEX team is now preparing for the international expansion of the project. This expansion provides opportunities for national and international parties to join the project. Snik: “Our final goal is to establish a global network of citizen scientists who all contribute measurements to study the sources and societal effects of polluting atmospheric particles.”


Journal Reference:

  1. Frans Snik, Jeroen H. H. Rietjens, Arnoud Apituley, Hester Volten, Bas Mijling, Antonio Di Noia, Stephanie Heikamp, Ritse C. Heinsbroek, Otto P. Hasekamp, J. Martijn Smit, Jan Vonk, Daphne M. Stam, Gerard van Harten, Jozua de Boer, Christoph U. Keller. Mapping atmospheric aerosols with a citizen science network of smartphone spectropolarimeters. Geophysical Research Letters, 2014; DOI: 10.1002/2014GL061462

Why Do the Anarcho-Primitivists Want to Abolish Civilization? (io9)

George Dvorsky

Sept 12, 2014 11:28am

Anarcho-primitivists are the ultimate Luddites — ideologues who favor complete technological relinquishment and a return to a hunter-gatherer lifestyle. We spoke to a leading proponent to learn more about this idea and why he believes civilization was our worst mistake.

Philosopher John Zerzan wants you to get rid of all your technology — your car, your mobile phone, your computer, your appliances — the whole lot. In his perfect world, you’d be stripped of all your technological creature comforts, reduced to a lifestyle that harkens back to when our hunter-gatherer ancestors romped around the African plains.

Photo via Cast/John Zerzan/CC

You see, Zerzan is an outspoken advocate of anarcho-primitivism, a philosophical and political movement predicated on the assumption that the move from hunter-gatherer to agricultural subsistence was a stupendously awful mistake — an existential paradigm shift that subsequently gave rise to social stratification, coercion, alienation, and unchecked population growth. It’s only through the abandonment of technology, and a return to “non-civilized” ways of being — a process anarcho-primitivists call “wilding” — that we can eliminate the host of social ills that now plagues the human species.

As an anarchist, Zerzan is opposed to the state, along with all forms of hierarchical and authoritarian relations. The crux of his argument, one inspired by Karl Marx and Ivan Illich, is that the advent of technologies irrevocably altered the way humans interact with each other. There’s a huge difference, he argues, between simple tools that stay under the control of the user, and those technological systems that draw the user under the control of those who produce the tools. Zerzan says that technology has come under the control of an elite class, thus giving rise to alienation, domestication, and symbolic thought.

Zerzan is not alone in his views. When the radical Luddite Ted “the Unabomber” Kaczynski was on trial for killing three people and injuring 23, Zerzan became his confidant, offering support for his ideas but condemning his actions (Zerzan recently stated that he and Kaczynski are “not on terms anymore.”) Radicalized groups have also sprung up promoting similar views, including a Mexican group called the Individualists Tending Toward the Wild — a group with the objective “to injure or kill scientists and researchers (by the means of whatever violent act) who ensure the Technoindustrial System continues its course.” Back in 2011, this group sent several mail bombs to nanotechnology labs and researchers in Latin America, killing two people.

Looking ahead to the future, and considering the scary potential for advanced technologies such as artificial superintelligence and robotics, there’s the very real possibility that these sorts of groups will start to become more common — and more radicalized (similar to the radical anti-technology terrorist group Revolutionary Independence From Technology (RIFT) that was portrayed in the recent Hollywood film, Transcendence).

But Zerzan does not promote or condone violence. He’d rather see the rise of the “Future Primitive” come about voluntarily. To that end, he uses technology — like computers and phones — to get his particular message across (he considers it a necessary evil). That’s how I was able to conduct this interview with him, which we did over email.

io9: Anarcho-primitivism is as much a critique of modernity as is it a prescription for our perceived ills. Can you describe the kind of future you’re envisioning?

Zerzan: I want to see mass society radically decentralized into face-to-face communities. Only then can the individual be both responsible and autonomous. As Paul Shepard said, “Back to the Pleistocene!”

As an ideology, primitivism is fairly self-explanatory. But why add the ‘anarcho’ part to it? How can you be so sure there’s a link between more primitive states of being and the diminishment of power relations and hierarchies among complex primates?

The anarcho part refers to the fact that this question, this approach, arose mainly within an anarchist or anti-civilization milieu. Everyone I know in this context is an anarchist. There are no guarantees for the future, but we do know that egalitarian and anti-hierarchical relations were the norm with Homo for 1-2 million years. This is indisputable in the anthropological literature.

Then how do you distinguish between tools that are acceptable for use and those that give rise to hierarchical relations?

Those tools that involve the least division of labor or specialization involve or imply qualities such as intimacy, equality, flexibility. With increased division of labor we moved away from tools to systems of technology, where the dominant qualities or values are distancing, reliance on experts, inflexibility.

But tool use and symbolic language are indelible attributes of Homo sapiens — these are our distinguishing features. Aren’t you just advocating for biological primitivism — a kind of devolution of neurological characteristics?

Anthropologists (e.g. Thomas Wynn) seem to think that Homo had an intelligence equal to ours at least a million years ago. Thus neurology doesn’t enter into it. Tool use, of course, has been around from before the beginning of Homo some 3 million years ago. As for language, it’s quite debatable as to when it emerged.

Early humans had a workable, non-destructive approach, that did not generally speaking involve much work, did not objectify women, and was anti-hierarchical. Does this sound backward to you?

You’ve got some provocative ideas about language and how it demeans or diminishes experience.

Every symbolic dimension — time, language, art, number — is a mediation between ourselves and reality. We lived more directly, immediately before these dimensions arrived, fairly recently. Freud, the arch-rationalist, thought that we once communicated telepathically, though I concede that my critique of language is the most speculative of my forays into the symbolic.

You argue that a hunter-gatherer lifestyle is as close to the ideal state of being as is possible. The Amish, on the other hand, have drawn the line at industrialization, and they’ve subsequently adopted an agrarian lifestyle. What is it about the advent of agriculture and domestication that’s so problematic?

In the 1980s Jared Diamond called the move to domestication or agriculture “the worst mistake humans ever made.” A fundamental shift away from taking what nature gives to the domination of nature. The inner logic of domestication of animals and plants is an unbroken progression, which always deepens and extends the ethos of control. Now of course control has reached the molecular level with nanotechnology, and the sphere of what I think is the very unhealthy fantasies of transhumanist neuroscience and AI.

In which ways can anarcho-primitivism be seen as the ultimate green movement? Do you see it that way?

We are destroying the biosphere at a fearful rate. Anarcho-primitivism seeks the end of the primary institutions that drive the destruction: domestication/civilization and industrialization. To accept “green” and “sustainable” illusions ignores the causes of the all-enveloping undermining of nature, including our inner nature. Anarcho-primitivism insists on a deeper questioning and helps identify the reasons for the overall crisis.

Tell us about the anarcho-primitivist position on science.

The reigning notion of what is science is an objectifying method, which magnifies the subject-object split. “Science” for hunter-gatherers is very basically different. It is based on participation with living nature, intimacy with it. Science in modernity mainly breaks reality down into now dead, inert fragments to “unlock” its “secrets.” Is that superior to a forager who knows a number of things from the way a blade of grass is bent?

Well, being trapped in an endless cycle of Darwinian processes doesn’t seem like the most enlightened or moral path for our species to take. Civilization and industrialization have most certainly introduced innumerable problems, but our ability to remove ourselves from the merciless “survival of the fittest” paradigm is a no-brainer. How could you ever convince people to relinquish the gifts of modernity — things like shelter, food on-demand, vaccines, pain relief, anesthesia, and ambulances at our beck and call?

It is reality that will “convince” people — or not. Conceivably, denial will continue to rule the day. But maybe only up to a point. If/when it can be seen that their reality is worsening qualitatively in every sphere a new perspective may emerge. One that questions the deep un-health of mass society and its foundations. Again, non-robust, de-skilled folks may keep going through the motions, stupefied by techno-consumerism and drugs of all kinds. Do you think that can last?

Most futurists would answer that things are getting better — and that through responsible foresight and planning we’ll be able to create the future we imagine.

“Things are getting better”? I find this astounding. The immiseration surrounds us: anxiety, depression, stress, insomnia, etc. on a mass scale, the rampage shootings now commonplace. The progressive ruin of the natural world. I wonder how anyone who even occasionally picks up a newspaper can be so in the dark. Of course I haven’t scratched the surface of how bad it is becoming. It is deeply irresponsible to promote such ignorance and projections.

That’s a very presentist view. Some left-leaning futurists argue, for example, that ongoing technological progress (both in robotics and artificial intelligence) will lead to an automation revolution — one that will free us from dangerous and demeaning work. It’s very possible that we’ll be able to invent our way out of the current labor model that you’re so opposed to.

Technological advances have only meant MORE work. That is the record. In light of this it is not quite cogent to promise that a more technological mass society will mean less work. Again, reality anyone??

Transhumanists advocate for the iterative improvement of the human species, things like enhanced intelligence and memory, the elimination of psychological disorders (including depression), radical life extension, and greater physical capacities. Tell us why you’re so opposed to these things.

Why I am opposed to these things? Let’s take them in order:

Enhanced intelligence and memory? I think it is now quite clear that advancing technology in fact makes people stupider and reduces memory. Attention span is lessened by Tweet-type modes, abbreviated, illiterate means of communicating. People are being trained to stare at screens at all times, a techno-haze that displaces life around them. I see zombies, not sharper, more tuned in people.

Elimination of psychological disorders? But narcissism, autism and all manner of such disabilities are on the rise in a more and more tech-oriented world.

Radical life extension? One achievement of modernity is increased longevity, granted. This has begun to slip a bit, however, in some categories. And one can ponder what is the quality of life? Chronic conditions are on the rise though people can often be kept alive longer. There’s no evidence favoring a radical life extension.

Greater physical capacities? Our senses were once acute and we were far more robust than we are now under the sign of technology. Look at all the flaccid, sedentary computer jockeys and extend that forward. It is not I who doesn’t want these things; rather, the results are negative looking at the techno project, eh?

Do you foresee the day when a state of anarcho-primitivism can be achieved (even partially by a few enthusiasts)?

A few people cannot achieve such a future in isolation. The totality infects everything. It all must go and perhaps it will. Do you think people are happy with it?

Final Thoughts

Zerzan’s critique of civilization is certainly interesting and worthy of discussion. There’s no doubt that technology has taken humanity along a path that’s resulted in massive destruction and suffering, both to ourselves and to our planet and its animal inhabitants.

But there’s something deeply unsatisfying with the anarcho-primitivist prescription — that of erasing our technological achievements and returning to a state of nature. It’s fed by a cynical and defeatist world view that buys into the notion that everything will be okay once we regress back to a state where our ecological and sociological footprints are reduced to practically nil. It’s a way of eliminating our ability to make an impact on the world — and onto ourselves.

It’s also an ideological view that fetishizes our ancestral past. Despite Zerzan’s cocksure proclamations to the contrary, our paleolithic forebears were almost certainly hierarchical and socially stratified. There isn’t a single social species on this planet — whether they’re primates or elephants or cetaceans — that doesn’t organize its individuals according to capability, influence, or level of reproductive fitness. Feeling “alienated,” “frustrated,” and “controlled” is an indelible part of the human condition, regardless of whether we live in tribal arrangements or in the information age. The anarcho-primitivist fantasy of the free and unhindered noble savage is just that — a fantasy. Hunter-gatherers were far from free, coerced by the demands of biology and nature to eke out an existence under the harshest of circumstances.

Technology One Step Ahead of War Laws (Science Daily)

Jan. 6, 2014 — Today’s emerging military technologies — including unmanned aerial vehicles, directed-energy weapons, lethal autonomous robots, and cyber weapons like Stuxnet — raise the prospect of upheavals in military practices so fundamental that they challenge long-established laws of war. Weapons that make their own decisions about targeting and killing humans, for example, have ethical and legal implications obvious and frightening enough to have entered popular culture (for example, in the Terminator films).

The current international laws of war were developed over many centuries and long before the current era of fast-paced technological change. Military ethics and technology expert Braden Allenby says the proper response to the growing mismatch between long-established international law and emerging military technology “is neither the wholesale rejection of the laws of war nor the comfortable assumption that only minor tweaks to them are necessary.” Rather, he argues, the rules of engagement should be reconsidered through deliberate and focused international discussion that includes a wide range of cultural and institutional perspectives.

Allenby’s article anchors a special issue on the threat of emerging military technologies in the latest Bulletin of the Atomic Scientists (BOS), published by SAGE.

History is replete with paradigm shifts in warfare technology, from the introduction of gunpowder, which arguably gave rise to nation states, to the air-land-battle technologies used during the Desert Storm offensive in Kuwait and Iraq in 1991, which caused 20,000 to 30,000 Iraqi casualties and left only 200 US coalition troops dead. But today’s accelerating advances across the technological frontier and dramatic increases in the numbers of social institutions at play around the world are blurring boundaries between military and civil entities and state and non-state actors. And because the United States has an acknowledged primacy in terms of conventional forces, the nations and groups that compete with it increasingly think in terms of asymmetric warfare, raising issues that lie beyond established norms of military conduct and may require new legal thinking and institutions to address.

“The impact of emerging technologies on the laws of war might be viewed as a case study and an important learning opportunity for humankind as it struggles to adapt to the complexity that it has already wrought, but has yet to learn to manage,” Allenby writes.

Other articles in the Bulletin’s January/February special issue on emerging military technologies include “The enhanced warfighter” by Ken Ford, which looks at the ethics and practicalities of performance enhancement for military personnel, and Michael C. Horowitz’s overview of the near-term future of US war-fighting technology, “Coming next in military tech.” The issue also offers two views of the use of advanced robotics: “Stopping killer robots,” Mark Gubrud’s argument in favor of an international ban on lethal autonomous weapons, and “Robot to the rescue,” Gill Pratt’s account of a US Defense Department initiative aiming to develop robots that will improve response to disasters, like the Fukushima nuclear catastrophe, that involve highly toxic environments.

Journal Reference:

  1. Braden R. Allenby. Are new technologies undermining the laws of war? Bulletin of the Atomic Scientists, January/February 2014

Solar Cells Made Thin, Efficient and Flexible (Science Daily)

Dec. 9, 2013 — Converting sunshine into electricity is not difficult, but doing so efficiently and on a large scale is one of the reasons why people still rely on the electric grid and not a national solar cell network.

Debashis Chanda helped create large sheets of nanotextured, silicon micro-cell arrays that hold the promise of making solar cells lightweight, more efficient, bendable and easy to mass produce. (Credit: UCF)

But a team of researchers from the University of Illinois at Urbana-Champaign and the University of Central Florida in Orlando may be one step closer to tapping into the full potential of solar cells. The team found a way to create large sheets of nanotextured, silicon micro-cell arrays that hold the promise of making solar cells lightweight, more efficient, bendable and easy to mass produce.

The team used a light-trapping scheme based on a nanoimprinting technique in which a polymeric stamp mechanically embosses the nano-scale pattern onto the solar cell without involving further complex lithographic steps. This approach has led to the flexibility researchers have been searching for, making the design ideal for mass manufacturing, said UCF assistant professor Debashis Chanda, lead researcher of the study.

The study’s findings are the subject of the November cover story of the journal Advanced Energy Materials.

Previously, scientists had suggested designs that showed greater absorption rates of sunlight, but how efficiently that sunlight was converted into electrical energy was unclear, Chanda said. This study demonstrates that the light-trapping scheme offers higher electrical efficiency in a lightweight, flexible module.

The team believes this technology could someday lead to solar-powered homes fueled by cells that are reliable and provide stored energy for hours without interruption.

Journal Reference:

  1. Ki Jun Yu, Li Gao, Jae Suk Park, Yu Ri Lee, Christopher J. Corcoran, Ralph G. Nuzzo, Debashis Chanda, John A. Rogers. Light Trapping: Light Trapping in Ultrathin Monocrystalline Silicon Solar Cells (Adv. Energy Mater. 11/2013). Advanced Energy Materials, 2013; 3 (11): 1528. DOI: 10.1002/aenm.201370046

Bonobo genius makes stone tools like early humans did (New Scientist)

13:09 21 August 2012 by Hannah Krakauer

Kanzi the bonobo continues to impress. Not content with learning sign language or making up “words” for things like banana or juice, he now seems capable of making stone tools on a par with the efforts of early humans.

Even a human could manage this (Image: Elizabeth Rubert-Pugh (Great Ape Trust of Iowa/Bonobo Hope Sanctuary))

Eviatar Nevo of the University of Haifa in Israel and his colleagues sealed food inside a log to mimic marrow locked inside long bones, and watched Kanzi, a 30-year-old male bonobo chimp, try to extract it. While a companion bonobo attempted the problem a handful of times, and succeeded only by smashing the log on the ground, Kanzi took a longer and arguably more sophisticated approach.

Both had been taught to knap flint flakes in the 1990s, holding a stone core in one hand and using another as a hammer. Kanzi used the tools he created to come at the log in a variety of ways: inserting sticks into seams in the log, throwing projectiles at it, and employing stone flints as choppers, drills, and scrapers. In the end, he got food out of 24 logs, while his companion managed just two.

Perhaps most remarkable about the tools Kanzi created is their resemblance to early hominid tools. Both bonobos made and used tools to obtain food – either by extracting it from logs or by digging it out of the ground. But only Kanzi’s met the criteria for both tool groups made by early Homo: wedges and choppers, and scrapers and drills.

Do Kanzi’s skills translate to all bonobos? It’s hard to say. The abilities of animals like Alex the parrot, who could purportedly count to six, and Betty the crow, who crafted a hook out of wire, sometimes prompt claims about the intelligence of an entire species. But since these animals are raised in unusual environments where they frequently interact with humans, their cases may be too singular to extrapolate their talents to their brethren.

The findings will fuel the ongoing debate over whether stone tools mark the beginning of modern human culture, or predate our Homo genus. They appear to suggest the latter – though critics will point out that Kanzi and his companion were taught how to make the tools. Whether the behaviour could arise in nature is unclear.

Journal reference: Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.1212855109

Modern culture emerged in Africa 20,000 years earlier than thought (L.A.Times)

By Thomas H. Maugh II

July 30, 2012, 1:54 p.m.

Border Cave artifacts: Objects found in the archaeological site called Border Cave include a) a wooden digging stick; b) a wooden poison applicator; c) a bone arrow point decorated with a spiral incision filled with red pigment; d) a bone object with four sets of notches; e) a lump of beeswax; and f) ostrich eggshell beads and marine shell beads used as personal ornaments. (Francesco d’Errico and Lucinda Backwell / July 30, 2012)
Modern culture emerged in southern Africa at least 44,000 years ago, more than 20,000 years earlier than anthropologists had previously believed, researchers reported Monday.

That blossoming of technology and art occurred at roughly the same time that modern humans were migrating from Africa to Europe, where they soon displaced Neanderthals. Many of the characteristics of the ancient culture identified by anthropologists are still present in hunter-gatherer cultures of Africa today, such as the San culture of southern Africa, the researchers said.

The new evidence was provided by an international team of researchers excavating at an archaeological site called Border Cave in the foothills of the Lebombo Mountains on the border of KwaZulu-Natal in South Africa and Swaziland. The cave shows evidence of occupation by human ancestors going back more than 200,000 years, but the team reported in two papers in the Proceedings of the National Academy of Sciences that they were able to accurately date their discoveries to 42,000 to 44,000 years ago, a period known as the Later Stone Age or the Upper Paleolithic Period in Europe.

Among the organic — and thus datable — artifacts the team found in the cave were ostrich eggshell beads, thin bone arrowhead points, wooden digging sticks, a gummy substance called pitch that was used to attach bone and stone blades to wooden shafts, a lump of beeswax likely used for the same purpose, worked pig tusks that were probably used for planing wood, and notched bones used for counting.

“They adorned themselves with ostrich egg and marine shell beads, and notched bones for notational purposes,” said paleoanthropologist Lucinda Backwell of the University of the Witwatersrand in South Africa, a member of the team. “They fashioned fine bone points for use as awls and poisoned arrowheads. One point is decorated with a spiral groove filled with red ochre, which closely parallels similar marks that San make to identify their arrowheads when hunting.”

The very thin bone points are “very good evidence” for the use of bows and arrows, said co-author Paola Villa, a curator at the University of Colorado Museum of Natural History. Some of the bone points were apparently coated with ricinoleic acid, a poison made from the castor bean. “Such bone points could have penetrated thick hides, but the lack of ‘knock-down’ power means the use of poison probably was a requirement for successful kills,” she said.

The discovery also represents the first time pitch-making has been documented in South Africa, Villa said. The process requires burning peeled bark in the absence of air. The Stone Age residents probably dug holes in the ground, inserted the bark, lit it on fire, and covered the holes with stones, she said.

Science, Journalism, and the Hype Cycle: My piece in tomorrow’s Wall Street Journal (Discovery Magazine)

I think one of the biggest struggles a science writer faces is how to accurately describe the promise of new research. If we start promising that a preliminary experiment is going to lead to a cure for cancer, we are treating our readers cruelly–especially the readers who have cancer. On the other hand, scoffing at everything is not a sensible alternative, because sometimes preliminary experiments really do lead to great advances. In the 1950s, scientists discovered that bacteria can slice up virus DNA to avoid getting sick. That discovery led, some 30 years later, to biotechnology–to an industry that enabled, among other things, bacteria to produce human insulin.

This challenge was very much on my mind as I recently read two books, which I review in tomorrow’s Wall Street Journal. One is on gene therapy–a treatment that inspired wild expectations in the 1990s, then crashed, and now is coming back. The other is epigenetics, which seems to me to be in the early stages of the hype cycle. You can read the essay in full here. [see post below]

March 9th, 2012 5:33 PM by Carl Zimmer

Hope, Hype and Genetic Breakthroughs (Wall Street Journal)

By CARL ZIMMER

I talk to scientists for a living, and one of my most memorable conversations took place a couple of years ago with an engineer who put electrodes in bird brains. The electrodes were implanted into the song-generating region of the brain, and he could control them with a wireless remote. When he pressed a button, a bird singing in a cage across the lab would fall silent. Press again, and it would resume its song.

I could instantly see a future in which this technology brought happiness to millions of people. Imagine a girl blind from birth. You could implant a future version of these wireless electrodes in the back of her brain and then feed it images from a video camera.

As a journalist, I tried to get the engineer to explore what seemed to me to be the inevitable benefits of his research. To his great credit, he wouldn’t. He wasn’t even sure his design would ever see the inside of a human skull. There were just too many ways for it to go wrong. He wanted to be very sure that I understood that and that I wouldn’t claim otherwise. “False hope,” he warned me, “is a sinful thing.”

Stephen Voss. Gene therapy allowed this once-blind dog to see again.

Over the past two centuries, medical research has yielded some awesome treatments: smallpox wiped out with vaccines, deadly bacteria thwarted by antibiotics, face transplants. But when we look back across history, we forget the many years of failure and struggle behind each of these advances.

This foreshortened view distorts our expectations for research taking place today. We want to believe that every successful experiment means that another grand victory is weeks away. Big stories appear in the press about the next big thing. And then, as the years pass, the next big thing often fails to materialize. We are left with false hope, and the next big thing gets a reputation as the next big lie.

In 1995, a business analyst named Jackie Fenn captured this intellectual whiplash in a simple graph. Again and again, she had seen new advances burst on the scene and generate ridiculous excitement. Eventually they would reach what she dubbed the Peak of Inflated Expectations. Unable to satisfy their promise fast enough, many of them plunged into the Trough of Disillusionment. Their fall didn’t necessarily mean that these technologies were failures. The successful ones slowly emerged again and climbed the Slope of Enlightenment.

When Ms. Fenn drew the Hype Cycle, she had in mind dot-com-bubble technologies like cellphones and broadband. Yet it’s a good model for medical advances too. I could point to many examples of the medical hype cycle, but it’s hard to think of a better one than the subject of Ricki Lewis’s well-researched new book, “The Forever Fix”: gene therapy.

The concept of gene therapy is beguilingly simple. Many devastating disorders are the result of mutant genes. The disease phenylketonuria, for example, is caused by a mutation to a gene involved in breaking down a molecule called phenylalanine. The phenylalanine builds up in the bloodstream, causing brain damage. One solution is to eat a low-phenylalanine diet for your entire life. A much more appealing alternative would be to somehow fix the broken gene, restoring a person’s metabolism to normal.

In “The Forever Fix,” Ms. Lewis chronicles gene therapy’s climb toward the Peak of Inflated Expectations over the course of the 1990s. A geneticist and the author of a widely used textbook, she demonstrates a mastery of the history, even if her narrative sometimes meanders and becomes burdened by clichés. She explains how scientists learned how to identify the particular genes behind genetic disorders. They figured out how to load genes into viruses and then to use those viruses to insert the genes into human cells.

Stephen Voss. Alisha Bacoccini is tested on her ability to read letters, at UPenn Hospital, in Philadelphia, PA on Monday, June 23, 2008. Bacoccini is undergoing an experimental gene therapy trial to improve her sight.

By 1999, scientists had enjoyed some promising successes treating people—removing white blood cells from leukemia patients, for example, inserting working genes, and then returning the cells to their bodies. Gene therapy seemed as if it was on the verge of becoming standard medical practice. “Within the next decade, there will be an exponential increase in the use of gene therapy,” Helen M. Blau, the then-director of the gene-therapy technology program at Stanford University, told Business Week.

Within a few weeks of Ms. Blau’s promise, however, gene therapy started falling straight into the Trough. An 18-year-old man named Jesse Gelsinger who suffered from a metabolic disorder had enrolled in a gene-therapy trial. University of Pennsylvania scientists loaded a virus with a working version of an enzyme he needed and injected it into his body. The virus triggered an overwhelming reaction from his immune system and within four days Gelsinger was dead.

Gene therapy nearly came to a halt after his death. An investigation revealed errors and oversights in the design of Gelsinger’s trial. The breathless articles disappeared. Fortunately, research did not stop altogether. Scientists developed new ways of delivering genes without triggering fatal side effects. And they directed their efforts at one part of the body in particular: the eye. The eye is so delicate that inflammation could destroy it. As a result, it has evolved physical barriers that keep the body’s regular immune cells out, as well as a separate battalion of immune cells that are more cautious in their handling of infection.

It occurred to a number of gene-therapy researchers that they could try to treat genetic vision disorders with a very low risk of triggering horrendous side effects of the sort that had claimed Gelsinger’s life. If they injected genes into the eye, they would be unlikely to produce a devastating immune reaction, and any harmful effects would not be able to spread to the rest of the body.

Their hunch paid off. In 2009 scientists reported their first success with gene therapy for a congenital disorder. They treated a rare form of blindness known as Leber’s congenital amaurosis. Children who were once blind can now see.

As “The Forever Fix” shows, gene therapy is now starting its climb up the Slope of Enlightenment. Hundreds of clinical trials are under way to see if gene therapy can treat other diseases, both in and beyond the eye. It still costs a million dollars a patient, but that cost is likely to fall. It’s not yet clear how many other diseases gene therapy will help or how much it will help them, but it is clearly not a false hope.

Gene therapy produced so much excitement because it appealed to the popular idea that genes are software for our bodies. The metaphor only goes so far, though. DNA does not float in isolation. It is intricately wound around spool-like proteins called histones. It is studded with caps made of carbon, hydrogen and oxygen atoms, known as methyl groups. This coiling and capping of DNA allows individual genes to be turned on and off during our lifetimes.

The study of this extra layer of control on our genes is known as epigenetics. In “The Epigenetics Revolution,” molecular biologist Nessa Carey offers an enlightening introduction to what scientists have learned in the past decade about those caps and coils. While she delves into a fair amount of biological detail, she writes clearly and compellingly. As Ms. Carey explains, we depend for our very existence as functioning humans on epigenetics. We begin life as blobs of undifferentiated cells, but epigenetic changes allow some cells to become neurons, others muscle cells and so on.

Epigenetics also plays an important role in many diseases. In cancer cells, genes that are normally only active in embryos can reawaken after decades of slumber. A number of brain disorders, such as autism and schizophrenia, appear to involve the faulty epigenetic programming of genes in neurons.

Scientists got their first inklings about epigenetics decades ago, but in the past few years the field has become hot. In 2008 the National Institutes of Health pledged $190 million to map the epigenetic “marks” on the human genome. New biotech start-ups are trying to carry epigenetic discoveries into the doctor’s office. The FDA has approved cancer drugs that alter the pattern of caps on tumor-cell DNA. Some studies on mice hint that it may be possible to treat depression by taking a pill that adjusts the coils of DNA in neurons.

People seem to be getting giddy about the power of epigenetics in the same way they got giddy about gene therapy in the 1990s. No longer is our destiny written in our DNA: It can be completely overwritten with epigenetics. The excitement is moving far ahead of what the science warrants—or can ever deliver. Last June, an article on the Huffington Post eagerly seized on epigenetics, woefully mangling two biological facts: one, that experiences can alter the epigenetic patterns in the brain; and two, that sometimes epigenetic patterns can be passed down from parents to offspring. The article made a ridiculous leap to claim that we can use meditation to change our own brains and the brains of our children—and thereby alter the course of evolution: “We can jump-start evolution and leverage it on our own terms. We can literally rewire our brains toward greater compassion and cooperation.” You couldn’t ask for a better sign that epigenetics is climbing the Peak of Inflated Expectations at top speed.

The title “The Epigenetics Revolution” unfortunately adds to this unmoored excitement, but in Ms. Carey’s defense, the book itself is careful and measured. Still, epigenetics will probably be plunging soon into the Trough of Disillusionment. It will take years to see whether we can really improve our health with epigenetics or whether this hope will prove to be a false one.

The Forever Fix

By Ricki Lewis. St. Martin’s, 323 pages, $25.99

The Epigenetics Revolution

By Nessa Carey. Columbia, 339 pages, $26.95

—Mr. Zimmer’s books include “A Planet of Viruses” and “Evolution: Making Sense of Life,” co-authored with Doug Emlen, to be published in July.

Exterminate a species or two, save the planet (RT)

Published: 26 January, 2011, 14:43

Edited: 15 April, 2011, 05:18

Biologists have proposed a mathematical model that they hope will predict which species need to be eliminated from an unstable ecosystem, and in which order, to help it recover.

The counterintuitive idea of killing living things for the sake of biodiversity conservation comes from the complex connections within ecosystems. Eliminate a predator and its prey thrives, depleting whatever it in turn feeds on. Such “cascading” impacts along “food webs” can be unpredictable and sometimes catastrophic.

Sagar Sahasrabudhe and Adilson Motter of Northwestern University in the US have shown that in some food-web models, the timely removal or suppression of one or several species can do quite the opposite and mitigate the damage caused by a local extinction. The work is described in a paper in Nature magazine.

The trick is not an easy one, since the timing of removal is just as important as the choice of targeted species. A real-life example Sahasrabudhe and Motter use is that of island foxes on the Channel Islands off the coast of California. When feral pigs were introduced into the ecosystem, they attracted golden eagles, which preyed on the foxes as well. Simply reversing the situation by removing the pigs would have made the birds switch entirely to the foxes, eventually driving them extinct. Instead, conservation workers captured and relocated the eagles before eradicating the pigs, saving the fox population.
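To see why the sequence matters, here is a deliberately crude sketch, in Python, of the Channel Islands logic. The population numbers, growth rates and bookkeeping are invented purely for illustration; the researchers’ actual food-web models are systems of coupled population equations, not this toy.

```python
# Toy food web: golden eagles prey on feral pigs and island foxes alike.
def simulate(removals, steps=20):
    """removals maps a species name to the time step at which it is removed."""
    pop = {"pigs": 100.0, "foxes": 100.0, "eagles": 20.0}
    removed = set()
    for t in range(steps):
        for species, when in removals.items():
            if t == when:
                removed.add(species)
                pop[species] = 0.0
        prey = [s for s in ("pigs", "foxes") if s not in removed and pop[s] > 0]
        if "eagles" not in removed and prey:
            pressure = pop["eagles"] * 0.8 / len(prey)  # predation split over available prey
            for s in prey:
                pop[s] = max(0.0, pop[s] - pressure)
        for s in ("pigs", "foxes"):
            if s not in removed:
                pop[s] *= 1.05  # modest recovery each step
    return {k: round(v, 1) for k, v in pop.items()}

# Eradicating the pigs first concentrates all eagle predation on the foxes...
print("pigs first        :", simulate({"pigs": 0}))
# ...relocating the eagles before eradicating the pigs lets the foxes recover.
print("eagles, then pigs :", simulate({"eagles": 0, "pigs": 5}))
```

Run it and the first scenario drives the fox count to zero, while the second leaves the foxes growing – the same qualitative point the eagle relocation made in practice.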

Of course, conservation scientists are not going to start taking decisions based on the models straight away. Real ecosystems are not limited to predator–prey relationships; things like parasitism, pollination and nutrient dynamics have to be taken into account as well. On the other hand, ecosystems were thought to be too complex to be modeled at all as recently as eight years ago, Martinez says. The new work gives more confidence that such modelling will have practical uses in the near future.

The world at seven billion (BBC)

27 October 2011 Last updated at 23:08 GMT

File photograph of newborn babies in Lucknow, India, in July 2009

As the world population reaches seven billion people, the BBC’s Mike Gallagher asks whether efforts to control population have been, as some critics claim, a form of authoritarian control over the world’s poorest citizens.

The temperature is some 30C. The humidity stifling, the noise unbearable. In a yard between two enormous tea-drying sheds, a number of dark-skinned women patiently sit, each accompanied by an unwieldy looking cloth sack. They are clad in colourful saris, but look tired and shabby. This is hardly surprising – they have spent most of the day in nearby plantation fields, picking tea that will net them around two cents a kilo – barely enough to feed their large families.

Vivek Baid thinks he knows how to help them. He runs the Mission for Population Control, a project in eastern India which aims to bring down high birth rates by encouraging local women to get sterilised after their second child.

As the world reaches an estimated seven billion people, people like Vivek say efforts to bring down the world’s population must continue if life on Earth is to be sustainable, and if poverty and even mass starvation are to be avoided.

There is no doubting their good intentions. Vivek, for instance, has spent his own money on the project, and is passionate about creating a brighter future for India.

But critics allege that campaigners like Vivek – a successful and wealthy male businessman – have tended to live very different lives from those they seek to help, who are mainly poor women.

These critics argue that rich people have imposed population control on the poor for decades. And, they say, such coercive attempts to control the world’s population often backfired and were sometimes harmful.

Population scare

Most historians of modern population control trace its roots back to the Reverend Thomas Malthus, an English clergyman born in the 18th Century who believed that humans would always reproduce faster than Earth’s capacity to feed them.

Giving succour to the resulting desperate masses would only imperil everyone else, he said. So the brutal reality was that it was better to let them starve.

‘Plenty is changed into scarcity’

Thomas Malthus

From Thomas Malthus’ Essay on Population, 1803 edition:

A man who is born into a world already possessed – if he cannot get subsistence from his parents on whom he has a just demand, and if the society do not want his labour, has no claim of right to the smallest portion of food.

At nature’s mighty feast there is no vacant cover for him. She tells him to be gone, and will quickly execute her own orders, if he does not work upon the compassion of some of her guests. If these guests get up and make room for him, other intruders immediately appear demanding the same favour. The plenty that before reigned is changed into scarcity; and the happiness of the guests is destroyed by the spectacle of misery and dependence in every part of the hall.

Rapid agricultural advances in the 19th Century proved his main premise wrong, because food production generally more than kept pace with the growing population.

But the idea that the rich are threatened by the desperately poor has cast a long shadow into the 20th Century.

From the 1960s, the World Bank, the UN and a host of independent American philanthropic foundations, such as the Ford and Rockefeller foundations, began to focus on what they saw as the problem of burgeoning Third World numbers.

They believed that overpopulation was the primary cause of environmental degradation, economic underdevelopment and political instability.

Massive populations in the Third World were seen as presenting a threat to Western capitalism and access to resources, says Professor Betsy Hartmann of Hampshire College, Massachusetts, in the US.

“The view of the south is very much put in this Malthusian framework. It becomes just this powerful ideology,” she says.

In 1966, President Lyndon Johnson warned that the US might be overwhelmed by desperate masses, and he made US foreign aid dependent on countries adopting family planning programmes.

Other wealthy countries such as Japan, Sweden and the UK also began to devote large amounts of money to reducing Third World birth rates.

‘Unmet need’

What virtually everyone agreed was that there was a massive demand for birth control among the world’s poorest people, and that if they could get their hands on reliable contraceptives, runaway population growth might be stopped.

But with the benefit of hindsight, some argue that this so-called unmet need theory put disproportionate emphasis on birth control and ignored other serious needs.

Graph of world population figures

“It was a top-down solution,” says Mohan Rao, a doctor and public health expert at Delhi’s Jawaharlal Nehru University.

“There was an unmet need for contraceptive services, of course. But there was also an unmet need for health services and all kinds of other services which did not get attention. The focus became contraception.”

Had the demographic experts worked at the grass-roots instead of imposing solutions from above, suggests Adrienne Germain, formerly of the Ford Foundation and then the International Women’s Health Coalition, they might have achieved a better picture of the dilemmas facing women in poor, rural communities.

“Not to have a full set of health services meant women were either unable to use family planning, or unwilling to – because they could still expect half their kids to die by the age of five,” she says.

India’s sterilisation ‘madness’

File photograph of Sanjay and Indira Gandhi in 1980

Indira Gandhi and her son Sanjay (above) presided over a mass sterilisation campaign. From the mid-1970s, Indian officials were set sterilisation quotas, and sought to ingratiate themselves with superiors by exceeding them. Stories abounded of men being accosted in the street and taken away for the operation. The head of the World Bank, Robert McNamara, congratulated the Indian government on “moving effectively” to deal with high birth rates. Funding was increased, and the sterilising went on.

In Delhi, some 700,000 slum dwellers were forcibly evicted, and given replacement housing plots far from the city centre, frequently on condition that they were either sterilised or produced someone else for the operation. In poorer agricultural areas, whole villages were rounded up for sterilisation. When residents of one village protested, an official is said to have threatened air strikes in retaliation.

“There was a certain madness,” recalls Nina Puri of the Family Planning Association of India. “All rationality was lost.”

Us and them

In 1968, the American biologist Paul Ehrlich caused a stir with his bestselling book, The Population Bomb, which suggested that it was already too late to save some countries from the dire effects of overpopulation, which would result in ecological disaster and the deaths of hundreds of millions of people in the 1970s.

Instead, governments should concentrate on drastically reducing population growth. He said financial assistance should be given only to those nations with a realistic chance of bringing birth rates down. Compulsory measures were not to be ruled out.

Western experts and local elites in the developing world soon imposed targets for reductions in family size, and used military analogies to drive home the urgency, says Matthew Connelly, a historian of population control at Columbia University in New York.

“They spoke of a war on population growth, fought with contraceptive weapons,” he says. “The war would entail sacrifices, and collateral damage.”

Such language betrayed a lack of empathy with their subjects, says Ms Germain: “People didn’t talk about people. They talked of acceptors and users of family planning.”

Emergency measures

Critics of population control had their say at the first ever UN population conference in 1974.

Karan Singh, India’s health minister at the time, declared that “development is the best contraceptive”.

But just a year later, Mr Singh’s government presided over one of the most notorious episodes in the history of population control.

In June 1975, the Indian premier, Indira Gandhi, declared a state of emergency after accusations of corruption threatened her government. Her son Sanjay used the measure to introduce radical population control measures targeted at the poor.

The Indian emergency lasted less than two years, but in 1975 alone, some eight million Indians – mainly poor men – were sterilised.

Yet, for all the official programmes and coercion, many poor women kept on having babies.

And where they did not, it arguably had less to do with coercive population control than with development, just as Karan Singh had argued in 1974, says historian Matt Connelly.

For example, in India, a disparity in birth rates could already be observed between the impoverished northern states and more developed southern regions like Kerala, where women were more likely to be literate and educated, and their offspring more likely to be healthy.

Women there realised that they could have fewer births and still expect to see their children survive into adulthood.

China: ‘We will not allow your baby to live’

Steven Mosher was a Stanford University anthropologist working in rural China who witnessed some of the early, disturbing moments of Beijing’s One Child Policy.

“I remember very well the evening of 8 March, 1980. The local Communist Party official in charge of my village came over waving a government document. He said: ‘The Party has decided to impose a cap of 1% on population growth this year.’ He said: ‘We’re going to decide who’s going to be allowed to continue their pregnancy and who’s going to be forced to terminate their pregnancy.’ And that’s exactly what they did.”

“These were women in the late second and third trimester of pregnancy. There were several women just days away from giving birth. And in my hearing, a party official said: ‘Do not think that you can simply wait until you go into labour and give birth, because we will not allow your baby to live. You will go home alone’.”

Total control

By now, this phenomenon could be observed in another country too – one that would nevertheless go on to impose the most draconian population control of all.

The One Child Policy is credited with preventing some 400 million births in China, and remains in place to this day. In 1983 alone, more than 16 million women and four million men were sterilised, and 14 million women received abortions.

Assessed by numbers alone, it is said to be by far the most successful population control initiative. Yet it remains deeply controversial, not only because of the human suffering it has caused.

A few years after its inception, the policy was relaxed slightly to allow rural couples two children if their first was not a boy. Boy children are prized, especially in the countryside where they provide labour and care for parents in old age.

But modern technology allows parents to discover the sex of the foetus, and many choose to abort if they are carrying a girl. In some regions, there is now a serious imbalance between men and women.

Moreover, since Chinese fertility was already in decline at the time the policy was implemented, some argue that it bears less responsibility for China’s falling birth rate than its supporters claim.

“I don’t think they needed to bring it down further,” says Indian demographer AR Nanda. “It would have happened at its own slow pace in another 10 years.”

Backlash

In the early 1980s, objections to the population control movement began to grow, especially in the United States.

In Washington, the new Reagan administration removed financial support for any programmes that involved abortion or sterilisation.

“If you give women the tools they need – education, employment, contraception, safe abortion – then they will make the choices that benefit society”

Adrienne Germain

The broad alliance to stem birth rates was beginning to dissolve, and the debate became more polarised along political lines.

While some on the political right had moral objections to population control, some on the left saw it as neo-colonialism.

Faith groups condemned it as a Western attack on religious values, but women’s groups feared changes would mean poor women would be even less well-served.

By the time of a major UN conference on population and development in Cairo in 1994, women’s groups were ready to strike a blow for women’s rights, and they won.

The conference adopted a 20-year plan of action, known as the Cairo consensus, which called on countries to recognise that ordinary women’s needs – rather than demographers’ plans – should be at the heart of population strategies.

After Cairo

Today’s record-breaking global population hides a marked long-term trend towards lower birth rates, as urbanisation, better health care, education and access to family planning all affect women’s choices.

With the exception of sub-Saharan Africa and some of the poorest parts of India, we are now having fewer children than we once did – in some cases, failing even to replace ourselves in the next generation. And although total numbers are set to rise still further, the peak is now in sight.

Chinese poster from the 1960s of a mother and baby, captioned: “Practicing birth control is beneficial for the protection of the health of mother and child.” China promoted birth control before implementing its one-child policy.

Assuming that this trend continues, total numbers will one day level off, and even fall. As a result, some believe the sense of urgency that once surrounded population control has subsided.

The term population control itself has fallen out of fashion, as it was deemed to have authoritarian connotations. Post-Cairo, the talk is of women’s rights and reproductive rights, meaning the right to a free choice over whether or not to have children.

According to Adrienne Germain, that is the main lesson we should learn from the past 50 years.

“I have a profound conviction that if you give women the tools they need – education, employment, contraception, safe abortion – then they will make the choices that benefit society,” she says.

“If you don’t, then you’ll just be in an endless cycle of trying to exert control over fertility – to bring it up, to bring it down, to keep it stable. And it never comes out well. Never.”

Nevertheless, there remain to this day schemes to sterilise the less well-off, often in return for financial incentives. In effect, say critics, this amounts to coercion, since the very poor find it hard to reject cash.

“The people proposing this argue ‘Don’t worry, everything’s fine now we have voluntary programmes on the Cairo model’,” says Betsy Hartmann.

“But what they don’t understand is the profound difference in power between rich and poor. The people who provide many services in poor areas are already prejudiced against the people they serve.”

Work in progress

For Mohan Rao, it is an example of how even the Cairo consensus fails to take account of the developing world.

“Cairo had some good things,” he says. “However Cairo was driven largely by First World feminist agendas. Reproductive rights are all very well, but [there needs to be] a whole lot of other kinds of enabling rights before women can access reproductive rights. You need rights to food, employment, water, justice and fair wages. Without all these you cannot have reproductive rights.”

Perhaps, then, the humanitarian ideals of Cairo are still a work in progress.

Meanwhile, Paul Ehrlich has also amended his view of the issue.

If he were to write his book today, “I wouldn’t focus on the poverty-stricken masses”, he told the BBC.

“I would focus on there being too many rich people. It’s crystal clear that we can’t support seven billion people in the style of the wealthier Americans.”

Mike Gallagher is the producer of the radio programme Controlling People on the BBC World Service

Where do you fit into 7 billion?

The world’s population is expected to hit seven billion in the next few weeks. After growing very slowly for most of human history, the number of people on Earth has more than doubled in the last 50 years.

Archaeologists Find Sophisticated Blade Production Much Earlier Than Originally Thought (Tel Aviv University)

Monday, October 17, 2011
American Friends of Tel Aviv University

Blade manufacturing “production lines” existed as much as 400,000 years ago, say TAU researchers

Archaeology has long associated advanced blade production with the Upper Palaeolithic period, about 30,000-40,000 years ago, linked with the emergence of Homo sapiens and cultural features such as cave art. Now researchers at Tel Aviv University have uncovered evidence that “modern” blade production was also an element of the Amudian industry of the late Lower Palaeolithic period, 200,000-400,000 years ago, part of the Acheulo-Yabrudian cultural complex, a geographically limited group of hominins who lived in what is now Israel, Lebanon, Syria and Jordan.

Prof. Avi Gopher, Dr. Ran Barkai and Dr. Ron Shimelmitz of TAU’s Department of Archaeology and Ancient Near Eastern Civilizations say that large numbers of long, slender cutting tools were discovered at Qesem Cave, located outside of Tel Aviv, Israel. This discovery challenges the notion that blade production is exclusively linked with recent modern humans.

The blades, which were described recently in the Journal of Human Evolution, are the product of a well planned “production line,” says Dr. Barkai. Every element of the blades, from the choice of raw material to the production method itself, points to a sophisticated tool production system to rival the blade technology used hundreds of thousands of years later.

An innovative product

Though blades have been found in earlier archaeological sites in Africa, Dr. Barkai and Prof. Gopher say that the blades found in Qesem Cave distinguish themselves through the sophistication of the technology used for manufacturing and mass production.

Evidence suggests that the process began with the careful selection of raw materials. The hominins collected raw material from the surface or quarried it from underground, seeking specific pieces of flint that would best fit their blade-making technology, explains Dr. Barkai. With the right blocks of material, they were able to use a systematic and efficient method to produce the desired blades, which involved powerful and controlled blows that took into account the mechanics of stone fracture. Most of the blades were made to have one sharp cutting edge and one naturally dull edge so they could be easily gripped in a human hand.

This is perhaps the first time that such technology was standardized, notes Prof. Gopher, who points out that the blades were produced with relatively small amounts of waste materials. This systematic industry enabled the inhabitants of the cave to produce tools, normally considered costly in raw material and time, with relative ease.

Thousands of these blades have been discovered at the site. “Because they could be produced so efficiently, they were almost used as expendable items,” he says.

Prof. Cristina Lemorini from Sapienza University of Rome conducted a closer analysis of markings on the blades under a microscope and conducted a series of experiments determining that the tools were primarily used for butchering.

Modern tools a part of modern behaviors

According to the researchers, this innovative industry and technology is one of a score of new behaviors exhibited by the inhabitants of Qesem Cave. “There is clear evidence of daily and habitual use of fire, which is news to archaeologists,” says Dr. Barkai. Previously, it was unknown if the Amudian culture made use of fire, and to what extent. There is also evidence of a division of space within the cave, he notes. The cave inhabitants used each space in a regular manner, conducting specific tasks in predetermined places. Hunted prey, for instance, was taken to an appointed area to be butchered, barbequed and later shared within the group, while the animal hide was processed elsewhere.

Religion: Sacred Electronics (Time Magazine)

Monday, Dec. 31, 1956

The five machines stood, rectangular, silver-green, silent. They were obviously not thinking about anything at all as Archbishop Giovanni Battista Montini of Milan raised his hand to bless them.

“It would seem at first sight,” said the archbishop, “that automation, which transfers to machines operations that were previously reserved to man’s genius and labor, so that machines think and remember and correct and control, would create a vaster difference between man and the contemplation of God. But this isn’t so. It mustn’t be so. By blessing these machines, we are causing a contract to be made and a current to run between the one pole, religion, and the other, technology . . . These machines become a modern means of contact between God and man.”

So last week at the Jesuit philosophical institute known as the Aloysianum (for St. Aloysius Gonzaga) in Gallarate, near Milan, man put his electronic brains to work for the glory of God. The experiment began ten years ago, when a young Jesuit named Roberto Busa at Rome’s Gregorian University chose an extraordinary project for his doctor’s thesis in theology: sorting out the different shades of meaning of every word used by St. Thomas Aquinas. But when he found that Aquinas had written 13 million words, Busa sadly settled for an analysis of only one word—the various meanings assigned by St. Thomas to the preposition “in.” Even this took him four years, and it irked him that the original task remained undone.

With permission from Jesuit General John B. Janssens himself, Father Busa took his problem to the U.S. and to International Business Machines. When he heard what Busa wanted, IBM Founder Thomas J. Watson threw up his hands. “Even if you had time to waste for the rest of your life, you couldn’t do a job like that,” he said. “You seem to be more go-ahead and American than we are!”

But in seven years IBM technicians in the U.S. and in Italy, working with Busa, devised a way to do the job. The complete works of Aquinas will be typed onto punch cards; the machines will then work through the words and produce a systematic index of every word St. Thomas used, together with the number of times it appears, where it appears, and the six words immediately preceding and following each appearance (to give the context). This will take the machines 8,125 hours; the same job would be likely to take one man a lifetime.
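Stripped of its punch cards, the index Father Busa commissioned is what programmers now call a keyword-in-context concordance, and its bookkeeping is easy to sketch. The snippet below is only an illustration of the idea – the sample sentence, the function name and the six-word window are stand-ins, not IBM’s actual workflow:

```python
from collections import defaultdict

def build_concordance(text, window=6):
    """Index every word: how often it appears, where it appears, and the
    `window` words immediately before and after each occurrence."""
    words = text.lower().split()
    index = defaultdict(list)
    for i, word in enumerate(words):
        index[word].append({
            "position": i,
            "before": words[max(0, i - window):i],
            "after": words[i + 1:i + 1 + window],
        })
    return index

# A stand-in sentence; the real input would be the 13 million words of Aquinas.
sample = ("in the beginning of his treatise the author argues that "
          "grace perfects nature and does not destroy it")
concordance = build_concordance(sample)

entry = concordance["the"]
print(f"'the' appears {len(entry)} times")
for occurrence in entry:
    print(occurrence["position"],
          " ".join(occurrence["before"]), "| the |", " ".join(occurrence["after"]))
```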

Next job for the scriptural brain: the Dead Sea Scrolls. In these and other ancient documents, gaps can often be filled in by examining the words immediately preceding and following the gap and determining what other words are most frequently associated with them in the rest of the text. “I am praying to God,” said Father Busa last week, “for ever faster, ever more accurate machines.”
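The gap-filling approach can be sketched in the same spirit: score every word in the surviving text by how often it appears between neighbours like those flanking the gap. Again, this is a toy with an invented corpus, not a claim about how scholars of the Scrolls actually work:

```python
from collections import Counter

def guess_gap(corpus_words, before, after, window=3):
    """Rank candidate words for a gap by how often each word elsewhere in the
    text is surrounded by the same neighbours that flank the gap."""
    scores = Counter()
    for i, w in enumerate(corpus_words):
        context_before = corpus_words[max(0, i - window):i]
        context_after = corpus_words[i + 1:i + 1 + window]
        overlap = len(set(before) & set(context_before)) + len(set(after) & set(context_after))
        if overlap:
            scores[w] += overlap
    return scores.most_common(3)

corpus = ("the lord is my shepherd i shall not want he makes me lie down "
          "in green pastures he leads me beside still waters").split()
print(guess_gap(corpus, before=["he", "makes", "me"], after=["down", "in", "green"]))
```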

Read more: http://www.time.com/time/magazine/article/0,9171,867529,00.html#ixzz1UkZsIT6S

American schools are gradually abandoning cursive writing (Valor Econômico)

JC e-mail 4302, July 18, 2011.

Most states no longer require it to be taught; experts see a trend.

The state of Indiana, in the American Midwest, has ended the requirement that its schools teach cursive writing, the style in which words are formed from letters joined at their ends. It thereby joined a growing wave in the United States of giving curricular priority to other skills now considered more useful, such as typing text on computer keyboards.

With the change, Indiana aligns itself with a common teaching standard adopted by 46 American states. That standard makes no mention of cursive writing, but it does recommend the teaching of typing. It is an acknowledgment that, with new technologies such as computers and smartphones, people need to write in cursive less and less, whether at work or in their daily activities. It is enough to learn to write by hand – a requirement that is still part of Indiana’s curriculum and of the common standards adopted by the states – whether in print letters, cursive, or a mix of the two styles.

It also reflects what many in the United States see as an overloaded school curriculum, with never enough time to teach subjects considered essential for passing college admission tests, such as mathematics and reading. National surveys of how time is spent in classrooms show that 90% of teachers in the first through third grades of elementary school devote only 60 minutes per week to developing handwriting.

The trend toward abandoning the teaching of cursive is viewed with concern by some Americans. For some, the new generations will have more difficulty with basic tasks such as filling out and signing checks. Others argue that young people will not be able to read the Declaration of Independence in the original, written entirely in cursive – an argument that appeals to American patriotism.

Richard S. Christen, a professor at the University of Portland’s School of Education, in Oregon, is one of those who say schools should think twice before dropping the teaching of cursive, even though he finds it increasingly difficult to argue that it is a skill with practical value.

Handout photo – Richard Christen, professor at the University of Portland’s School of Education. “If you go back to the 17th or 19th century, it would have been impossible to do business without scribes, who were carefully trained in the technique of writing by hand to record the facts,” Christen told Valor. “But today the practical value of that is much smaller.”

He notes, however, that cursive also has an aesthetic value in itself and touches on important values such as civility. “Cursive is a way for people to communicate with one another elegantly, valuing beauty,” he says. “It is a chance for children to do something with their hands every day, paying attention to elements of beauty such as shapes, contours and lines.” It also encourages children to pay attention to how they address and communicate with other people.

For Professor Steve Graham of Vanderbilt University, one of the leading American authorities on the subject, the central issue is not necessarily cursive, but preserving room in the curriculum for handwriting in general.

Despite all the noise around new technologies, the reality, he tells Valor, is that most children in American schools still do their classroom work by hand, since in general there is not yet a computer for each of them. In such an environment, good handwriting is crucial to learning well and to academic success, even if computers, iPads and smartphones dominate the world outside the classroom.

Recent research led by Graham shows that when schoolwork or exams are presented in poor handwriting, grades tend to be lower, regardless of content. “People form opinions about the quality of your ideas based on the quality of your handwriting,” says Graham.

In that study, students wrote essays, which were then graded on a scale of 0 to 100. The next step was to take middling essays, which had scored 50, and reproduce their content in two versions, one in impeccable handwriting and the other in poor though legible handwriting. When graded again, the same middling essay received very good marks when written in careful lettering and lower marks when written in scrawl.

Handwriting skill also influences a child’s ability to produce good written content. Speed is crucial. When writing becomes an automatic process, Graham says, ideas flow more quickly from the brain to the paper and so are not lost along the way. People well trained in handwriting do it all automatically and do not need to think about what the pencil is doing – leaving more neurons free for more important things, such as reflecting on the message, organizing ideas and forming sentences and paragraphs.

These are good arguments for not abandoning handwriting instruction in favor of typing. But which technique matters more: cursive or simply writing by hand? Graham says that printing is generally more legible than cursive, while cursive is faster than printing. “The differences are not big enough to justify much debate,” he said. “What matters is having a handwriting style that is both legible and fast.”

In the future, however, he acknowledges, teaching handwriting may become less important as one computer per student becomes universal. Teaching typing, on the other hand, is becoming ever more relevant. “They are very good with their phones, with Twitter, but not with computers,” says Graham.

In Brazil, educators are divided over the benefits – Parents disappointed with their children’s learning might say this is nothing more than a byzantine debate over whether it is better to try to decipher scrawls in a doctor’s handwriting or text messages encrypted in a newspeak that has abolished vowels. In any case, opinion is also divided among Brazilian educators when it comes to the merits of abandoning the teaching of cursive.

For Telma Weisz, who holds a doctorate in the psychology of learning and development from the University of São Paulo (USP) and is pedagogical supervisor of the São Paulo state government’s Ler e Escrever program, “handwriting is a leftover from the Middle Ages.” “From a learning standpoint, nothing is lost by not using handwriting,” she says. According to her, cursive helps students memorize the spelling of words, but a word-processing program is just as effective, “with more resources, in fact.”

Weisz says the problem is not setting cursive aside and diving fully into typing, but rather that “in Brazil there are no conditions for doing that. We have schools with no electricity, let alone schools where every student has a computer.”

João Batista Araujo e Oliveira, who holds a doctorate in educational research from Florida State University (US) and is president of Instituto Alfa e Beto, an NGO dedicated to literacy, disagrees with Weisz. “There are studies comparing children who learned with cursive and children who learned on a keyboard, and those who write more by hand retain the spelling of words better,” he says.

Even so, Oliveira does not take a hard line against the policy adopted by most American states of not requiring cursive instruction. “These things do change; it is inevitable. Whenever you have a new technology you look for a more efficient way to move forward. Cursive, for example, is a great advance over print letters, because the student does not lift the pencil from the paper.”

Oliveira believes that before making such a change one must think about the “side effects,” citing as an example multiplication tables and the calculator. “To pay for a taxi, a coffee, you have to do the math in your head. Teaching only with a calculator deprives citizens of a competence that provides great social efficiency.”

Luis Marcio Barbosa, director-general of the Equipe school in São Paulo, rules out adopting the policy at his school. “There is a whole set of learning that comes along with learning cursive that is essential for children’s development, having to do with motor skills and spatial organization.” And besides, he says, “children can learn both things; one need not come at the expense of the other.”

Lingodroid Robots Invent Their Own Spoken Language (IEEE Spectrum)

By EVAN ACKERMAN  /  TUE, MAY 17, 2011


When robots talk to each other, they’re not generally using language as we think of it, with words to communicate both concrete and abstract concepts. Now Australian researchers are teaching a pair of robots to communicate linguistically like humans by inventing new spoken words, a lexicon that the roboticists can teach to other robots to generate an entirely new language.

Ruth Schulz and her colleagues at the University of Queensland and Queensland University of Technology call their robots the Lingodroids. The robots consist of a mobile platform equipped with a camera, laser range finder, and sonar for mapping and obstacle avoidance. The robots also carry a microphone and speakers for audible communication between them.

To understand the concept behind the project, consider a simplified case of how language might have developed. Let’s say that all of a sudden you wake up somewhere with your memory completely wiped, not knowing English, Klingon, or any other language. And then you meet some other person who’s in the exact same situation as you. What do you do?

What might very well end up happening is that you invent some random word to describe where you are right now, and then point at the ground and tell the word to the other person, establishing a connection between this new word and a place. And this is exactly what the Lingodroids do. If one of the robots finds itself in an unfamiliar area, it’ll make up a word to describe it, choosing a random combination from a set of syllables. It then communicates that word to other robots that it meets, thereby defining the name of a place.


From this fundamental base, the robots can play games with each other to reinforce the language. For example, one robot might tell the other robot “kuzo,” and then both robots will race to where they think “kuzo” is. When they meet at or close to the same place, that reinforces the connection between a word and a location. And from “kuzo,” one robot can ask the other about the place they just came from, resulting in words for more abstract concepts like direction and distance:

[Image: the words the robots agreed on for direction and distance concepts. For example, “vupe hiza” would mean a medium long distance to the east.]
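The mechanics of the place-naming game are simple enough to sketch. Everything in the snippet below – the syllable list, the averaging rule for repeated hearings, the 200 rounds – is an invented stand-in for illustration, not the published Lingodroid algorithm:

```python
import random

SYLLABLES = ["ku", "zo", "vu", "pe", "hi", "za", "re", "lo"]

class Lingodroid:
    def __init__(self):
        self.lexicon = {}  # word -> (x, y) estimate of the place it names

    def word_for(self, location):
        """Name a location, inventing a word if the place is unfamiliar."""
        for word, (x, y) in self.lexicon.items():
            if abs(x - location[0]) + abs(y - location[1]) < 1.0:
                return word
        word = "".join(random.choice(SYLLABLES) for _ in range(2))
        self.lexicon[word] = location
        return word

    def hear(self, word, location):
        """Tie a heard word to the current spot, averaging repeated hearings."""
        if word in self.lexicon:
            x, y = self.lexicon[word]
            self.lexicon[word] = ((x + location[0]) / 2, (y + location[1]) / 2)
        else:
            self.lexicon[word] = location

# "Where-are-we" game: both robots stand at the same spot, the speaker names it,
# and the hearer attaches that word to the spot.
a, b = Lingodroid(), Lingodroid()
for _ in range(200):
    spot = (random.uniform(0, 10), random.uniform(0, 10))
    speaker, hearer = random.sample([a, b], 2)
    hearer.hear(speaker.word_for(spot), spot)

shared = set(a.lexicon) & set(b.lexicon)
print(len(shared), "place names in common, e.g.", sorted(shared)[:3])
```

In the real system, the robots go on to play further games about the places they have just come from, which is how the more abstract direction and distance terms such as “vupe hiza” arise.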

After playing several hundred games to develop their language, the robots agreed on directions within 10 degrees and distances within 0.375 meters. And using just their invented language, the robots created spatial maps (including areas that they were unable to explore) that agree remarkably well:

[Image: the two robots’ spatial maps, built from their shared place names.]

In the future, researchers hope to enable the Lingodroids to “talk” about even more elaborate concepts, like descriptions of how to get to a place or the accessibility of places on the map. Ultimately, techniques like this may help robots to communicate with each other more effectively, and may even enable novel ways for robots to talk to humans.

Schulz and her colleagues — Arren Glover, Michael J. Milford, Gordon Wyeth, and Janet Wiles — describe their work in a paper, “Lingodroids: Studies in Spatial Cognition and Language,” presented last week at the IEEE International Conference on Robotics and Automation (ICRA), in Shanghai.

[Original link here.]