Tag archive: Technology

Why Do the Anarcho-Primitivists Want to Abolish Civilization? (io9)

George Dvorsky

Sept 12, 2014 11:28am

Anarcho-primitivists are the ultimate Luddites — ideologues who favor complete technological relinquishment and a return to a hunter-gatherer lifestyle. We spoke to a leading proponent to learn more about this idea and why he believes civilization was our worst mistake.

Philosopher John Zerzan wants you to get rid of all your technology — your car, your mobile phone, your computer, your appliances — the whole lot. In his perfect world, you’d be stripped of all your technological creature comforts, reduced to a lifestyle that harkens back to when our hunter-gatherer ancestors romped around the African plains.

Photo via Cast/John Zerzan/CC

You see, Zerzan is an outspoken advocate of anarcho-primitivism, a philosophical and political movement predicated on the assumption that the move from hunter-gatherer to agricultural subsistence was a stupendously awful mistake — an existential paradigm shift that subsequently gave rise to social stratification, coercion, alienation, and unchecked population growth. It’s only through the abandonment of technology, and a return to “non-civilized” ways of being — a process anarcho-primitivists call “wilding” — that we can eliminate the host of social ills that now plagues the human species.

As an anarchist, Zerzan is opposed to the state, along with all forms of hierarchical and authoritarian relations. The crux of his argument, one inspired by Karl Marx and Ivan Illich, is that the advent of technologies irrevocably altered the way humans interact with each other. There’s a huge difference, he argues, between simple tools that stay under the control of the user, and those technological systems that draw the user under the control of those who produce the tools. Zerzan says that technology has come under the control of an elite class, thus giving rise to alienation, domestication, and symbolic thought.

Zerzan is not alone in his views. When the radical Luddite Ted “the Unabomber” Kaczynski was on trial for killing three people and injuring 23, Zerzan became his confidant, offering support for his ideas but condemning his actions. (Zerzan recently stated that he and Kaczynski are “not on terms anymore.”) Radicalized groups have also sprung up promoting similar views, including a Mexican group called the Individualists Tending Toward the Wild — a group with the objective “to injure or kill scientists and researchers (by the means of whatever violent act) who ensure the Technoindustrial System continues its course.” Back in 2011, this group sent several mail bombs to nanotechnology labs and researchers in Latin America, killing two people.

Looking ahead to the future, and considering the scary potential for advanced technologies such as artificial superintelligence and robotics, there’s the very real possibility that these sorts of groups will start to become more common — and more radicalized (similar to the radical anti-technology terrorist group Revolutionary Independence From Technology (RIFT) that was portrayed in the recent Hollywood film, Transcendence).

But Zerzan does not promote or condone violence. He’d rather see the rise of the “Future Primitive” come about voluntarily. To that end, he uses technology — like computers and phones — to get his particular message across (he considers it a necessary evil). That’s how I was able to conduct this interview with him, which we did over email.

io9: Anarcho-primitivism is as much a critique of modernity as it is a prescription for our perceived ills. Can you describe the kind of future you’re envisioning?

Zerzan: I want to see mass society radically decentralized into face-to-face communities. Only then can the individual be both responsible and autonomous. As Paul Shepard said, “Back to the Pleistocene!”

As an ideology, primitivism is fairly self-explanatory. But why add the ‘anarcho’ part to it? How can you be so sure there’s a link between more primitive states of being and the diminishment of power relations and hierarchies among complex primates?

The anarcho part refers to the fact that this question, this approach, arose mainly within an anarchist or anti-civilization milieu. Everyone I know in this context is an anarchist. There are no guarantees for the future, but we do know that egalitarian and anti-hierarchical relations were the norm with Homo for 1-2 million years. This is indisputable in the anthropological literature.

Then how do you distinguish between tools that are acceptable for use and those that give rise to hierarchical relations?

Those tools that involve the least division of labor or specialization involve or imply qualities such as intimacy, equality, flexibility. With increased division of labor we moved away from tools to systems of technology, where the dominant qualities or values are distancing, reliance on experts, inflexibility.

But tool use and symbolic language are indelible attributes of Homo sapiens — these are our distinguishing features. Aren’t you just advocating for biological primitivism — a kind of devolution of neurological characteristics?

Anthropologists (e.g. Thomas Wynn) seem to think that Homo had an intelligence equal to ours at least a million years ago. Thus neurology doesn’t enter into it. Tool use, of course, has been around from before the beginning of Homo some 3 million years ago. As for language, it’s quite debatable as to when it emerged.

Early humans had a workable, non-destructive approach that, generally speaking, did not involve much work, did not objectify women, and was anti-hierarchical. Does this sound backward to you?

You’ve got some provocative ideas about language and how it demeans or diminishes experience.

Every symbolic dimension — time, language, art, number — is a mediation between ourselves and reality. We lived more directly, immediately before these dimensions arrived, fairly recently. Freud, the arch-rationalist, thought that we once communicated telepathically, though I concede that my critique of language is the most speculative of my forays into the symbolic.

You argue that a hunter-gatherer lifestyle is as close to the ideal state of being as is possible. The Amish, on the other hand, have drawn the line at industrialization, and they’ve subsequently adopted an agrarian lifestyle. What is it about the advent of agriculture and domestication that’s so problematic?

In the 1980s Jared Diamond called the move to domestication or agriculture “the worst mistake humans ever made.” A fundamental shift away from taking what nature gives to the domination of nature. The inner logic of domestication of animals and plants is an unbroken progression, which always deepens and extends the ethos of control. Now of course control has reached the molecular level with nanotechnology, and the sphere of what I think are the very unhealthy fantasies of transhumanist neuroscience and AI.

In which ways can anarcho-primitivism be seen as the ultimate green movement? Do you see it that way?

We are destroying the biosphere at a fearful rate. Anarcho-primitivism seeks the end of the primary institutions that drive the destruction: domestication/civilization and industrialization. To accept “green” and “sustainable” illusions ignores the causes of the all-enveloping undermining of nature, including our inner nature. Anarcho-primitivism insists on a deeper questioning and helps identify the reasons for the overall crisis.

Tell us about the anarcho-primitivist position on science.

The reigning notion of what is science is an objectifying method, which magnifies the subject-object split. “Science” for hunter-gatherers is very basically different. It is based on participation with living nature, intimacy with it. Science in modernity mainly breaks reality down into now dead, inert fragments to “unlock” its “secrets.” Is that superior to a forager who knows a number of things from the way a blade of grass is bent?

Well, being trapped in an endless cycle of Darwinian processes doesn’t seem like the most enlightened or moral path for our species to take. Civilization and industrialization have most certainly introduced innumerable problems, but our ability to remove ourselves from the merciless “survival of the fittest” paradigm is a no-brainer. How could you ever convince people to relinquish the gifts of modernity — things like shelter, food on-demand, vaccines, pain relief, anesthesia, and ambulances at our beck and call?

It is reality that will “convince” people — or not. Conceivably, denial will continue to rule the day. But maybe only up to a point. If/when it can be seen that their reality is worsening qualitatively in every sphere a new perspective may emerge. One that questions the deep un-health of mass society and its foundations. Again, non-robust, de-skilled folks may keep going through the motions, stupefied by techno-consumerism and drugs of all kinds. Do you think that can last?

Most futurists would answer that things are getting better — and that through responsible foresight and planning we’ll be able to create the future we imagine.

“Things are getting better”? I find this astounding. The immiseration surrounds us: anxiety, depression, stress, insomnia, etc. on a mass scale, the rampage shootings now commonplace. The progressive ruin of the natural world. I wonder how anyone who even occasionally picks up a newspaper can be so in the dark. Of course I haven’t scratched the surface of how bad it is becoming. It is deeply irresponsible to promote such ignorance and projections.

That’s a very presentist view. Some left-leaning futurists argue, for example, that ongoing technological progress (both in robotics and artificial intelligence) will lead to an automation revolution — one that will free us from dangerous and demeaning work. It’s very possible that we’ll be able to invent our way out of the current labor model that you’re so opposed to.

Technological advances have only meant MORE work. That is the record. In light of this it is not quite cogent to promise that a more technological mass society will mean less work. Again, reality anyone??

Transhumanists advocate for the iterative improvement of the human species, things like enhanced intelligence and memory, the elimination of psychological disorders (including depression), radical life extension, and greater physical capacities. Tell us why you’re so opposed to these things.

Why am I opposed to these things? Let’s take them in order:

Enhanced intelligence and memory? I think it is now quite clear that advancing technology in fact makes people stupider and reduces memory. Attention span is lessened by Tweet-type modes, abbreviated, illiterate means of communicating. People are being trained to stare at screens at all times, a techno-haze that displaces life around them. I see zombies, not sharper, more tuned in people.

Elimination of psychological disorders? But narcissism, autism and all manner of such disabilities are on the rise in a more and more tech-oriented world.

Radical life extension? One achievement of modernity is increased longevity, granted. This has begun to slip a bit, however, in some categories. And one can ponder what is the quality of life? Chronic conditions are on the rise though people can often be kept alive longer. There’s no evidence favoring a radical life extension.

Greater physical capacities? Our senses were once acute and we were far more robust than we are now under the sign of technology. Look at all the flaccid, sedentary computer jockeys and extend that forward. It is not I who doesn’t want these things; rather, looking at the techno project, the results are negative, eh?

Do you foresee the day when a state of anarcho-primitivism can be achieved (even partially by a few enthusiasts)?

A few people cannot achieve such a future in isolation. The totality infects everything. It all must go and perhaps it will. Do you think people are happy with it?

Final Thoughts

Zerzan’s critique of civilization is certainly interesting and worthy of discussion. There’s no doubt that technology has taken humanity along a path that’s resulted in massive destruction and suffering, both to ourselves and to our planet and its animal inhabitants.

But there’s something deeply unsatisfying with the anarcho-primitivist prescription — that of erasing our technological achievements and returning to a state of nature. It’s fed by a cynical and defeatist world view that buys into the notion that everything will be okay once we regress back to a state where our ecological and sociological footprints are reduced to practically nil. It’s a way of eliminating our ability to make an impact on the world — and onto ourselves.

It’s also an ideological view that fetishizes our ancestral past. Despite Zerzan’s cocksure proclamations to the contrary, our paleolithic forebears were almost certainly hierarchical and socially stratified. There isn’t a single social species on this planet — whether they’re primates or elephants or cetaceans — that doesn’t organize its individuals according to capability, influence, or level of reproductive fitness. Feeling “alienated,” “frustrated,” and “controlled” is an indelible part of the human condition, regardless of whether we live in tribal arrangements or in the information age. The anarcho-primitivist fantasy of the free and unhindered noble savage is just that — a fantasy. Hunter-gatherers were far from free, coerced by the demands of biology and nature to eke out an existence under the harshest of circumstances.

Technology One Step Ahead of War Laws (Science Daily)

Jan. 6, 2014 — Today’s emerging military technologies — including unmanned aerial vehicles, directed-energy weapons, lethal autonomous robots, and cyber weapons like Stuxnet — raise the prospect of upheavals in military practices so fundamental that they challenge long-established laws of war. Weapons that make their own decisions about targeting and killing humans, for example, have ethical and legal implications obvious and frightening enough to have entered popular culture (for example, in the Terminator films).

The current international laws of war were developed over many centuries and long before the current era of fast-paced technological change. Military ethics and technology expert Braden Allenby says the proper response to the growing mismatch between long-established international law and emerging military technology “is neither the wholesale rejection of the laws of war nor the comfortable assumption that only minor tweaks to them are necessary.” Rather, he argues, the rules of engagement should be reconsidered through deliberate and focused international discussion that includes a wide range of cultural and institutional perspectives.

Allenby’s article anchors a special issue on the threat of emerging military technologies in the latest Bulletin of the Atomic Scientists, published by SAGE.

History is replete with paradigm shifts in warfare technology, from the introduction of gunpowder, which arguably gave rise to nation states, to the air-land-battle technologies used during the Desert Storm offensive in Kuwait and Iraq in 1991, which caused 20,000 to 30,000 Iraqi casualties and left only 200 US coalition troops dead. But today’s accelerating advances across the technological frontier and dramatic increases in the numbers of social institutions at play around the world are blurring boundaries between military and civil entities and state and non-state actors. And because the United States has an acknowledged primacy in terms of conventional forces, the nations and groups that compete with it increasingly think in terms of asymmetric warfare, raising issues that lie beyond established norms of military conduct and may require new legal thinking and institutions to address.

“The impact of emerging technologies on the laws of war might be viewed as a case study and an important learning opportunity for humankind as it struggles to adapt to the complexity that it has already wrought, but has yet to learn to manage,” Allenby writes.

Other articles in the Bulletin’s January/February special issue on emerging military technologies include “The enhanced warfighter” by Ken Ford, which looks at the ethics and practicalities of performance enhancement for military personnel, and Michael C. Horowitz’s overview of the near-term future of US war-fighting technology, “Coming next in military tech.” The issue also offers two views of the use of advanced robotics: “Stopping killer robots,” Mark Gubrud’s argument in favor of an international ban on lethal autonomous weapons, and “Robot to the rescue,” Gill Pratt’s account of a US Defense Department initiative aiming to develop robots that will improve response to disasters, like the Fukushima nuclear catastrophe, that involve highly toxic environments.

Journal Reference:

  1. Braden R. Allenby. Are new technologies undermining the laws of war? Bulletin of the Atomic Scientists, January/February 2014

Solar Cells Made Thin, Efficient and Flexible (Science Daily)

Dec. 9, 2013 — Converting sunshine into electricity is not difficult, but the difficulty of doing so efficiently and on a large scale is one of the reasons why people still rely on the electric grid rather than on a national network of solar cells.

Debashis Chanda helped create large sheets of nanotextured, silicon micro-cell arrays that hold the promise of making solar cells lightweight, more efficient, bendable and easy to mass produce. (Credit: UCF)

But a team of researchers from the University of Illinois at Urbana-Champaign and the University of Central Florida in Orlando may be one step closer to tapping into the full potential of solar cells. The team found a way to create large sheets of nanotextured, silicon micro-cell arrays that hold the promise of making solar cells lightweight, more efficient, bendable and easy to mass produce.

The team used a light-trapping scheme based on a nanoimprinting technique in which a polymeric stamp mechanically embosses the nanoscale pattern onto the solar cell without involving further complex lithographic steps. This approach has led to the flexibility researchers have been searching for, making the design ideal for mass manufacturing, said UCF assistant professor Debashis Chanda, lead researcher of the study.

The study’s findings are the subject of the November cover story of the journal Advanced Energy Materials.

Previously, scientists had suggested designs that showed greater absorption rates of sunlight, but how efficiently that sunlight was converted into electrical energy was unclear, Chanda said. This study demonstrates that the light-trapping scheme offers higher electrical efficiency in a lightweight, flexible module.

The team believes this technology could someday lead to solar-powered homes fueled by cells that are reliable and provide stored energy for hours without interruption.

Journal Reference:

  1. Ki Jun Yu, Li Gao, Jae Suk Park, Yu Ri Lee, Christopher J. Corcoran, Ralph G. Nuzzo, Debashis Chanda, John A. Rogers. Light Trapping in Ultrathin Monocrystalline Silicon Solar Cells. Advanced Energy Materials, 2013; 3 (11): 1528. DOI: 10.1002/aenm.201370046

Bonobo genius makes stone tools like early humans did (New Scientist)

13:09 21 August 2012 by Hannah Krakauer

Kanzi the bonobo continues to impress. Not content with learning sign language or making up “words” for things like banana or juice, he now seems capable of making stone tools on a par with the efforts of early humans.

Even a human could manage this (Image: Elizabeth Rubert-Pugh (Great Ape Trust of Iowa/Bonobo Hope Sanctuary))

Eviatar Nevo of the University of Haifa in Israel and his colleagues sealed food inside a log to mimic marrow locked inside long bones, and watched Kanzi, a 30-year-old male bonobo chimp, try to extract it. While a companion bonobo attempted the problem a handful of times, and succeeded only by smashing the log on the ground, Kanzi took a longer and arguably more sophisticated approach.

Both had been taught to knap flint flakes in the 1990s, holding a stone core in one hand and using another as a hammer. Kanzi used the tools he created to come at the log in a variety of ways: inserting sticks into seams in the log, throwing projectiles at it, and employing stone flints as choppers, drills, and scrapers. In the end, he got food out of 24 logs, while his companion managed just two.

Perhaps most remarkable about the tools Kanzi created is their resemblance to early hominid tools. Both bonobos made and used tools to obtain food – either by extracting it from logs or by digging it out of the ground. But only Kanzi’s met the criteria for both tool groups made by early Homo: wedges and choppers, and scrapers and drills.

Do Kanzi’s skills translate to all bonobos? It’s hard to say. The abilities of animals like Alex the parrot, who could purportedly count to six, and Betty the crow, who crafted a hook out of wire, sometimes prompt claims about the intelligence of an entire species. But since these animals are raised in unusual environments where they frequently interact with humans, their cases may be too singular to extrapolate their talents to their brethren.

The findings will fuel the ongoing debate over whether stone tools mark the beginning of modern human culture, or predate our Homo genus. They appear to suggest the latter – though critics will point out that Kanzi and his companion were taught how to make the tools. Whether the behaviour could arise in nature is unclear.

Journal reference: Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.1212855109

Modern culture emerged in Africa 20,000 years earlier than thought (L.A.Times)

By Thomas H. Maugh II

July 30, 2012, 1:54 p.m.

Border Cave artifacts: Objects found in the archaeological site called Border Cave include a) a wooden digging stick; b) a wooden poison applicator; c) a bone arrow point decorated with a spiral incision filled with red pigment; d) a bone object with four sets of notches; e) a lump of beeswax; and f) ostrich eggshell beads and marine shell beads used as personal ornaments. (Francesco d’Errico and Lucinda Backwell / July 30, 2012)

Modern culture emerged in southern Africa at least 44,000 years ago, more than 20,000 years earlier than anthropologists had previously believed, researchers reported Monday.

That blossoming of technology and art occurred at roughly the same time that modern humans were migrating from Africa to Europe, where they soon displaced Neanderthals. Many of the characteristics of the ancient culture identified by anthropologists are still present in hunter-gatherer cultures of Africa today, such as the San culture of southern Africa, the researchers said.

The new evidence was provided by an international team of researchers excavating at an archaeological site called Border Cave in the foothills of the Lebombo Mountains on the border of KwaZulu-Natal in South Africa and Swaziland. The cave shows evidence of occupation by human ancestors going back more than 200,000 years, but the team reported in two papers in the Proceedings of the National Academy of Sciences that they were able to accurately date their discoveries to 42,000 to 44,000 years ago, a period known as the Later Stone Age or the Upper Paleolithic Period in Europe.

Among the organic — and thus datable — artifacts the team found in the cave were ostrich eggshell beads, thin bone arrowhead points, wooden digging sticks, a gummy substance called pitch that was used to attach bone and stone blades to wooden shafts, a lump of beeswax likely used for the same purpose, worked pig tusks that were probably used for planing wood, and notched bones used for counting.

“They adorned themselves with ostrich egg and marine shell beads, and notched bones for notational purposes,” said paleoanthropologist Lucinda Backwell of the University of the Witwatersrand in South Africa, a member of the team. “They fashioned fine bone points for use as awls and poisoned arrowheads. One point is decorated with a spiral groove filled with red ochre, which closely parallels similar marks that San make to identify their arrowheads when hunting.”

The very thin bone points are “very good evidence” for the use of bows and arrows, said co-author Paola Villa, a curator at the University of Colorado Museum of Natural History. Some of the bone points were apparently coated with ricinoleic acid, a poison made from the castor bean. “Such bone points could have penetrated thick hides, but the lack of ‘knock-down’ power means the use of poison probably was a requirement for successful kills,” she said.

The discovery also represents the first time pitch-making has been documented in South Africa, Villa said. The process requires burning peeled bark in the absence of air. The Stone Age residents probably dug holes in the ground, inserted the bark, lit it on fire, and covered the holes with stones, she said.

Science, Journalism, and the Hype Cycle: My piece in tomorrow’s Wall Street Journal (Discovery Magazine)

I think one of the biggest struggles a science writer faces is how to accurately describe the promise of new research. If we start promising that a preliminary experiment is going to lead to a cure for cancer, we are treating our readers cruelly–especially the readers who have cancer. On the other hand, scoffing at everything is not a sensible alternative, because sometimes preliminary experiments really do lead to great advances. In the 1950s, scientists discovered that bacteria can slice up virus DNA to avoid getting sick. That discovery led, some 30 years later, to biotechnology–to an industry that enabled, among other things, bacteria to produce human insulin.

This challenge was very much on my mind as I recently read two books, which I review in tomorrow’s Wall Street Journal. One is on gene therapy–a treatment that inspired wild expectations in the 1990s, then crashed, and now is coming back. The other is on epigenetics, which seems to me to be in the early stages of the hype cycle. You can read the essay in full here. [see post below]

March 9th, 2012 5:33 PM by Carl Zimmer

Hope, Hype and Genetic Breakthroughs (Wall Street Journal)

By CARL ZIMMER

I talk to scientists for a living, and one of my most memorable conversations took place a couple of years ago with an engineer who put electrodes in bird brains. The electrodes were implanted into the song-generating region of the brain, and he could control them with a wireless remote. When he pressed a button, a bird singing in a cage across the lab would fall silent. Press again, and it would resume its song.

I could instantly see a future in which this technology brought happiness to millions of people. Imagine a girl blind from birth. You could implant a future version of these wireless electrodes in the back of her brain and then feed it images from a video camera.

As a journalist, I tried to get the engineer to explore what seemed to me to be the inevitable benefits of his research. To his great credit, he wouldn’t. He wasn’t even sure his design would ever see the inside of a human skull. There were just too many ways for it to go wrong. He wanted to be very sure that I understood that and that I wouldn’t claim otherwise. “False hope,” he warned me, “is a sinful thing.”

Stephen Voss. Gene therapy allowed this once-blind dog to see again.

Over the past two centuries, medical research has yielded some awesome treatments: smallpox wiped out with vaccines, deadly bacteria thwarted by antibiotics, face transplants. But when we look back across history, we forget the many years of failure and struggle behind each of these advances.

This foreshortened view distorts our expectations for research taking place today. We want to believe that every successful experiment means that another grand victory is weeks away. Big stories appear in the press about the next big thing. And then, as the years pass, the next big thing often fails to materialize. We are left with false hope, and the next big thing gets a reputation as the next big lie.

In 1995, a business analyst named Jackie Fenn captured this intellectual whiplash in a simple graph. Again and again, she had seen new advances burst on the scene and generate ridiculous excitement. Eventually they would reach what she dubbed the Peak of Inflated Expectations. Unable to satisfy their promise fast enough, many of them plunged into the Trough of Disillusionment. Their fall didn’t necessarily mean that these technologies were failures. The successful ones slowly emerged again and climbed the Slope of Enlightenment.

When Ms. Fenn drew the Hype Cycle, she had in mind dot-com-bubble technologies like cellphones and broadband. Yet it’s a good model for medical advances too. I could point to many examples of the medical hype cycle, but it’s hard to think of a better one than the subject of Ricki Lewis’s well-researched new book, “The Forever Fix”: gene therapy.

The concept of gene therapy is beguilingly simple. Many devastating disorders are the result of mutant genes. The disease phenylketonuria, for example, is caused by a mutation to a gene involved in breaking down a molecule called phenylalanine. The phenylalanine builds up in the bloodstream, causing brain damage. One solution is to eat a low-phenylalanine diet for your entire life. A much more appealing alternative would be to somehow fix the broken gene, restoring a person’s metabolism to normal.

In “The Forever Fix,” Ms. Lewis chronicles gene therapy’s climb toward the Peak of Inflated Expectations over the course of the 1990s. A geneticist and the author of a widely used textbook, she demonstrates a mastery of the history, even if her narrative sometimes meanders and becomes burdened by clichés. She explains how scientists learned how to identify the particular genes behind genetic disorders. They figured out how to load genes into viruses and then to use those viruses to insert the genes into human cells.

Stephen Voss. Alisha Bacoccini is tested on her ability to read letters, at UPenn Hospital, in Philadelphia, PA on Monday, June 23, 2008. Bacoccini is undergoing an experimental gene therapy trial to improve her sight.

By 1999, scientists had enjoyed some promising successes treating people—removing white blood cells from leukemia patients, for example, inserting working genes, and then returning the cells to their bodies. Gene therapy seemed as if it was on the verge of becoming standard medical practice. “Within the next decade, there will be an exponential increase in the use of gene therapy,” Helen M. Blau, the then-director of the gene-therapy technology program at Stanford University, told Business Week.

Within a few weeks of Ms. Blau’s promise, however, gene therapy started falling straight into the Trough. An 18-year-old man named Jesse Gelsinger who suffered from a metabolic disorder had enrolled in a gene-therapy trial. University of Pennsylvania scientists loaded a virus with a working version of an enzyme he needed and injected it into his body. The virus triggered an overwhelming reaction from his immune system and within four days Gelsinger was dead.

Gene therapy nearly came to a halt after his death. An investigation revealed errors and oversights in the design of Gelsinger’s trial. The breathless articles disappeared. Fortunately, research did not stop altogether. Scientists developed new ways of delivering genes without triggering fatal side effects. And they directed their efforts at one part of the body in particular: the eye. The eye is so delicate that inflammation could destroy it. As a result, it has evolved physical barriers that keep the body’s regular immune cells out, as well as a separate battalion of immune cells that are more cautious in their handling of infection.

It occurred to a number of gene-therapy researchers that they could try to treat genetic vision disorders with a very low risk of triggering horrendous side effects of the sort that had claimed Gelsinger’s life. If they injected genes into the eye, they would be unlikely to produce a devastating immune reaction, and any harmful effects would not be able to spread to the rest of the body.

Their hunch paid off. In 2009 scientists reported their first success with gene therapy for a congenital disorder. They treated a rare form of blindness known as Leber’s congenital amaurosis. Children who were once blind can now see.

As “The Forever Fix” shows, gene therapy is now starting its climb up the Slope of Enlightenment. Hundreds of clinical trials are under way to see if gene therapy can treat other diseases, both in and beyond the eye. It still costs a million dollars a patient, but that cost is likely to fall. It’s not yet clear how many other diseases gene therapy will help or how much it will help them, but it is clearly not a false hope.

Gene therapy produced so much excitement because it appealed to the popular idea that genes are software for our bodies. The metaphor only goes so far, though. DNA does not float in isolation. It is intricately wound around spool-like proteins called histones. It is studded with caps made of carbon and hydrogen atoms, known as methyl groups. This coiling and capping of DNA allows individual genes to be turned on and off during our lifetimes.

The study of this extra layer of control on our genes is known as epigenetics. In “The Epigenetics Revolution,” molecular biologist Nessa Carey offers an enlightening introduction to what scientists have learned in the past decade about those caps and coils. While she delves into a fair amount of biological detail, she writes clearly and compellingly. As Ms. Carey explains, we depend for our very existence as functioning humans on epigenetics. We begin life as blobs of undifferentiated cells, but epigenetic changes allow some cells to become neurons, others muscle cells and so on.

Epigenetics also plays an important role in many diseases. In cancer cells, genes that are normally only active in embryos can reawaken after decades of slumber. A number of brain disorders, such as autism and schizophrenia, appear to involve the faulty epigenetic programming of genes in neurons.

Scientists got their first inklings about epigenetics decades ago, but in the past few years the field has become hot. In 2008 the National Institutes of Health pledged $190 million to map the epigenetic “marks” on the human genome. New biotech start-ups are trying to carry epigenetic discoveries into the doctor’s office. The FDA has approved cancer drugs that alter the pattern of caps on tumor-cell DNA. Some studies on mice hint that it may be possible to treat depression by taking a pill that adjusts the coils of DNA in neurons.

People seem to be getting giddy about the power of epigenetics in the same way they got giddy about gene therapy in the 1990s. No longer is our destiny written in our DNA: It can be completely overwritten with epigenetics. The excitement is moving far ahead of what the science warrants—or can ever deliver. Last June, an article on the Huffington Post eagerly seized on epigenetics, woefully mangling two biological facts: one, that experiences can alter the epigenetic patterns in the brain; and two, that sometimes epigenetic patterns can be passed down from parents to offspring. The article made a ridiculous leap to claim that we can use meditation to change our own brains and the brains of our children—and thereby alter the course of evolution: “We can jump-start evolution and leverage it on our own terms. We can literally rewire our brains toward greater compassion and cooperation.” You couldn’t ask for a better sign that epigenetics is climbing the Peak of Inflated Expectations at top speed.

The title “The Epigenetics Revolution” unfortunately adds to this unmoored excitement, but in Ms. Carey’s defense, the book itself is careful and measured. Still, epigenetics will probably be plunging soon into the Trough of Disillusionment. It will take years to see whether we can really improve our health with epigenetics or whether this hope will prove to be a false one.

The Forever Fix

By Ricki Lewis
St. Martin’s, 323 pages, $25.99

The Epigenetics Revolution

By Nessa Carey
Columbia, 339 pages, $26.95

—Mr. Zimmer’s books include “A Planet of Viruses” and “Evolution: Making Sense of Life,” co-authored with Doug Emlen, to be published in July.

Exterminate a species or two, save the planet (RT)

Published: 26 January, 2011, 14:43

Edited: 15 April, 2011, 05:18

Biologists have suggested a mathematical model that, they hope, will predict which species need to be eliminated from an unstable ecosystem, and in which order, to help it recover.

The counterintuitive idea of killing living things for the sake of biodiversity conservation comes from the complex connections present in ecosystems. Eliminate a predator, and its prey thrives, in turn depleting whatever that prey itself feeds on. Such “cascading” impacts along “food webs” can be unpredictable and sometimes catastrophic.

Sagar Sahasrabudhe and Adilson Motter of Northwestern University in the US have shown that in some food web models, the timely removal or suppression of one or several species can do quite the opposite and mitigate the damage caused by a local extinction. The work is described in Nature magazine.

The trick is not an easy one, since the timing of removal is just as important as the targeted species. A live example Sahasrabudhe and Motter use is that of island foxes on the Channel Islands off the coast of California. When feral pigs were introduced into the ecosystem, they attracted golden eagles, which preyed on the foxes as well. Simply reversing the situation by removing the pigs would have made the birds switch solely to foxes, eventually driving them extinct. Instead, conservation activists captured and relocated the eagles before eradicating the pigs, saving the fox population.
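A rough way to see why the order of interventions matters is to play with a toy predator-prey simulation. The sketch below is not the Sahasrabudhe-Motter model from the paper; it is a made-up, three-species caricature of the Channel Islands example (pigs, eagles, foxes), written in Python with invented parameters, purely to illustrate the cascade logic described above.

    # Toy illustration only: NOT the Sahasrabudhe-Motter food-web model.
    # Three species from the Channel Islands story: feral pigs, golden eagles,
    # island foxes. All rates and numbers below are invented.

    def simulate(remove_pigs_at, relocate_eagles_at, steps=200):
        """Crude discrete-time caricature. Eagles take prey in proportion to
        each species' share of the available prey, so once the pigs are gone
        all of the predation pressure falls on the foxes."""
        pigs, eagles, foxes = 100.0, 10.0, 50.0
        for t in range(steps):
            if t == remove_pigs_at:
                pigs = 0.0          # eradicate the feral pigs
            if t == relocate_eagles_at:
                eagles = 0.0        # capture and relocate the golden eagles
            prey = pigs + foxes
            fox_share = foxes / prey if prey > 0 else 0.0
            predation_on_foxes = 0.3 * eagles * fox_share    # prey switching
            foxes = max(foxes + 0.02 * foxes * (1 - foxes / 200) - predation_on_foxes, 0.0)
            pigs = pigs + 0.02 * pigs * (1 - pigs / 300)     # pigs just breed
        return foxes

    # Pigs eradicated first, eagles relocated much later: predation concentrates
    # on the foxes and the population collapses.
    print("pigs removed first :", round(simulate(remove_pigs_at=10, relocate_eagles_at=150), 1))
    # Eagles relocated first, pigs eradicated later: the foxes recover.
    print("eagles moved first :", round(simulate(remove_pigs_at=150, relocate_eagles_at=10), 1))

In this caricature the only thing that changes between the two runs is the order of the two interventions, which is the point the researchers make with far more realistic network models.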

Of course, conservation scientists are not going to start making decisions based on the models straight away. Real ecosystems are not limited to predator-prey relationships, and things like parasitism, pollination and nutrient dynamics have to be taken into account as well. On the other hand, only some eight years ago ecosystems were thought to be too complex to be modeled at all, Martinez says. Their work gives more confidence that such modeling will have practical uses in the near future.

The world at seven billion (BBC)

27 October 2011 Last updated at 23:08 GMT

File photograph of newborn babies in Lucknow, India, in July 2009

As the world population reaches seven billion people, the BBC’s Mike Gallagher asks whether efforts to control population have been, as some critics claim, a form of authoritarian control over the world’s poorest citizens.

The temperature is some 30C. The humidity stifling, the noise unbearable. In a yard between two enormous tea-drying sheds, a number of dark-skinned women patiently sit, each accompanied by an unwieldy looking cloth sack. They are clad in colourful saris, but look tired and shabby. This is hardly surprising – they have spent most of the day in nearby plantation fields, picking tea that will net them around two cents a kilo – barely enough to feed their large families.

Vivek Baid thinks he knows how to help them. He runs the Mission for Population Control, a project in eastern India which aims to bring down high birth rates by encouraging local women to get sterilised after their second child.

As the world reaches an estimated seven billion people, people like Vivek say efforts to bring down the world’s population must continue if life on Earth is to be sustainable, and if poverty and even mass starvation are to be avoided.

There is no doubting their good intentions. Vivek, for instance, has spent his own money on the project, and is passionate about creating a brighter future for India.

But critics allege that campaigners like Vivek – a successful and wealthy male businessman – have tended to live very different lives from those they seek to help, who are mainly poor women.

These critics argue that rich people have imposed population control on the poor for decades. And, they say, such coercive attempts to control the world’s population often backfired and were sometimes harmful.

Population scare

Most historians of modern population control trace its roots back to the Reverend Thomas Malthus, an English clergyman born in the 18th Century who believed that humans would always reproduce faster than Earth’s capacity to feed them.

Giving succour to the resulting desperate masses would only imperil everyone else, he said. So the brutal reality was that it was better to let them starve.

‘Plenty is changed into scarcity’

Thomas Malthus

From Thomas Malthus’ Essay on Population, 1803 edition:

A man who is born into a world already possessed – if he cannot get subsistence from his parents on whom he has a just demand, and if the society do not want his labour, has no claim of right to the smallest portion of food.

At nature’s mighty feast there is no vacant cover for him. She tells him to be gone, and will quickly execute her own orders, if he does not work upon the compassion of some of her guests. If these guests get up and make room for him, other intruders immediately appear demanding the same favour. The plenty that before reigned is changed into scarcity; and the happiness of the guests is destroyed by the spectacle of misery and dependence in every part of the hall.

Rapid agricultural advances in the 19th Century proved his main premise wrong, because food production generally more than kept pace with the growing population.

But the idea that the rich are threatened by the desperately poor has cast a long shadow into the 20th Century.

From the 1960s, the World Bank, the UN and a host of independent American philanthropic foundations, such as the Ford and Rockefeller foundations, began to focus on what they saw as the problem of burgeoning Third World numbers.

They believed that overpopulation was the primary cause of environmental degradation, economic underdevelopment and political instability.

Massive populations in the Third World were seen as presenting a threat to Western capitalism and access to resources, says Professor Betsy Hartmann of Hampshire College, Massachusetts, in the US.

“The view of the south is very much put in this Malthusian framework. It becomes just this powerful ideology,” she says.

In 1966, President Lyndon Johnson warned that the US might be overwhelmed by desperate masses, and he made US foreign aid dependent on countries adopting family planning programmes.

Other wealthy countries such as Japan, Sweden and the UK also began to devote large amounts of money to reducing Third World birth rates.

‘Unmet need’

What virtually everyone agreed was that there was a massive demand for birth control among the world’s poorest people, and that if they could get their hands on reliable contraceptives, runaway population growth might be stopped.

But with the benefit of hindsight, some argue that this so-called unmet need theory put disproportionate emphasis on birth control and ignored other serious needs.

“It was a top-down solution,” says Mohan Rao, a doctor and public health expert at Delhi’s Jawaharlal Nehru University.

“There was an unmet need for contraceptive services, of course. But there was also an unmet need for health services and all kinds of other services which did not get attention. The focus became contraception.”

Had the demographic experts worked at the grass-roots instead of imposing solutions from above, suggests Adrienne Germain, formerly of the Ford Foundation and then the International Women’s Health Coalition, they might have achieved a better picture of the dilemmas facing women in poor, rural communities.

“Not to have a full set of health services meant women were either unable to use family planning, or unwilling to – because they could still expect half their kids to die by the age of five,” she says.

India’s sterilisation ‘madness’

File photograph of Sanjay and Indira Gandhi in 1980

Indira Gandhi and her son Sanjay (above) presided over a mass sterilisation campaign. From the mid-1970s, Indian officials were set sterilisation quotas, and sought to ingratiate themselves with superiors by exceeding them. Stories abounded of men being accosted in the street and taken away for the operation. The head of the World Bank, Robert McNamara, congratulated the Indian government on “moving effectively” to deal with high birth rates. Funding was increased, and the sterilising went on.

In Delhi, some 700,000 slum dwellers were forcibly evicted, and given replacement housing plots far from the city centre, frequently on condition that they were either sterilised or produced someone else for the operation. In poorer agricultural areas, whole villages were rounded up for sterilisation. When residents of one village protested, an official is said to have threatened air strikes in retaliation.

“There was a certain madness,” recalls Nina Puri of the Family Planning Association of India. “All rationality was lost.”

Us and them

In 1968, the American biologist Paul Ehrlich caused a stir with his bestselling book, The Population Bomb, which suggested that it was already too late to save some countries from the dire effects of overpopulation, which would result in ecological disaster and the deaths of hundreds of millions of people in the 1970s.

Instead, governments should concentrate on drastically reducing population growth. He said financial assistance should be given only to those nations with a realistic chance of bringing birth rates down. Compulsory measures were not to be ruled out.

Western experts and local elites in the developing world soon imposed targets for reductions in family size, and used military analogies to drive home the urgency, says Matthew Connelly, a historian of population control at Columbia University in New York.

“They spoke of a war on population growth, fought with contraceptive weapons,” he says. “The war would entail sacrifices, and collateral damage.”

Such language betrayed a lack of empathy with their subjects, says Ms Germain: “People didn’t talk about people. They talked of acceptors and users of family planning.”

Emergency measures

Critics of population control had their say at the first ever UN population conference in 1974.

Karan Singh, India’s health minister at the time, declared that “development is the best contraceptive”.

But just a year later, Mr Singh’s government presided over one of the most notorious episodes in the history of population control.

In June 1975, the Indian premier, Indira Gandhi, declared a state of emergency after accusations of corruption threatened her government. Her son Sanjay used the measure to introduce radical population control measures targeted at the poor.

The Indian emergency lasted less than two years, but in 1975 alone, some eight million Indians – mainly poor men – were sterilised.

Yet, for all the official programmes and coercion, many poor women kept on having babies.

And where they did not, it arguably had less to do with coercive population control than with development, just as Karan Singh had argued in 1974, says historian Matt Connelly.

For example, in India, a disparity in birth rates could already be observed between the impoverished northern states and more developed southern regions like Kerala, where women were more likely to be literate and educated, and their offspring more likely to be healthy.

Women there realised that they could have fewer births and still expect to see their children survive into adulthood.

China: ‘We will not allow your baby to live’

Steven Mosher was a Stanford University anthropologist working in rural China who witnessed some of the early, disturbing moments of Beijing’s One Child Policy.

“I remember very well the evening of 8 March, 1980. The local Communist Party official in charge of my village came over waving a government document. He said: ‘The Party has decided to impose a cap of 1% on population growth this year.’ He said: ‘We’re going to decide who’s going to be allowed to continue their pregnancy and who’s going to be forced to terminate their pregnancy.’ And that’s exactly what they did.”

“These were women in the late second and third trimester of pregnancy. There were several women just days away from giving birth. And in my hearing, a party official said: ‘Do not think that you can simply wait until you go into labour and give birth, because we will not allow your baby to live. You will go home alone’.”

Total control

By now, this phenomenon could be observed in another country too – one that would nevertheless go on to impose the most draconian population control of all.

The One Child Policy is credited with preventing some 400 million births in China, and remains in place to this day. In 1983 alone, more than 16 million women and four million men were sterilised, and 14 million women received abortions.

Assessed by numbers alone, it is said to be by far the most successful population control initiative. Yet it remains deeply controversial, not only because of the human suffering it has caused.

A few years after its inception, the policy was relaxed slightly to allow rural couples two children if their first was not a boy. Boy children are prized, especially in the countryside where they provide labour and care for parents in old age.

But modern technology allows parents to discover the sex of the foetus, and many choose to abort if they are carrying a girl. In some regions, there is now a serious imbalance between men and women.

Moreover, since Chinese fertility was already in decline at the time the policy was implemented, some argue that it bears less responsibility for China’s falling birth rate than its supporters claim.

“I don’t think they needed to bring it down further,” says Indian demographer AR Nanda. “It would have happened at its own slow pace in another 10 years.”

Backlash

In the early 1980s, objections to the population control movement began to grow, especially in the United States.

In Washington, the new Reagan administration removed financial support for any programmes that involved abortion or sterilisation.

The broad alliance to stem birth rates was beginning to dissolve and the debate became more polarised along political lines.

While some on the political right had moral objections to population control, some on the left saw it as neo-colonialism.

Faith groups condemned it as a Western attack on religious values, but women’s groups feared changes would mean poor women would be even less well-served.

By the time of a major UN conference on population and development in Cairo in 1994, women’s groups were ready to strike a blow for women’s rights, and they won.

The conference adopted a 20-year plan of action, known as the Cairo consensus, which called on countries to recognise that ordinary women’s needs – rather than demographers’ plans – should be at the heart of population strategies.

After Cairo

Today’s record-breaking global population hides a marked long-term trend towards lower birth rates, as urbanisation, better health care, education and access to family planning all affect women’s choices.

With the exception of sub-Saharan Africa and some of the poorest parts of India, we are now having fewer children than we once did – in some cases, failing even to replace ourselves in the next generation. And although total numbers are set to rise still further, the peak is now in sight.

Chinese poster from the 1960s of a mother and baby, captioned “Practicing birth control is beneficial for the protection of the health of mother and child”. China promoted birth control before implementing its one-child policy.

Assuming that this trend continues, total numbers will one day level off, and even fall. As a result, some believe the sense of urgency that once surrounded population control has subsided.

The term population control itself has fallen out of fashion, as it was deemed to have authoritarian connotations. Post-Cairo, the talk is of women’s rights and reproductive rights, meaning the right to a free choice over whether or not to have children.

According to Adrienne Germain, that is the main lesson we should learn from the past 50 years.

“I have a profound conviction that if you give women the tools they need – education, employment, contraception, safe abortion – then they will make the choices that benefit society,” she says.

“If you don’t, then you’ll just be in an endless cycle of trying to exert control over fertility – to bring it up, to bring it down, to keep it stable. And it never comes out well. Never.”

Nevertheless, there remain to this day schemes to sterilise the less well-off, often in return for financial incentives. In effect, say critics, this amounts to coercion, since the very poor find it hard to reject cash.

“The people proposing this argue ‘Don’t worry, everything’s fine now we have voluntary programmes on the Cairo model’,” says Betsy Hartmann.

“But what they don’t understand is the profound difference in power between rich and poor. The people who provide many services in poor areas are already prejudiced against the people they serve.”

Work in progress

For Mohan Rao, it is an example of how even the Cairo consensus fails to take account of the developing world.

“Cairo had some good things,” he says. “However Cairo was driven largely by First World feminist agendas. Reproductive rights are all very well, but [there needs to be] a whole lot of other kinds of enabling rights before women can access reproductive rights. You need rights to food, employment, water, justice and fair wages. Without all these you cannot have reproductive rights.”

Perhaps, then, the humanitarian ideals of Cairo are still a work in progress.

Meanwhile, Paul Ehrlich has also amended his view of the issue.

If he were to write his book today, “I wouldn’t focus on the poverty-stricken masses”, he told the BBC.

“I would focus on there being too many rich people. It’s crystal clear that we can’t support seven billion people in the style of the wealthier Americans.”

Mike Gallagher is the producer of the radio programme Controlling People on BBC World Service

Where do you fit into 7 billion?

The world’s population is expected to hit seven billion in the next few weeks. After growing very slowly for most of human history, the number of people on Earth has more than doubled in the last 50 years.

Archaeologists Find Sophisticated Blade Production Much Earlier Than Originally Thought (Tel Aviv University)

Monday, October 17, 2011
American Friends of Tel Aviv University

Blade manufacturing “production lines” existed as much as 400,000 years ago, say TAU researchers

Archaeology has long associated advanced blade production with the Upper Palaeolithic period, about 30,000-40,000 years ago, linked with the emergence of Homo sapiens and cultural features such as cave art. Now researchers at Tel Aviv University have uncovered evidence which shows that “modern” blade production was also an element of Amudian industry during the late Lower Paleolithic period, 200,000-400,000 years ago, as part of the Acheulo-Yabrudian cultural complex, a geographically limited group of hominins who lived in modern-day Israel, Lebanon, Syria and Jordan.

Prof. Avi Gopher, Dr. Ran Barkai and Dr. Ron Shimelmitz of TAU’s Department of Archaeology and Ancient Near Eastern Civilizations say that large numbers of long, slender cutting tools were discovered at Qesem Cave, located outside of Tel Aviv, Israel. This discovery challenges the notion that blade production is exclusively linked with recent modern humans.

The blades, which were described recently in the Journal of Human Evolution, are the product of a well planned “production line,” says Dr. Barkai. Every element of the blades, from the choice of raw material to the production method itself, points to a sophisticated tool production system to rival the blade technology used hundreds of thousands of years later.

An innovative product

Though blades have been found in earlier archaeological sites in Africa, Dr. Barkai and Prof. Gopher say that the blades found in Qesem Cave distinguish themselves through the sophistication of the technology used for manufacturing and mass production.

Evidence suggests that the process began with the careful selection of raw materials. The hominins collected raw material from the surface or quarried it from underground, seeking specific pieces of flint that would best fit their blade-making technology, explains Dr. Barkai. With the right blocks of material, they were able to use a systematic and efficient method to produce the desired blades, which involved powerful and controlled blows that took into account the mechanics of stone fracture. Most of the blades were made to have one sharp cutting edge and one naturally dull edge so they could be easily gripped in a human hand.

This is perhaps the first time that such technology was standardized, notes Prof. Gopher, who points out that the blades were produced with relatively small amounts of waste materials. This systematic industry enabled the inhabitants of the cave to produce tools, normally considered costly in raw material and time, with relative ease.

Thousands of these blades have been discovered at the site. “Because they could be produced so efficiently, they were almost used as expendable items,” he says.

Prof. Cristina Lemorini of Sapienza University of Rome carried out a closer analysis of the markings on the blades under a microscope, along with a series of experiments, and determined that the tools were primarily used for butchering.

Modern tools a part of modern behaviors

According to the researchers, this innovative industry and technology is one of a score of new behaviors exhibited by the inhabitants of Qesem Cave. “There is clear evidence of daily and habitual use of fire, which is news to archaeologists,” says Dr. Barkai. Previously, it was unknown if the Amudian culture made use of fire, and to what extent. There is also evidence of a division of space within the cave, he notes. The cave inhabitants used each space in a regular manner, conducting specific tasks in predetermined places. Hunted prey, for instance, was taken to an appointed area to be butchered, barbequed and later shared within the group, while the animal hide was processed elsewhere.

Religion: Sacred Electronics (Time Magazine)

Monday, Dec. 31, 1956

The five machines stood, rectangular, silver-green, silent. They were obviously not thinking about anything at all as Archbishop Giovanni Battista Montini of Milan raised his hand to bless them.

“It would seem at first sight,” said the archbishop, “that automation, which transfers to machines operations that were previously reserved to man’s genius and labor, so that machines think and remember and correct and control, would create a vaster difference between man and the contemplation of God. But this isn’t so. It mustn’t be so. By blessing these machines, we are causing a contract to be made and a current to run between the one pole, religion, and the other, technology . . . These machines become a modern means of contact between God and man.”

So last week at the Jesuit philosophical institute known as the Aloysianum (for St. Aloysius Gonzaga) in Gallarate, near Milan, man put his electronic brains to work for the glory of God. The experiment began ten years ago, when a young Jesuit named Roberto Busa at Rome’s Gregorian University chose an extraordinary project for his doctor’s thesis in theology: sorting out the different shades of meaning of every word used by St. Thomas Aquinas. But when he found that Aquinas had written 13 million words, Busa sadly settled for an analysis of only one word—the various meanings assigned by St. Thomas to the preposition “in.” Even this took him four years, and it irked him that the original task remained undone.

With permission from Jesuit General John B. Janssens himself, Father Busa took his problem to the U.S. and to International Business Machines. When he heard what Busa wanted, IBM Founder Thomas J. Watson threw up his hands. “Even if you had time to waste for the rest of your life, you couldn’t do a job like that,” he said. “You seem to be more go-ahead and American than we are!”

But in seven years IBM technicians in the U.S. and in Italy, working with Busa, devised a way to do the job. The complete works of Aquinas will be typed onto punch cards; the machines will then work through the words and produce a systematic index of every word St. Thomas used, together with the number of times it appears, where it appears, and the six words immediately preceding and following each appearance (to give the context). This will take the machines 8,125 hours; the same job would be likely to take one man a lifetime.
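What the machines were asked to produce is, in modern terms, a keyword-in-context concordance. As a rough illustration of the idea only (a hypothetical Python sketch with invented names, not the punch-card workflow IBM actually built), an index recording each word's occurrences together with the six words on either side might look like this:

```python
import re
from collections import defaultdict

def build_concordance(text, window=6):
    """Index every word: how often it occurs and, for each occurrence,
    the `window` words immediately before and after it (the context)."""
    words = re.findall(r"[a-zà-ÿ]+", text.lower())
    index = defaultdict(list)
    for i, word in enumerate(words):
        index[word].append({
            "position": i,
            "before": words[max(0, i - window):i],
            "after": words[i + 1:i + 1 + window],
        })
    return index

# Toy usage on a short Latin snippet (a stand-in for Aquinas' 13 million words).
sample = "in principio creavit Deus caelum et terram et vidit Deus quod esset bonum"
concordance = build_concordance(sample)
print(len(concordance["deus"]), "occurrences of 'deus'")
for entry in concordance["in"]:
    print(entry["before"], "| in |", entry["after"])
```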

Next job for the scriptural brain: the Dead Sea Scrolls. In these and other ancient documents, gaps can often be filled in by examining the words immediately preceding and following the gap and determining what other words are most frequently associated with them in the rest of the text. “I am praying to God,” said Father Busa last week, “for ever faster, ever more accurate machines.”
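The gap-filling procedure Father Busa describes can likewise be pictured as a scoring problem: candidate words are ranked by how often they appear near the gap's surrounding words elsewhere in the text. The snippet below is only an illustrative toy under my own assumptions (invented function name, made-up fragment), not the method actually applied to the Scrolls:

```python
from collections import Counter

def rank_gap_candidates(tokens, gap_index, window=2, top=3):
    """Rank candidate words for a gap (tokens[gap_index] is None) by how often
    they occur near the gap's neighbouring words in the rest of the text."""
    context = [t for t in tokens[max(0, gap_index - window):gap_index]
               + tokens[gap_index + 1:gap_index + 1 + window] if t]
    scores = Counter()
    for i, tok in enumerate(tokens):
        if tok is None:
            continue
        neighbours = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        scores[tok] += sum(1 for n in neighbours if n and n in context)
    return scores.most_common(top)

# Toy fragment with one illegible word, marked None.
fragment = "in principio erat verbum et verbum erat apud deum et deus erat verbum".split()
fragment[5] = None                       # pretend the sixth word is lost
print(rank_gap_candidates(fragment, 5))  # candidates ranked by co-occurrence score
```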

Read more: http://www.time.com/time/magazine/article/0,9171,867529,00.html

American Schools Are Gradually Abandoning Cursive Writing (Valor Econômico)

JC e-mail 4302, July 18, 2011.

Most states no longer require schools to teach it; experts see a trend.

The state of Indiana, in the American Midwest, has dropped the requirement that its schools teach cursive writing, the style in which words are formed by letters joined at their ends. It thereby joined a growing wave in the United States of giving curriculum priority to other skills now considered more useful, such as typing on computer keyboards.

With the change, Indiana falls in line with a common set of standards adopted by 46 American states. Those standards make no mention of cursive writing, but they do recommend teaching typing. It is an acknowledgment that, with new technologies such as computers and smartphones, people less and less need to write in cursive, whether at work or in everyday life. It is enough to learn to write by hand – a requirement that remains part of Indiana's curriculum and of the common standards adopted by the states – whether in print letters, in cursive, or in a mix of the two styles.

It also reflects what many in the United States see as an overloaded school curriculum, with never enough time to teach the subjects considered essential for the tests used in college admissions, such as mathematics and reading. National surveys of how classroom time is spent show that 90% of teachers in grades one through three devote only 60 minutes per week to developing handwriting.

The trend toward abandoning cursive instruction worries some Americans. Some say the coming generations will struggle with basic tasks such as filling out and signing checks. Others argue that young people will be unable to read the Declaration of Independence in the original, which is written entirely in cursive, an argument that appeals to American patriotism.

Richard S. Christen, a professor at the University of Portland's School of Education, in Oregon, is among those who say schools should think twice before dropping cursive instruction, even though he finds it increasingly hard to argue that the skill has practical value.

“If you go back to the 17th or 19th century, it would have been impossible to do business without scribes, who were carefully trained in the technique of writing by hand to record the facts,” Christen told Valor. “But today its practical value is much smaller.”

He argues, however, that cursive writing also has an aesthetic value of its own and speaks to important values such as civility. “Cursive is a way for people to communicate with one another elegantly, with an eye to beauty,” he says. “It is a chance for children to do something with their hands every day, paying attention to elements of beauty such as shapes, contours and lines.” It also encourages children to pay attention to how they address and communicate with other people.

For Professor Steve Graham of Vanderbilt University, one of the leading American authorities on the subject, the central issue is not cursive as such, but preserving a place for handwriting in general in the curriculum.

Despite all the noise around new technologies, he tells Valor, the reality is that most children in American schools still do their classwork by hand, because in general there is not yet a computer for each of them. In that environment, good handwriting is crucial for learning and for academic success, even if computers, iPads and smartphones dominate the world outside the classroom.

Recent research by Graham shows that when schoolwork or exams are presented in poor handwriting, the grades tend to be lower, regardless of the content. “People form opinions about the quality of your ideas based on the quality of your handwriting,” Graham says.

In that study, students wrote essays, which were then graded on a scale of 0 to 100. The next step was to take middling essays, those scoring 50, and reproduce their content in two versions, one in impeccable handwriting and the other in poor but legible handwriting. Graded again, the same middling essay received very good marks when written in a careful hand and lower marks when written in a scrawl.

The ability to write by hand also affects a child's capacity to produce good written content. Speed is crucial. When writing becomes an automatic process, Graham says, ideas flow more quickly from the brain to the paper and so are not lost along the way. People well trained in handwriting write automatically, without having to think about what the pencil is doing – which frees up more neurons for more important things, such as reflecting on the message, organizing ideas and forming sentences and paragraphs.

These are good arguments for not abandoning handwriting instruction in favor of typing. But which skill matters more: cursive or simply writing by hand? Graham says print is generally more legible than cursive, while cursive is faster than print. “The differences are not big enough to justify much debate,” he says. “What matters is having a handwriting style that is both legible and fast.”

In the future, he concedes, teaching handwriting may become less important as one computer per student becomes universal. Teaching typing, on the other hand, grows ever more relevant. “They are very good with their phones, with Twitter, but not with computers,” Graham says.

In Brazil, educators are split over the benefits – Parents disappointed with their children's learning might say the whole thing is a pointless debate over whether it is better to decipher scrawls in a doctor's handwriting or text messages encrypted in a newspeak that has abolished vowels. In any case, Brazilian educators are also divided over the merits of abandoning cursive instruction.

For Telma Weisz, who holds a doctorate in the psychology of learning and development from the Universidade de São Paulo (USP) and is pedagogical supervisor of the São Paulo state government's Programa Ler e Escrever, “handwriting is a holdover from the Middle Ages.” “From the standpoint of learning, nothing is lost by not using handwriting,” she says. According to her, cursive writing helps students memorize the spelling of words, but a word-processing program does the same job, “with more resources, in fact.”

Weisz says the problem is not setting cursive aside and diving into typing, but rather that “in Brazil the conditions for that do not exist. We have schools with no electricity, let alone schools where every student has a computer.”

João Batista Araujo e Oliveira, who holds a doctorate in educational research from Florida State University (USA) and is president of Instituto Alfa e Beto, an NGO devoted to literacy, disagrees with Weisz. “There are studies comparing children who learned with cursive and children who learned on a keyboard, and those who write more by hand retain the spelling of words better,” he says.

Even so, Oliveira does not take a hard line against the policy, adopted by most American states, of not requiring cursive to be taught. “These things do change; it is inevitable. Whenever a new technology appears, you look for a more efficient way forward. Cursive itself, for example, is a great advance over print letters, because the student does not lift the pencil from the paper.”

Oliveira believes that before making such a change one must think about the “side effects,” citing multiplication tables and the calculator as an example. “To pay for a taxi or a coffee, you have to do the arithmetic in your head. Teaching only with the calculator deprives people of a skill that brings enormous social efficiency.”

Luis Marcio Barbosa, head of Colégio Equipe, a school in São Paulo, rules out adopting such a policy at his school. “A whole set of learning comes along with learning cursive that is indispensable to children's development, tied to motor skills and spatial organization.” And besides, he says, “children can learn both; it does not have to be one at the expense of the other.”

Lingodroid Robots Invent Their Own Spoken Language (IEEE Spectrum)

By EVAN ACKERMAN  /  TUE, MAY 17, 2011


When robots talk to each other, they’re not generally using language as we think of it, with words to communicate both concrete and abstract concepts. Now Australian researchers are teaching a pair of robots to communicate linguistically like humans by inventing new spoken words, a lexicon that the roboticists can teach to other robots to generate an entirely new language.

Ruth Schulz and her colleagues at the University of Queensland and Queensland University of Technology call their robots the Lingodroids. The robots consist of a mobile platform equipped with a camera, laser range finder, and sonar for mapping and obstacle avoidance. The robots also carry a microphone and speakers for audible communication between them.

To understand the concept behind the project, consider a simplified case of how language might have developed. Let’s say that all of a sudden you wake up somewhere with your memory completely wiped, not knowing English, Klingon, or any other language. And then you meet some other person who’s in the exact same situation as you. What do you do?

What might very well end up happening is that you invent some random word to describe where you are right now, and then point at the ground and tell the word to the other person, establishing a connection between this new word and a place. And this is exactly what the Lingodroids do. If one of the robots finds itself in an unfamiliar area, it’ll make up a word to describe it, choosing a random combination from a set of syllables. It then communicates that word to other robots that it meets, thereby defining the name of a place.
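The naming step is easy to picture in code. The sketch below is a loose illustration under my own assumptions (a made-up syllable inventory and a simple distance threshold for what counts as a familiar place), not the Lingodroids' actual software:

```python
import random

SYLLABLES = ["ku", "zo", "vu", "pe", "hi", "za", "re", "mo"]  # invented inventory

class Lingodroid:
    """Toy model of one robot: a lexicon mapping invented words to places."""

    def __init__(self, naming_radius=0.5):
        self.lexicon = {}                   # word -> (x, y) of the named place
        self.naming_radius = naming_radius  # how close counts as "the same place"

    def word_for(self, x, y):
        """Return the known word for this place, or coin a new one from syllables."""
        for word, (wx, wy) in self.lexicon.items():
            if (wx - x) ** 2 + (wy - y) ** 2 <= self.naming_radius ** 2:
                return word
        word = "".join(random.sample(SYLLABLES, 2))  # e.g. "kuzo"
        self.lexicon[word] = (x, y)
        return word

    def hear(self, word, x, y):
        """Adopt a word another robot utters at the current location."""
        self.lexicon.setdefault(word, (x, y))

# One robot coins a name for an unfamiliar spot; the other learns it on the spot.
a, b = Lingodroid(), Lingodroid()
coined = a.word_for(2.0, 3.0)
b.hear(coined, 2.0, 3.0)
print(coined, b.lexicon[coined])
```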


From this fundamental base, the robots can play games with each other to reinforce the language. For example, one robot might tell the other robot “kuzo,” and then both robots will race to where they think “kuzo” is. When they meet at or close to the same place, that reinforces the connection between a word and a location. And from “kuzo,” one robot can ask the other about the place they just came from, resulting in words for more abstract concepts like direction and distance:

This image shows what words the robots agreed on for direction and distance concepts. For example, “vupe hiza” would mean a medium long distance to the east.

After playing several hundred games to develop their language, the robots agreed on directions within 10 degrees and distances within 0.375 meters. And using just their invented language, the robots created spatial maps (including areas that they were unable to explore) that agree remarkably well.
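To get a feel for how the location games push the two lexicons toward agreement, here is a self-contained toy sketch: both robots head for where they believe a word refers to, and how far apart they end up is a rough measure of agreement. The threshold and the midpoint-update rule are my own illustrative assumptions, not the published algorithm:

```python
import math

def go_to_game(lexicon_a, lexicon_b, word, threshold=0.5):
    """Both robots head for where they believe `word` refers to; the distance
    between their goal points measures how well their lexicons agree. If they
    meet close enough, both adopt the midpoint, reinforcing the shared meaning."""
    ax, ay = lexicon_a[word]
    bx, by = lexicon_b[word]
    separation = math.hypot(ax - bx, ay - by)
    if separation <= threshold:  # illustrative threshold, not the paper's value
        lexicon_a[word] = lexicon_b[word] = ((ax + bx) / 2, (ay + by) / 2)
    return separation

# Two slightly different beliefs about where "kuzo" is.
lex_a = {"kuzo": (2.0, 3.0)}
lex_b = {"kuzo": (2.2, 3.1)}
print(go_to_game(lex_a, lex_b, "kuzo"))   # about 0.22, so the meeting succeeds
print(lex_a["kuzo"] == lex_b["kuzo"])     # True: both now agree exactly on "kuzo"
```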


In the future, researchers hope to enable the Lingodroids to “talk” about even more elaborate concepts, like descriptions of how to get to a place or the accessibility of places on the map. Ultimately, techniques like this may help robots to communicate with each other more effectively, and may even enable novel ways for robots to talk to humans.

Schulz and her colleagues — Arren Glover, Michael J. Milford, Gordon Wyeth, and Janet Wiles — describe their work in a paper, “Lingodroids: Studies in Spatial Cognition and Language,” presented last week at the IEEE International Conference on Robotics and Automation (ICRA), in Shanghai.
