Tag archive: Chomsky

Semantically speaking: Does meaning structure unite languages? (Eureka/Santa Fe Institute)


Humans’ common cognitive abilities and language dependence may provide an underlying semantic order to the world’s languages


We create words to label people, places, actions, thoughts, and more so we can express ourselves meaningfully to others. Do humans’ shared cognitive abilities and dependence on languages naturally provide a universal means of organizing certain concepts? Or do environment and culture influence each language uniquely?

Using a new methodology that measures how closely words’ meanings are related within and between languages, an international team of researchers has revealed that for many universal concepts, the world’s languages feature a common structure of semantic relatedness.

“Before this work, little was known about how to measure [a culture’s sense of] the semantic nearness between concepts,” says co-author and Santa Fe Institute Professor Tanmoy Bhattacharya. “For example, are the concepts of sun and moon close to each other, as they are both bright blobs in the sky? How about sand and sea, as they occur close by? Which of these pairs is the closer? How do we know?”

Translation, the mapping of relative word meanings across languages, would provide clues. But examining the problem with scientific rigor called for an empirical means to denote the degree of semantic relatedness between concepts.

To get reliable answers, Bhattacharya needed to fully quantify a comparative method that is commonly used to infer linguistic history qualitatively. (He and collaborators had previously developed this quantitative method to study changes in sounds of words as languages evolve.)

“Translation uncovers a disagreement between two languages on how concepts are grouped under a single word,” says co-author and Santa Fe Institute and Oxford researcher Hyejin Youn. “Spanish, for example, groups ‘fire’ and ‘passion’ under ‘incendio,’ whereas Swahili groups ‘fire’ with ‘anger’ (but not ‘passion’).”

To quantify the problem, the researchers chose a few basic concepts that we see in nature (sun, moon, mountain, fire, and so on). Each concept was translated from English into 81 diverse languages, then back into English. Based on these translations, a weighted network was created. The structure of the network was used to compare languages’ ways of partitioning concepts.

The team found that the translated concepts consistently formed three theme clusters in a network, densely connected within themselves and weakly to one another: water, solid natural materials, and earth and sky.
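The weighted-network approach can be sketched in miniature. The toy data below is purely illustrative (the real study used 81 languages and careful back-translation of basic concepts): an edge's weight counts how many languages cover a pair of concepts with a single word, and clusters are groups of concepts linked above a weight threshold.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical toy data: for each language, the sets of English concepts
# that some single word in that language covers (polysemy discovered by
# translating concepts out of English and back again).
polysemy = {
    "spanish": [{"fire", "passion"}],
    "swahili": [{"fire", "anger"}],
    "lang3":   [{"sea", "salt"}, {"sun", "day"}],
    "lang4":   [{"sea", "salt"}, {"fire", "anger"}],
}

# Edge weight = number of languages whose single word covers both concepts.
weights = defaultdict(int)
for word_senses in polysemy.values():
    for concepts in word_senses:
        for a, b in combinations(sorted(concepts), 2):
            weights[(a, b)] += 1

# Cluster with union-find over edges attested in enough languages.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

THRESHOLD = 2  # keep only links attested in at least 2 languages
for (a, b), w in weights.items():
    if w >= THRESHOLD:
        union(a, b)

clusters = defaultdict(set)
for concept in {c for pair in weights for c in pair}:
    clusters[find(concept)].add(concept)
print([sorted(c) for c in clusters.values()])
```

With this data, "fire" and "anger" (two languages) cluster together while "fire" and "passion" (one language) do not; the published method uses a richer weighted-network comparison, but the intuition of counting shared-word links across languages is the same.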

“For the first time, we now have a method to quantify how universal these relations are,” says Bhattacharya. “What is universal – and what is not – about how we group clusters of meanings teaches us a lot about psycholinguistics, the conceptual structures that underlie language use.”

The researchers hope to expand this study’s domain, adding more concepts, then investigating how the universal structure they reveal underlies meaning shift.

Their research was published today in PNAS.

Noam Chomsky is right: It’s the so-called serious who devastate the planet and cause the wars (Salon)

MONDAY, JAN 27, 2014 11:52 AM -0200

Fear the sober voices on the New York Times Op-Ed page and in the think tanks — they’re more dangerous than hawks


Noam Chomsky (Credit: AP/Hatem Moussa)

A captain ready to drive himself and all around him to ruin in the hunt for a white whale. It’s a well-known story, and over the years, mad Ahab in Herman Melville’s most famous novel, Moby-Dick, has been used as an exemplar of unhinged American power, most recently of George W. Bush’s disastrous invasion of Iraq.

But what’s really frightening isn’t our Ahabs, the hawks who periodically want to bomb some poor country, be it Vietnam or Afghanistan, back to the Stone Age.  The respectable types are the true “terror of our age,” as Noam Chomsky called them collectively nearly 50 years ago.  The really scary characters are our soberest politicians, scholars, journalists, professionals, and managers, men and women (though mostly men) who imagine themselves as morally serious, and then enable the wars, devastate the planet, and rationalize the atrocities.  They are a type that has been with us for a long time.  More than a century and a half ago, Melville, who had a captain for every face of empire, found their perfect expression — for his moment and ours.

For the last six years, I’ve been researching the life of an American seal killer, a ship captain named Amasa Delano who, in the 1790s, was among the earliest New Englanders to sail into the South Pacific.  Money was flush, seals were many, and Delano and his fellow ship captains established the first unofficial U.S. colonies on islands off the coast of Chile.  They operated under an informal council of captains, divvied up territory, enforced debt contracts, celebrated the Fourth of July, and set up ad hoc courts of law.  When no bible was available, the collected works of William Shakespeare, found in the libraries of most ships, were used to swear oaths.

From his first expedition, Delano took hundreds of thousands of sealskins to China, where he traded them for spices, ceramics, and tea to bring back to Boston.  During a second, failed voyage, however, an event took place that would make Amasa notorious — at least among the readers of the fiction of Herman Melville.

Here’s what happened: One day in February 1805 in the South Pacific, Amasa Delano spent nearly a full day on board a battered Spanish slave ship, conversing with its captain, helping with repairs, and distributing food and water to its thirsty and starving voyagers, a handful of Spaniards and about 70 West African men and women he thought were slaves. They weren’t.

Those West Africans had rebelled weeks earlier, killing most of the Spanish crew, along with the slaver taking them to Peru to be sold, and demanded to be returned to Senegal.  When they spotted Delano’s ship, they came up with a plan: let him board and act as if they were still slaves, buying time to seize the sealer’s vessel and supplies.  Remarkably, for nine hours, Delano, an experienced mariner and distant relative of future president Franklin Delano Roosevelt, was convinced that he was on a distressed but otherwise normally functioning slave ship.

Having barely survived the encounter, he wrote about the experience in his memoir, which Melville read and turned into what many consider his “other” masterpiece.  Published in 1855, on the eve of the Civil War, Benito Cereno is one of the darkest stories in American literature.  It’s told from the perspective of Amasa Delano as he wanders lost through a shadow world of his own racial prejudices.

One of the things that attracted Melville to the historical Amasa was undoubtedly the juxtaposition between his cheerful self-regard — he considers himself a modern man, a liberal opposed to slavery — and his complete obliviousness to the social world around him.  The real Amasa was well meaning, judicious, temperate, and modest.

In other words, he was no Ahab, whose vengeful pursuit of a metaphysical whale has been used as an allegory for every American excess, every catastrophic war, every disastrous environmental policy, from Vietnam and Iraq to the explosion of the BP oil rig in the Gulf of Mexico in 2010.

Ahab, whose peg-legged pacing of the quarterdeck of his doomed ship enters the dreams of his men sleeping below like the “crunching teeth of sharks.”  Ahab, whose monomania is an extension of the individualism born out of American expansion and whose rage is that of an ego that refuses to be limited by nature’s frontier.  “Our Ahab,” as a soldier in Oliver Stone’s movie Platoon calls a ruthless sergeant who senselessly murders innocent Vietnamese.

Ahab is certainly one face of American power. In the course of writing a book on the history that inspired Benito Cereno, I’ve come to think of it as not the most frightening — or even the most destructive of American faces.  Consider Amasa.

Killing Seals

Since the end of the Cold War, extractive capitalism has spread over our post-industrialized world with a predatory force that would shock even Karl Marx.  From the mineral-rich Congo to the open-pit gold mines of Guatemala, from Chile’s until recently pristine Patagonia to the fracking fields of Pennsylvania and the melting Arctic north, there is no crevice where some useful rock, liquid, or gas can hide, no jungle forbidden enough to keep out the oil rigs and elephant killers, no citadel-like glacier, no hard-baked shale that can’t be cracked open, no ocean that can’t be poisoned.

And Amasa was there at the beginning.  Seal fur may not have been the world’s first valuable natural resource, but sealing represented one of young America’s first experiences of boom-and-bust resource extraction beyond its borders.

With increasing frequency starting in the early 1790s and then in a mad rush beginning in 1798, ships left New Haven, Norwich, Stonington, New London, and Boston, heading for the great half-moon archipelago of remote islands running from Argentina in the Atlantic to Chile in the Pacific.  They were on the hunt for the fur seal, which wears a layer of velvety down like an undergarment just below an outer coat of stiff gray-black hair.

In Moby-Dick, Melville portrayed whaling as the American industry.  Brutal and bloody but also humanizing, work on a whale ship required intense coordination and camaraderie.  Out of the gruesomeness of the hunt, the peeling of the whale’s skin from its carcass, and the hellish boil of the blubber or fat, something sublime emerged: human solidarity among the workers.  And like the whale oil that lit the lamps of the world, divinity itself glowed from the labor: “Thou shalt see it shining in the arm that wields a pick or drives a spike; that democratic dignity which, on all hands, radiates without end from God.”

Sealing was something else entirely.  It called to mind not industrial democracy but the isolation and violence of conquest, settler colonialism, and warfare.  Whaling took place in a watery commons open to all.  Sealing took place on land.  Sealers seized territory, fought one another to keep it, and pulled out what wealth they could as fast as they could before abandoning their empty and wasted island claims.  The process pitted desperate sailors against equally desperate officers in as all-or-nothing a system of labor relations as can be imagined.

In other words, whaling may have represented the promethean power of proto-industrialism, with all the good (solidarity, interconnectedness, and democracy) and bad (the exploitation of men and nature) that went with it, but sealing better predicted today’s postindustrial extracted, hunted, drilled, fracked, hot, and strip-mined world.

Seals were killed by the millions and with a shocking casualness.  A group of sealers would get between the water and the rookeries and simply start clubbing.  A single seal makes a noise like a cow or a dog, but tens of thousands of them together, so witnesses testified, sound like a Pacific cyclone.  Once we “began the work of death,” one sealer remembered, “the battle caused me considerable terror.”

South Pacific beaches came to look like Dante’s Inferno.  As the clubbing proceeded, mountains of skinned, reeking carcasses piled up and the sands ran red with torrents of blood.  The killing was unceasing, continuing into the night by the light of bonfires kindled with the corpses of seals and penguins.

And keep in mind that this massive kill-off took place not for something like whale oil, used by all for light and fire.  Seal fur was harvested to warm the wealthy and meet a demand created by a new phase of capitalism: conspicuous consumption.  Pelts were used for ladies’ capes, coats, muffs, and mittens, and gentlemen’s waistcoats.  The fur of baby pups wasn’t much valued, so some beaches were simply turned into seal orphanages, with thousands of newborns left to starve to death.  In a pinch though, their downy fur, too, could be used — to make wallets.

Occasionally, elephant seals would be taken for their oil in an even more horrific manner: when they opened their mouths to bellow, their hunters would toss rocks in and then begin to stab them with long lances.  Pierced in multiple places like Saint Sebastian, the animals’ high-pressured circulatory system gushed “fountains of blood, spouting to a considerable distance.”

At first the frenetic pace of the killing didn’t matter: there were so many seals.  On one island alone, Amasa Delano estimated, there were “two to three millions of them” when New Englanders first arrived to make “a business of killing seals.”

“If many of them were killed in a night,” wrote one observer, “they would not be missed in the morning.”  It did indeed seem as if you could kill every one in sight one day, then start afresh the next.  Within just a few years, though, Amasa and his fellow sealers had taken so many seal skins to China that Canton’s warehouses couldn’t hold them.  They began to pile up on the docks, rotting in the rain, and their market price crashed.

To make up the margin, sealers further accelerated the pace of the killing — until there was nothing left to kill.  In this way, oversupply and extinction went hand in hand.  In the process, cooperation among sealers gave way to bloody battles over thinning rookeries.  Previously, it only took a few weeks and a handful of men to fill a ship’s hold with skins.  As those rookeries began to disappear, however, more and more men were needed to find and kill the required number of seals and they were often left on desolate islands for two- or three-year stretches, living alone in miserable huts in dreary weather, wondering if their ships were ever going to return for them.

“On island after island, coast after coast,” one historian wrote, “the seals had been destroyed to the last available pup, on the supposition that if sealer Tom did not kill every seal in sight, sealer Dick or sealer Harry would not be so squeamish.”  By 1804, on the very island where Amasa estimated that there had been millions of seals, there were more sailors than prey.  Two years later, there were no seals at all.

The Machinery of Civilization

There exists a near perfect inverse symmetry between the real Amasa and the fictional Ahab, with each representing a face of the American Empire.  Amasa is virtuous, Ahab vengeful.  Amasa seems trapped by the shallowness of his perception of the world.  Ahab is profound; he peers into the depths.  Amasa can’t see evil (especially his own). Ahab sees only nature’s “intangible malignity.”

Both are representatives of the most predatory industries of their day, their ships carrying what Delano once called the “machinery of civilization” to the Pacific, using steel, iron, and fire to kill animals and transform their corpses into value on the spot.

Yet Ahab is the exception, a rebel who hunts his white whale against all rational economic logic.  He has hijacked the “machinery” that his ship represents and rioted against “civilization.”  He pursues his quixotic chase in violation of the contract he has with his employers.  When his first mate, Starbuck, insists that his obsession will hurt the profits of the ship’s owners, Ahab dismisses the concern: “Let the owners stand on Nantucket beach and outyell the Typhoons. What cares Ahab?  Owners, Owners?  Thou art always prating to me, Starbuck, about those miserly owners, as if the owners were my conscience.”

Insurgents like Ahab, however dangerous to the people around them, are not the primary drivers of destruction.  They are not the ones who will hunt animals to near extinction — or who are today forcing the world to the brink.  Those would be the men who never dissent, who either at the frontlines of extraction or in the corporate backrooms administer the destruction of the planet, day in, day out, inexorably, unsensationally, without notice, their actions controlled by an ever greater series of financial abstractions and calculations made in the stock exchanges of New York, London, and Shanghai.

If Ahab is still the exception, Delano is still the rule.  Throughout his long memoir, he reveals himself as ever faithful to the customs and institutions of maritime law, unwilling to take any action that would injure the interests of his investors and insurers.  “All bad consequences,” he wrote, describing the importance of protecting property rights, “may be avoided by one who has a knowledge of his duty, and is disposed faithfully to obey its dictates.”

It is in Delano’s reaction to the West African rebels, once he finally realizes he has been the target of an elaborately staged con, that the distinction separating the sealer from the whaler becomes clear.  The mesmeric Ahab — the “thunder-cloven old oak” — has been taken as a prototype of the twentieth-century totalitarian, a one-legged Hitler or Stalin who uses an emotional magnetism to convince his men to willingly follow him on his doomed hunt for Moby Dick.

Delano is not a demagogue.  His authority is rooted in a much more common form of power: the control of labor and the conversion of diminishing natural resources into marketable items.  As seals disappeared, however, so too did his authority.  His men first began to grouse and then conspire.  In turn, Delano had to rely ever more on physical punishment, on floggings even for the most minor of offences, to maintain control of his ship — until, that is, he came across the Spanish slaver.  Delano might have been personally opposed to slavery, yet once he realized he had been played for a fool, he organized his men to retake the slave ship and violently pacify the rebels.  In the process, they disemboweled some of the rebels and left them writhing in their viscera, using their sealing lances, which Delano described as “exceedingly sharp and as bright as a gentleman’s sword.”

Caught in the pincers of supply and demand, trapped in the vortex of ecological exhaustion, with no seals left to kill, no money to be made, and his own crew on the brink of mutiny, Delano rallied his men to the chase — not of a white whale but of black rebels.  In the process, he reestablished his fraying authority.  As for the surviving rebels, Delano re-enslaved them.  Propriety, of course, meant returning them and the ship to its owners.

Our Amasas, Ourselves

With Ahab, Melville looked to the past, basing his obsessed captain on Lucifer, the fallen angel in revolt against the heavens, and associating him with America’s “manifest destiny,” with the nation’s restless drive beyond its borders.  With Amasa, Melville glimpsed the future.  Drawing on the memoirs of a real captain, he created a new literary archetype, a moral man sure of his righteousness yet unable to link cause to effect, oblivious to the consequences of his actions even as he careens toward catastrophe.

They are still with us, our Amasas.  They have knowledge of their duty and are disposed faithfully to follow its dictates, even unto the ends of the Earth.

TomDispatch regular Greg Grandin’s new book, The Empire of Necessity:  Slavery, Freedom, and Deception in the New World, has just been published. 


Greg Grandin is a professor of history at New York University and the author, most recently, of “Fordlandia: The Rise and Fall of Henry Ford’s Forgotten Jungle City”. TomDispatch has an audio interview with Grandin about Henry Ford’s strange adventure in the Amazon.

Noam Chomsky: What Is the Common Good? (Truthout)

Tuesday, 07 January 2014 10:41

By Noam Chomsky, Truthout | Op-Ed

(Image: Jared Rodriguez / Truthout; Adapted: Brian Hillegas, Reigh LeBlanc, abrinsky)

This article is adapted from a Dewey Lecture by Noam Chomsky at Columbia University in New York on Dec. 6, 2013.

Humans are social beings, and the kind of creature that a person becomes depends crucially on the social, cultural and institutional circumstances of his life.

We are therefore led to inquire into the social arrangements that are conducive to people’s rights and welfare, and to fulfilling their just aspirations – in brief, the common good.

For perspective I’d like to invoke what seem to me virtual truisms. They relate to an interesting category of ethical principles: those that are not only universal, in that they are virtually always professed, but also doubly universal, in that at the same time they are almost universally rejected in practice.

These range from very general principles, such as the truism that we should apply to ourselves the same standards we do to others (if not harsher ones), to more specific doctrines, such as a dedication to promoting democracy and human rights, which is proclaimed almost universally, even by the worst monsters – though the actual record is grim, across the spectrum.

A good place to start is with John Stuart Mill’s classic “On Liberty.” Its epigraph formulates “The grand, leading principle, towards which every argument unfolded in these pages directly converges: the absolute and essential importance of human development in its richest diversity.”

The words are quoted from Wilhelm von Humboldt, a founder of classical liberalism. It follows that institutions that constrain such development are illegitimate, unless they can somehow justify themselves.

Concern for the common good should impel us to find ways to cultivate human development in its richest diversity.

Adam Smith, another Enlightenment thinker with similar views, felt that it shouldn’t be too difficult to institute humane policies. In his “Theory of Moral Sentiments” he observed that “How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it.”

Smith acknowledges the power of what he calls the “vile maxim of the masters of mankind”: “All for ourselves, and nothing for other people.” But the more benign “original passions of human nature” might compensate for that pathology.

Classical liberalism shipwrecked on the shoals of capitalism, but its humanistic commitments and aspirations didn’t die. Rudolf Rocker, a 20th-century anarchist thinker and activist, reiterated similar ideas.

Rocker described what he calls “a definite trend in the historic development of mankind” that strives for “the free unhindered unfolding of all the individual and social forces in life.”

Rocker was outlining an anarchist tradition culminating in anarcho-syndicalism – in European terms, a variety of “libertarian socialism.”

This brand of socialism, he held, doesn’t depict “a fixed, self-enclosed social system” with a definite answer to all the multifarious questions and problems of human life, but rather a trend in human development that strives to attain Enlightenment ideals.

So understood, anarchism is part of a broader range of libertarian socialist thought and action that includes the practical achievements of revolutionary Spain in 1936; reaches further to worker-owned enterprises spreading today in the American rust belt, in northern Mexico, in Egypt, and many other countries, most extensively in the Basque country in Spain; and encompasses the many cooperative movements around the world and a good part of feminist and civil and human rights initiatives.

This broad tendency in human development seeks to identify structures of hierarchy, authority and domination that constrain human development, and then subject them to a very reasonable challenge: Justify yourself.

If these structures can’t meet that challenge, they should be dismantled – and, anarchists believe, “refashioned from below,” as commentator Nathan Schneider observes.

In part this sounds like truism: Why should anyone defend illegitimate structures and institutions? But truisms at least have the merit of being true, which distinguishes them from a good deal of political discourse. And I think they provide useful stepping stones to finding the common good.

For Rocker, “the problem that is set for our time is that of freeing man from the curse of economic exploitation and political and social enslavement.”

It should be noted that the American brand of libertarianism differs sharply from the libertarian tradition, accepting and indeed advocating the subordination of working people to the masters of the economy, and the subjection of everyone to the restrictive discipline and destructive features of markets.

Anarchism is, famously, opposed to the state, while advocating “planned administration of things in the interest of the community,” in Rocker’s words; and beyond that, wide-ranging federations of self-governing communities and workplaces.

Today, anarchists dedicated to these goals often support state power to protect people, society and the earth itself from the ravages of concentrated private capital. That’s no contradiction. People live and suffer and endure in the existing society. Available means should be used to safeguard and benefit them, even if a long-term goal is to construct preferable alternatives.

In the Brazilian rural workers movement, they speak of “widening the floors of the cage” – the cage of existing coercive institutions that can be widened by popular struggle – as has happened effectively over many years.

We can extend the image to think of the cage of state institutions as a protection from the savage beasts roaming outside: the predatory, state-supported capitalist institutions dedicated in principle to private gain, power and domination, with community and people’s interest at most a footnote, revered in rhetoric but dismissed in practice as a matter of principle and even law.

Much of the most respected work in academic political science compares public attitudes and government policy. In “Affluence and Influence: Economic Inequality and Political Power in America,” the Princeton scholar Martin Gilens reveals that the majority of the U.S. population is effectively disenfranchised.

About 70 percent of the population, at the lower end of the wealth/income scale, has no influence on policy, Gilens concludes. Moving up the scale, influence slowly increases. At the very top are those who pretty much determine policy, by means that aren’t obscure. The resulting system is not democracy but plutocracy.

Or perhaps, a little more kindly, it’s what legal scholar Conor Gearty calls “neo-democracy,” a partner to neoliberalism – a system in which liberty is enjoyed by the few, and security in its fullest sense is available only to the elite, but within a system of more general formal rights.

In contrast, as Rocker writes, a truly democratic system would achieve the character of “an alliance of free groups of men and women based on cooperative labor and a planned administration of things in the interest of the community.”

No one took the American philosopher John Dewey to be an anarchist. But consider his ideas. He recognized that “Power today resides in control of the means of production, exchange, publicity, transportation and communication. Whoever owns them rules the life of the country,” even if democratic forms remain. Until those institutions are in the hands of the public, politics will remain “the shadow cast on society by big business,” much as is seen today.

These ideas lead very naturally to a vision of society based on workers’ control of productive institutions, as envisioned by 19th century thinkers, notably Karl Marx but also – less familiar – John Stuart Mill.

Mill wrote, “The form of association, however, which if mankind continue to improve, must be expected to predominate, is … the association of the labourers themselves on terms of equality, collectively owning the capital with which they carry on their operations, and working under managers electable and removable by themselves.”

The Founding Fathers of the United States were well aware of the hazards of democracy. In the Constitutional Convention debates, the main framer, James Madison, warned of these hazards.

Naturally taking England as his model, Madison observed that “In England, at this day, if elections were open to all classes of people, the property of landed proprietors would be insecure. An agrarian law would soon take place,” undermining the right to property.

The basic problem that Madison foresaw in “framing a system which we wish to last for ages” was to ensure that the actual rulers will be the wealthy minority so as “to secure the rights of property agst. the danger from an equality & universality of suffrage, vesting compleat power over property in hands without a share in it.”

Scholarship generally agrees with the Brown University scholar Gordon S. Wood’s assessment that “The Constitution was intrinsically an aristocratic document designed to check the democratic tendencies of the period.”

Long before Madison, Aristotle, in his “Politics,” recognized the same problem with democracy.

Reviewing a variety of political systems, Aristotle concluded that democracy was the best – or perhaps the least bad – form of government. But he recognized a flaw: The great mass of the poor could use their voting power to take the property of the rich, which would be unfair.

Madison and Aristotle arrived at opposite solutions: Aristotle advised reducing inequality, by what we would regard as welfare state measures. Madison felt that the answer was to reduce democracy.

In his last years, Thomas Jefferson, the man who drafted the United States’ Declaration of Independence, captured the essential nature of the conflict, which has far from ended. Jefferson had serious concerns about the quality and fate of the democratic experiment. He distinguished between “aristocrats and democrats.”

The aristocrats are “those who fear and distrust the people, and wish to draw all powers from them into the hands of the higher classes.”

The democrats, in contrast, “identify with the people, have confidence in them, cherish and consider them as the most honest and safe, although not the most wise depository of the public interest.”

Today the successors to Jefferson’s “aristocrats” might argue about who should play the guiding role: technocratic and policy-oriented intellectuals, or bankers and corporate executives.

It is this political guardianship that the genuine libertarian tradition seeks to dismantle and reconstruct from below, while also changing industry, as Dewey put it, “from a feudalistic to a democratic social order” based on workers’ control, respecting the dignity of the producer as a genuine person, not a tool in the hands of others.

Like Karl Marx’s Old Mole – “our old friend, our old mole, who knows so well how to work underground, then suddenly to emerge” – the libertarian tradition is always burrowing close to the surface, always ready to peek through, sometimes in surprising and unexpected ways, seeking to bring about what seems to me to be a reasonable approximation to the common good.

© 2014 Noam Chomsky
Distributed by The New York Times Syndicate

Will one researcher’s discovery deep in the Amazon destroy the foundation of modern linguistics? (The Chronicle of Higher Education)

The Chronicle Review

By Tom Bartlett

March 20, 2012

Angry Words


A Christian missionary sets out to convert a remote Amazonian tribe. He lives with them for years in primitive conditions, learns their extremely difficult language, risks his life battling malaria, giant anacondas, and sometimes the tribe itself. In a plot twist, instead of converting them he loses his faith, morphing from an evangelist trying to translate the Bible into an academic determined to understand the people he’s come to respect and love.

Along the way, the former missionary discovers that the language these people speak doesn’t follow one of the fundamental tenets of linguistics, a finding that would seem to turn the field on its head, undermine basic assumptions about how children learn to communicate, and dethrone the discipline’s long-reigning king, who also happens to be among the most well-known and influential intellectuals of the 20th century.

It feels like a movie, and it may in fact turn into one—there’s a script and producers on board. It’s already a documentary that will air in May on the Smithsonian Channel. A play is in the works in London. And the man who lived the story, Daniel Everett, has written two books about it. His 2008 memoir, Don’t Sleep, There Are Snakes, is filled with Joseph Conrad-esque drama. The new book, Language: The Cultural Tool, which is lighter on jungle anecdotes, instead takes square aim at Noam Chomsky, who has remained the pre-eminent figure in linguistics since the 1960s, thanks to the brilliance of his ideas and the force of his personality.

But before any Hollywood premiere, it’s worth asking whether Everett actually has it right. Answering that question is not straightforward, in part because it hinges on a bit of grammar that no one except linguists ever thinks about. It’s also made tricky by the fact that Everett is the foremost expert on this language, called Pirahã, and one of only a handful of outsiders who can speak it, making it tough for others to weigh in and leading his critics to wonder aloud if he has somehow rigged the results.

More than any of that, though, his claim is difficult to verify because linguistics is populated by a deeply factionalized group of scholars who can’t agree on what they’re arguing about and who tend to dismiss their opponents as morons or frauds or both. Such divisions exist, to varying degrees, in all disciplines, but linguists seem uncommonly hostile. The word “brutal” comes up again and again, as do “spiteful,” “ridiculous,” and “childish.”

With that in mind, why should anyone care about the answer? Because it might hold the key to understanding what separates us from the rest of the animals.

Imagine a linguist from Mars lands on Earth to survey the planet’s languages (presumably after obtaining the necessary interplanetary funding). The alien would reasonably conclude that the languages of the world are mostly similar with interesting but relatively minor variations.

As science-fiction premises go it’s rather dull, but it roughly illustrates Chomsky’s view of linguistics, known as Universal Grammar, which has dominated the field for a half-century. Chomsky is fond of this hypothetical and has used it repeatedly for decades, including in a 1971 discussion with Michel Foucault, during which he added that “this Martian would, if he were rational, conclude that the structure of the knowledge that is acquired in the case of language is basically internal to the human mind.”

In his new book, Everett, now dean of arts and sciences at Bentley University, writes about hearing Chomsky bring up the Martian in a lecture he gave in the early 1990s. Everett noticed a group of graduate students in the back row laughing and exchanging money. After the talk, Everett asked them what was so funny, and they told him they had taken bets on precisely when Chomsky would once again cite the opinion of the linguist from Mars.

The somewhat unkind implication is that the distinguished scholar had become so predictable that his audiences had to search for ways to amuse themselves. Another Chomsky nugget is the way he responds when asked to give a definition of Universal Grammar. He will sometimes say that Universal Grammar is whatever made it possible for his granddaughter to learn to talk but left the world’s supply of kittens and rocks speechless—a less-than-precise answer. Say “kittens and rocks” to a cluster of linguists and eyes are likely to roll.

Chomsky’s detractors have said that Universal Grammar is whatever he needs it to be at that moment. By keeping it mysterious, they contend, he is able to dodge criticism and avoid those who are gunning for him. It’s hard to murder a phantom.

Everett’s book is an attempt to deliver, if not a fatal blow, then at least a solid right cross to Universal Grammar. He believes that the structure of language doesn’t spring from the mind but is instead largely formed by culture, and he points to the Amazonian tribe he studied for 30 years as evidence. It’s not that Everett thinks our brains don’t play a role—they obviously do. But he argues that just because we are capable of language does not mean it is necessarily prewired. As he writes in his book: “The discovery that humans are better at building human houses than porpoises tells us nothing about whether the architecture of human houses is innate.”

The language Everett has focused on, Pirahã, is spoken by just a few hundred members of a hunter-gatherer tribe in a remote part of Brazil. Everett got to know the Pirahã in the late 1970s as an American missionary. With his wife and kids, he lived among them for months at a time, learning their language from scratch. He would point to objects and ask their names. He would transcribe words that sounded identical to his ears but had completely different meanings. His progress was maddeningly slow, and he had to deal with the many challenges of jungle living. His story of taking his family, by boat, to get treatment for severe malaria is an epic in itself.

His initial goal was to translate the Bible. He got his Ph.D. in linguistics along the way and, in 1984, spent a year studying at the Massachusetts Institute of Technology in an office near Chomsky’s. He was a true-blue Chomskyan then, so much so that his kids grew up thinking Chomsky was more saint than professor. “All they ever heard about was how great Chomsky was,” he says. He was a linguist with a dual focus: studying the Pirahã language and trying to save the Pirahã from hell. The second part, he found, was tough because the Pirahã are rooted in the present. They don’t discuss the future or the distant past. They don’t have a belief in gods or an afterlife. And they have a strong cultural resistance to the influence of outsiders, dubbing all non-Pirahã “crooked heads.” They responded to Everett’s evangelism with indifference or ridicule.

As he puts it now, the Pirahã weren’t lost, and therefore they had no interest in being saved. They are a happy people. Living in the present has been an excellent strategy, and their lack of faith in the divine has not hindered them. Everett came to convert them, but over many years found that his own belief in God had melted away.

So did his belief in Chomsky, albeit for different reasons. The Pirahã language is remarkable in many respects. Entire conversations can be whistled, making it easier to communicate in the jungle while hunting. Also, the Pirahã don’t use numbers. They have words for amounts, like a lot or a little, but nothing for five or one hundred. Most significantly, for Everett’s argument, he says their language lacks what linguists call “recursion”—that is, the Pirahã don’t embed phrases in other phrases. They instead speak only in short, simple sentences.

In a recursive language, additional phrases and clauses can be inserted in a sentence, complicating the meaning, in theory indefinitely. For most of us, the lack of recursion in a little-known Brazilian language may not seem terribly interesting. But when Everett published a paper with that finding in 2005, the news created a stir. There were magazine articles and TV appearances. Fellow linguists weighed in, if only in some cases to scoff. Everett had put himself and the Pirahã on the map.

His paper might have received a shrug if Chomsky had not recently co-written a paper, published in 2002, that said (or seemed to say) that recursion was the single most important feature of human language. “In particular, animal communication systems lack the rich expressive and open-ended power of human language (based on humans’ capacity for recursion),” the authors wrote. Elsewhere in the paper, the authors wrote that the faculty of human language “at minimum” contains recursion. They also deemed it the “only uniquely human component of the faculty of language.”

In other words, Chomsky had finally issued what seemed like a concrete, definitive statement about what made human language unique, exposing a possible vulnerability. Before Everett’s paper was published, there had already been back and forth between Chomsky and the authors of a response to the 2002 paper, Ray Jackendoff and Steven Pinker. In the wake of that public disagreement, Everett’s paper had extra punch.

It’s been said that if you want to make a name for yourself in modern linguistics, you have to either align yourself with Chomsky or seek to destroy him. Either you are desirous of his approval or his downfall. With his 2005 paper, Everett opted for the latter course.

Because the pace of academic debate is just this side of glacial, it wasn’t until June 2009 that the next major chapter in the saga was written. Three scholars who are generally allies of Chomsky published a lengthy paper in the journal Language dissecting Everett’s claims one by one. What he considered unique features of Pirahã weren’t unique. What he considered “gaps” in the language weren’t gaps. They argued this in part by comparing Everett’s recent paper to work he published in the 1980s, calling it, slightly snidely, his earlier “rich material.” Everett wasn’t arguing with Chomsky, they claimed; he was arguing with himself. Young Everett thought Pirahã had recursion. Old Everett did not.

Everett’s defense was, in so many words, to agree. Yes, his earlier work was contradictory, but that’s because he was still under Chomsky’s sway when he wrote it. It’s natural, he argued, even when doing basic field work, cataloging the words of a language and the stories of a people, to be biased by your theoretical assumptions. Everett was a Chomskyan through and through, so much so that he had written the MSN Encarta encyclopedia entry on him. But now, after more years with the Pirahã, the scales had fallen from his eyes, and he saw the language on its own terms rather than those he was trying to impose on it.

David Pesetsky, a linguistics professor at MIT and one of the authors of the critical Language paper, thinks Everett was trying to gin up a “Star Wars-level battle between himself and the forces of Universal Grammar,” presumably with Everett as Luke Skywalker and Chomsky as Darth Vader.

Contradicting Everett meant getting into the weeds of the Pirahã language, a language that Everett knew intimately and his critics did not. “Most people took the attitude that this wasn’t worth taking on,” Pesetsky says. “There’s a junior-high-school corridor, two kids are having a fight, and everyone else stands back.” Everett wrote a lengthy reply that Pesetsky and his co-authors found unsatisfying and evasive. “The response could have been ‘Yeah, we need to do this more carefully,'” says Pesetsky. “But he’s had seven years to do it more carefully and he hasn’t.”

Critics haven’t just accused Everett of inaccurate analysis. He’s the sole authority on a language that he says changes everything. If he wanted to, they suggest, he could lie about his findings without getting caught. Some were willing to declare him essentially a fraud. That’s what one of the authors of the 2009 paper, Andrew Nevins, now at University College London, seems to believe. When I requested an interview with Nevins, his reply read, “I may be being glib, but it seems you’ve already analyzed this kind of case!” Below his message was a link to an article I had written about a Dutch social psychologist who had admitted to fabricating results, including creating data from studies that were never conducted. In another e-mail, after declining to expand on his apparent accusation, Nevins wrote that the “world does not need another article about Dan Everett.”

In 2007, Everett heard reports of a letter accusing him of racism, signed by Cilene Rodrigues, a Brazilian linguist who co-wrote the critical paper with Pesetsky and Nevins. According to Everett, he got a call from a source informing him that Rodrigues, an honorary research fellow at University College London, had sent the letter to the organization in Brazil that grants permission for researchers to visit indigenous groups like the Pirahã. He then discovered that the organization, called FUNAI, the National Indian Foundation, would no longer grant him permission to visit the Pirahã, whom he had known for most of his adult life and who remain the focus of his research.

He still hasn’t been able to return. Rodrigues would not respond directly to questions about whether she had signed such a letter, nor would Nevins. Rodrigues forwarded an e-mail from another linguist who has worked in Brazil, which speculates that Everett was denied access to the Pirahã because he did not obtain the proper permits and flouted the law, accusations Everett calls “completely false” and “amazingly nasty lies.”

Whatever the reason for his being blocked, the question remains: Is Everett’s work racist? The accusation runs as follows: because all human languages supposedly have recursion, and Everett says that Pirahã lacks it, he is in effect asserting that the Pirahã are less than human. Part of this claim is based on an online summary, written by a former graduate student of Everett’s, that quotes traders in Brazil saying the Pirahã “talk like chickens and act like monkeys,” something Everett himself never said and condemns. The issue is sensitive because the Pirahã, who eschew the trappings of modern civilization and live the way their forebears lived for thousands of years, are regularly denigrated by their neighbors in the region as less than human. The fact that Everett is American, not Brazilian, lends the charge added symbolic weight.

When you read Everett’s two books about the Pirahã, it is nearly impossible to think that he believes they are inferior. In fact, he goes to great lengths not to condescend and offers defenses of practices that outsiders would probably find repugnant. In one instance he describes, a Pirahã woman died, leaving behind a baby that the rest of the tribe thought was too sick to live. Everett cared for the infant. One day, while he was away, members of the tribe killed the baby, telling him that it was in pain and wanted to die. He cried, but didn’t condemn, instead defending in the book their seemingly cruel logic.

Likewise, the Pirahã’s aversion to learning agriculture, or preserving meat, or the fact that they show no interest in producing artwork, is portrayed by Everett not as a shortcoming but as evidence of the Pirahã’s insistence on living in the present. Their nonhierarchical social system seems to Everett fair and sensible. He is critical of his own earlier attempts to convert the Pirahã to Christianity as a sort of “colonialism of the mind.” If anything, Everett is more open to a charge of romanticizing the Pirahã culture.

Other critics are more measured but equally suspicious. Mark Baker, a linguist at Rutgers University at New Brunswick, who considers himself part of Chomsky’s camp, mentions Everett’s “vested motive” in saying that the Pirahã don’t have recursion. “We always have to be a little careful when we have one person who has researched a language that isn’t accessible to other people,” Baker says. He is dubious of Everett’s claims. “I can’t believe it’s true as described,” he says.

Chomsky hasn’t exactly risen above the fray. He told a Brazilian newspaper that Everett was a “charlatan.” In the documentary about Everett, Chomsky raises the possibility, without saying he believes it, that Everett may have faked his results. Behind the scenes, he has been active as well. According to Pesetsky, Chomsky asked him to send an e-mail to David Papineau, a professor of philosophy at King’s College London, who had written a positive, or at least not negative, review of Don’t Sleep, There Are Snakes. The e-mail complained that Papineau had misunderstood recursion and was incorrectly siding with Everett. Papineau thought he had done nothing of the sort. “For people outside of linguistics, it’s rather surprising to find this kind of protection of orthodoxy,” Papineau says.

And what if the Pirahã don’t have recursion? Rather than ferreting out flaws in Everett’s work as Pesetsky did, Chomsky’s preferred response is to say that it doesn’t matter. In a lecture he gave last October at University College London, he referred to Everett’s work without mentioning his name, talking about those who believed that “exceptions to the generalizations are considered lethal.” He went on to say that a “rational reaction” to finding such exceptions “isn’t to say ‘Let’s throw out the field.'” Universal Grammar permits such exceptions. There is no problem. As Pesetsky puts it: “There’s nothing that says languages without subordinate clauses can’t exist.”

Except the 2002 paper on which Chomsky’s name appears. Pesetsky and others have backed away from that paper, arguing not that it was incorrect, but that it was “written in an unfortunate way” and that the authors were “trying to make certain things comprehensible about linguistics to a larger public, but they didn’t make it clear that they were simplifying.” Some say that Chomsky signed his name to the paper but that it was actually written by Marc Hauser, the former professor of psychology at Harvard University, who resigned after Harvard officials found him guilty of eight counts of research misconduct. (For the record, no one has suggested the alleged misconduct affected his work with Chomsky.)

Chomsky declined to grant me an interview. Those close to him say he sees Everett as seizing on a few stray, perhaps underexplained, lines from that 2002 paper and distorting them for his own purposes. And the truth, Chomsky has made clear, should be apparent to any rational person.

Ted Gibson has heard that one before. When Gibson, a professor of cognitive sciences at MIT, gave a paper on the topic at a January meeting of the Linguistic Society of America, held in Portland, Ore., Pesetsky stood up at the end to ask a question. “His first comment was that Chomsky never said that. I went back and found the slide,” he says. “Whenever I talk about this question in front of these people I have to put up the literal quote from Chomsky. Then I have to put it up again.”

Geoffrey Pullum, a professor of linguistics at the University of Edinburgh, is also vexed at how Chomsky and company have, in his view, played rhetorical sleight-of-hand to make their case. “They have retreated to such an extreme degree that it says really nothing,” he says. “If it has a sentence longer than three words then they’re claiming they were right. If that’s what they claim, then they weren’t claiming anything.” Pullum calls this move “grossly dishonest and deeply silly.”

Everett has been arguing about this for seven years. He says Pirahã undermines Universal Grammar. The other side says it doesn’t. In an effort to settle the dispute, Everett asked Gibson, who holds a joint appointment in linguistics at MIT, to look at the data and reach his own conclusions. He didn’t provide Gibson with data he had collected himself because he knows his critics suspect those data have been cooked. Instead he provided him with sentences and stories collected by his missionary predecessor. That way, no one could object that it was biased.

In the documentary about Everett, handing over the data to Gibson is given tremendous narrative importance. Everett is the bearded, safari-hatted field researcher boating down a river in the middle of nowhere, talking and eating with the natives. Meanwhile, Gibson is the nerd hunched over his keyboard back in Cambridge, crunching the data, examining it with his research assistants, to determine whether Everett really has discovered something. If you watch the documentary, you get the sense that what Gibson has found confirms Everett’s theory. And that’s the story you get from Everett, too. In our first interview, he encouraged me to call Gibson. “The evidence supports what I’m saying,” he told me, noting that he and Gibson had a few minor differences of interpretation.

But that’s not what Gibson thinks. Some of what he found does support Everett. For example, he’s confirmed that Pirahã lacks possessive recursion, phrases like “my brother’s mother’s house.” Also, there appear to be no conjunctions like “and” or “or.” In other instances, though, he’s found evidence that seems to undercut Everett’s claims—specifically, when it comes to noun phrases in sentences like “His mother, Itaha, spoke.”

That is a simple sentence, but inserting the mother’s name is a hallmark of recursion. Gibson’s paper, on which Everett is a co-author, states, “We have provided suggestive evidence that Pirahã may have sentences with recursive structures.”

If that turns out to be true, it would undermine the primary thesis of both of Everett’s books about the Pirahã. Rather than the hero who spent years in the Amazon emerging with evidence that demolished the field’s predominant theory, Everett would be the descriptive linguist who came back with a couple of books full of riveting anecdotes and cataloged a language that is remarkable, but hardly changes the game.

Everett only realized during the reporting of this article that Gibson disagreed with him so strongly. Until then, he had been saying that the results generally supported his theory. “I don’t know why he says that,” Gibson says. “Because it doesn’t. He wrote that our work corroborates it. A better word would be falsified. Suggestive evidence is against it right now and not for it.” Though, he points out, the verdict isn’t final. “It looks like it is recursive,” he says. “I wouldn’t bet my life on it.”

Another researcher, Ray Jackendoff, a linguist at Tufts University, was also provided the data and sees it slightly differently. “I think we decided there is some embedding but it is of limited depth,” he says. “It’s not recursive in the sense that you can have infinitely deep embedding.” Remember that in Chomsky’s paper, it was the idea that “open-ended” recursion was possible that separated human and animal communication. Whether the kind of limited recursion Gibson and Jackendoff have noted qualifies depends, like everything else in this debate, on the interpretation.

Everett thinks what Gibson has found is not recursion, but rather false starts, and he believes further research will back him up. “These are very short, extremely limited examples and they almost always are nouns clarifying other nouns,” he says. “You almost never see anything but that in these cases.” And he points out that there still doesn’t seem to be any evidence of infinite recursion. Says Everett: “There simply is no way, even if what I claim to be false starts are recursive instead, to say, ‘My mother, Susie, you know who I mean, you like her, is coming tonight.’”

The field has a history of theoretical disagreements that turn ugly. In the book The Linguistic Wars, published in 1995, Randy Allen Harris tells the story of another skirmish between Chomsky and a group of insurgent linguists called generative semanticists. Chomsky dismissed his opponents’ arguments as absurd. His opponents accused him of altering his theories when confronted and of general arrogance. “Chomsky has the impressive rhetorical talent of offering ideas which are at once tentative and fully endorsed, of appearing to take the if out of his arguments while nevertheless keeping it safely around,” writes Harris.

That rhetorical talent was on display in his lecture last October, in which he didn’t just disagree with other linguists, but treated their arguments as ridiculous and a mortal danger to the field. The style seems to be reflected in his political activism. Watch his 1969 debate on Firing Line against William F. Buckley Jr., available on YouTube, and witness Chomsky tie his famous interlocutor in knots. It is a thorough, measured evisceration. Chomsky is willing to deploy those formidable skills in linguistic arguments as well.

Everett is far from the only current Chomsky challenger. Recently there’s been a rise in so-called corpus linguistics, a data-driven method of evaluating a language, using computer software to analyze sentences and phrases. The method produces detailed information and, for scholars like Gibson, finally provides scientific rigor for a field he believes has been mired in never-ending theoretical disputes. That, along with the brain-scanning technology that linguists are increasingly making use of, may be able to help resolve questions about how much of the structure of language is innate and how much is shaped by culture.

But Chomsky has little use for that method. In his lecture, he deemed corpus linguistics nonscientific, comparing it to doing physics by describing the swirl of leaves on a windy day rather than performing experiments. This was “just statistical modeling,” he said, evidence of a “kind of pathology in the cognitive sciences.” Referring to brain scans, Chomsky joked that the only way to get a grant was to propose an fMRI.

As for Universal Grammar, some are already writing its obituary. Michael Tomasello, co-director of the Max Planck Institute for Evolutionary Anthropology, has stated flatly that “Universal Grammar is dead.” Two linguists, Nicholas Evans and Stephen Levinson, published a paper in 2009 titled “The Myth of Language Universals,” arguing that the “claims of Universal Grammar … are either empirically false, unfalsifiable, or misleading in that they refer to tendencies rather than strict universals.” Pullum has a similar take: “There is no Universal Grammar now, not if you take Chomsky seriously about the things he says.”

Gibson puts it even more harshly. Just as Chomsky doesn’t think corpus linguistics is science, Gibson doesn’t think Universal Grammar is worthwhile. “The question is, ‘What is it?’ How much is built-in and what does it do? There are no details,” he says. “It’s crazy to say it’s dead. It was never alive.”

Such proclamations have been made before and Chomsky, now 83, has a history of outmaneuvering and outlasting his adversaries. Whether Everett will be yet another in a long line of would-be debunkers who turn into footnotes remains to be seen. “I probably do, despite my best intentions, hope that I turn out to be right,” he says. “I know that it is not scientific. But I would be a hypocrite if I didn’t admit it.”

The Responsibility of Intellectuals, Redux (Boston Review)

Boston Review – SEPTEMBER/OCTOBER 2011

Using Privilege to Challenge the State

Noam Chomsky

A San Francisco mural depicting Archbishop Óscar Romero / Photograph: Franco Folini

Since we often cannot see what is happening before our eyes, it is perhaps not too surprising that what is at a slight distance removed is utterly invisible. We have just witnessed an instructive example: President Obama’s dispatch of 79 commandos into Pakistan on May 1 to carry out what was evidently a planned assassination of the prime suspect in the terrorist atrocities of 9/11, Osama bin Laden. Though the target of the operation, unarmed and with no protection, could easily have been apprehended, he was simply murdered, his body dumped at sea without autopsy. The action was deemed “just and necessary” in the liberal press. There will be no trial, as there was in the case of Nazi criminals—a fact not overlooked by legal authorities abroad who approve of the operation but object to the procedure. As Elaine Scarry reminds us, the prohibition of assassination in international law traces back to a forceful denunciation of the practice by Abraham Lincoln, who condemned the call for assassination as “international outlawry” in 1863, an “outrage,” which “civilized nations” view with “horror” and merits the “sternest retaliation.”

In 1967, writing about the deceit and distortion surrounding the American invasion of Vietnam, I discussed the responsibility of intellectuals, borrowing the phrase from an important essay of Dwight Macdonald’s after World War II. With the tenth anniversary of 9/11 arriving, and widespread approval in the United States of the assassination of the chief suspect, it seems a fitting time to revisit that issue. But before thinking about the responsibility of intellectuals, it is worth clarifying to whom we are referring.

The concept of intellectuals in the modern sense gained prominence with the 1898 “Manifesto of the Intellectuals” produced by the Dreyfusards who, inspired by Emile Zola’s open letter of protest to France’s president, condemned both the framing of French artillery officer Alfred Dreyfus on charges of treason and the subsequent military cover-up. The Dreyfusards’ stance conveys the image of intellectuals as defenders of justice, confronting power with courage and integrity. But they were hardly seen that way at the time. A minority of the educated classes, the Dreyfusards were bitterly condemned in the mainstream of intellectual life, in particular by prominent figures among “the immortals of the strongly anti-Dreyfusard Académie Française,” Steven Lukes writes. To the novelist, politician, and anti-Dreyfusard leader Maurice Barrès, Dreyfusards were “anarchists of the lecture-platform.” To another of these immortals, Ferdinand Brunetière, the very word “intellectual” signified “one of the most ridiculous eccentricities of our time—I mean the pretension of raising writers, scientists, professors and philologists to the rank of supermen,” who dare to “treat our generals as idiots, our social institutions as absurd and our traditions as unhealthy.”

Who then were the intellectuals? The minority inspired by Zola (who was sentenced to jail for libel, and fled the country)? Or the immortals of the academy? The question resonates through the ages, in one or another form, and today offers a framework for determining the “responsibility of intellectuals.” The phrase is ambiguous: does it refer to intellectuals’ moral responsibility as decent human beings in a position to use their privilege and status to advance the causes of freedom, justice, mercy, peace, and other such sentimental concerns? Or does it refer to the role they are expected to play, serving, not derogating, leadership and established institutions?

• • •

One answer came during World War I, when prominent intellectuals on all sides lined up enthusiastically in support of their own states.

In their “Manifesto of 93 German Intellectuals,” leading figures in one of the world’s most enlightened states called on the West to “have faith in us! Believe, that we shall carry on this war to the end as a civilized nation, to whom the legacy of a Goethe, a Beethoven, and a Kant, is just as sacred as its own hearths and homes.” Their counterparts on the other side of the intellectual trenches matched them in enthusiasm for the noble cause, but went beyond in self-adulation. In The New Republic they proclaimed, “The effective and decisive work on behalf of the war has been accomplished by . . . a class which must be comprehensively but loosely described as the ‘intellectuals.’” These progressives believed they were ensuring that the United States entered the war “under the influence of a moral verdict reached, after the utmost deliberation by the more thoughtful members of the community.” They were, in fact, the victims of concoctions of the British Ministry of Information, which secretly sought “to direct the thought of most of the world,” but particularly the thought of American progressive intellectuals who might help to whip a pacifist country into war fever.

John Dewey was impressed by the great “psychological and educational lesson” of the war, which proved that human beings—more precisely, “the intelligent men of the community”—can “take hold of human affairs and manage them . . . deliberately and intelligently” to achieve the ends sought, admirable by definition.

Not everyone toed the line so obediently, of course. Notable figures such as Bertrand Russell, Eugene Debs, Rosa Luxemburg, and Karl Liebknecht were, like Zola, sentenced to prison. Debs was punished with particular severity—a ten-year prison term for raising questions about President Wilson’s “war for democracy and human rights.” Wilson refused him amnesty after the war ended, though Harding finally relented. Some, such as Thorstein Veblen, were chastised but treated less harshly; Veblen was fired from his position in the Food Administration after preparing a report showing that the shortage of farm labor could be overcome by ending Wilson’s brutal persecution of labor, specifically the Industrial Workers of the World. Randolph Bourne was dropped by the progressive journals after criticizing the “league of benevolently imperialistic nations” and their exalted endeavors.

The pattern of praise and punishment is a familiar one throughout history: those who line up in the service of the state are typically praised by the general intellectual community, and those who refuse to line up in service of the state are punished. Thus in retrospect Wilson and the progressive intellectuals who offered him their services are greatly honored, but not Debs. Luxemburg and Liebknecht were murdered and have hardly been heroes of the intellectual mainstream. Russell continued to be bitterly condemned until after his death—and in current biographies still is.

In the 1970s prominent scholars distinguished the two categories of intellectuals more explicitly. A 1975 study, The Crisis of Democracy, labeled Brunetière’s ridiculous eccentrics “value-oriented intellectuals” who pose a “challenge to democratic government which is, potentially at least, as serious as those posed in the past by aristocratic cliques, fascist movements, and communist parties.” Among other misdeeds, these dangerous creatures “devote themselves to the derogation of leadership, the challenging of authority,” and they challenge the institutions responsible for “the indoctrination of the young.” Some even sink to the depths of questioning the nobility of war aims, as Bourne had. This castigation of the miscreants who question authority and the established order was delivered by the scholars of the liberal internationalist Trilateral Commission; the Carter administration was largely drawn from their ranks.

Like The New Republic progressives during World War I, the authors of The Crisis of Democracy extend the concept of the “intellectual” beyond Brunetière’s ridiculous eccentrics to include the better sort as well: the “technocratic and policy-oriented intellectuals,” responsible and serious thinkers who devote themselves to the constructive work of shaping policy within established institutions and to ensuring that indoctrination of the young proceeds on course.

It took Dewey only a few years to shift from the responsible technocratic and policy-oriented intellectual of World War I to an anarchist of the lecture-platform, as he denounced the “un-free press” and questioned “how far genuine intellectual freedom and social responsibility are possible on any large scale under the existing economic regime.”

What particularly troubled the Trilateral scholars was the “excess of democracy” during the time of troubles, the 1960s, when normally passive and apathetic parts of the population entered the political arena to advance their concerns: minorities, women, the young, the old, working people . . . in short, the population, sometimes called the “special interests.” They are to be distinguished from those whom Adam Smith called the “masters of mankind,” who are “the principal architects” of government policy and pursue their “vile maxim”: “All for ourselves and nothing for other people.” The role of the masters in the political arena is not deplored, or discussed, in the Trilateral volume, presumably because the masters represent “the national interest,” like those who applauded themselves for leading the country to war “after the utmost deliberation by the more thoughtful members of the community” had reached its “moral verdict.”

To overcome the excessive burden imposed on the state by the special interests, the Trilateralists called for more “moderation in democracy,” a return to passivity on the part of the less deserving, perhaps even a return to the happy days when “Truman had been able to govern the country with the cooperation of a relatively small number of Wall Street lawyers and bankers,” and democracy therefore flourished.

The Trilateralists could well have claimed to be adhering to the original intent of the Constitution, “intrinsically an aristocratic document designed to check the democratic tendencies of the period” by delivering power to a “better sort” of people and barring “those who were not rich, well born, or prominent from exercising political power,” in the accurate words of the historian Gordon Wood. In Madison’s defense, however, we should recognize that his mentality was pre-capitalist. In determining that power should be in the hands of “the wealth of the nation,” “the more capable set of men,” he envisioned those men on the model of the “enlightened Statesmen” and “benevolent philosopher” of the imagined Roman world. They would be “pure and noble,” “men of intelligence, patriotism, property, and independent circumstances” “whose wisdom may best discern the true interest of their country, and whose patriotism and love of justice will be least likely to sacrifice it to temporary or partial considerations.” So endowed, these men would “refine and enlarge the public views,” guarding the public interest against the “mischiefs” of democratic majorities. In a similar vein, the progressive Wilsonian intellectuals might have taken comfort in the discoveries of the behavioral sciences, explained in 1939 by the psychologist and education theorist Edward Thorndike:

It is the great good fortune of mankind that there is a substantial correlation between intelligence and morality including good will toward one’s fellows . . . . Consequently our superiors in ability are on the average our benefactors, and it is often safer to trust our interests to them than to ourselves.

A comforting doctrine, though some might feel that Adam Smith had the sharper eye.

• • •

Since power tends to prevail, intellectuals who serve their governments are considered responsible, and value-oriented intellectuals are dismissed or denigrated. At home that is.

With regard to enemies, the distinction between the two categories of intellectuals is retained, but with values reversed. In the old Soviet Union, the value-oriented intellectuals were the honored dissidents, while we had only contempt for the apparatchiks and commissars, the technocratic and policy-oriented intellectuals. Similarly in Iran we honor the courageous dissidents and condemn those who defend the clerical establishment. And elsewhere generally.

The honorable term “dissident” is used selectively. It does not, of course, apply, with its favorable connotations, to value-oriented intellectuals at home or to those who combat U.S.-supported tyranny abroad. Take the interesting case of Nelson Mandela, who was removed from the official terrorist list in 2008, and can now travel to the United States without special authorization.

Twenty years earlier, he was the criminal leader of one of the world’s “more notorious terrorist groups,” according to a Pentagon report. That is why President Reagan had to support the apartheid regime, increasing trade with South Africa in violation of congressional sanctions and supporting South Africa’s depredations in neighboring countries, which led, according to a UN study, to 1.5 million deaths. That was only one episode in the war on terrorism that Reagan declared to combat “the plague of the modern age,” or, as Secretary of State George Shultz had it, “a return to barbarism in the modern age.” We may add hundreds of thousands of corpses in Central America and tens of thousands more in the Middle East, among other achievements. Small wonder that the Great Communicator is worshipped by Hoover Institution scholars as a colossus whose “spirit seems to stride the country, watching us like a warm and friendly ghost,” recently honored further by a statue that defaces the American Embassy in London.

The Latin American case is revealing. Those who called for freedom and justice in Latin America are not admitted to the pantheon of honored dissidents. For example, a week after the fall of the Berlin Wall, six leading Latin American intellectuals, all Jesuit priests, had their heads blown off on the direct orders of the Salvadoran high command. The perpetrators were from an elite battalion armed and trained by Washington that had already left a gruesome trail of blood and terror, and had just returned from renewed training at the John F. Kennedy Special Warfare Center and School at Fort Bragg, North Carolina. The murdered priests are not commemorated as honored dissidents, nor are others like them throughout the hemisphere. Honored dissidents are those who called for freedom in enemy domains in Eastern Europe, who certainly suffered, but not remotely like their counterparts in Latin America.

The distinction is worth examination, and tells us a lot about the two senses of the phrase “responsibility of intellectuals,” and about ourselves. It is not seriously in question, as John Coatsworth writes in the recently published Cambridge History of the Cold War, that from 1960 to “the Soviet collapse in 1990, the numbers of political prisoners, torture victims, and executions of nonviolent political dissenters in Latin America vastly exceeded those in the Soviet Union and its East European satellites.” Among the executed were many religious martyrs, and there were mass slaughters as well, consistently supported or initiated by Washington.

Why then the distinction? It might be argued that what happened in Eastern Europe is far more momentous than the fate of the South at our hands. It would be interesting to see the argument spelled out. And also to see the argument explaining why we should disregard elementary moral principles, among them that if we are serious about suffering and atrocities, about justice and rights, we will focus our efforts on where we can do the most good—typically, where we share responsibility for what is being done. We have no difficulty demanding that our enemies follow such principles.

Few of us care, or should, what Andrei Sakharov or Shirin Ebadi say about U.S. or Israeli crimes; we admire them for what they say and do about those of their own states, and the conclusion holds far more strongly for those who live in more free and democratic societies, and therefore have far greater opportunities to act effectively. It is of some interest that in the most respected circles, practice is virtually the opposite of what elementary moral values dictate.

But let us conform and keep only to the matter of historical import.

The U.S. wars in Latin America from 1960 to 1990, quite apart from their horrors, have long-term historical significance. To consider just one important aspect, in no small measure they were wars against the Church, undertaken to crush a terrible heresy proclaimed at Vatican II in 1962, which, under the leadership of Pope John XXIII, “ushered in a new era in the history of the Catholic Church,” in the words of the distinguished theologian Hans Küng, restoring the teachings of the gospels that had been put to rest in the fourth century when the Emperor Constantine established Christianity as the religion of the Roman Empire, instituting “a revolution” that converted “the persecuted church” to a “persecuting church.” The heresy of Vatican II was taken up by Latin American bishops who adopted the “preferential option for the poor.” Priests, nuns, and laypersons then brought the radical pacifist message of the gospels to the poor, helping them organize to ameliorate their bitter fate in the domains of U.S. power.

That same year, 1962, President Kennedy made several critical decisions. One was to shift the mission of the militaries of Latin America from “hemispheric defense”—an anachronism from World War II—to “internal security,” in effect, war against the domestic population, if they raise their heads. Charles Maechling, who led U.S. counterinsurgency and internal defense planning from 1961 to 1966, describes the unsurprising consequences of the 1962 decision as a shift from toleration “of the rapacity and cruelty of the Latin American military” to “direct complicity” in their crimes, to U.S. support for “the methods of Heinrich Himmler’s extermination squads.” One major initiative was a military coup in Brazil, planned in Washington and implemented shortly after Kennedy’s assassination, instituting a murderous and brutal national security state. The plague of repression then spread through the hemisphere, including the 1973 coup installing the Pinochet dictatorship, and later the most vicious of all, the Argentine dictatorship, Reagan’s favorite. Central America’s turn—not for the first time—came in the 1980s under the leadership of the “warm and friendly ghost” who is now revered for his achievements.

The murder of the Jesuit intellectuals as the Berlin wall fell was a final blow in defeating the heresy, culminating a decade of horror in El Salvador that opened with the assassination, by much the same hands, of Archbishop Óscar Romero, the “voice for the voiceless.” The victors in the war against the Church declare their responsibility with pride. The School of the Americas (since renamed), famous for its training of Latin American killers, announces as one of its “talking points” that the liberation theology that was initiated at Vatican II was “defeated with the assistance of the US army.”

Actually, the November 1989 assassinations were almost a final blow. More was needed.

A year later Haiti had its first free election, and to the surprise and shock of Washington, which like others had anticipated the easy victory of its own candidate from the privileged elite, the organized public in the slums and hills elected Jean-Bertrand Aristide, a popular priest committed to liberation theology. The United States at once moved to undermine the elected government, and after the military coup that overthrew it a few months later, lent substantial support to the vicious military junta and its elite supporters. Trade was increased in violation of international sanctions and increased further under Clinton, who also authorized the Texaco oil company to supply the murderous rulers, in defiance of his own directives.

I will skip the disgraceful aftermath, amply reviewed elsewhere, except to point out that in 2004, the two traditional torturers of Haiti, France and the United States, joined by Canada, forcefully intervened, kidnapped President Aristide (who had been elected again), and shipped him off to central Africa. He and his party were effectively barred from the farcical 2010–11 elections, the most recent episode in a horrendous history that goes back hundreds of years and is barely known among the perpetrators of the crimes, who prefer tales of dedicated efforts to save the suffering people from their grim fate.

Another fateful Kennedy decision in 1962 was to send a special forces mission to Colombia, led by General William Yarborough, who advised the Colombian security forces to undertake “paramilitary, sabotage and/or terrorist activities against known communist proponents,” activities that “should be backed by the United States.” The meaning of the phrase “communist proponents” was spelled out by the respected president of the Colombian Permanent Committee for Human Rights, former Minister of Foreign Affairs Alfredo Vázquez Carrizosa, who wrote that the Kennedy administration “took great pains to transform our regular armies into counterinsurgency brigades, accepting the new strategy of the death squads,” ushering in

what is known in Latin America as the National Security Doctrine. . . . [not] defense against an external enemy, but a way to make the military establishment the masters of the game . . . [with] the right to combat the internal enemy, as set forth in the Brazilian doctrine, the Argentine doctrine, the Uruguayan doctrine, and the Colombian doctrine: it is the right to fight and to exterminate social workers, trade unionists, men and women who are not supportive of the establishment, and who are assumed to be communist extremists. And this could mean anyone, including human rights activists such as myself.

In a 1980 study, Lars Schoultz, the leading U.S. academic specialist on human rights in Latin America, found that U.S. aid “has tended to flow disproportionately to Latin American governments which torture their citizens . . . to the hemisphere’s relatively egregious violators of fundamental human rights.” That included military aid, was independent of need, and continued through the Carter years. Ever since the Reagan administration, it has been superfluous to carry out such a study. In the 1980s one of the most notorious violators was El Salvador, which accordingly became the leading recipient of U.S. military aid, to be replaced by Colombia when it took the lead as the worst violator of human rights in the hemisphere. Vázquez Carrizosa himself was living under heavy guard in his Bogotá residence when I visited him there in 2002 as part of a mission of Amnesty International, which was opening its year-long campaign to protect human rights defenders in Colombia because of the country’s horrifying record of attacks against human rights and labor activists, and mostly the usual victims of state terror: the poor and defenseless. Terror and torture in Colombia were supplemented by chemical warfare (“fumigation”), under the pretext of the war on drugs, leading to huge flight to urban slums and misery for the survivors. Colombia’s attorney general’s office now estimates that more than 140,000 people have been killed by paramilitaries, often acting in close collaboration with the U.S.-funded military.

Signs of the slaughter are everywhere. On a nearly impassable dirt road to a remote village in southern Colombia a year ago, my companions and I passed a small clearing with many simple crosses marking the graves of victims of a paramilitary attack on a local bus. Reports of the killings are graphic enough; spending a little time with the survivors, who are among the kindest and most compassionate people I have ever had the privilege of meeting, makes the picture more vivid, and only more painful.

This is the briefest sketch of terrible crimes for which Americans bear substantial culpability, and that we could easily ameliorate, at the very least.

But it is more gratifying to bask in praise for courageously protesting the abuses of official enemies, a fine activity, but not the priority of a value-oriented intellectual who takes the responsibilities of that stance seriously.

The victims within our domains, unlike those in enemy states, are not merely ignored and quickly forgotten, but are also cynically insulted. One striking illustration came a few weeks after the murder of the Latin American intellectuals in El Salvador. Vaclav Havel visited Washington and addressed a joint session of Congress. Before his enraptured audience, Havel lauded the “defenders of freedom” in Washington who “understood the responsibility that flowed from” being “the most powerful nation on earth”—crucially, their responsibility for the brutal assassination of his Salvadoran counterparts shortly before.

The liberal intellectual class was enthralled by his presentation. Havel reminds us that “we live in a romantic age,” Anthony Lewis gushed. Other prominent liberal commentators reveled in Havel’s “idealism, his irony, his humanity,” as he “preached a difficult doctrine of individual responsibility” while Congress “obviously ached with respect” for his genius and integrity; and asked why America lacks intellectuals so profound, who “elevate morality over self-interest” in this way, praising us for the tortured and mutilated corpses that litter the countries that we have left in misery. We need not tarry on what the reaction would have been had Father Ellacuría, the most prominent of the murdered Jesuit intellectuals, spoken such words at the Duma after elite forces armed and trained by the Soviet Union assassinated Havel and half a dozen of his associates—a performance that is inconceivable.

The assassination of bin Laden, too, directs our attention to our insulted victims. There is much more to say about the operation—including Washington’s willingness to face a serious risk of major war and even leakage of fissile materials to jihadis, as I have discussed elsewhere—but let us keep to the choice of name: Operation Geronimo. The name caused outrage in Mexico and was protested by indigenous groups in the United States, but there seems to have been no further notice of the fact that Obama was identifying bin Laden with the Apache Indian chief. Geronimo led the courageous resistance to invaders who sought to consign his people to the fate of “that hapless race of native Americans, which we are exterminating with such merciless and perfidious cruelty, among the heinous sins of this nation, for which I believe God will one day bring [it] to judgement,” in the words of the grand strategist John Quincy Adams, the intellectual architect of manifest destiny, uttered long after his own contributions to these sins. The casual choice of the name is reminiscent of the ease with which we name our murder weapons after victims of our crimes: Apache, Blackhawk, Cheyenne . . . We might react differently if the Luftwaffe were to call its fighter planes “Jew” and “Gypsy.”

Denial of these “heinous sins” is sometimes explicit. To mention a few recent cases, two years ago in one of the world’s leading left-liberal intellectual journals, The New York Review of Books, Russell Baker outlined what he learned from the work of the “heroic historian” Edmund Morgan: namely, that when Columbus and the early explorers arrived they “found a continental vastness sparsely populated by farming and hunting people . . . . In the limitless and unspoiled world stretching from tropical jungle to the frozen north, there may have been scarcely more than a million inhabitants.” The calculation is off by many tens of millions, and the “vastness” included advanced civilizations throughout the continent. No reactions appeared, though four months later the editors issued a correction, noting that in North America there may have been as many as 18 million people—and, unmentioned, tens of millions more “from tropical jungle to the frozen north.” This was all well known decades ago—including the advanced civilizations and the “merciless and perfidious cruelty” of the “extermination”—but not important enough even for a casual phrase. In the London Review of Books a year later, the noted historian Mark Mazower mentioned American “mistreatment of the Native Americans,” again eliciting no comment. Would we accept the word “mistreatment” for comparable crimes committed by enemies?

• • •

If the responsibility of intellectuals refers to their moral responsibility as decent human beings in a position to use their privilege and status to advance the cause of freedom, justice, mercy, and peace—and to speak out not simply about the abuses of our enemies, but, far more significantly, about the crimes in which we are implicated and can ameliorate or terminate if we choose—how should we think of 9/11?

The notion that 9/11 “changed the world” is widely held, understandably. The events of that day certainly had major consequences, domestic and international. One was to lead President Bush to re-declare Ronald Reagan’s war on terrorism—the first one has been effectively “disappeared,” to borrow the phrase of our favorite Latin American killers and torturers, presumably because the consequences do not fit well with preferred self images. Another consequence was the invasion of Afghanistan, then Iraq, and more recently military interventions in several other countries in the region and regular threats of an attack on Iran (“all options are open,” in the standard phrase). The costs, in every dimension, have been enormous. That suggests a rather obvious question, not asked for the first time: was there an alternative?

A number of analysts have observed that bin Laden won major successes in his war against the United States. “He repeatedly asserted that the only way to drive the U.S. from the Muslim world and defeat its satraps was by drawing Americans into a series of small but expensive wars that would ultimately bankrupt them,” the journalist Eric Margolis writes.

The United States, first under George W. Bush and then Barack Obama, rushed right into bin Laden’s trap. . . . Grotesquely overblown military outlays and debt addiction . . . may be the most pernicious legacy of the man who thought he could defeat the United States.

A report from the Costs of War project at Brown University’s Watson Institute for International Studies estimates that the final bill will be $3.2–4 trillion. Quite an impressive achievement by bin Laden.

That Washington was intent on rushing into bin Laden’s trap was evident at once. Michael Scheuer, the senior CIA analyst responsible for tracking bin Laden from 1996 to 1999, writes, “Bin Laden has been precise in telling America the reasons he is waging war on us.” The al Qaeda leader, Scheuer continues, “is out to drastically alter U.S. and Western policies toward the Islamic world.”

And, as Scheuer explains, bin Laden largely succeeded: “U.S. forces and policies are completing the radicalization of the Islamic world, something Osama bin Laden has been trying to do with substantial but incomplete success since the early 1990s. As a result, I think it is fair to conclude that the United States of America remains bin Laden’s only indispensable ally.” And arguably remains so, even after his death.

There is good reason to believe that the jihadi movement could have been split and undermined after the 9/11 attack, which was criticized harshly within the movement. Furthermore, the “crime against humanity,” as it was rightly called, could have been approached as a crime, with an international operation to apprehend the likely suspects. That was recognized in the immediate aftermath of the attack, but no such idea was even considered by decision-makers in government. It seems no thought was given to the Taliban’s tentative offer—how serious an offer, we cannot know—to present the al Qaeda leaders for a judicial proceeding.

At the time, I quoted Robert Fisk’s conclusion that the horrendous crime of 9/11 was committed with “wickedness and awesome cruelty”—an accurate judgment. The crimes could have been even worse. Suppose that Flight 93, downed by courageous passengers in Pennsylvania, had bombed the White House, killing the president. Suppose that the perpetrators of the crime planned to, and did, impose a military dictatorship that killed thousands and tortured tens of thousands. Suppose the new dictatorship established, with the support of the criminals, an international terror center that helped impose similar torture-and-terror states elsewhere, and, as icing on the cake, brought in a team of economists—call them “the Kandahar boys”—who quickly drove the economy into one of the worst depressions in its history. That, plainly, would have been a lot worse than 9/11.

As we all should know, this is not a thought experiment. It happened. I am, of course, referring to what in Latin America is often called “the first 9/11”: September 11, 1973, when the United States succeeded in its intensive efforts to overthrow the democratic government of Salvador Allende in Chile with a military coup that placed General Pinochet’s ghastly regime in office. The dictatorship then installed the Chicago Boys—economists trained at the University of Chicago—to reshape Chile’s economy. Consider the economic destruction, the torture and kidnappings, and multiply the numbers killed by 25 to yield per capita equivalents, and you will see just how much more devastating the first 9/11 was.

The goal of the overthrow, in the words of the Nixon administration, was to kill the “virus” that might encourage all those “foreigners [who] are out to screw us”—screw us by trying to take over their own resources and more generally to pursue a policy of independent development along lines disliked by Washington. In the background was the conclusion of Nixon’s National Security Council that if the United States could not control Latin America, it could not expect “to achieve a successful order elsewhere in the world.” Washington’s “credibility” would be undermined, as Henry Kissinger put it.

The first 9/11, unlike the second, did not change the world. It was “nothing of very great consequence,” Kissinger assured his boss a few days later. And judging by how it figures in conventional history, his words can hardly be faulted, though the survivors may see the matter differently.

These events of little consequence were not limited to the military coup that destroyed Chilean democracy and set in motion the horror story that followed. As already discussed, the first 9/11 was just one act in the drama that began in 1962 when Kennedy shifted the mission of the Latin American militaries to “internal security.” The shattering aftermath is also of little consequence, the familiar pattern when history is guarded by responsible intellectuals.

• • •

It seems to be close to a historical universal that conformist intellectuals, the ones who support official aims and ignore or rationalize official crimes, are honored and privileged in their own societies, and the value-oriented punished in one or another way. The pattern goes back to the earliest records. It was the man accused of corrupting the youth of Athens who drank the hemlock, much as Dreyfusards were accused of “corrupting souls, and, in due course, society as a whole” and the value-oriented intellectuals of the 1960s were charged with interference with “indoctrination of the young.”

In the Hebrew scriptures there are figures who by contemporary standards are dissident intellectuals, called “prophets” in the English translation. They bitterly angered the establishment with their critical geopolitical analysis, their condemnation of the crimes of the powerful, their calls for justice and concern for the poor and suffering. King Ahab, the most evil of the kings, denounced the Prophet Elijah as a hater of Israel, the first “self-hating Jew” or “anti-American” in the modern counterparts. The prophets were treated harshly, unlike the flatterers at the court, who were later condemned as false prophets. The pattern is understandable. It would be surprising if it were otherwise.

As for the responsibility of intellectuals, there does not seem to me to be much to say beyond some simple truths. Intellectuals are typically privileged—merely an observation about usage of the term. Privilege yields opportunity, and opportunity confers responsibilities. An individual then has choices.