Tag archive: Experimental economics

A real-time revolution will up-end the practice of macroeconomics (The Economist)

The Economist Oct 23rd 2021

DOES ANYONE really understand what is going on in the world economy? The pandemic has made plenty of observers look clueless. Few predicted $80 oil, let alone fleets of container ships waiting outside Californian and Chinese ports. As covid-19 let rip in 2020, forecasters overestimated how high unemployment would be by the end of the year. Today prices are rising faster than expected and nobody is sure if inflation and wages will spiral upward. For all their equations and theories, economists are often fumbling in the dark, with too little information to pick the policies that would maximise jobs and growth.

Yet, as we report this week, the age of bewilderment is starting to give way to greater enlightenment. The world is on the brink of a real-time revolution in economics, as the quality and timeliness of information are transformed. Big firms from Amazon to Netflix already use instant data to monitor grocery deliveries and how many people are glued to “Squid Game”. The pandemic has led governments and central banks to experiment, from monitoring restaurant bookings to tracking card payments. The results are still rudimentary, but as digital devices, sensors and fast payments become ubiquitous, the ability to observe the economy accurately and speedily will improve. That holds open the promise of better public-sector decision-making—as well as the temptation for governments to meddle.

The desire for better economic data is hardly new. America’s GNP estimates date to 1934 and initially came with a 13-month time lag. In the 1950s a young Alan Greenspan monitored freight-car traffic to arrive at early estimates of steel production. Ever since Walmart pioneered supply-chain management in the 1980s, private-sector bosses have seen timely data as a source of competitive advantage. But the public sector has been slow to reform how it works. The official figures that economists track—think of GDP or employment—come with lags of weeks or months and are often revised dramatically. Productivity takes years to calculate accurately. It is only a slight exaggeration to say that central banks are flying blind.

Bad and late data can lead to policy errors that cost millions of jobs and trillions of dollars in lost output. The financial crisis would have been a lot less harmful had the Federal Reserve cut interest rates to near zero in December 2007, when America entered recession, rather than in December 2008, when economists at last saw it in the numbers. Patchy data about a vast informal economy and rotten banks have made it harder for India’s policymakers to end their country’s lost decade of low growth. The European Central Bank wrongly raised interest rates in 2011 amid a temporary burst of inflation, sending the euro area back into recession. The Bank of England may be about to make a similar mistake today.

The pandemic has, however, become a catalyst for change. Without the time to wait for official surveys to reveal the effects of the virus or lockdowns, governments and central banks have experimented, tracking mobile phones, contactless payments and the real-time use of aircraft engines. Instead of locking themselves in their studies for years writing the next “General Theory”, today’s star economists, such as Raj Chetty at Harvard University, run well-staffed labs that crunch numbers. Firms such as JPMorgan Chase have opened up treasure chests of data on bank balances and credit-card bills, helping reveal whether people are spending cash or hoarding it.

These trends will intensify as technology permeates the economy. A larger share of spending is shifting online and transactions are being processed faster. Real-time payments grew by 41% in 2020, according to McKinsey, a consultancy (India registered 25.6bn such transactions). More machines and objects are being fitted with sensors, including individual shipping containers, which could help make sense of supply-chain blockages. Govcoins, or central-bank digital currencies (CBDCs), which China is already piloting and over 50 other countries are considering, might soon provide a goldmine of real-time detail about how the economy works.

Timely data would cut the risk of policy cock-ups—it would be easier to judge, say, if a dip in activity was becoming a slump. And the levers governments can pull will improve, too. Central bankers reckon it takes 18 months or more for a change in interest rates to take full effect. But Hong Kong is trying out cash handouts in digital wallets that expire if they are not spent quickly. CBDCs might allow interest rates to fall deeply negative. Good data during crises could let support be precisely targeted; imagine loans only for firms with robust balance-sheets but a temporary liquidity problem. Instead of wasteful universal welfare payments made through social-security bureaucracies, the poor could enjoy instant income top-ups if they lost their job, paid into digital wallets without any paperwork.

The real-time revolution promises to make economic decisions more accurate, transparent and rules-based. But it also brings dangers. New indicators may be misinterpreted: is a global recession starting or is Uber just losing market share? They are not as representative or free from bias as the painstaking surveys by statistical agencies. Big firms could hoard data, giving them an undue advantage. Private firms such as Facebook, which launched a digital wallet this week, may one day have more insight into consumer spending than the Fed does.

Know thyself

The biggest danger is hubris. With a panopticon of the economy, it will be tempting for politicians and officials to imagine they can see far into the future, or to mould society according to their preferences and favour particular groups. This is the dream of the Chinese Communist Party, which seeks to engage in a form of digital central planning.

In fact no amount of data can reliably predict the future. Unfathomably complex, dynamic economies rely not on Big Brother but on the spontaneous behaviour of millions of independent firms and consumers. Instant economics isn’t about clairvoyance or omniscience. Instead its promise is prosaic but transformative: better, timelier and more rational decision-making. ■

Enter third-wave economics

Oct 23rd 2021

AS PART OF his plan for socialism in the early 1970s, Salvador Allende created Project Cybersyn. The Chilean president’s idea was to offer bureaucrats unprecedented insight into the country’s economy. Managers would feed information from factories and fields into a central database. In an operations room bureaucrats could see if production was rising in the metals sector but falling on farms, or what was happening to wages in mining. They would quickly be able to analyse the impact of a tweak to regulations or production quotas.

Cybersyn never got off the ground. But something curiously similar has emerged in Salina, a small city in Kansas. Salina311, a local paper, has started publishing a “community dashboard” for the area, with rapid-fire data on local retail prices, the number of job vacancies and more—in effect, an electrocardiogram of the economy.

What is true in Salina is true for a growing number of national governments. When the pandemic started last year bureaucrats began studying dashboards of “high-frequency” data, such as daily airport passengers and hour-by-hour credit-card spending. In recent weeks they have turned to new high-frequency sources, to get a better sense of where labour shortages are worst or to estimate which commodity price is next in line to soar. Economists have seized on these new data sets, producing a research boom (see chart 1). In the process, they are influencing policy as never before.

This fast-paced economics involves three big changes. First, it draws on data that are not only abundant but also directly relevant to real-world problems. When policymakers are trying to understand what lockdowns do to leisure spending they look at live restaurant reservations; when they want to get a handle on supply-chain bottlenecks they look at day-by-day movements of ships. Troves of timely, granular data are to economics what the microscope was to biology, opening a new way of looking at the world.

Second, the economists using the data are keener on influencing public policy. More of them do quick-and-dirty research in response to new policies. Academics have flocked to Twitter to engage in debate.

And, third, this new type of economics involves little theory. Practitioners claim to let the information speak for itself. Raj Chetty, a Harvard professor and one of the pioneers, has suggested that controversies between economists should be little different from disagreements among doctors about whether coffee is bad for you: a matter purely of evidence. All this is causing controversy among dismal scientists, not least because some, such as Mr Chetty, have done better from the shift than others: a few superstars dominate the field.

Their emerging discipline might be called “third wave” economics. The first wave emerged with Adam Smith and the “Wealth of Nations”, published in 1776. Economics mainly involved books or papers written by one person, focusing on some big theoretical question. Smith sought to tear down the monopolistic habits of 18th-century Europe. In the 20th century John Maynard Keynes wanted people to think differently about the government’s role in managing the economic cycle. Milton Friedman aimed to eliminate many of the responsibilities that politicians, following Keynes’s ideas, had arrogated to themselves.

All three men had a big impact on policies—as late as 1850 Smith was quoted 30 times in Parliament—but in a diffuse way. Data were scarce. Even by the 1970s more than half of economics papers focused on theory alone, suggests a study published in 2012 by Daniel Hamermesh, an economist.

That changed with the second wave of economics. By 2011 purely theoretical papers accounted for only 19% of publications. The growth of official statistics gave wonks more data to work with. More powerful computers made it easier to spot patterns and ascribe causality (this year’s Nobel prize was awarded for the practice of identifying cause and effect). The average number of authors per paper rose, as the complexity of the analysis increased (see chart 2). Economists had greater involvement in policy: rich-world governments began using cost-benefit analysis for infrastructure decisions from the 1950s.

Second-wave economics nonetheless remained constrained by data. Most national statistics are published with lags of months or years. “The traditional government statistics weren’t really all that helpful—by the time they came out, the data were stale,” says Michael Faulkender, an assistant treasury secretary in Washington at the start of the pandemic. The quality of official local economic data is mixed, at best; they do a poor job of covering the housing market and consumer spending. National statistics came into being at a time when the average economy looked more industrial, and less service-based, than it does now. The Standard Industrial Classification, introduced in 1937-38 and still in use with updates, divides manufacturing into 24 subsections, but the entire financial industry into just three.

The mists of time

Especially in times of rapid change, policymakers have operated in a fog. “If you look at the data right now…we are not in what would normally be characterised as a recession,” argued Edward Lazear, then chairman of the White House Council of Economic Advisers, in May 2008. Five months later, after Lehman Brothers had collapsed, the IMF noted that America was “not necessarily” heading for a deep recession. In fact America had entered a recession in December 2007. In 2007-09 there was no surge in economics publications. Economists’ recommendations for policy were mostly based on judgment, theory and a cursory reading of national statistics.

The gap between official data and what is happening in the real economy can still be glaring. Walk around a Walmart in Kansas and many items, from pet food to bottled water, are in short supply. Yet some national statistics fail to show such problems. Dean Baker of the Centre for Economic and Policy Research, using official data, points out that American real inventories, excluding cars and farm products, are barely lower than before the pandemic.

There were hints of an economics third wave before the pandemic. Some economists were finding new, extremely detailed streams of data, such as anonymised tax records and location information from mobile phones. The analysis of these giant data sets requires the creation of what are in effect industrial labs, teams of economists who clean and probe the numbers. Susan Athey, a trailblazer in applying modern computational methods in economics, has 20 or so non-faculty researchers at her Stanford lab (Mr Chetty’s team boasts similar numbers). Of the 20 economists with the most cited new work during the pandemic, three run industrial labs.

More data sprouted from firms. Visa and Square record spending patterns, Apple and Google track movements, and security companies know when people go in and out of buildings. “Computers are in the middle of every economic arrangement, so naturally things are recorded,” says Jon Levin of Stanford’s Graduate School of Business. Jamie Dimon, the boss of JPMorgan Chase, a bank, is an unlikely hero of the emergence of third-wave economics. In 2015 he helped set up an institute at his bank which tapped into data from its network to analyse questions about consumer finances and small businesses.

The Brexit referendum of June 2016 was the first big event when real-time data were put to the test. The British government and investors needed to get a sense of this unusual shock long before Britain’s official GDP numbers came out. They scraped web pages for telltale signs such as restaurant reservations and the number of supermarkets offering discounts—and concluded, correctly, that though the economy was slowing, it was far from the catastrophe that many forecasters had predicted.

Real-time data might have remained a niche pursuit for longer were it not for the pandemic. Chinese firms have long produced granular high-frequency data on everything from cinema visits to the number of glasses of beer that people are drinking daily. Beer-and-movie statistics are a useful cross-check against sometimes dodgy official figures. China-watchers turned to them in January 2020, when lockdowns began in Hubei province. The numbers showed that the world’s second-largest economy was heading for a slump. And they made it clear to economists elsewhere how useful such data could be.

Vast and fast

In the early days of the pandemic Google started releasing anonymised data on people’s physical movements; this has helped researchers produce a day-by-day measure of the severity of lockdowns (see chart 3). OpenTable, a booking platform, started publishing daily information on restaurant reservations. America’s Census Bureau quickly introduced a weekly survey of households, asking them questions ranging from their employment status to whether they could afford to pay the rent.

In May 2020 Jose Maria Barrero, Nick Bloom and Steven Davis, three economists, began a monthly survey of American business practices and work habits. Working-age Americans are paid to answer questions on how often they plan to visit the office, say, or how they would prefer to greet a work colleague. “People often complete a survey during their lunch break,” says Mr Bloom, of Stanford University. “They sit there with a sandwich, answer some questions, and that pays for their lunch.”

Demand for research to understand a confusing economic situation jumped. The first analysis of America’s $600 weekly boost to unemployment insurance, implemented in March 2020, was published in weeks. The British government knew by October 2020 that a scheme to subsidise restaurant attendance in August 2020 had probably boosted covid infections. Many apparently self-evident things about the pandemic—that the economy collapsed in March 2020, that the poor have suffered more than the rich, or that the shift to working from home is turning out better than expected—only seem obvious because of rapid-fire economic research.

It is harder to quantify the policy impact. Some economists scoff at the notion that their research has influenced politicians’ pandemic response. Many studies using real-time data suggested that the Paycheck Protection Programme, an effort to channel money to American small firms, was doing less good than hoped. Yet small-business lobbyists ensured that politicians did not get rid of it for months. Tyler Cowen, of George Mason University, points out that the most significant contribution of economists during the pandemic involved recommending early pledges to buy vaccines—based on older research, not real-time data.

Still, Mr Faulkender says that the special support for restaurants that was included in America’s stimulus was influenced by a weak recovery in the industry seen in the OpenTable data. Research by Mr Chetty in early 2021 found that stimulus cheques sent in December boosted spending by lower-income households, but not much for richer households. He claims this informed the decision to place stronger income limits on the stimulus cheques sent in March.

Shaping the economic conversation

As for the Federal Reserve, in May 2020 the Dallas and New York regional Feds and James Stock, a Harvard economist, created an activity index using data from SafeGraph, a data provider that tracks mobility using mobile-phone pings. The St Louis Fed used data from Homebase to track employment numbers daily. Both showed shortfalls of economic activity in advance of official data. This led the Fed to communicate its doveish policy stance faster.
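As a rough illustration of the kind of index described above (not the Fed’s actual methodology), daily mobility counts can be turned into an activity measure by expressing each day as a percentage deviation from a pre-pandemic baseline. The numbers below are invented:

```python
def activity_index(daily_visits, baseline_visits):
    """Percent deviation of each day's visit count from the baseline mean."""
    baseline = sum(baseline_visits) / len(baseline_visits)
    return [round(100 * (v - baseline) / baseline, 1) for v in daily_visits]

# Baseline: typical daily visit counts before the pandemic (invented numbers).
baseline = [1000, 1040, 980, 1020, 960]

# Observed counts as lockdowns begin (invented numbers).
observed = [990, 820, 560, 430, 410]

print(activity_index(observed, baseline))  # → [-1.0, -18.0, -44.0, -57.0, -59.0]
```

An official statistic would reveal a collapse like this only months later; a daily index of this shape shows it within days, which is the advantage the regional Feds were exploiting.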

Speedy data also helped frame debate. Everyone realised the world was in a deep recession much sooner than they had in 2007-09. In the IMF’s overviews of the global economy in 2009, 40% of the papers cited had been published in 2008-09. In the overview published in October 2020, by contrast, over half the citations were for papers published that year.

The third wave of economics has been better for some practitioners than others. As lockdowns began, many male economists found themselves at home with no teaching responsibilities and more time to do research. Female ones often picked up the slack of child care. A paper in Covid Economics, a rapid-fire journal, finds that female authors accounted for 12% of economics working-paper submissions during the pandemic, compared with 20% before. Economists lucky enough to have researched topics before the pandemic which became hot, from home-working to welfare policy, were suddenly in demand.

There are also deeper shifts in the value placed on different sorts of research. The Economist has examined rankings of economists from IDEAS RePEC, a database of research, and citation data from Google Scholar. We divided economists into three groups: “lone wolves” (who publish with less than one unique co-author per paper on average); “collaborators” (those who tend to work with more than one unique co-author per paper, usually two to four people); and “lab leaders” (researchers who run a large team of dedicated assistants). We then looked at the top ten economists for each as measured by RePEC author rankings for the past ten years.

Collaborators performed far ahead of the other two groups during the pandemic (see chart 4). Lone wolves did worst: working with large data sets benefits from a division of labour. Why collaborators did better than lab leaders is less clear. They may have been more nimble in working with those best suited for the problems at hand; lab leaders are stuck with a fixed group of co-authors and assistants.
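The grouping rule described above can be sketched in a few lines. The thresholds and sample data here are illustrative assumptions, not The Economist’s exact method:

```python
def classify(papers):
    """Classify an economist by average unique co-authors per paper.

    papers: a list with one entry per paper, each entry being the list of
    that paper's co-authors (excluding the economist being classified).
    Thresholds are assumptions chosen to match the article's description.
    """
    avg = sum(len(set(coauthors)) for coauthors in papers) / len(papers)
    if avg < 1:
        return "lone wolf"
    elif avg <= 4:
        return "collaborator"
    return "lab leader"

solo = [[], [], ["A"]]                       # mostly single-authored work
team = [["A", "B"], ["C", "D", "E"]]         # small rotating teams
lab = [["A", "B", "C", "D", "E", "F"]] * 3   # one large fixed group

print(classify(solo))  # → lone wolf
print(classify(team))  # → collaborator
print(classify(lab))   # → lab leader
```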

The most popular types of research highlight another aspect of the third wave: its usefulness for business. Scott Baker, another economist, and Messrs Bloom and Davis—three of the top four authors during the pandemic compared with the year before—are all “collaborators” and use daily newspaper data to study markets. Their uncertainty index has been used by hedge funds to understand the drivers of asset prices. The research by Messrs Bloom and Davis on working from home has also gained attention from businesses seeking insight on the transition to remote work.

But does it work in theory?

Not everyone likes where the discipline is going. When economists say that their fellows are turning into data scientists, it is not meant as a compliment. A kinder interpretation is that the shift to data-heavy work is correcting a historical imbalance. “The most important problem with macro over the past few decades has been that it has been too theoretical,” says Jón Steinsson of the University of California, Berkeley, in an essay published in July. A better balance with data improves theory. Half of the recent Nobel prize went for the application of new empirical methods to labour economics; the other half was for the statistical theory around such methods.

Some critics question the quality of many real-time sources. High-frequency data are less accurate at estimating levels (for example, the total value of GDP) than they are at estimating changes, and in particular turning-points (such as when growth turns into recession). In a recent review of real-time indicators Samuel Tombs of Pantheon Macroeconomics, a consultancy, pointed out that OpenTable data tended to exaggerate the rebound in restaurant attendance last year.
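A toy example makes the levels-versus-changes point concrete: a proxy that systematically overstates the level of activity can still match every turning-point. All numbers here are invented:

```python
true_activity = [100, 95, 80, 70, 75, 85]    # the (unobserved) truth
proxy = [130, 122, 105, 90, 98, 112]         # systematically overstates levels

def direction(series):
    """Sign of each period-on-period change: +1 for up, -1 for down."""
    return [1 if b > a else -1 for a, b in zip(series, series[1:])]

# Levels disagree: the proxy overstates activity throughout...
print(proxy[0] - true_activity[0])  # → 30

# ...but the pattern of rises and falls, including the turning-point,
# is identical in the two series.
print(direction(proxy) == direction(true_activity))  # → True
```

This is why analysts like Mr Tombs treat such series as signals of momentum rather than estimates of the level itself.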

Others have worries about the new incentives facing economists. Researchers now race to post a working paper with America’s National Bureau of Economic Research in order to stake their claim to an area of study or to influence policymakers. The downside is that consumers of fast-food academic research often treat it as if it is as rigorous as the slow-cooked sort—papers which comply with the old-fashioned publication process involving endless seminars and peer review. A number of papers using high-frequency data which generated lots of clicks, including one which claimed that a motorcycle rally in South Dakota had caused a spike in covid cases, have since been called into question.

Whatever the concerns, the pandemic has given economists a new lease of life. During the Chilean coup of 1973 members of the armed forces broke into Cybersyn’s operations room and smashed up the slides of graphs—not only because it was Allende’s creation, but because the idea of an electrocardiogram of the economy just seemed a bit weird. Third-wave economics is still unusual, but ever less odd. ■

5 Economists Redefining… Everything. Oh Yes, And They’re Women (Forbes)

Avivah Wittenberg-Cox

May 31, 2020, 09:56am EDT

Five female economists. From top left: Mariana Mazzucato, Carlota Perez, Kate Raworth, Stephanie Kelton, Esther Duflo.

Few economists become household names. Last century, it was John Maynard Keynes or Milton Friedman. Today, Thomas Piketty has become the economists’ poster-boy. Yet listen to the buzz, and it is five female economists who deserve our attention. They are revolutionising their field by questioning the meaning of everything from ‘value’ and ‘debt’ to ‘growth’ and ‘GDP.’ Esther Duflo, Stephanie Kelton, Mariana Mazzucato, Carlota Perez and Kate Raworth are united in one thing: their amazement at the way economics has been defined and debated to date. Their incredulity is palpable.

It reminds me of many women I’ve seen emerge into power over the past decade. Like Rebecca Henderson, a Management and Strategy professor at Harvard Business School and author of the new Reimagining Capitalism in a World on Fire. “It’s odd to finally make it to the inner circle,” she says, “and discover just how strangely the world is being run.” When women finally make it to the pinnacle of many professions, they often discover a world more wart-covered frog than handsome prince. Like Dorothy in The Wizard of Oz, when they get a glimpse behind the curtain, they discover the machinery of power can be more bluster than substance. As newcomers to the game, they can often see this more clearly than the long-term players. Henderson cites Tom Toro’s cartoon as her mantra. A group in rags sits around a fire with the ruins of civilisation in the background. “Yes, the planet got destroyed,” says a man in a disheveled suit, “but for a beautiful moment in time we created a lot of value for shareholders.”

You get the same sense when you listen to the female economists throwing themselves into the still male-dominated economics field. A kind of collective ‘you’re kidding me, right?’ These five female economists are letting the secret out – and inviting people to flip the priorities. A growing number are listening – even the Pope (see below).

All question concepts long considered sacrosanct. Here are four messages they share:

Get Over It – Challenge the Orthodoxy

Described as “one of the most forward-thinking economists of our times,” Mariana Mazzucato is foremost among the flame throwers. A professor at University College London and the Founder/Director of the UCL Institute for Innovation and Public Purpose, she asks fundamental questions about how ‘value’ has been defined, who decides what that means, and who gets to measure it. Her TED talk, provocatively titled “What is economic value? And who creates it?”, lays down the gauntlet. “If some people are value creators,” she asks, “what does that make everyone else? The couch potatoes? The value extractors? The value destroyers?” She wants to make economics explicitly serve the people, rather than explain their servitude.

Stephanie Kelton takes on our approach to debt and spoofs the simplistic metaphors, like comparing national income and expenditure to ‘family budgets’ in an attempt to prove how dangerous debt is. In her upcoming book, The Deficit Myth (June 2020), she argues they are not at all similar; what household can print additional money, or set interest rates? Debt should be rebranded as a strategic investment in the future. Deficits can be used in ways good or bad but are themselves a neutral and powerful policy tool. “They can fund unjust wars that destabilize the world and cost millions their lives,” she writes, “or they can be used to sustain life and build a more just economy that works for the many and not just the few.” Like all the economists profiled here, she’s pointing at the mind and the meaning behind the money.

Get Green Growth – Reshaping Growth Beyond GDP

Kate Raworth, a Senior Research Associate at Oxford University’s Environmental Change Institute, is the author of Doughnut Economics. She challenges our obsession with growth, and its outdated measures. The concept of Gross Domestic Product (GDP) was created in the 1930s and is being applied in the 21st century to an economy ten times larger. GDP’s limited scope (e.g. ignoring the value of unpaid labour like housework and parenting, or making no distinction between revenues from weapons or water) has kept us “financially, politically and socially addicted to growth” without integrating its costs on people and planet. She is pushing for new visual maps and metaphors to represent sustainable growth that doesn’t compromise future generations. This means moving away from the linear, upward-moving line of ‘progress’ ingrained in us all, to a “regenerative and distributive” model designed to engage everyone and shaped like … a doughnut (food and babies figure prominently in these women’s metaphors).

Carlota Perez doesn’t want to stop or slow growth; she wants to dematerialize it. “Green won’t spread by guilt and fear, we need aspiration and desire,” she says. Her push is towards a redefinition of the ‘good life’ and the need for “smart green growth” to be fuelled by a desire for new, attractive and aspirational lifestyles. Lives will be built on a circular economy that multiplies services and intangibles which offer limitless (and less environmentally harmful) growth. She points to every technological revolution creating new lifestyles. She says we can see it emerging, as it has in the past, among the educated, the wealthy and the young: more services rather than more things, active and creative work, a focus on health and care, a move to solar power, intense use of the internet, a preference for customisation over conformity, renting vs owning, and recycling over waste. As these new lifestyles become widespread, they offer immense opportunities for innovation and new jobs to service them.

Get Good Government – The Strategic Role of the State

All these economists want the state to play a major role. Women understand viscerally how reliant the underdogs of any system are on the inclusivity of the rules of the game. “It shapes the context to create a positive sum game” for both the public and business, says Perez. You need an active state to “tilt the playing field toward social good.” Perez outlines five technological revolutions, starting with the industrial one. She suggests we’re halfway through the fifth, the age of Tech & Information. Studying the repetitive arcs of each revolution enables us to see the opportunity of the extraordinary moment we are in. It’s the moment to shape the future for centuries to come. But she balances economic sustainability with the need for social sustainability, warning that one without the other is asking for trouble.

Mariana Mazzucato challenges governments to be more ambitious. They gain confidence and public trust by remembering and communicating what they are there to do. In her mind that is ensuring the public good. This takes vision and strategy, two ingredients she says are too often sorely lacking. Especially post-COVID, purpose needs to be the driver determining the ‘directionality’ of focus, investments and public/private partnerships. Governments should be using their power – both of investment and procurement – to orient efforts towards the big challenges on our horizon, not just the immediate short-term recovery. They should be putting conditions on the massive financial bailouts they are currently handing out. She points to the contrast in imagination and impact between airline bailouts in Austria and the UK. The Austrian airlines are getting government aid on the condition they meet agreed emissions targets. The UK is supporting airlines without any conditionality, a huge missed opportunity to move towards larger, broader goals of building a better and greener economy out of the crisis.

Get Real – Beyond the Formulae and Into the Field

All of these economists also argue for getting out of the theories and into the field. They reject the idea of nerdy theoretical calculations done within the confines of a university tower and challenge economists to experiment and test their formulae in the real world.

Esther Duflo, Professor of Poverty Alleviation and Development Economics at MIT, is the major proponent of bringing what is accepted practice in medicine to the field of economics: field trials with randomised control groups. She rails against the billions poured into aid without any actual understanding or measurement of the returns. She gently accuses us of being no better with our 21st-century approaches to problems like immunisation, education or malaria than any medieval doctor, throwing money and solutions at things with no idea of their impact. She and her husband, Abhijit Banerjee, have pioneered randomised controlled trials across hundreds of locations in different countries of the world, winning a Nobel Prize for Economics in 2019 for the insights.

They test, for example, how to get people to use bed nets against malaria. Nets are a highly effective preventive measure but getting people to acquire and use them has been a hard nut to crack. Duflo set up experiments to answer the conundrums: If people have to pay for nets, will they value them more? If they are free, will they use them? If they get them free once, will this discourage future purchases? As it turns out, based on these comparisons, take-up is best if nets are initially given, “people don’t get used to handouts, they get used to nets,” and will buy them – and use them – once they understand their effectiveness. Hence, she concludes, we can target policy and money towards impact.
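The arithmetic behind such a comparison is simple. A hedged sketch, with invented trial numbers rather than Duflo’s actual results, might compare take-up rates across randomly assigned arms:

```python
import math

def takeup_rate(used, assigned):
    """Share of assigned households that acquired and used a net."""
    return used / assigned

def diff_in_proportions(p1, n1, p2, n2):
    """Difference in take-up rates between two arms and its standard error."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, se

# Invented trial numbers: 400 households randomly assigned to each arm.
p_free = takeup_rate(328, 400)  # 82% of free-net households use the net
p_paid = takeup_rate(156, 400)  # 39% of paying households acquire and use one

diff, se = diff_in_proportions(p_free, 400, p_paid, 400)
print(f"difference in take-up: {diff:.2f} (s.e. {se:.3f})")
```

Because assignment is random, a gap many standard errors wide can be read as the causal effect of giving the nets away, which is what lets Duflo translate the result directly into policy.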

Mazzucato is also hands-on with a number of governments around the world, including Denmark, the UK, Austria, South Africa and even the Vatican, where she has just signed up for weekly calls contributing to a post-Covid policy. ‘I believe [her vision] can help to think about the future,’ Pope Francis said after reading her book, The Value of Everything: Making and Taking in the Global Economy. No one can accuse her of being stuck in an ivory tower. Like Duflo, she is elbow-deep in creating new answers to seemingly intractable problems.

She warns that we don’t want to go back to normal after Covid-19. Normal was what got us here. Instead, she invites governments to use the crisis to embed ‘directionality’ towards more equitable public good into their recovery strategies and investments. Her approach is to define ambitious ‘missions’ which can focus minds and bring together broad coalitions of stakeholders to create solutions to support them. The original NASA mission to the moon is an obvious precursor model. Why, anyone listening to her comes away thinking, did we forget purpose in our public spending? And why, when so much commercial innovation and profit has grown out of government basic research spending, don’t a greater share of the fruits of success return to promote the greater good?

Economics has long remained a stubbornly male domain and men continue to dominate mainstream thinking. Yet, over time, ideas once considered without value become increasingly visible. The move from outlandish to acceptable to policy is often accelerated by crisis. Emerging from this crisis, five smart economists are offering an innovative range of new ideas about a greener, healthier and more inclusive way forward. Oh, and they happen to be women.

For the next generation: Democracy ensures we don’t take it all with us (Science Daily)

Date: June 25, 2014

Source: Yale University

Summary: Given the chance to vote, people will leave behind a legacy of resources that ensures the survival of the next generation, a series of experiments by psychologists show. However, when people are left to their own devices, the next generation isn’t so lucky.

Given the chance to vote, people will leave behind a legacy of resources that ensures the survival of the next generation, a series of experiments by Yale and Harvard psychologists show. However, when people are left to their own devices, the next generation isn’t so lucky.

“People want to do the right thing; they just need a little help from their institutions,” said David Rand, assistant professor of psychology at Yale and a co-author of the study appearing June 25 in the journal Nature.

The experiments shed light on the psychology underlying issues such as Social Security funding or resource conservation, in which the interests of future generations are at stake.

The study builds upon “public goods” economics experiments that consistently show that people are willing to forgo immediate reward if convinced the group as a whole will benefit. But Rand and Harvard colleagues Martin Nowak, Oliver Hauser, and Alexander Peysakhovich wanted to know if people would be willing to sacrifice resources if the benefit accrues not to individuals in a group, but to people not yet born.

In their experiments, they broke subjects into groups of five and gave them 100 units to spend. In one experiment, each individual could take out up to 20 units, but if the group as a whole used more than 50 units, all successor groups would get nothing. If a given group showed restraint, a line-up of successor groups — new generations each consisting of five new people — would be given the same choices.

The good news was that more than two out of three people were willing to take only 10 units — the sustainable “fair share” allotment — for their own use and preserve resources for the next generation. The bad news was that the minority of selfish individuals consistently destroyed the resource for future generations. Even one or two people in the group taking more than their “fair share” was enough to push the group over the 50-unit threshold, exhausting the resource. In 18 experiments in which individuals were free to extract more than 10 units, only four groups left enough resources to support a second generation, and by the fourth generation, all resources were exhausted.

The results changed dramatically when democratic principles were introduced. All five members of the group voted for a number of units to take, and the median vote was then extracted on behalf of every group member. In this scenario, all groups passed on enough resources to sustain future generations. Even when researchers made the sacrifice more costly — reducing the “sustainable” level of units available to the group to 40 or even 30 — a majority of groups passed resources down through generations.
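The two conditions described above can be sketched as a short simulation. This is an illustrative reconstruction, not the researchers' code: the function names and the example of two over-extractors are assumptions chosen to mirror the reported behaviour.

```python
import statistics

POOL_THRESHOLD = 50   # if a generation extracts more, successor groups get nothing
FAIR_SHARE = 10       # sustainable per-person take
MAX_TAKE = 20         # each individual may take out up to 20 units
GROUP_SIZE = 5

def generation_survives(extractions):
    """A successor generation exists only if total extraction stays at or below the threshold."""
    return sum(extractions) <= POOL_THRESHOLD

def free_choice(selfish_count):
    """Each member extracts independently; 'selfish' members take the 20-unit maximum."""
    return [MAX_TAKE] * selfish_count + [FAIR_SHARE] * (GROUP_SIZE - selfish_count)

def binding_median_vote(votes):
    """All five members vote; the median vote is extracted for every member."""
    return [statistics.median(votes)] * GROUP_SIZE

# With free choice, even two over-extractors doom the next generation:
# 2*20 + 3*10 = 70 > 50.
print(generation_survives(free_choice(selfish_count=2)))               # False

# The same preferences under a binding median vote: the median of
# [20, 20, 10, 10, 10] is 10, so everyone extracts 10 and the total is exactly 50.
print(generation_survives(binding_median_vote([20, 20, 10, 10, 10])))  # True
```

The sketch makes the paper's mechanism concrete: the binding median vote neutralizes a selfish minority, because the selfish votes never become the median.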

Problems arose in a third scenario when only three of five members voted on how many units to take. The results of the vote were not binding for the other two subjects. Here sustainability failed, because a selfish person not bound by the vote could over-consume and destroy the resource.

The latter results would be analogous to the Kyoto Protocol, a non-binding attempt to get nations to reduce carbon emissions, the authors noted.

“You are wasting your time if voting results are not binding on everyone,” Rand said.

While voting may be potentially challenging for global-level international agreements, it is much more promising for local- or national-level sustainability policies, note the researchers. In a final analysis of real-world data, Rand and colleagues show that democratic countries of the world have made most advances toward sustainability, even when accounting for factors such as wealth, population size, economic output, and inequality.

Journal Reference:
  1. Oliver P. Hauser, David G. Rand, Alexander Peysakhovich, Martin A. Nowak. Cooperating with the future. Nature, 2014; DOI: 10.1038/nature13530

They Finally Tested The ‘Prisoner’s Dilemma’ On Actual Prisoners — And The Results Were Not What You Would Expect (Business Insider Australia)

21 July 2013


The “prisoner’s dilemma” is a familiar concept to just about anybody who took Econ 101.

The basic version goes like this. Two criminals are arrested, but police can’t convict either on the primary charge, so they plan to sentence them to a year in jail on a lesser charge. Each of the prisoners, who can’t communicate with each other, is given the option of testifying against their partner. If they testify and their partner remains silent, the partner gets three years and they go free. If they both testify, both get two. If both remain silent, they each get one.

In game theory, betraying your partner, or “defecting,” is always the dominant strategy, as it always has a slightly higher payoff in a simultaneous game. The resulting outcome, mutual defection, is what’s known as a “Nash equilibrium,” after Nobel Prize-winning mathematician and A Beautiful Mind subject John Nash.

In sequential games, where players know each other’s previous behaviour and have the opportunity to punish each other, the equilibrium prediction is still defection.

However, on a Pareto basis, the best outcome for both players is mutual cooperation.
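The dominance argument above can be checked mechanically from the sentences in the story. This is an illustrative encoding, not part of the study; the dictionary layout and function name are assumptions (shorter sentences are better, so each player minimizes years in jail).

```python
# Jail sentences (in years) from the story above; lower is better.
# sentence[(my_move, partner_move)] -> my years in jail
sentence = {
    ("silent",  "silent"):  1,  # both stay quiet on the lesser charge
    ("silent",  "testify"): 3,  # I'm betrayed
    ("testify", "silent"):  0,  # I betray and go free
    ("testify", "testify"): 2,  # mutual betrayal
}

def best_reply(partner_move):
    """The move that minimizes my sentence against a fixed partner move."""
    return min(("silent", "testify"), key=lambda m: sentence[(m, partner_move)])

# Defection is dominant: it is the best reply to either partner move.
assert best_reply("silent") == "testify"    # 0 years beats 1
assert best_reply("testify") == "testify"   # 2 years beats 3

# Yet mutual silence (1 year each) Pareto-dominates the mutual-testimony
# equilibrium (2 years each) -- the heart of the dilemma.
assert sentence[("silent", "silent")] < sentence[("testify", "testify")]
```

The assertions restate the tension in the article: individually rational play leads both players to the (2, 2) outcome, even though (1, 1) is better for both.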

Yet no one had ever actually run the experiment on real prisoners until two University of Hamburg economists tried it out in a recent study comparing the behaviour of inmates and students.

Surprisingly, for the classic version of the game, prisoners were far more cooperative than expected.

Menusch Khadjavi and Andreas Lange put the famous game to the test for the first time ever, putting a group of prisoners in Lower Saxony’s primary women’s prison, as well as students, through both simultaneous and sequential versions of the game. The payoffs obviously weren’t years off sentences, but euros for students, and the equivalent value in coffee or cigarettes for prisoners.

Building on game-theory and behavioural-economics research showing that humans are more cooperative than the purely rational model economists traditionally use would predict, they expected a fair amount of first-mover cooperation, even in the simultaneous version, where there’s no way to react to the other player’s decisions.

And they expected that even in the sequential game, where you get a higher payoff for betraying a cooperative first mover, a fair number would still reciprocate.

As for the difference between student and prisoner behaviour, you’d expect that a prison population might be more jaded and distrustful, and therefore more likely to defect.

The results went exactly the other way: in the simultaneous game, only 37% of students cooperated, while inmates cooperated 56% of the time.

On a pair basis, only 13% of student pairs managed to achieve the best mutual outcome and cooperate, whereas 30% of prisoner pairs did.

In the sequential game, far more students (63%) cooperated, so their mutual cooperation rate jumped to 39%. For prisoners, it remained about the same.

What’s interesting is that the simultaneous game requires far more blind trust from both parties, and you don’t have a chance to retaliate or make up for being betrayed later. Yet prisoners were still significantly more cooperative in that scenario.

Obviously the payoffs aren’t as serious as a year or three of your life, but the paper still demonstrates that prisoners aren’t necessarily as calculating, self-interested, and un-trusting as you might expect, and that, as behavioural economists have argued for years, Nash equilibria, however mathematically interesting, don’t line up with real behaviour all that well.

Power of Suggestion (The Chronicle of Higher Education)

January 30, 2013

The amazing influence of unconscious cues is among the most fascinating discoveries of our time—that is, if it’s true

By Tom Bartlett

New Haven, Conn.

Mark Abramson for The Chronicle Review. John Bargh rocked the world of social psychology with experiments that showed the power of unconscious cues over our behavior.

A framed print of “The Garden of Earthly Delights” hangs above the moss-green, L-shaped sectional in John Bargh’s office on the third floor of Yale University’s Kirtland Hall. Hieronymus Bosch’s famous triptych imagines a natural environment that is like ours (water, flowers) yet not (enormous spiked and translucent orbs). What precisely the 15th-century Dutch master had in mind is still a mystery, though theories abound. On the left is presumably paradise, in the middle is the world, and on the right is hell, complete with knife-faced monster and human-devouring bird devil.

By Bosch’s standard, it’s too much to say the past year has been hellish for Bargh, but it hasn’t been paradise either. Along with personal upheaval, including a lengthy child-custody battle, he has coped with what amounts to an assault on his life’s work, the research that pushed him into prominence, the studies that Malcolm Gladwell called “fascinating” and Daniel Kahneman deemed “classic.” What was once widely praised is now being pilloried in some quarters as emblematic of the shoddiness and shallowness of social psychology. When Bargh responded to one such salvo with a couple of sarcastic blog posts, he was ridiculed as going on a “one-man rampage.” He took the posts down and regrets writing them, but his frustration and sadness at how he’s been treated remain.

Psychology may be simultaneously at the highest and lowest point in its history. Right now its niftiest findings are routinely simplified and repackaged for a mass audience; if you wish to publish a best seller sans bloodsucking or light bondage, you would be well advised to match a few dozen psychological papers with relatable anecdotes and a grabby, one-word title. That isn’t true across the board. Researchers engaged in more technical work on, say, the role of grapheme units in word recognition must comfort themselves with the knowledge that science is, by its nature, incremental. But a social psychologist with a sexy theory has star potential. In the last decade or so, researchers have made astonishing discoveries about the role of consciousness, the reasons for human behavior, the motivations for why we do what we do. This stuff is anything but incremental.

At the same time, psychology has been beset with scandal and doubt. Formerly high-flying researchers like Diederik Stapel, Marc Hauser, and Dirk Smeesters saw their careers implode after allegations that they had cooked their results and managed to slip them past the supposedly watchful eyes of peer reviewers. Psychology isn’t the only field with fakers, but it has its share. Plus there’s the so-called file-drawer problem, that is, the tendency for researchers to publish their singular successes and ignore their multiple failures, making a fluke look like a breakthrough. Fairly or not, social psychologists are perceived to be less rigorous in their methods, generally not replicating their own or one another’s work, instead pressing on toward the next headline-making outcome.

Much of the criticism has been directed at priming. The definitions get dicey here because the term can refer to a range of phenomena, some of which are grounded in decades of solid evidence—like the “anchoring effect,” which happens, for instance, when a store lists a competitor’s inflated price next to its own to make you think you’re getting a bargain. That works. The studies that raise eyebrows are mostly in an area known as behavioral or goal priming, research that demonstrates how subliminal prompts can make you do all manner of crazy things. A warm mug makes you friendlier. The American flag makes you vote Republican. Fast-food logos make you impatient. A small group of skeptical psychologists—let’s call them the Replicators—have been trying to reproduce some of the most popular priming effects in their own labs.

What have they found? Mostly that they can’t get those results. The studies don’t check out. Something is wrong. And because he is undoubtedly the biggest name in the field, the Replicators have paid special attention to John Bargh and the study that started it all.

As in so many other famous psychological experiments, the researcher lies to the subject. After rearranging lists of words into sensible sentences, the subject—a New York University undergraduate—is told that the experiment is about language ability. It is not. In fact, the real test doesn’t begin until the subject exits the room. In the hallway is a graduate student with a stopwatch hidden beneath her coat. She’s pretending to wait for a meeting but really she’s working with the researchers. She times how long it takes the subject to walk from the doorway to a strip of silver tape a little more than 30 feet down the corridor. The experiment hinges on that stopwatch.

The words the subject was asked to rearrange were not random, though they seemed that way (this was confirmed in postexperiment interviews with each subject). They were words like “bingo” and “Florida,” “knits” and “wrinkles,” “bitter” and “alone.” Reading the list, you can almost picture a stooped senior padding around a condo, complaining at the television. A control group unscrambled words that evoked no theme. When the walking times of the two groups were compared, the Florida-knits-alone subjects walked, on average, more slowly than the control group. Words on a page made them act old.

It’s a cute finding. But the more you think about it, the more serious it starts to seem. What if we are constantly being influenced by subtle, unnoticed cues? If “Florida” makes you sluggish, could “cheetah” make you fleet of foot? Forget walking speeds. Is our environment making us meaner or more creative or stupider without our realizing it? We like to think we’re steering the ship of self, but what if we’re actually getting blown about by ghostly gusts?

John Bargh and his co-authors, Mark Chen and Lara Burrows, performed that experiment in 1990 or 1991. They didn’t publish it until 1996. Why sit on such a fascinating result? For starters, they wanted to do it again, which they did. They also wanted to perform similar experiments with different cues. One of those other experiments tested subjects to see if they were more hostile when primed with an African-American face. They were. (The subjects were not African-American.) In the other experiment, the subjects were primed with rude words to see if that would make them more likely to interrupt a conversation. It did.

The researchers waited to publish until other labs had found the same type of results. They knew their finding would be controversial. They knew many people wouldn’t believe it. They were willing to stick their necks out, but they didn’t want to be the only ones.

Since that study was published in the Journal of Personality and Social Psychology, it has been cited more than 2,000 times. Though other researchers did similar work at around the same time, and even before, it was that paper that sparked the priming era. Its authors knew, even before it was published, that the paper was likely to catch fire. They wrote: “The implications for many social psychological phenomena … would appear to be considerable.” Translation: This is a huge deal.

When he was 9 or 10, Bargh decided to become a psychologist. He was in the kitchen of his family’s house in Champaign, Ill., when this revelation came to him. He didn’t know everything that would entail, of course, or what exactly a psychologist did, but he wanted to understand more about human emotion because it was this “mysterious powerful influence on everything.” His dad was an administrator at the University of Illinois, and so he was familiar with university campuses. He liked them. He still does. When he was in high school, he remembers arguing about B.F. Skinner. Everyone else in the class thought Skinner’s ideas were ridiculous. Bargh took the other side, not so much because he embraced the philosophy of radical behaviorism or enjoyed Skinner’s popular writings. It was more because he reveled in contrarianism. “This guy is thinking something nobody else agrees with,” he says now. “Let’s consider that he might be right.”

I met Bargh on a Thursday morning a couple of weeks before Christmas. He was dressed in cable-knit and worn jeans with hiking boots. At 58 he still has a full head of dark, appropriately mussed-up hair. Bargh was reclining on the previously mentioned moss-green sectional while downing coffee to stay alert as he whittled away at a thick stack of finals papers. He rose to greet me, sat back down, and sighed.

The last year has been tough for Bargh. Professionally, the nadir probably came in January, when a failed replication of the famous elderly-walking study was published in the journal PLoS ONE. It was not the first failed replication, but this one stung. In the experiment, the researchers had tried to mirror Bargh’s methods with an important exception: Rather than stopwatches, they used automatic timing devices with infrared sensors to eliminate any potential bias. The words didn’t make subjects act old. They tried the experiment again with stopwatches and added a twist: They told those operating the stopwatches which subjects were expected to walk slowly. Then it worked. The title of their paper tells the story: “Behavioral Priming: It’s All in the Mind, but Whose Mind?”

The paper annoyed Bargh. He thought the researchers didn’t faithfully follow his methods section, despite their claims that they did. But what really set him off was a blog post that explained the results. The post, on the blog Not Exactly Rocket Science, compared what happened in the experiment to the notorious case of Clever Hans, the horse that could supposedly count. It was thought that Hans was a whiz with figures, stomping a hoof in response to mathematical queries. In reality, the horse was picking up on body language from its handler. Bargh was the deluded horse handler in this scenario. That didn’t sit well with him. If the PLoS ONE paper is correct, the significance of his experiment largely dissipates. What’s more, he looks like a fool, tricked by a fairly obvious flaw in the setup.

Bargh responded in two long, detailed posts on his rarely updated Psychology Today blog. He spelled out the errors he believed were made in the PLoS ONE paper. Most crucially, he wrote, in the original experiment there was no way for the graduate student with the stopwatch to know who was supposed to walk slowly and who wasn’t. The posts were less temperate than most public discourse in science, but they were hardly mouth-foaming rants. He referred to “incompetent or ill-informed researchers,” clearly a shot at the paper’s authors. He mocked the journal where the replication was published as “pay to play” and lacking the oversight of traditional journals. The title of the post, “Nothing in Their Heads,” while perhaps a reference to unconscious behavior, seemed less than collegial.

He also expressed concern for readers who count on “supposedly reputable online media sources for accurate information on psychological science.” This was a dig at the blog post’s author, Ed Yong, who Bargh believes had written an unfair piece. “I was hurt by the things that were said, not just in the article, but in Ed Yong’s coverage of it,” Bargh says now. Yong’s post was more, though, than a credulous summary of the study. He interviewed researchers and provided context. The headline, “Why a classic psychology experiment isn’t what it seemed,” might benefit from softening, but if you’re looking for an example of sloppy journalism, this ain’t it.

While Bargh was dismayed by the paper and the publicity, the authors of the replication were equally taken aback by the severity of Bargh’s reaction. “That really threw us off, that response,” says Axel Cleeremans, a professor of cognitive science at the Université Libre de Bruxelles. “It was obvious that he was so dismissive, it was close to frankly insulting. He described us as amateur experimentalists, which everyone knows we are not.” Nor did they feel that his critique of their methods was valid. Even so, they tried the experiment again, taking into account Bargh’s concerns. It still didn’t work.

Bargh took his blog posts down after they were criticized. Though his views haven’t changed, he feels bad about his tone. In our conversations over the last month or so, Bargh has at times vigorously defended his work, pointing to a review he published recently in Trends in Cognitive Sciences that marshals recent priming studies into a kind of state-of-the-field address. Short version: Science marches on, priming’s doing great.

He complains that he has been a victim of scientific bullying (and some sympathetic toward Bargh use that phrase, too). There are other times, though, when he just seems crushed. “You invest your whole career and life in something, and to have this happen near the end of it—it’s very hard to take,” he says. Priming is what Bargh is known for. When he says “my name is a symbol that stands for these kinds of effects,” he’s not being arrogant. That’s a fact. Before the 1996 paper, he had already published respected and much-cited work on unconscious, automatic mental processes, but priming has defined him. In an interview on the Web site Edge a few years ago, back before the onslaught, he explained his research goals: “We have a trajectory downward, always downward, trying to find simple, basic causes and with big effects. We’re looking for simple things—not anything complicated—simple processes or concepts that then have profound effects.” The article labeled him “the simplifier.”

When I ask if he still believes in these effects, he says yes. They have been replicated in multiple labs. Some of those replications have been exact: stopwatch, the same set of words, and so on. Others have been conceptual. While they explore the same idea, maybe the study is about handwriting rather than walking. Maybe it’s about obesity rather than elderly stereotypes. But the gist is the same. “It’s not just my work that’s under attack here,” Bargh says. “It’s lots of people’s research being attacked and dismissed.” He has moments of doubt. How could he not? It’s deeply unsettling to have someone scrutinizing your old papers, looking for inconsistencies, even if you’re fairly confident about what you’ve accomplished. “Maybe there’s something we were doing that I didn’t realize,” he says, explaining the thoughts that have gone through his head. “You start doing that examination.”

So why not do an actual examination? Set up the same experiments again, with additional safeguards. It wouldn’t be terribly costly. No need for a grant to get undergraduates to unscramble sentences and stroll down a hallway. Bargh says he wouldn’t want to force his graduate students, already worried about their job prospects, to spend time on research that carries a stigma. Also, he is aware that some critics believe he’s been pulling tricks, that he has a “special touch” when it comes to priming, a comment that sounds like a compliment but isn’t. “I don’t think anyone would believe me,” he says.

Harold Pashler wouldn’t. Pashler, a professor of psychology at the University of California at San Diego, is the most prolific of the Replicators. He started trying priming experiments about four years ago because, he says, “I wanted to see these effects for myself.” That’s a diplomatic way of saying he thought they were fishy. He’s tried more than a dozen so far, including the elderly-walking study. He’s never been able to achieve the same results. Not once.

This fall, Daniel Kahneman, the Nobel Prize-winning psychologist, sent an e-mail to a small group of psychologists, including Bargh, warning of a “train wreck looming” in the field because of doubts surrounding priming research. He was blunt: “I believe that you should collectively do something about this mess. To deal effectively with the doubts you should acknowledge their existence and confront them straight on, because a posture of defiant denial is self-defeating,” he wrote.

Strongly worded e-mails from Nobel laureates tend to get noticed, and this one did. He sent it after conversations with Bargh about the relentless attacks on priming research. Kahneman cast himself as a mediator, a sort of senior statesman, endeavoring to bring together believers and skeptics. He does have a dog in the fight, though: Kahneman believes in these effects and has written admiringly of Bargh, including in his best seller Thinking, Fast and Slow.

On the heels of that message from on high, an e-mail dialogue began between the two camps. The vibe was more conciliatory than what you hear when researchers are speaking off the cuff and off the record. There was talk of the type of collaboration that Kahneman had floated, researchers from opposing sides combining their efforts in the name of truth. It was very civil, and it didn’t lead anywhere.

In one of those e-mails, Pashler issued a challenge masquerading as a gentle query: “Would you be able to suggest one or two goal priming effects that you think are especially strong and robust, even if they are not particularly well-known?” In other words, put up or shut up. Point me to the stuff you’re certain of and I’ll try to replicate it. This was intended to counter the charge that he and others were cherry-picking the weakest work and then doing a victory dance after demolishing it. He didn’t get the straightforward answer he wanted. “Some suggestions emerged but none were pointing to a concrete example,” he says.

One possible explanation for why these studies continually and bewilderingly fail to replicate is that they have hidden moderators, sensitive conditions that make them a challenge to pull off. Pashler argues that the studies never suggest that. He wrote in that same e-mail: “So from our reading of the literature, it is not clear why the results should be subtle or fragile.”

Bargh contends that we know more about these effects than we did in the 1990s, that they’re more complicated than researchers had originally assumed. That’s not a problem, it’s progress. And if you aren’t familiar with the literature in social psychology, with the numerous experiments that have modified and sharpened those early conclusions, you’re unlikely to successfully replicate them. Then you will trot out your failure as evidence that the study is bogus when really what you’ve proved is that you’re no good at social psychology.

Pashler can’t quite disguise his disdain for such a defense. “That doesn’t make sense to me,” he says. “You published it. That must mean you think it is a repeatable piece of work. Why can’t we do it just the way you did it?”

That’s how David Shanks sees things. He, too, has been trying to replicate well-known priming studies, and he, too, has been unable to do so. In a forthcoming paper, Shanks, a professor of psychology at University College London, recounts his and his several co-authors’ attempts to replicate one of the most intriguing effects, the so-called professor prime. In the study, one group was told to imagine a professor’s life and then list the traits that brought to mind. Another group was told to do the same except with a soccer hooligan rather than a professor.

The groups were then asked questions selected from the board game Trivial Pursuit, questions like “Who painted ‘Guernica’?” and “What is the capital of Bangladesh?” (Picasso and Dhaka, for those playing at home.) Their scores were then tallied. The subjects who imagined the professor scored above a control group that wasn’t primed. The subjects who imagined soccer hooligans scored below the professor group and below the control. Thinking about a professor makes you smart while thinking about a hooligan makes you dumb. The study has been replicated a number of times, including once on Dutch television.

Shanks can’t get the result. And, boy, has he tried. Not once or twice, but nine times.

The skepticism about priming, says Shanks, isn’t limited to those who have committed themselves to reperforming these experiments. It’s not only the Replicators. “I think more people in academic psychology than you would imagine appreciate the historical implausibility of these findings, and it’s just that those are the opinions that they have over the water fountain,” he says. “They’re not the opinions that get into the journalism.”

Like all the skeptics I spoke with, Shanks believes the worst is yet to come for priming, predicting that “over the next two or three years you’re going to see an avalanche of failed replications published.” The avalanche may come sooner than that. There are failed replications in press at the moment and many more that have been completed (Shanks’s paper on the professor prime is in press at PLoS ONE). A couple of researchers I spoke with didn’t want to talk about their results until they had been peer reviewed, but their preliminary results are not encouraging.

Ap Dijksterhuis is the author of the professor-prime paper. At first, Dijksterhuis, a professor of psychology at Radboud University Nijmegen, in the Netherlands, wasn’t sure he wanted to be interviewed for this article. That study is ancient news—it was published in 1998, and he’s moved away from studying unconscious processes in the last couple of years, in part because he wanted to move on to new research on happiness and in part because of the rancor and suspicion that now accompany such work. He’s tired of it.

The outing of Diederik Stapel made the atmosphere worse. Stapel was a social psychologist at Tilburg University, also in the Netherlands, who was found to have committed scientific misconduct in scores of papers. The scope and the depth of the fraud were jaw-dropping, and it changed the conversation. “It wasn’t about research practices that could have been better. It was about fraud,” Dijksterhuis says of the Stapel scandal. “I think that’s playing in the background. It now almost feels as if people who do find significant data are making mistakes, are doing bad research, and maybe even doing fraudulent things.”

In the e-mail discussion spurred by Kahneman’s call to action, Dijksterhuis laid out a number of possible explanations for why skeptics were coming up empty when they attempted priming studies. Cultural differences, for example. Studying prejudice in the Netherlands is different from studying it in the United States. Certain subjects are not susceptible to certain primes, particularly a subject who is unusually self-aware. In an interview, he offered another, less charitable possibility. “It could be that they are bad experimenters,” he says. “They may turn out failures to replicate that have been shown by 15 or 20 people already. It basically shows that it’s something with them, and it’s something going on in their labs.”

Joseph Cesario is somewhere between a believer and a skeptic, though these days he’s leaning more skeptic. Cesario is a social psychologist at Michigan State University, and he’s successfully replicated Bargh’s elderly-walking study, discovering in the course of the experiment that the attitude of a subject toward the elderly determined whether the effect worked or not. If you hate old people, you won’t slow down. He is sympathetic to the argument that moderators exist that make these studies hard to replicate, lots of little monkey wrenches ready to ruin the works. But that argument only goes so far. “At some point, it becomes excuse-making,” he says. “We have to have some threshold where we say that it doesn’t exist. It can’t be the case that some small group of people keep hitting on the right moderators over and over again.”

Cesario has been trying to replicate a recent finding of Bargh’s. In that study, published last year in the journal Emotion, Bargh and his co-author, Idit Shalev, asked subjects about their personal hygiene habits—how often they showered and bathed, for how long, how warm they liked the water. They also had subjects take a standard test to determine their degree of social isolation, whether they were lonely or not. What they found is that lonely people took longer and warmer baths and showers, perhaps substituting the warmth of the water for the warmth of regular human interaction.

That isn’t priming, exactly, though it is a related unconscious phenomenon often called embodied cognition. As in the elderly-walking study, the subjects didn’t realize what they were doing, didn’t know they were bathing longer because they were lonely. Can warm water alleviate feelings of isolation? This was a result with real-world applications, and reporters jumped on it. “Wash the loneliness away with a long, hot bath,” read an NBC News headline.

Bargh’s study had 92 subjects. So far Cesario has run more than 2,500 through the same experiment. He’s found absolutely no relationship between bathing and loneliness. Zero. “It’s very worrisome if you have people thinking they can take a shower and they can cure their depression,” he says. And he says Bargh’s data are troublesome. “Extremely small samples, extremely large effects—that’s a red flag,” he says. “It’s not a red flag for people publishing those studies, but it should be.”

Even though he is, in a sense, taking aim at Bargh, Cesario thinks it’s a shame that the debate over priming has become so personal, as if it’s a referendum on one man. “He has the most eye-catching findings. He always has,” Cesario says. “To the extent that some of his effects don’t replicate, because he’s identified as priming, it casts doubt on the entire body of research. He is priming.”

That has been the narrative. Bargh’s research is crumbling under scrutiny and, along with it, perhaps priming as a whole. Maybe the most exciting aspect of social psychology over the last couple of decades, these almost magical experiments in which people are prompted to be smarter or slower without them even knowing it, will end up as an embarrassing footnote rather than a landmark achievement.

Then along comes Gary Latham.

Latham, an organizational psychologist in the management school at the University of Toronto, thought the research Bargh and others did was crap. That’s the word he used. He told one of his graduate students, Amanda Shantz, that if she tried to apply Bargh’s principles it would be a win-win. If it failed, they could publish a useful takedown. If it succeeded … well, that would be interesting.

They performed a pilot study, which involved showing subjects a photo of a woman winning a race before the subjects took part in a brainstorming task. As Bargh’s research would predict, the photo made them perform better at the brainstorming task. Or seemed to. Latham performed the experiment again in cooperation with another lab. This time the study involved employees in a university fund-raising call center. They were divided into three groups. Each group was given a fact sheet that would be visible while they made phone calls. In the upper left-hand corner of the fact sheet was either a photo of a woman winning a race, a generic photo of employees at a call center, or no photo. Again, consistent with Bargh, the subjects who were primed raised more money. Those with the photo of call-center employees raised the most, while those with the race-winner photo came in second, both outpacing the photo-less control. This was true even though, when questioned afterward, the subjects said they had been too busy to notice the photos.

Latham didn’t want Bargh to be right. “I couldn’t have been more skeptical or more disbelieving when I started the research,” he says. “I nearly fell off my chair when my data” supported Bargh’s findings.

That experiment has changed Latham’s opinion of priming and has him wondering now about the applications for unconscious primes in our daily lives. Are there photos that would make people be safer at work? Are there photos that undermine performance? How should we be fine-tuning the images that surround us? “It’s almost scary in lots of ways that these primes in these environments can affect us without us being aware,” he says. Latham hasn’t stopped there. He’s continued to try experiments using Bargh’s ideas, and those results have only strengthened his confidence in priming. “I’ve got two more that are just mind-blowing,” he says. “And I know John Bargh doesn’t know about them, but he’ll be a happy guy when he sees them.”

Latham doesn’t know why others have had trouble. He only knows what he’s found, and he’s certain about his own data. In the end, Latham thinks Bargh will be vindicated as a pioneer in understanding unconscious motivations. “I’m like a converted Christian,” he says. “I started out as a devout atheist, and now I’m a believer.”

Following his come-to-Jesus transformation, Latham sent an e-mail to Bargh to let him know about the call-center experiment. When I brought this up with Bargh, his face brightened slightly for the first time in our conversation. “You can imagine how that helped me,” he says. He had been feeling isolated, under siege, worried that his legacy was becoming a cautionary tale. “You feel like you’re on an island,” he says.

Though Latham is now a believer, he remains the exception. With more failed replications in the pipeline, Dijksterhuis believes that Kahneman’s looming-train-wreck letter, though well meaning, may become a self-fulfilling prophecy, helping to sink the field rather than save it. Perhaps the perception has already become so negative that further replications, regardless of what they find, won’t matter much. For his part, Bargh is trying to take the long view. “We have to think about 50 or 100 years from now—are people going to believe the same theories?” he says. “Maybe it’s not true. Let’s see if it is or isn’t.”

Tom Bartlett is a senior writer at The Chronicle.

When Leaving Your Wealth to Your Sister’s Sons Makes Sense (Science Daily)

ScienceDaily (Oct. 16, 2012) — To whom a man’s possessions go when he dies is a matter of both cultural norm and evolutionary advantage.

In most human societies, men pass on their worldly goods to their wife’s children. But in about 10 percent of societies, men inexplicably transfer their wealth to their sister’s sons — what’s called “mother’s brother-sister’s son” inheritance. A new study on this unusual form of matrilineal inheritance by Santa Fe Institute researcher Laura Fortunato has produced insights into this practice.

Her findings appear October 17 in the online edition of Proceedings of the Royal Society B.

“Matrilineal inheritance is puzzling for anthropologists because it causes tension for a man caught between his sisters and wife,” explains Fortunato, who has used game theory to study mother’s brother-sister’s son inheritance. “From an evolutionary perspective it’s also puzzling because you expect an individual to invest in his closest relatives — usually the individual’s own children.”

For decades research on the practice of matrilineal inheritance focused on the probabilities of a man being the biological father of his wife’s children — probabilities that lie on a sliding scale depending on the rate of promiscuity or whether polyandrous marriage (when a woman takes two or more husbands) is practiced.

Of special interest has been the probability value below which a man is more closely related to his sister’s children than to his wife’s children. Below this “paternity threshold,” a man is better off investing in his sister’s offspring, who are sure to be blood relatives, than in his wife’s children.
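The intuition behind the paternity threshold can be sketched with a standard back-of-the-envelope relatedness calculation (a textbook simplification, not Fortunato’s game-theoretic model): if a man fathers his wife’s children with probability p, his expected relatedness to a wife’s child is p/2, while his sister shares a mother for certain and a father only with probability p.

```python
from fractions import Fraction

def r_own_child(p):
    # A man is the biological father of his wife's child with
    # probability p; a father-child link carries relatedness 1/2.
    return p / 2

def r_sisters_son(p):
    # His sister is a full sibling (r = 1/2) with probability p,
    # otherwise a maternal half-sibling (r = 1/4); her son inherits
    # half of whatever relatedness links the man to his sister.
    r_sister = p / 2 + (1 - p) / 4
    return r_sister / 2

# Break-even point: p/2 = (1 + p)/8, i.e. p = 1/3. Below that
# paternity probability, the sister's son is the closer relative.
p = Fraction(1, 3)
assert r_own_child(p) == r_sisters_son(p)
assert r_sisters_son(Fraction(1, 5)) > r_own_child(Fraction(1, 5))
```

Under this simplified model the threshold lands at p = 1/3; Fortunato’s payoff model is richer, since it also folds in marriage strategies such as polygyny and polyandry.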

In her work modeling the evolutionary payoffs of marriage and inheritance strategies, Fortunato looked beyond the paternity threshold to see, among other things, what payoffs there were for men and women in different marital situations — including polygamy.

“What emerges is quite interesting,” says Fortunato. “Where inheritance is matrilineal, a man with multiple wives ‘wins’ over a man with a single wife.” That’s because wives have brothers, and those brothers will pass on their wealth to the husband’s sons. So more wives means more brothers-in-law to invest in your sons.

The model also shows an effect for women with multiple husbands. The husband of a woman with multiple husbands is unsure of his paternity, so he may be better off investing in his sister’s offspring.

“A woman does not benefit from multiple husbands where inheritance is matrilineal, however,” Fortunato explains, “because her husbands will invest in their sisters’ kids.” Family structure determines how societies handle relatedness and reproduction issues, Fortunato says. Understanding these practices and their evolutionary implications is a prerequisite for a theory of human behavior.

Journal Reference:

  1. Laura Fortunato. The evolution of matrilineal kinship organization. Proceedings of the Royal Society B, October 17, 2012; DOI: 10.1098/rspb.2012.1926

Language is shaped by brain’s desire for clarity and ease (University of Rochester)

Public release date: 15-Oct-2012
By Susan Hagen
University of Rochester

VIDEO: Translation: “Referee statue pick up.” One of the 80 animated video clips used to teach an artificial language to study participants.

Cognitive scientists have good news for linguistic purists terrified about the corruption of their mother tongue.

Using an artificial language in a carefully controlled laboratory experiment, a team from the University of Rochester and Georgetown University has found that many changes to language are simply the brain’s way of ensuring that communication is as precise and concise as possible.

“Our research shows that humans choose to reshape language when the structure is either overly redundant or confusing,” says T. Florian Jaeger, the Wilmot Assistant Professor of the Sciences at Rochester and co-author of a study published in the Proceedings of the National Academy of Sciences on Oct. 15. “This study suggests that we prefer languages that on average convey information efficiently, striking a balance between effort and clarity.”

The brain’s tendency toward efficient communication may also be an underlying reason that many human languages are structurally similar, says lead author Maryia Fedzechkina, a doctoral candidate at Rochester. Over and over, linguists have identified nearly identical grammatical conventions in seemingly unrelated languages scattered throughout the globe. For decades, linguists have debated the meaning of such similarities: are recurrent structures artifacts of distant common origins, are they simply random accidents, or do they reflect fundamental aspects of human cognition?

This study supports the latter, says co-author Elissa L. Newport, professor of neurology and director of the Center for Brain Plasticity and Recovery at Georgetown, and the former George Eastman Professor of Brain and Cognitive Sciences at Rochester. “The bias language learners have toward efficiency and clarity acts as a filter as languages are transmitted from one generation of learners to another,” she says. Alterations to language are introduced through many avenues, including the influence of other languages and changes in accents or pronunciation. “But this research finds that learners shift the language in ways that make it better – easier to use and more suitable for communication,” says Newport. That process also leads to the recurrent patterns across languages.

To observe the language acquisition process, the team created two miniature artificial languages that use suffixes on nouns to indicate subject or object. These “case markers” are common to Spanish, Russian, and other languages, but not English. In two experiments, 40 undergraduates, whose only language was English, learned the eight verbs, 15 nouns, and grammatical structure of the artificial languages. The training was spaced over four 45-minute sessions and consisted of computer images, short animated clips, and audio recordings. Then participants were asked to describe a novel action clip using their newly learned language.

VIDEO: Translation: “Singer hunter chop.” Unlike English, the artificial languages used in the study have free word order.

When faced with sentence constructions that could be confusing or ambiguous, the language learners in both experiments chose to alter the rules of the language they were taught in order to make their meaning clearer. They used case markers more often when the meaning of the subject and object might otherwise have caused unintended interpretations. So for example, a sentence like “Man hits wall,” is typical because the subject is a person and the object is a thing. But the sentence “Wall hits man,” as when a wall falls on top of a man, is atypical and confusing since the subject is a thing and the object is a person.

The results, write the authors, provide evidence that humans seek a balance between clarity and ease. Participants could have chosen to be maximally clear by always providing the case markers. Alternatively, they could have chosen to be maximally succinct by never providing the case markers. They did neither. Instead, they provided case-markers more often for those sentences that would otherwise have been more likely to be confused.
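That balance between clarity and ease can be illustrated with a toy cost model (purely hypothetical numbers, not the study’s analysis): a speaker pays a fixed effort cost to produce a case marker, or risks a misunderstanding cost proportional to how confusable the sentence is without one.

```python
def speaker_cost(use_marker, p_confusable,
                 effort_cost=0.3, ambiguity_cost=1.0):
    # Producing the marker costs fixed effort; omitting it risks a
    # misunderstanding whose expected cost scales with confusability.
    return effort_cost if use_marker else p_confusable * ambiguity_cost

def choose_marker(p_confusable):
    # Mark the noun only when the expected ambiguity cost of omission
    # exceeds the fixed effort cost of production.
    return speaker_cost(True, p_confusable) < speaker_cost(False, p_confusable)

# "Man hits wall": low confusability, so the marker is dropped.
assert not choose_marker(0.1)
# "Wall hits man": high confusability, so the marker is produced.
assert choose_marker(0.9)
```

This reproduces the qualitative pattern the authors report: marking is neither always-on (maximally clear) nor always-off (maximally succinct), but concentrated on the sentences most likely to be misunderstood.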

The findings also support the idea that language learners introduce common patterns, also known as linguistic universals, conclude the authors. The optional case marking that participants introduced in this experiment closely mirrors naturally occurring patterns in Japanese and Korean, where animate objects and inanimate subjects are more likely to receive case markings.

The history of English itself might reflect these deep principles of how we learn language, says Jaeger. Old English had cases and relatively free word order, as is still true for German. But at some point pronunciation changes began to obscure the case endings, creating ambiguity. In contemporary English, word order has become the primary signal by which speakers decode meaning, he says.

“Language acquisition can repair changes in languages to ensure they don’t undermine communication,” says Fedzechkina. In light of these findings, new generations can perhaps be seen as renewing language, rather than corrupting it, she adds.

By the same token, says Jaeger, many elements of informal speech can be interpreted as rising from the brain’s bias toward efficiency. “When people turn ‘automobile’ into ‘auto,’ use informal contractions, swallow syllables, or take other linguistic shortcuts, the same principles are at work,” he says. Recent research has shown that these types of shortcuts appear only when their meaning is easily inferable from the context, he adds.

Affluent People Less Likely to Reach out to Others in Times of Trouble? (Science Daily)

ScienceDaily (Aug. 30, 2012) — Crises are said to bring people closer together. But a new study from UC Berkeley suggests that while the have-nots reach out to one another in times of trouble, the wealthy are more apt to find comfort in material possessions.

While chaos drives some to seek comfort in friends and family, others gravitate toward money and material possessions, a new study finds. (Credit: iStockphoto/Rob Friedman)

“In times of uncertainty, we see a dramatic polarization, with the rich more focused on holding onto and attaining wealth and the poor spending more time with friends and loved ones,” said Paul Piff, a post-doctoral scholar in psychology at UC Berkeley and lead author of the paper published online this month in the Journal of Personality and Social Psychology.

These new findings add to a growing body of scholarship at UC Berkeley on socio-economic class — defined by both household income and education — and social behavior.

Results from five separate experiments shed new light on how humans from varying socio-economic backgrounds may respond to both natural and human-made disasters, including economic recessions, political instability, earthquakes and hurricanes. They also help explain why, in times of turmoil, people can become more polarized in their responses to uncertainty and chaos.

For example, when asked if they would move across the country for a higher-paying job, study participants from the lower class responded that they would decline in favor of staying close to friends, family and colleagues. By contrast, upper class participants opted to take the job and cut ties with their community.

Although the study does not provide a definitive reason for why the upper class, when stressed, focuses more on worldly goods than relationships, it posits that “material wealth may be a particularly salient, accessible and preferred individual coping mechanism … when they are threatened by perceptions of chaos within the social environment.”

Each experiment was done with a different group of ethnically and socio-economically diverse participants, all of whom reported their social status (household income and education) as well as their level of community mindedness and/or preoccupation with money.

In a lab setting, researchers induced various psychological states in their subjects — such as uncertainty, helplessness or anxiety — so they could accurately assess how social class shapes the likelihood of people turning to others or to wealth in the face of perceived chaos.

Chaos is defined in the study as “the feeling that the world is unknown, unpredictable, seemingly random … a general sense that the world and one’s life have turned uncertain and topsy-turvy.” This uncertainty typically triggers either a fight-or-flight or a “tend-and-befriend” response, which the researchers used to assess participants’ reactions to induced stress.

In the first experiment, a nationwide sample of 76 men and women ranging in age from 18 to 66 were tasked with selecting, online, a visual graph that best reflected the trajectory of economic ups and downs they believed they were likely to face in their lifetimes. The results showed that the upper class and, to a small degree, Caucasian participants, were less likely than the lower class and minorities to anticipate financial instability. Lower-class participants who expected more turmoil in their lives were more likely to turn to community to cope with perceived chaos, the study found.

In the second experiment, 72 college students were asked to write about positive and negative factors that could affect their educational experience. Potential threats they cited included canceled classes, tuition hikes and academic failures. Again, worries about chaos and helplessness spurred lower-class college students — but not upper-class ones — to say they would turn to their community for support.

In the third experiment, 77 students completed computerized tasks in which they rearranged words alluding either to chaos or to something negative into sentences. This exercise was designed to prime certain participants to see their environment as unpredictable and scary. When these participants were then offered five minutes to take part in a community-building task in which they could develop friendships with a group of their peers, only the lower-class participants jumped at the opportunity.

The fourth experiment had 135 students unscramble similar words into sentences and then report on how much they agreed with such statements as “Money is the only thing I can really count on” and “Time spent not making money is time wasted.” When made to feel as if the world was chaotic, upper class participants consistently agreed more strongly with these statements.

In the fifth experiment, 115 students were given a hypothetical scenario in which an employer offered them a new job for a higher salary, with the caveat that they would need to move, and potentially lose touch with their current network of family, friends and colleagues. Again, when primed with feelings that the world was uncertain and chaotic, upper class participants were more amenable to cutting ties and taking the job, whereas lower class participants opted to stay close to their support networks.

“Given the very different forms of coping that we observe among the upper and lower classes, our research suggests that in times of economic uncertainty and social instability, disparities between the haves and the have-nots could grow ever wider,” Piff said.

Other coauthors of the study are UC Berkeley psychologist Dacher Keltner; Daniel Stancato, a psychologist in Seattle, Wash.; Andres Martinez of George Mason University; and Michael Kraus of the University of Illinois, Urbana-Champaign. The research was funded in part by the National Science Foundation.

Journal Reference:

  1. Paul K. Piff, Daniel M. Stancato, Andres G. Martinez, Michael W. Kraus, Dacher Keltner. Class, Chaos, and the Construction of Community. Journal of Personality and Social Psychology, 2012; DOI: 10.1037/a0029673

Beliefs Drive Investors More Than Preferences (Science Daily)

ScienceDaily (Aug. 28, 2012) — If experts thought they knew anything about individual investors, it was this: their emotions lead them to sell winning stocks too soon and hold on to losers too long.

But new research casts doubt on this widely held theory that individual investors’ decisions are driven mainly by their feelings toward losses and gains. In an innovative study, researchers found evidence that individual investors’ decisions are primarily motivated by their beliefs about a stock’s future.

“The story is not about whether an investor hates losing or loves gains — it’s not primarily a story about preferences,” said Itzhak Ben-David, co-author of the study and assistant professor of finance at Ohio State University’s Fisher College of Business.

“It is a story about information and speculation. The investor has a belief about where a stock is headed and that’s what he acts on. Investors act more on their beliefs than their preferences.”

Ben-David conducted the study with David Hirshleifer of the Paul Merage School of Business at the University of California, Irvine. Their results appear in the August 2012 issue of the journal Review of Financial Studies.

The researchers studied stock transactions from more than 77,000 accounts at a large discount broker from 1990 through 1996 and did a variety of analyses that had never been done before. They examined when investors bought individual stocks, when they sold them, and how much they earned or lost with each sale.

The result was a radical rethinking of why individual investors sell winning stocks and hold on to losers.

The findings don’t mean that investors don’t have an aversion to losses and a desire to sell winners, Ben-David said. But the trading data suggests that these feelings aren’t dominating their decisions.

“People have a variety of reasons for trading stocks, which may include tax issues, margin calls, and an aversion to losses. These all may play a role, but what we show is that beliefs are dominant for the trading of retail investors.”

The tendency to sell winners too early and to keep losers too long has been called the “disposition effect” by economists.

“The disposition effect has been well-documented. The question is what we make of it. A lot of people look at the data and interpret it as meaning that the typical retail investor is irrational, simply reacting to their feelings about gains and losses,” he said.

“But what we find is that, looking at the data, we can’t really learn about their preferences. We don’t learn about what they like or don’t like. Surely, they don’t like to lose money — but their reasons for selling stocks are more complex than that.”

The simplest test was to see what investors do when a stock is trading just slightly higher or lower than the price they paid — in other words a small winner or a small loser.

If investors really did make stock trades based simply on their pleasure in making money and their aversion to realizing losses, a small winner should lead to more sales than a small loser.

But this study found that investors were not clearly more likely to sell when it was a small winner than when it was a small loser.
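The logic of that comparison can be sketched as follows, using invented trade records purely for illustration (the actual study used the brokerage data described above):

```python
# Hypothetical records: (return since purchase, whether sold).
trades = [
    (-0.02, True), (-0.01, False), (-0.02, False),   # small losers
    (0.01, False), (0.02, True), (0.01, True), (0.02, False),  # small winners
    (-0.30, True), (-0.25, True), (0.30, False),     # large moves
]

def sell_rate(records):
    # Fraction of positions in this bucket that were sold.
    return sum(1 for _, sold in records if sold) / len(records)

small_winners = [t for t in trades if 0 < t[0] <= 0.05]
small_losers = [t for t in trades if -0.05 <= t[0] < 0]

# A pure aversion-to-realizing-losses story predicts a sharp jump in
# the sell rate the moment a position crosses from a small loss to a
# small gain; the study found no such clear discontinuity.
print(sell_rate(small_winners), sell_rate(small_losers))
```

Bucketing returns this way around zero is the key idea: preferences over gains versus losses should show up as a discontinuity at the purchase price, while belief-driven trading need not.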

Another piece of evidence against the theory that investors’ decisions are driven by their aversion to realizing losses was the fact that, the more a stock lost value, the more likely investors were to sell it.

“If investors had an aversion to realizing losses, larger losses should reduce the probability they would sell, but we found the opposite — larger losses were associated with a higher probability of selling,” Ben-David said.

Interestingly, the stocks that investors sell the least are those that did not have a price change since purchase.

Another clue is the fact that men and frequent traders were more likely than others to sell winning stocks quickly to reap their profits and sell losers quickly to cut their losses.

“Past research has shown that overconfidence in investing is associated with men and frequent traders,” Ben-David said. “They have a belief in their superior knowledge and so you would expect them to buy and sell more quickly than others as they speculate on stock prices. That’s exactly what we found. They are engaged in belief-based trading.”

The researchers also examined when investors were more likely to buy additional shares of a stock they had previously purchased. They found that the probability of buying additional shares is greater for shares that lost value than for shares that gained value.

That shouldn’t happen if investors are really acting on emotions rather than beliefs, Ben-David said.

“If you buy additional shares of a stock that has lost value, that suggests you are acting on your beliefs that the stock is really a winner and other people have just not realized it yet,” he said.

“You wouldn’t buy additional shares of a losing stock if your biggest motivation was to avoid realizing losses.”

However, Ben-David noted that just because investors act on beliefs rather than feelings doesn’t mean they are acting rationally.

“They may be overconfident in their own abilities. It is a different kind of irrationality from being averse to selling losers,” Ben-David said.

This study’s suggestion that investors act more on beliefs than preferences is likely to make waves in the economics profession, he said.

“In economics, these two stories are very different. Beliefs and preferences are very different concepts, and it is important to distinguish them and how they affect investors. Many economists had thought that an irrational aversion to selling losers was crucial for the trading decisions of retail investors.”

Journal Reference:

  1. I. Ben-David, D. Hirshleifer. Are Investors Really Reluctant to Realize Their Losses? Trading Responses to Past Returns and the Disposition Effect. Review of Financial Studies, 2012; 25 (8): 2485. DOI: 10.1093/rfs/hhs077