Is a Human “Here and Now” Bias Clouding Climate Reasoning? (N.Y. Times)

By ANDREW C. REVKIN
N.Y. Times, Dot Earth – May 8, 2011, 7:36 AM

Here’s a “Your Dot” contribution from Jacob Tanenbaum, a computer technology teacher from Tappan, N.Y., who sent the following thoughts after reading “On Birth Certificates, Climate Risk and an Inconvenient Mind”:

Our lack of ability to perceive and react to climate is not simply a problem rooted in social norms. It goes far deeper, into the evolutionary structure of the human mind. We are an animal that evolved over time somewhere in southern Africa. Our minds are set up to quickly and effectively assess an environment and perceive danger in it. This is what Malcolm Gladwell calls “thin slicing,” and it is very effective in many situations. What we consider higher thought processes appeared far later in our evolutionary path. When we are facing danger, it makes sense that we rely on those higher processes far less than we rely on our “gut instinct”: those older processes that kept us safe for so much longer in our species’ history. So how does this help us understand our reactions to something like climate?

Consider this:

1. Once we are accustomed to something, change is very difficult. An animal that understands its environment can pick out subtle changes that indicate danger more effectively. An animal in a new environment perceives difference, and so danger, everywhere it looks. Our reaction to climate must involve significant change in how we live our lives. This is difficult for any animal. Even us.

2. Our understanding of danger is event driven. A predator, a fire, a storm, a flood: these are all events. Climate is not an event; it is a trend. Weather is an event. To understand climate, you must suspend the belief that what you see outside your window is all that can be a threat to you. To understand climate, you must look at the numbers over a long time and a large geographical space. That is how you can “see” a trend. This, unfortunately, may be antithetical to the way that the human animal understands danger, since the threat is not immediately in front of us in a way that causes our lower thought processes to perceive a threat, pump us full of adrenalin, and push us to react.

3. Since our understanding of danger is event driven, it makes sense that our understanding of danger is also temporally driven. We are best wired to react to events that are immediate in nature and short in duration. We are wired to react to an event quickly and to make whatever adjustments are needed so that things return to what we perceive as normal. We want a short burst of adrenalin to help us get away from the threat and back to our “comfort zone.” Climate, again, asks us to suspend this part of our understanding of danger and may, again, be antithetical to the way in which we are wired to think about danger. We must react now to avoid a threat that may be several decades away. We must accept that what we perceive as normal may not be OK. We do, after all, live in an environment that has already undergone change, and our normal way of life is causing that change.

Couple those facts with a media campaign that encourages denial, as well as a media and political structure that largely reflects the way that we are wired, and you have a perfect storm. So what we are really being asked to do as a species is evolve. We must evolve the ability to rely on more recent brain constructs, rather than our more primitive ones, to assess and react to danger. This means we must evolve in our understanding of danger, of risk, of time, and in our ability to control what we have created. But, of course, about half the U.S. does not believe in evolution, so asking us to continue the process may be beyond us. These are the things that keep me up at night.

Tanenbaum’s commentary on climate risk and response, or lack thereof, leads back to the recent Edge.org question: Do we need to bolster our cognitive toolkit?

What’s Missing From Our ‘Cognitive Toolkit’?

By ANDREW C. REVKIN
N.Y. Times, Dot Earth – January 17, 2011, 1:18 PM

This is your brain on words:

It’s clearly a pretty hard-wired system. But can we find ways to use what’s locked in our skulls to better effect? I’ll be writing more soon on that broad question, with a hint of my thoughts provided in a recent Tweet. Some variant on noosphere is clearly nigh.

In the meantime, there’s a rich discussion of aspects of this question on Edge.org, a forum for all manner of minds, curated by the agent and intellectual impresario John Brockman. Once or twice a year since 1998, Edge has tossed provocative questions to variegated batches of scientists, writers, artists and innovators.

Some examples: How is the Internet changing the way you think? What have you changed your mind about? Why? What do you believe is true even though you cannot prove it?

This year’s question, proposed by Steven Pinker and shaped with input from Daniel Kahneman, has been addressed by more than 150 people so far:

What scientific concept would improve everybody’s cognitive toolkit? (The phrase “scientific concept” has a very broad meaning, explained at the link.)

You can read my Edge contribution, centering on a concept I call anthropophilia, below, with links to relevant context added (the Edge format is straight text).

I’m in the early stages of reading the other contributions. There’s much to chew on and enjoy. Here are a few highlights:

Gerd Gigerenzer, a psychologist and director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development, is one of several contributors who focus on the need for broader, and better, appreciation of risk:

[M]any parents are unaware that one million U.S. children have unnecessary CT scans annually and that a full body scan can deliver one thousand times the radiation dose of a mammogram, resulting in an estimated 29,000 cancers per year.

I believe that the answer to modern crises is not simply more laws, more bureaucracy, or more money, but, first and foremost, more citizens who are risk literate. This can be achieved by cultivating statistical thinking. [Read on.]

He seems to be endorsing a notion explored on Dot Earth not long ago — that we find a way to go to “risk school.”

Gary Marcus, an associate professor of psychology at New York University, chooses “cognitive humility,” noting, among other things:

[H]uman beings tend almost invariably to be better at remembering evidence that is consistent with their beliefs than evidence that might disconfirm them. [Read on.]

Helen Fisher, an author and anthropologist at Rutgers University, focuses on the opportunities that would arise from a deeper awareness of the four dimensions that shape a human personality — particularly the “temperament dimension.”

We are capable of acting “out of character,” but doing so is tiring. People are biologically inclined to think and act in specific patterns — temperament dimensions. But why would this concept of temperament dimensions be useful in our human cognitive tool kit? Because we are social creatures, and a deeper understanding of who we (and others) are can provide a valuable tool for understanding, pleasing, cajoling, reprimanding, rewarding and loving others — from friends and relatives to world leaders…. [Read on.]

Maybe there’s a research opportunity in Dot Earth’s comment string — a comparative psychological deconstruction of blog commenters’ character?

Haim Harari, a physicist and former president of the Weizmann Institute of Science, writes of the “edge of the circle” in referring to today’s polarized, and largely nonproductive, policy fights:

Societies, preaching for absolute equality among their citizens, always end up with the largest economic gaps. Fanatic extremist proponents of developing only renewable energy sources, with no nuclear power, delay or prevent acceptable interim solutions to global energy issues, just as much as the oil producers. Misuse of animals in biology research is as damaging as the objections of fanatic animal right groups. One can go on and on with illustrations, which are more visible now than they were a decade or two ago. We live on the verge of an age of extremism… [Read on.]

Jay Rosen, an associate professor of journalism at New York University, provides a nice take on normalizing society’s approach to “wicked” problems. (The climate challenge, as has been discussed here before, is “beyond super wicked.”) Here’s an excerpt:

If we could designate some problems as wicked we might realize that “normal” approaches to problem-solving don’t work. We can’t define the problem, evaluate possible solutions, pick the best one, hire the experts and implement. No matter how much we may want to follow a routine like that, it won’t succeed. Institutions may require it, habit may favor it, the boss may order it, but wicked problems don’t care.

Presidential debates that divided wicked from tame problems would be very different debates. Better, I think. Journalists who covered wicked problems differently than they covered normal problems would be smarter journalists. Institutions that knew how to distinguish wicked problems from the other kind would eventually learn the limits of command and control.

Wicked problems demand people who are creative, pragmatic, flexible and collaborative. They never invest too much in their ideas because they know they are going to have to alter them. They know there’s no right place to start, so they simply start somewhere and see what happens. They accept the fact that they’re more likely to understand the problem after it’s “solved” than before. They don’t expect to get a good solution; they keep working until they’ve found something that’s good enough. They’re never convinced that they know enough to solve the problem, so they are constantly testing their ideas on different stakeholders. [Read on.]

Hmm. That last section kind of sounds like Dot Earth, or at least some variant on this process. There’s much, much more to read and discuss.

Edge doesn’t have a comment string, so I encourage you to weigh in here with your own answer to the question and evaluation of others.

As promised, here’s what I wrote for Edge (filed on deadline Friday night):

Anthropophilia

To sustain progress on a finite planet that is increasingly under human sway, but also full of surprises, what is needed is a strong dose of anthropophilia. I propose this word as shorthand for a rigorous and dispassionate kind of self-regard, even self-appreciation, to be employed when individuals or communities face consequential decisions attended by substantial uncertainty and polarizing disagreement.

The term is an intentional echo of Ed Wilson’s valuable effort to nurture biophilia, the part of humanness that values and cares for the facets of the non-human world we call nature. What’s been missing too long is an effort to fully consider, even embrace, the human role within nature and — perhaps more important still — to consider our own inner nature, as well.

Historically, many efforts to propel a durable human approach to advancement were shaped around two organizing ideas: “woe is me” and “shame on us,” with a good dose of “shame on you” thrown in.

The problem?

Woe is paralytic, while blame is both divisive and often misses the real target. (Who’s the bad guy, BP or those of us who drive and heat with oil?)

Discourse framed around those concepts too often produces policy debates that someone once described to me, in the context of climate, as “blah, blah, blah bang.” The same phenomenon can as easily be seen in the unheeded warnings leading to the most recent financial implosion and the attack on the World Trade Center.

More fully considering our nature — both the “divine and felonious” sides, as Bill Bryson has summed us up — could help identify certain kinds of challenges that we know we’ll tend to get wrong.

The simple act of recognizing such tendencies could help refine how choices are made — at least giving slightly better odds of getting things a little less wrong the next time. At the personal level, I know when I cruise into the kitchen tonight I’ll tend to prefer to reach for a cookie instead of an apple. By pre-considering that trait, I might have a slightly better chance of avoiding a couple of hundred unnecessary calories.

Here are a few instances where this concept is relevant on larger scales.

There’s a persistent human pattern of not taking broad lessons from localized disasters. When China’s Sichuan province was rocked by a severe earthquake, tens of thousands of students (and their teachers) died in collapsed schools. Yet the American state of Oregon, where more than a thousand schools are already known to be similarly vulnerable when the great Cascadia fault off the Northwest Coast next heaves, still lags terribly in investing in retrofits.

Sociologists understand with quite a bit of empirical backing why this disconnect exists even though the example was horrifying and the risk in Oregon is about as clear as any scientific assessment can be. But does that knowledge of human biases toward the “near and now” get taken seriously in the realms where policies are shaped and the money to carry them out is authorized? Rarely, it seems.

Social scientists also know, with decent rigor, that the fight over human-driven global warming — both over the science and the policy choices — is largely cultural. As in many other disputes (consider health care), the battle is between two quite fundamental subsets of human communities — communitarians (aka, liberals) and individualists (aka, libertarians). In such situations, a compelling body of research has emerged showing how information is fairly meaningless. Each group selects information to reinforce a position, and there are scant instances where information ends up shifting a position.

That’s why no one should expect the next review of climate science from the Intergovernmental Panel on Climate Change to suddenly create a harmonious path forward.

The more such realities are recognized, the more likely it is that innovative approaches to negotiation can build from the middle, instead of arguing endlessly from the edge. The same body of research on climate attitudes, for example, shows far less disagreement on the need for advancing the world’s limited menu of affordable energy choices.

Murray Gell-Mann has spoken often of the need, when faced with multi-dimensional problems, to take a “crude look at the whole” — a process he has even given an acronym, CLAW. It’s imperative, where possible, for that look to include an honest analysis of the species doing the looking, as well.

There will never be a way to invent a replacement for, say, the United Nations or the House of Representatives. But there is a ripe opportunity to try new approaches to constructive discourse and problem solving, with the first step being an acceptance of our humanness, for better and worse.

That’s anthropophilia.

Jesse Ausubel of Rockefeller University has long been fond of saying, “Because the human brain does not change, technology must.”

But many analysts now see the need to consciously intensify efforts to foster innovation — technological, social, and otherwise — to limit regrets in the next few generations.

So far, it’s not clear to me that our existing “cognitive toolkit” has allowed societies to absorb this reality. (A case in point is our “shock to trance” energy policies.)

Whether you embrace Ausubel’s technology imperative or seek ways to shift human values and norms to fit infinite aspirations on a finite planet (or both, as I do), a thorough look in the mirror appears worthwhile.

This leads back to the value of the question posed on Edge, and a sustained exploration of the answers.

[Original post here.]