Arquivo mensal: agosto 2012

Doctors Often Don’t Disclose All Possible Risks to Patients Before Treatment (Science Daily)

ScienceDaily (Aug. 7, 2012) — Most informed consent disputes involve disagreements about who said what and when, not stand-offs over whether a particular risk ought to have been disclosed. But doctors may “routinely underestimate the importance of a small set of risks that vex patients,” according to international experts writing in this week’s PLoS Medicine.

Increasingly, doctors are expected to advise and empower patients to make rational choices by sharing information that may affect treatment decisions, including risks of adverse outcomes. However, authors from Australia and the US led by David Studdert from the University of Melbourne argue that doctors, especially surgeons, are often unsure which clinical risks they should disclose and discuss with patients before treatment.

To understand more about the clinical circumstances in which disputes arise between doctors and patients in this area, the authors analyzed 481 malpractice claims and patient complaints from Australia involving allegations of deficiencies in the process of obtaining informed consent.

The authors found that 45 (9%) of the cases studied were disputed duty cases — that is, they involved head-to-head disagreements over whether a particular risk ought to have been disclosed before treatment. Two-thirds of these disputed duty cases involved surgical procedures, and the majority (38/45) of them related to five specific outcomes that had quality of life implications for patients, including chronic pain and the need for re-operation.

The authors found that the most common justifications doctors gave for not telling patients about particular risks before treatment were that they considered such risks too rare to warrant discussion or the specific risk was covered by a more general risk that was discussed.

However, nine in ten of the disputes studied centered on factual disagreements — arguments over who said what, and when. The authors say: “Documenting consent discussions in the lead-up to surgical procedures is particularly important, as most informed consent claims and complaints involved factual disagreements over the disclosure of operative risks.”

The authors say: “Our findings suggest that doctors may systematically underestimate the premium patients place on understanding certain risks in advance of treatment.”

They conclude: “Improved understanding of these situations helps to spotlight gaps between what patients want to hear and what doctors perceive patients want — or should want — to hear. It may also be useful information for doctors eager to avoid medico-legal disputes.”

Rooting out Rumors, Epidemics, and Crime — With Math (Science Daily)

ScienceDaily (Aug. 10, 2012) — A team of EPFL scientists has developed an algorithm that can identify the source of an epidemic or information circulating within a network, a method that could also be used to help with criminal investigations.

Investigators are well aware of how difficult it is to trace an unlawful act to its source. The job was arguably easier with old, Mafia-style criminal organizations, as their hierarchical structures more or less resembled predictable family trees.

In the Internet age, however, the networks used by organized criminals have changed. Innumerable nodes and connections escalate the complexity of these networks, making it ever more difficult to root out the guilty party. EPFL researcher Pedro Pinto of the Audiovisual Communications Laboratory and his colleagues have developed an algorithm that could become a valuable ally for investigators, criminal or otherwise, as long as a network is involved. The team’s research was published August 10, 2012, in the journal Physical Review Letters.

Finding the source of a Facebook rumor

“Using our method, we can find the source of all kinds of things circulating in a network just by ‘listening’ to a limited number of members of that network,” explains Pinto. Suppose you come across a rumor about yourself that has spread on Facebook and been sent to 500 people — your friends, or even friends of your friends. How do you find the person who started the rumor? “By looking at the messages received by just 15-20 of your friends, and taking into account the time factor, our algorithm can trace the path of that information back and find the source,” Pinto adds. This method can also be used to identify the origin of a spam message or a computer virus using only a limited number of sensors within the network.
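
As an illustration of the general approach, inferring a source from arrival times at a few observer nodes, here is a minimal sketch. It is not Pinto and colleagues’ estimator: it assumes a small made-up graph, a constant one-step delay per hop, noiseless observations and a known release time, and simply picks the candidate whose shortest-path distances best match the observed arrival times.

```python
from collections import deque

# Toy social network: who is connected to whom (an assumed example, not data
# from the study).
graph = {
    "ana":   ["bob", "carla"],
    "bob":   ["ana", "dan", "eva"],
    "carla": ["ana", "dan"],
    "dan":   ["bob", "carla", "eva"],
    "eva":   ["bob", "dan", "frank"],
    "frank": ["eva"],
}

def hop_distances(source):
    """Breadth-first search: number of hops from `source` to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neigh in graph[node]:
            if neigh not in dist:
                dist[neigh] = dist[node] + 1
                queue.append(neigh)
    return dist

def estimate_source(observations, start_time=0):
    """Return the node whose hop distances (plus an assumed known start time)
    best match the arrival times observed at a few 'sensor' members."""
    best, best_err = None, float("inf")
    for candidate in graph:
        d = hop_distances(candidate)
        err = sum((observations[s] - (start_time + d.get(s, float("inf")))) ** 2
                  for s in observations)
        if err < best_err:
            best, best_err = candidate, err
    return best

# Suppose the rumor started at "ana" at time 0 and takes one time step per hop;
# we only "listen" to three members of the network.
observed_arrivals = {"dan": 2, "eva": 2, "frank": 3}
print(estimate_source(observed_arrivals))   # prints "ana" in this toy setup
```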

Trace the propagation of an epidemic

Out in the real world, the algorithm can be employed to find the primary source of an infectious disease, such as cholera. “We tested our method with data on an epidemic in South Africa provided by EPFL professor Andrea Rinaldo’s Ecohydrology Laboratory,” says Pinto. “By modeling water networks, river networks, and human transport networks, we were able to find the spot where the first cases of infection appeared by monitoring only a small fraction of the villages.”

The method would also be useful in responding to terrorist attacks, such as the 1995 sarin gas attack in the Tokyo subway, in which poisonous gas released in the city’s subterranean tunnels killed 13 people and injured nearly 1,000 more. “Using this algorithm, it wouldn’t be necessary to equip every station with detectors. A sample would be sufficient to rapidly identify the origin of the attack, and action could be taken before it spreads too far,” says Pinto.

Identifying the brains behind a terrorist attack

Computer simulations of the telephone conversations that could have occurred during the terrorist attacks on September 11, 2001, were used to test Pinto’s system. “By reconstructing the message exchange inside the 9/11 terrorist network extracted from publicly released news, our system spit out the names of three potential suspects — one of whom was found to be the mastermind of the attacks, according to the official enquiry.”

The validity of this method has thus been proven a posteriori. But according to Pinto, it could also be used preventatively — for example, to understand an outbreak before it gets out of control. “By carefully selecting points in the network to test, we could more rapidly detect the spread of an epidemic,” he points out. It could also be a valuable tool for advertisers who use viral marketing strategies by leveraging the Internet and social networks to reach customers. For example, this algorithm would allow them to identify the specific Internet blogs that are the most influential for their target audience and to understand how information in these articles spreads throughout the online community.

Populations Survive Despite Many Deleterious Mutations: Evolutionary Model of Muller’s Ratchet Explored (Science Daily)

ScienceDaily (Aug. 10, 2012) — From protozoans to mammals, evolution has created more and more complex structures and better-adapted organisms. This is all the more astonishing as most genetic mutations are deleterious. Especially in small asexual populations that do not recombine their genes, unfavourable mutations can accumulate. This process is known as Muller’s ratchet in evolutionary biology. The ratchet, proposed by the American geneticist Hermann Joseph Muller, predicts that the genome deteriorates irreversibly, leaving populations on a one-way street to extinction.

Equilibrium of mutation and selection processes: A population can be divided into groups of individuals that carry different numbers of deleterious mutations. Groups with few mutations are amplified by selection but lose members to other groups by mutation. Groups with many mutations don’t reproduce as much, but gain members by mutation. (Credit: © Richard Neher/MPI for Developmental Biology)

In collaboration with colleagues from the US, Richard Neher from the Max Planck Institute for Developmental Biology has shown mathematically how Muller’s ratchet operates and he has investigated why populations are not inevitably doomed to extinction despite the continuous influx of deleterious mutations.

The great majority of mutations are deleterious. “Due to selection, individuals with more favourable genes reproduce more successfully and deleterious mutations disappear again,” explains the population geneticist Richard Neher, leader of an independent Max Planck research group at the Max Planck Institute for Developmental Biology in Tübingen, Germany. However, in small populations, such as an asexually reproducing virus population early in an infection, the situation is not so clear-cut. “It can then happen by chance, by stochastic processes alone, that deleterious mutations in the viruses accumulate and the mutation-free group of individuals goes extinct,” says Richard Neher. This is known as a click of Muller’s ratchet, which is irreversible — at least in Muller’s model.

Muller published his model on the evolutionary significance of deleterious mutations in 1964. Yet until now a quantitative understanding of the ratchet’s processes has been lacking. Richard Neher and Boris Shraiman from the University of California in Santa Barbara have now published a new theoretical study on Muller’s ratchet. They chose a comparatively simple model with only deleterious mutations, all having the same effect on fitness. The scientists assumed selection against those mutations and analysed how fluctuations in the group of the fittest individuals affected the less fit ones and the whole population. Richard Neher and Boris Shraiman discovered that the key to understanding Muller’s ratchet lies in a slow response: if the number of the fittest individuals is reduced, the mean fitness decreases only after a delay. “This delayed feedback accelerates Muller’s ratchet,” Richard Neher comments on the results. It clicks more and more frequently.
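
The logic of the ratchet can be conveyed with a minimal Wright-Fisher-style simulation. This is only a sketch of the class of model the paper analyses, not the authors’ own calculation; the population size, mutation rate and per-mutation fitness cost below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 200           # population size (asexual)
U = 0.3           # mean number of new deleterious mutations per genome per generation
s = 0.02          # fitness cost per mutation: fitness = (1 - s) ** k
generations = 2000

# k[i] = number of deleterious mutations carried by individual i
k = np.zeros(N, dtype=int)
clicks = 0
least_loaded = 0  # current minimum mutation count in the population

for _ in range(generations):
    # Selection: sample parents in proportion to fitness.
    fitness = (1.0 - s) ** k
    parents = rng.choice(N, size=N, p=fitness / fitness.sum())
    # Mutation: each offspring gains a Poisson-distributed number of new mutations.
    k = k[parents] + rng.poisson(U, size=N)
    # A "click" of the ratchet: the least-loaded class has been lost for good.
    if k.min() > least_loaded:
        clicks += k.min() - least_loaded
        least_loaded = k.min()

print(f"ratchet clicks in {generations} generations: {clicks}")
print(f"mean mutation load at the end: {k.mean():.1f}")
```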

“Our results are valid for a broad range of conditions and parameter values — for a population of viruses as well as a population of tigers.” However, he does not expect to find the model’s conditions one-to-one in nature. “Models are made to understand the essential aspects, to identify the critical processes,” he explains.

In a second study, Richard Neher, Boris Shraiman and several other US scientists from the University of California in Santa Barbara and Harvard University in Cambridge investigated how a small asexual population could escape Muller’s ratchet. “Such a population can only stay in a steady state for a long time when beneficial mutations continually compensate for the negative ones that accumulate via Muller’s ratchet,” says Richard Neher. For their model the scientists assumed a steady environment and suggest that there can be a mutation-selection balance in every population. They have calculated the rate of favourable mutations required to maintain the balance. The result was surprising: even under unfavourable conditions, a comparatively small proportion of beneficial mutations, in the range of a few percent, is sufficient to sustain a population.

These findings could explain the long-term maintenance of mitochondria, the so-called power plants of the cell that have their own genome and divide asexually. By and large, evolution is driven by random events or as Richard Neher says: “Evolutionary dynamics are very stochastic.”

Why Do Organisms Build Tissues They Seemingly Never Use? (Science Daily)

ScienceDaily (Aug. 10, 2012) — Why, after millions of years of evolution, do organisms build structures that seemingly serve no purpose?

A study conducted at Michigan State University and published in the current issue of The American Naturalist investigates the evolutionary reasons why organisms go through developmental stages that appear unnecessary.

“Many animals build tissues and structures they don’t appear to use, and then they disappear,” said Jeff Clune, lead author and former doctoral student at MSU’s BEACON Center of Evolution in Action. “It’s comparable to building a roller coaster, razing it and building a skyscraper on the same ground. Why not just skip ahead to building the skyscraper?”

Why humans and other organisms retain seemingly unnecessary stages in their development has been debated among biologists since 1866. This study explains that organisms jump through these extra hoops to avoid disrupting a developmental process that works. Clune’s team called this concept the “developmental disruption force.” But Clune says it also could be described as “if the shoe fits, don’t change a thing.”

“In a developing embryo, each new structure is built in a delicate environment that consists of everything that has already developed,” said Clune, who is now a postdoctoral fellow at Cornell University. “Mutations that alter that environment, such as by eliminating a structure, can thus disrupt later stages of development. Even if a structure is not actually used, it may set the stage for other functional tissues to grow properly.”

Going back to the roller coaster metaphor, even though the roller coaster gets torn down, the organism needs the parts from that teardown to build the skyscraper, he added.

“An engineer would simply skip the roller coaster step, but evolution is more of a tinkerer and less of an engineer,” Clune said. “It uses whatever parts that are lying around, even if the process that generates those parts is inefficient.”

An interesting consequence is that newly evolved traits tend to get added at the end of development, because there is less risk of disrupting anything important. That, in turn, means that there is a similarity between the order things evolve and the order they develop.

A new technology called computational evolution allowed the team to conduct experiments that would be impossible to reproduce in nature.

Rather than observe embryos grow, the team of computer scientists and biologists used BEACON’s Avida software to perform experiments with evolution inside a computer. The Avidians — self-replicating computer programs — mutate, compete for resources and evolve, mimicking natural selection in real-life organisms. Using this software, Clune’s team observed as Avidians evolved to perform logic tasks. They recorded the order that those tasks evolved in a variety of lineages, and then looked at the order those tasks developed in the final, evolved organism.

They were able to help settle an age-old debate that developmental order does resemble evolutionary order, at least in this computationally evolving system. Because in a computer thousands of generations can happen overnight, the team was able to repeat this experiment many times to document that this similarity repeatedly occurs.

Additional MSU researchers contributing to the study included BEACON colleagues Richard Lenski, Robert Pennock and Charles Ofria. The research was funded by the National Science Foundation.

USDA: Ongoing Drought Causes Significant Crop Yield Declines (Science Daily)

ScienceDaily (Aug. 10, 2012) — Corn production will drop 13 percent to a six-year low, the U.S. Agriculture Department said today (Aug. 10), confirming what many farmers already knew — they are having a very bad year, Ohio State University Extension economist Matt Roberts said.

Drought’s impact on corn. (Credit: Image courtesy of OSU Extension)

In its monthly crops report, USDA today cut its projected U.S. corn production to 10.8 billion bushels, down 17 percent from its forecast last month of nearly 13 billion bushels and 13 percent lower than last year. Soybean production is forecast to be down as well, to 2.69 billion bushels, which is 12 percent lower than last year, as well as lower than the 3.05 billion bushels the USDA forecast last month.

The projections mean this year’s corn production will be the lowest since 2006, and soybean production the lowest since 2003, Roberts said. The USDA said it expects corn growers to average 123.4 bushels per acre, down 24 bushels from last year, while soybean growers are expected to average 36.1 bushels per acre, down 5.4 bushels from last year.
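
As a quick arithmetic check, the reported percentage declines can be inverted to recover the approximate baselines they imply; this is an informal back-calculation from the figures quoted above, not USDA data.

```python
# Back-of-the-envelope check of the reported declines (figures from the text above;
# the implied baselines are rough, since the percentages are rounded).
corn_now = 10.8                           # billion bushels, August forecast
corn_last_month = corn_now / (1 - 0.17)   # "down 17 percent from ... last month"
corn_2011 = corn_now / (1 - 0.13)         # "13 percent lower than last year"

soy_now = 2.69                            # billion bushels, August forecast
soy_2011 = soy_now / (1 - 0.12)           # "12 percent lower than last year"

print(f"implied July corn forecast: {corn_last_month:.1f} billion bushels (text: nearly 13)")
print(f"implied 2011 corn crop:     {corn_2011:.1f} billion bushels")
print(f"implied 2011 soybean crop:  {soy_2011:.2f} billion bushels")
```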

In Ohio, those numbers translate into a projected corn yield of 126 bushels per acre, down 32 bushels per acre from last year, he said. Soybeans are projected at 42 bushels per acre, down from last year’s yield of 47.5 bushels per acre.

The impact on growers is going to be tough, Roberts said.

“I don’t think this is a surprise to anyone, especially growers,” he said. “For most farmers, this is the year that they will lose much of the profits they’ve made over five good years.

“I don’t expect to see a lot of bankruptcies, but certainly there will be a lot of belt-tightening among farmers this year. With crop insurance so widespread, it will help ensure that we don’t see a lot of bankruptcies and help farmers weather this storm.”

This comes as Ohioans have suffered through multiple days of record-setting temperatures of over 100 degrees this summer, with scant rainfall that has left crop fields parched. In fact, with an average temperature of 77.6 degrees, July was the hottest month ever recorded nationwide, breaking a record set during the Dust Bowl of the 1930s, according to the National Climatic Data Center.

Most of Ohio except for some counties near the Kentucky, West Virginia and Pennsylvania borders is experiencing moderate drought, with some counties near the Indiana and Michigan borders experiencing severe and extreme drought as of Aug. 7, according to the most recent U.S. Drought Monitor. Nationwide, 80 percent of the U.S. is experiencing drought conditions, up from 40 percent in May, according to the monitor.

Topsoil moisture in Ohio is currently rated 45 percent very short, 41 percent short and 14 percent adequate, with no surplus, according to the latest U.S. Department of Agriculture Weekly Crop Report.

The lack of rainfall has decimated many corn crops, which were damaged by insufficient rain during the crop’s crucial pollination period. So even though growers planted a record acreage of corn this year in anticipation of a strong year with record yields, the lack of rain has caused yield forecasts to continue to decline, Roberts said.

And while soybeans weren’t as negatively impacted by the lack of rain earlier in the growing season, ongoing drought conditions are taking a toll on crops, which are seeing yield estimates decline as well, he said, noting that further yield declines are likely as the growing season continues.

The corn and soybean forecasts are largely in line with market expectations, Roberts said.

Corn prices rose 63 percent from mid-June through yesterday, reaching an all-time high today (Aug. 10) of $8.49 a bushel on the Chicago Board of Trade.

“Most analysts in February expected a corn yield of 163, meaning there has now been a 40 bushel per acre yield cut from the beginning of the year, with many analysts expecting yields to go below 120 bushels per acre when all is said and done,” he said. “That means there’s just a lot less corn around than what we expected.

“That leaves 2.3 billion fewer bushels of corn to be consumed than in 2011, which means that consumption has to be rationed out. And even though ethanol will be down about 10 percent and exports will be down by 25 percent from two years ago, we will still end up with extremely tight inventories.”

For livestock farmers, the situation is even worse, Roberts said.

“Livestock producers will feel more pain from higher feed prices and negative profit margins,” he said. “We will see a lot more stress on the entire livestock end, from poultry all the way up to cows.

“Cow/calf producers are in a very difficult situation because of poor pasture conditions and high hay costs as a result of this historic drought. Overall, it’s going to be a very bad year for the farm economy. While there will be pockets of growers that don’t feel it as bad, livestock farmers will feel it just all around because of the overall feed costs.”

NOAA Raises Hurricane Season Prediction Despite Expected El Niño (Science Daily)

ScienceDaily (Aug. 10, 2012) — This year’s Atlantic hurricane season got off to a busy start, with 6 named storms to date, and may have a busy second half, according to the updated hurricane season outlook issued Aug. 9, 2012 by NOAA’s Climate Prediction Center, a division of the National Weather Service. The updated outlook still indicates a 50 percent chance of a near-normal season, but increases the chance of an above-normal season to 35 percent and decreases the chance of a below-normal season to only 15 percent from the initial outlook issued in May.

Satellite image of Hurricane Ernesto taken on Aug. 7, 2012 in the Gulf of Mexico. (Credit: NOAA)

Across the entire Atlantic Basin for the season — June 1 to November 30 — NOAA’s updated seasonal outlook projects a total (which includes the activity to date of tropical storms Alberto, Beryl, Debby and Florence, and hurricanes Chris and Ernesto) of:

  • 12 to 17 named storms (top winds of 39 mph or higher), including:
  • 5 to 8 hurricanes (top winds of 74 mph or higher), of which:
  • 2 to 3 could be major hurricanes (Category 3, 4 or 5; winds of at least 111 mph)

These numbers are higher than those in the initial outlook issued in May, which called for 9-15 named storms, 4-8 hurricanes and 1-3 major hurricanes. Based on a 30-year average, a normal Atlantic hurricane season produces 12 named storms, six hurricanes, and three major hurricanes.
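
The outlook’s three tiers follow directly from top sustained wind speed, and the tiers are nested, so every hurricane is also a named storm. Below is a small sketch of how a season’s storm list would be tallied against those thresholds; the peak winds assigned to each storm are placeholders, not NOAA measurements.

```python
# Hypothetical peak winds (mph) for the six storms named so far in 2012;
# illustrative values only, not NOAA figures.
season_so_far = {"Alberto": 60, "Beryl": 70, "Chris": 75,
                 "Debby": 65, "Ernesto": 85, "Florence": 60}

def tally(storm_winds):
    """Count storms the way the outlook does: the tiers are nested, so every
    hurricane also counts as a named storm."""
    named = sum(w >= 39 for w in storm_winds.values())        # named storm threshold
    hurricanes = sum(w >= 74 for w in storm_winds.values())   # hurricane threshold
    major = sum(w >= 111 for w in storm_winds.values())       # Category 3 and up
    return named, hurricanes, major

named, hurricanes, major = tally(season_so_far)
print(f"named storms: {named}, hurricanes: {hurricanes}, major hurricanes: {major}")
# With these illustrative winds: 6 named storms, 2 hurricanes, 0 major, matching
# the season-to-date activity described above.
```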

“We are increasing the likelihood of an above-normal season because storm-conducive wind patterns and warmer-than-normal sea surface temperatures are now in place in the Atlantic,” said Gerry Bell, Ph.D., lead seasonal hurricane forecaster at the Climate Prediction Center. “These conditions are linked to the ongoing high activity era for Atlantic hurricanes that began in 1995. Also, strong early-season activity is generally indicative of a more active season.”

However, NOAA seasonal climate forecasters also announced today that El Niño will likely develop in August or September.

“El Niño is a competing factor, because it strengthens the vertical wind shear over the Atlantic, which suppresses storm development. However, we don’t expect El Niño’s influence until later in the season,” Bell said.

“We have a long way to go until the end of the season, and we shouldn’t let our guard down,” said Laura Furgione, acting director of NOAA’s National Weather Service. “Hurricanes often bring dangerous inland flooding as we saw a year ago in the Northeast with Hurricane Irene and Tropical Storm Lee. Even people who live hundreds of miles from the coast need to remain vigilant through the remainder of the season.”

“It is never too early to prepare for a hurricane,” said Tim Manning, FEMA’s deputy administrator for protection and national preparedness. “We are in the middle of hurricane season and now is the time to get ready. There are easy steps you can take to get yourself and your family prepared. Visit www.ready.gov to learn more.”

How Computation Can Predict Group Conflict: Fighting Among Captive Pigtailed Macaques Provides Clues (Science Daily)

ScienceDaily (Aug. 13, 2012) — When conflict breaks out in social groups, individuals make strategic decisions about how to behave based on their understanding of alliances and feuds in the group.

Researchers studied fighting among captive pigtailed macaques for clues about behavior and group conflict. (Credit: iStockphoto/Natthaphong Phanthumchinda)

But it’s been challenging to quantify the underlying trends that dictate how individuals make predictions, given they may only have seen a small number of fights or have limited memory.

In a new study, scientists at the Wisconsin Institute for Discovery (WID) at UW-Madison develop a computational approach to determine whether individuals behave predictably. With data from previous fights, the team looked at how much memory individuals in the group would need to make predictions themselves. The analysis proposes a novel estimate of “cognitive burden,” or the minimal amount of information an organism needs to remember to make a prediction.

The research draws from a concept called “sparse coding,” or the brain’s tendency to use fewer visual details and a small number of neurons to stow an image or scene. Previous studies support the idea that neurons in the brain react to a few large details such as the lines, edges and orientations within images rather than many smaller details.

“So what you get is a model where you have to remember fewer things but you still get very high predictive power — that’s what we’re interested in,” says Bryan Daniels, a WID researcher who led the study. “What is the trade-off? What’s the minimum amount of ‘stuff’ an individual has to remember to make good inferences about future events?”

To find out, Daniels — along with WID co-authors Jessica Flack and David Krakauer — drew comparisons from how brains and computers encode information. The results contribute to ongoing discussions about conflict in biological systems and how cognitive organisms understand their environments.

The study, published in the Aug. 13 edition of the Proceedings of the National Academy of Sciences, examined observed bouts of natural fighting in a group of 84 captive pigtailed macaques at the Yerkes National Primate Research Center. By recording individuals’ involvement — or lack thereof — in fights, the group created models that mapped the likelihood any number of individuals would engage in conflict in hypothetical situations.

To confirm the predictive power of the models, the group plugged in other data from the monkey group that was not used to create the models. Then, researchers compared these simulations with what actually happened in the group. One model looked at conflict as combinations of pairs, while another represented fights as sparse combinations of clusters, which proved to be a better tool for predicting fights. From there, by removing information until predictions became worse, Daniels and colleagues calculated the amount of information each individual needed to remember to make the most informed decision whether to fight or flee.
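
The workflow described here, training on some fights, testing predictions on held-out fights, and shrinking what the observer is allowed to remember until predictions degrade, can be sketched in toy form. The snippet below is not the maximum-entropy or sparse-coding machinery of the PNAS paper: the data are synthetic, the predictive rule is a crude pairwise heuristic, and the 32-bits-per-statistic bookkeeping is an arbitrary assumption used only to put a number on “memory.”

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the fight data: rows are fights, columns are individuals,
# True means the individual took part (the real study used 84 macaques).
n_ind, n_fights = 30, 600
clusters = [rng.choice(n_ind, size=4, replace=False) for _ in range(6)]
X = rng.random((n_fights, n_ind)) < 0.03             # background participation
for f in range(n_fights):                            # planted cliques that tend to co-fight
    if rng.random() < 0.7:
        c = clusters[rng.integers(len(clusters))]
        X[f, c] |= rng.random(c.size) < 0.8
train, test = X[:450], X[450:]

# The "memory" available to a hypothetical observer: each individual's base rate
# of fighting plus a limited number of pairwise statistics.
trf = train.astype(float)
base = trf.mean(axis=0).clip(0.01, 0.99)             # P(i fights)
co = trf.T @ trf / len(train)                        # P(i and j fight together)
cond = co / base                                     # rough P(i fights | j fights)

def heldout_score(top_k):
    """Average log-likelihood of each individual's presence/absence in held-out
    fights, predicted from the others present, keeping only top_k pair statistics."""
    lift = np.abs(cond - base[:, None])              # how much a pair changes the guess
    np.fill_diagonal(lift, 0.0)
    if top_k:
        keep = lift >= np.sort(lift, axis=None)[-top_k]
    else:
        keep = np.zeros_like(lift, dtype=bool)
    ll = 0.0
    for fight in test:
        present = np.flatnonzero(fight)
        for i in range(n_ind):
            others = present[present != i]
            usable = others[keep[i, others]]
            p = cond[i, usable].mean() if usable.size else base[i]
            p = min(max(p, 0.01), 0.99)
            ll += np.log(p if fight[i] else 1.0 - p)
    return ll / test.size

# "Remove information until predictions get worse": shrink the number of remembered
# pair statistics and watch held-out prediction quality; 32 bits per stored number
# is an arbitrary bookkeeping choice for the memory estimate.
for top_k in (400, 100, 20, 0):
    print(f"pair stats kept: {top_k:3d}   approx. memory: {top_k * 32:5d} bits   "
          f"held-out log-likelihood per entry: {heldout_score(top_k):.3f}")
```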

“We know the monkeys are making predictions, but we don’t know how good they are,” says Daniels. “But given this data, we found that the most memory it would take to figure out the regularities is about 1,000 bits of information.”

Sparse coding appears to be a strong candidate for explaining the mechanism at play in the monkey group, but the team points out that it is only one possible way to encode conflict.

Because the statistical modeling and computation frameworks can be applied to different natural datasets, the research has the potential to influence other fields of study, including behavioral science, cognition, computation, game theory and machine learning. Such models might also be useful in studying collective behaviors in other complex systems, ranging from neurons to bird flocks.

Future research will seek to find out how individuals’ knowledge of alliances and feuds fine-tunes their own decisions and changes the group’s collective pattern of conflict.

The research was supported by the National Science Foundation, the John Templeton Foundation through the Santa Fe Institute, and UW-Madison.

Why Are People Overconfident So Often? It’s All About Social Status (Science Daily)

ScienceDaily (Aug. 13, 2012) — Researchers have long known that people are very frequently overconfident — that they tend to believe they are more physically talented, socially adept, and skilled at their job than they actually are. For example, 94% of college professors think they do above average work (which is nearly impossible, statistically speaking). But this overconfidence can also have detrimental effects on their performance and decision-making. So why, in light of these negative consequences, is overconfidence still so pervasive?

The lure of social status promotes overconfidence, explains Haas School Associate Professor Cameron Anderson. He co-authored a new study, “A Status-Enhancement Account of Overconfidence,” with Sebastien Brion, assistant professor of managing people in organizations at the IESE Business School, University of Navarra, and Haas School colleagues Don Moore, associate professor of management, and Jessica A. Kennedy, now a post-doctoral fellow at the Wharton School of Business. The study is forthcoming in the Journal of Personality and Social Psychology.

“Our studies found that overconfidence helped people attain social status. People who believed they were better than others, even when they weren’t, were given a higher place in the social ladder. And the motive to attain higher social status thus spurred overconfidence,” says Anderson, the Lorraine Tyson Mitchell Chair in Leadership and Communication II at the Haas School.

Social status is the respect, prominence, and influence individuals enjoy in the eyes of others. Within work groups, for example, higher status individuals tend to be more admired and listened to, and to have more sway over the group’s discussions and decisions. These “alphas” of the group have more clout and prestige than other members. Anderson says these research findings are important because they help shed light on a longstanding puzzle: why overconfidence is so common, in spite of its risks. His findings suggest that falsely believing one is better than others has profound social benefits for the individual.

Moreover, these findings suggest one reason why in organizational settings, incompetent people are so often promoted over their more competent peers. “In organizations, people are very easily swayed by others’ confidence even when that confidence is unjustified,” says Anderson. “Displays of confidence are given an inordinate amount of weight.”

The studies suggest that organizations would benefit from taking individuals’ confidence with a grain of salt. Yes, confidence can be a sign of a person’s actual abilities, but it is often not a very good sign. Many individuals are confident in their abilities even though they lack true skills or competence.

The authors conducted six experiments to measure why people become overconfident and how overconfidence equates to a rise in social stature. For example:

In Study 2, the researchers examined 242 MBA students in their project teams and asked them to look over a list of historical names, historical events, and books and poems, and then to identify which ones they knew or recognized. Terms included Maximilien Robespierre, Lusitania, Wounded Knee, Pygmalion, and Doctor Faustus. Unbeknownst to the participants, some of the names were made up. These so-called “foils” included Bonnie Prince Lorenzo, Queen Shaddock, Galileo Lovano, Murphy’s Last Ride, and Windemere Wild. The researchers deemed those who picked the most foils the most overly confident because they believed they were more knowledgeable than they actually were. In a survey at the end of the semester, those same overly confident individuals (who said they had recognized the most foils) achieved the highest social status within their groups.
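
The over-claiming measure described here comes down to simple counting: how many of the items a respondent claims to recognize are real, and how many are invented foils. Below is a hypothetical scoring sketch, with made-up responses rather than the study’s materials.

```python
# Hypothetical recognition test: real items mixed with made-up "foils".
real_items = {"Maximilien Robespierre", "Lusitania", "Wounded Knee",
              "Pygmalion", "Doctor Faustus"}
foils = {"Bonnie Prince Lorenzo", "Queen Shaddock", "Galileo Lovano",
         "Murphy's Last Ride", "Windemere Wild"}

def overclaiming_score(claimed):
    """Return (fraction of real items recognized, fraction of foils claimed).
    Claiming foils is a simple proxy for overconfidence: believing you know
    more than you possibly could."""
    claimed = set(claimed)
    hits = len(claimed & real_items)          # real knowledge
    false_alarms = len(claimed & foils)       # claiming the impossible
    return hits / len(real_items), false_alarms / len(foils)

# One made-up respondent who "recognizes" two invented names.
accuracy, overclaiming = overclaiming_score(
    ["Lusitania", "Pygmalion", "Queen Shaddock", "Galileo Lovano"])
print(f"real items recognized: {accuracy:.0%}, foils claimed: {overclaiming:.0%}")
```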

It is important to note that group members did not think of their high status peers as overconfident, but simply that they were terrific. “This overconfidence did not come across as narcissistic,” explains Anderson. “The most overconfident people were considered the most beloved.”

Study 4 sought to discover the types of behaviors that make overconfident people appear to be so wonderful (even when they were not). Behaviors such as body language, vocal tone, and rate of participation were captured on video as groups worked together in a laboratory setting. These videos revealed that overconfident individuals spoke more often, spoke with a confident vocal tone, provided more information and answers, and acted calmly and relaxed as they worked with their peers. In fact, overconfident individuals were more convincing in their displays of ability than individuals who were actually highly competent.

“These big participators were not obnoxious, they didn’t say, ‘I’m really good at this.’ Instead, their behavior was much more subtle. They simply participated more and exhibited more comfort with the task — even though they were no more competent than anyone else,” says Anderson.

Two final studies found that it is the “desire” for status that encourages people to be more overconfident. For example, in Study 6, participants read one of two stories and were asked to imagine themselves as the protagonist in the story. The first story was a simple, bland narrative of losing then finding one’s keys. The second story asked the reader to imagine him/herself getting a new job with a prestigious company. The job had many opportunities to obtain higher status, including a promotion, a bonus, and a fast track to the top. Those participants who read the new job scenario rated their desire for status much higher than those who read the story of the lost keys.

After they were finished reading, participants were asked to rate themselves on a number of competencies such as critical thinking skills, intelligence, and the ability to work in teams. Those who had read the new job story (which stimulated their desire for status) rated their skills and talent much higher than did the first group. Their desire for status amplified their overconfidence.

De-emphasizing the natural tendency toward overconfidence may prove difficult but Prof. Anderson hopes this research will give people the incentive to look for more objective indices of ability and merit in others, instead of overvaluing unsubstantiated confidence.

Should Doctors Treat Lack of Exercise as a Medical Condition? Expert Says ‘Yes’ (Science Daily)

ScienceDaily (Aug. 13, 2012) — A sedentary lifestyle is a common cause of obesity, and excessive body weight and fat in turn are considered catalysts for diabetes, high blood pressure, joint damage and other serious health problems. But what if lack of exercise itself were treated as a medical condition? Mayo Clinic physiologist Michael Joyner, M.D., argues that it should be. His commentary is published this month in The Journal of Physiology.

Physical inactivity affects the health not only of many obese patients, but also of people of normal weight, such as workers with desk jobs, patients immobilized for long periods after injuries or surgery, and women on extended bed rest during pregnancies, among others, Dr. Joyner says. Prolonged lack of exercise can cause the body to become deconditioned, with wide-ranging structural and metabolic changes: the heart rate may rise excessively during physical activity, bones and muscles atrophy, physical endurance wanes, and blood volume declines.

When deconditioned people try to exercise, they may tire quickly and experience dizziness or other discomfort, then give up trying to exercise and find the problem gets worse rather than better.

“I would argue that physical inactivity is the root cause of many of the common problems that we have,” Dr. Joyner says. “If we were to medicalize it, we could then develop a way, just like we’ve done for addiction, cigarettes and other things, to give people treatments, and lifelong treatments, that focus on behavioral modifications and physical activity. And then we can take public health measures, like we did for smoking, drunken driving and other things, to limit physical inactivity and promote physical activity.”

Several chronic medical conditions are associated with poor capacity to exercise, including fibromyalgia, chronic fatigue syndrome and postural orthostatic tachycardia syndrome, better known as POTS, a syndrome marked by an excessive heart rate and flu-like symptoms when standing or at a given level of exercise. Too often, medication rather than progressive exercise is prescribed, Dr. Joyner says.

Texas Health Presbyterian Hospital Dallas and University of Texas Southwestern Medical Center researchers found that three months of exercise training can reverse or improve many POTS symptoms, Dr. Joyner notes. That study offers hope for such patients and shows that physicians should consider prescribing carefully monitored exercise before medication, he says.

If physical inactivity were treated as a medical condition itself rather than simply a cause or byproduct of other medical conditions, physicians may become more aware of the value of prescribing supported exercise, and more formal rehabilitation programs that include cognitive and behavioral therapy would develop, Dr. Joyner says.

For those who have been sedentary and are trying to get into exercise, Dr. Joyner advises doing it slowly and progressively.

“You just don’t jump right back into it and try to train for a marathon,” he says. “Start off with achievable goals and do it in small bites.”

There’s no need to join a gym or get a personal trainer: build as much activity as possible into daily life. Even walking just 10 minutes three times a day can go a long way toward working up to the 150 minutes a week of moderate physical activity the typical adult needs, Dr. Joyner says.

Deeply Held Religious Beliefs Prompting Sick Kids to Be Given ‘Futile’ Treatment (Science Daily)

ScienceDaily (Aug. 13, 2012) — Parental hopes of a “miraculous intervention,” prompted by deeply held religious beliefs, are leading to very sick children being subjected to futile care and needless suffering, suggests a small study in the Journal of Medical Ethics.

The authors, who comprise children’s intensive care doctors and a hospital chaplain, emphasise that religious beliefs provide vital support to many parents whose children are seriously ill, as well as to the staff who care for them.

But they have become concerned that deeply held beliefs are increasingly leading parents to insist on the continuation of aggressive treatment that ultimately is not in the best interests of the sick child.

It is time to review the current ethics and legality of these cases, they say.

They base their conclusions on a review of 203 cases which involved end of life decisions over a three year period.

In 186 of these cases, agreement was reached between the parents and healthcare professionals about withdrawing aggressive, but ultimately futile, treatment.

But in the remaining 17 cases, extended discussions with the medical team and local support had failed to resolve differences of opinion with the parents over the best way to continue to care for the very sick child in question.

The parents had insisted on continuing full active medical treatment, while doctors had advocated withdrawing or withholding further intensive care on the basis of the overwhelming medical evidence.

The cases in which withdrawal or withholding of intensive care was considered to be in the child’s best interests were consistent with the Royal College of Paediatrics and Child Health guidance.

Eleven of these cases (65%) involved directly expressed religious claims that intensive care should not be stopped because of the expectation of divine intervention and a complete cure, together with the conviction that the opinion of the medical team was overly pessimistic and wrong.

Various different faiths were represented among the parents, including Christian fundamentalism, Islam, Judaism, and Roman Catholicism.

Five of the 11 cases were resolved after meeting with the relevant religious leaders outside the hospital, and intensive care was withdrawn in a further case after a High Court order.

But five cases were not resolved, so intensive care was continued. Four of these children eventually died; one survived with profound neurological disability.

The six of the 17 cases in which religious belief was not a cited factor were all resolved without further recourse to legal, ethical, or socio-religious support. Intensive care was withdrawn in all six children; five died and one survived, but with profound neurological disability.

The authors emphasise that parental reluctance to allow treatment to be withdrawn is “completely understandable as [they] are defenders of their children’s rights, and indeed life.”

But they argue that when children are too young to be able to actively subscribe to their parents’ religious beliefs, a default position in which parental religion is not the determining factor might be more appropriate.

They cite Article 3 of the Human Rights Act, which aims to ensure that no one is subjected to torture or inhumane or degrading treatment or punishment.

“Spending a lifetime attached to a mechanical ventilator, having every bodily function supervised and sanitised by a carer or relative, leaving no dignity or privacy to the child and then adult, has been argued as inhumane,” they argue.

And they conclude: “We suggest it is time to reconsider current ethical and legal structures and facilitate rapid default access to courts in such situations when the best interests of the child are compromised in expectation of the miraculous.”

In an accompanying commentary, the journal’s editor, Professor Julian Savulescu, advocates: “Treatment limitation decisions are best made, not in the alleged interests of patients, but on distributive justice grounds.”

In a publicly funded system with limited resources, these should be given to those whose lives could be saved rather than to those who are very unlikely to survive, he argues.

“Faced with the choice between providing an intensive care bed to a [severely brain damaged] child and one who has been at school and was hit by a cricket ball and will return to normal life, we should provide the bed to the child hit by the cricket ball,” he writes.

In further commentaries, Dr Steve Clarke of the Institute for Science and Ethics maintains that doctors should engage with devout parents on their own terms.

“Devout parents, who are hoping for a miracle, may be able to be persuaded, by the lights of their own personal…religious beliefs, that waiting indefinite periods of time for a miracle to occur while a child is suffering, and while scarce medical equipment is being denied to other children, is not the right thing to do,” he writes.

Leading ethicist, Dr Mark Sheehan, argues that these ethical dilemmas are not confined to fervent religious belief, and to polarise the issue as medicine versus religion is unproductive, and something of a “red herring.”

Referring to the title of the paper, Charles Foster, of the University of Oxford, suggests that the authors have asked the wrong question. “The legal and ethical orthodoxy is that no beliefs, religious or secular, should be allowed to stonewall the best interests of the child,” he writes.

How Do They Do It? Predictions Are in for Arctic Sea Ice Low Point (Science Daily)

ScienceDaily (Aug. 14, 2012) — It’s become a sport of sorts, predicting the low point of Arctic sea ice each year. Expert scientists with decades of experience do it but so do enthusiasts, whose guesses are gamely included in a monthly predictions roundup collected by Sea Ice Outlook, an effort supported by the U.S. government.

Arctic sea ice, as seen from an ice breaker. (Credit: Bonnie Light, UW)

When averaged, the predictions have come in remarkably close to the mark in the past two years. But the low and high predictions are off by hundreds of thousands of square kilometers.

Researchers are working hard to improve their ability to more accurately predict how much Arctic sea ice will remain at the end of summer. It’s an important exercise because knowing why sea ice declines could help scientists better understand climate change and how sea ice is evolving.

This year, researchers from the University of Washington’s Polar Science Center are the first to include new NASA sea ice thickness data collected by airplane in a prediction.

They expect 4.4 million square kilometers of remaining ice (about 1.7 million square miles), just barely more than the 4.3 million square kilometers in 2007, the lowest year on record for Arctic sea ice. The median of 23 predictions collected by the Sea Ice Outlook and released on Aug. 13 is 4.3 million square kilometers.

“One drawback to making predictions is historically we’ve had very little information about the thickness of the ice in the current year,” said Ron Lindsay, a climatologist at the Polar Science Center, a department in the UW’s Applied Physics Laboratory.

To make their prediction, Lindsay and Jinlun Zhang, an oceanographer in the Polar Science Center, start with a widely used model pioneered by Zhang and known as the Pan-Arctic Ice Ocean Modeling and Assimilation System. That system combines available observations with a model to track sea ice volume, which includes both ice thickness and extent.

But obtaining observations about current-year ice thickness in order to build their short-term prediction is tough. NASA is currently in the process of designing a new satellite that will replace one that used to deliver ice thickness data but has since failed. In the meantime, NASA is running a program called Operation IceBridge that uses airplanes to survey sea ice as well as Arctic ice sheets.

“This is the first year they made a concerted effort to get the data from the aircraft, process it and get it into hands of scientists in a timely manner,” Lindsay said. “In the past, we’ve gotten data from submarines, moorings or satellites but none of that data was available in a timely manner. It took months or even years.”

There’s a shortcoming to the IceBridge data, however: it’s only available through March. The radar used to measure snow depth on the surface of the ice, an important element in the observation system, has trouble accurately gauging the depth once the snow has melted, so the data is collected only through the early spring, before the thaw.

The UW scientists have developed a method for informing their prediction that is starting to be used by others. Researchers have struggled with how best to forecast the weather in the Arctic, which affects ice melt and distribution.

“Jinlun came up with the idea of using the last seven summers. Because the climate is changing so fast, only the recent summers are probably relevant,” Lindsay said.

The result is seven different possibilities of what might happen. “The average of those is our best guess,” Lindsay said.
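
The “last seven summers” idea amounts to a small ensemble forecast: run the ice model forward once under each recent summer’s weather and take the average as the best guess. Here is a schematic sketch, with placeholder numbers standing in for actual model runs.

```python
# Schematic ensemble in the spirit of the approach described above: one
# hypothetical September extent (million km^2) per recent summer's weather.
# These values are placeholders, not PIOMAS output.
runs = {
    2005: 4.9, 2006: 4.7, 2007: 4.1, 2008: 4.3,
    2009: 4.6, 2010: 4.4, 2011: 4.2,
}

values = list(runs.values())
mean = sum(values) / len(values)              # the "best guess"
spread = (max(values) - min(values)) / 2      # crude uncertainty range

print(f"ensemble-mean September extent: {mean:.2f} million km^2")
print(f"rough spread: +/- {spread:.2f} million km^2 across the seven runs")
```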

Despite the progress in making predictions, the researchers say their ability to foretell the future will always be limited. Because they can’t forecast the weather very far in advance and because the ice is strongly affected by winds, they have little confidence in predictions made far in advance, beyond what the long-term trend suggests.

“The accuracy of our prediction really depends on time,” Zhang said. “Our June 1 prediction for the Sept. 15 low point has high uncertainty but as we approach the end of June or July, the uncertainty goes down and the accuracy goes up.”

In hindsight, that’s true historically for the average predictions collected by Study of Environmental Arctic Change’s Sea Ice Outlook, a project funded by the National Science Foundation and the National Oceanic and Atmospheric Administration.

While the competitive aspect of the predictions is fun, the researchers aren’t in it to win it.

“Essentially it’s not for prediction but for understanding,” Zhang said. “We do it to improve our understanding of sea ice processes, in terms of how dynamic processes affect the seasonal evolution of sea ice.”

That may not be entirely the same for the enthusiasts who contribute a prediction. One climate blog polls readers in the summer for their best estimate of the sea ice low point. It’s included among the predictions collected by the Sea Ice Outlook, with an asterisk noting it as a “public outlook.”

The National Science Foundation and NASA fund the UW research into the Arctic sea ice low point.

New legislation will give natural disaster prevention a scientific basis, experts say (Fapesp)

A law signed in April will require municipalities to draw up geotechnical charts, a multidisciplinary instrument that will guide the implementation of warning systems and master plans (Valter Campanato/ABr)

08/08/2012

By Fábio de Castro

Agência FAPESP – In January 2011, floods and landslides left about a thousand people dead and 500 missing in the Região Serrana, the mountainous region of Rio de Janeiro state. The tragedy exposed the precariousness of Brazil’s warning systems and was regarded by experts as definitive proof that the country needed to invest in disaster prevention.

The most important outgrowth of that assessment was Law 12,608 (Lei 12.608), signed in April, which establishes the National Policy for Protection and Civil Defense and creates a disaster information and monitoring system, according to experts gathered at the seminar “Caminhos da política nacional de defesa de áreas de risco” (“Paths for the national policy on defending risk areas”), held by the Polytechnic School of the University of São Paulo (USP) on August 6.

The new law requires city governments to invest in urban planning aimed at preventing disasters such as floods and landslides. According to the experts, disaster prevention will for the first time rest on a solid technical and scientific foundation, since the law stipulates that, to do this planning, every city government must draw up a geotechnical chart of its municipality.

Katia Canil, a researcher at the Environmental Risk Laboratory of the Instituto de Pesquisas Tecnológicas (IPT), said city governments will have two years to produce the geotechnical charts that underpin their master plans, which must include disaster prevention and mitigation measures. Municipalities that fail to present such planning will not receive federal funds for prevention and mitigation works.

“Geotechnical charts are cartographic documents that bring together information on a municipality’s geological and geomorphological characteristics, identifying geological risks and making it easier to set rules for urban occupation. With this instrument now mandatory under the law, we will be able to design disaster prevention strategies based on technical and scientific knowledge,” Canil told Agência FAPESP.

Brazil’s first geotechnical chart was produced in 1979, for the municipality of Santos (São Paulo state), but even so the instrument has remained little used in the country. According to Canil, institutionalizing the tool will be an important factor in aligning master plans with the geotechnical characteristics of the terrain.

“Few municipalities have a geotechnical chart, because it was not a mandatory instrument. Now that picture should change. But the legislation will create great demand for specialists in several fields, because geotechnical charts integrate a range of interdisciplinary data,” said the IPT researcher.

Geotechnical charts compile documents resulting from geological and geotechnical field surveys, along with laboratory analyses, with the aim of synthesizing all available knowledge about the physical environment and its relationship to the geological and human processes at work in a given place. “And all of this needs to be expressed in language that managers can understand,” said Canil.

Cities will have to organize themselves to produce geotechnical charts, and the technical capacity required is not trivial. “It is not just a matter of overlaying maps. You need experience combined with training in fields such as geology, engineering, geotechnical engineering, cartography, geography, architecture and urban planning,” said Canil. IPT already offers a training course on preparing geotechnical charts.

A major difficulty in producing the charts will be the lack of basic geological mapping in Brazilian municipalities. “Most municipalities do not have primary data, such as geomorphological, pedological and geological maps,” said Canil.

National prevention plan

The January 2011 tragedy in Rio de Janeiro’s mountainous region was a milestone that changed the direction of the discussion about disasters, definitively highlighting the central role of prevention, according to Carlos Nobre, secretary of Research and Development Policies and Programs at the Ministry of Science, Technology and Innovation (MCTI).

“That episode was a jolt that shook Brazilians’ perception of major disasters. It became obvious to public officials and to the population that the prevention axis needs to be emphasized. It was a milestone that changed our perspective forever: prevention is fundamental,” he said during the event.

According to Nobre, who is also a researcher at the National Institute for Space Research (Inpe) and a member of the steering committee of the FAPESP Research Program on Global Climate Change, international experience shows that prevention can reduce the number of fatalities in natural disasters by up to 90 percent and cut material damage by about 35 percent. “Besides saving lives, the savings on material losses alone more than pay for all the investment in prevention,” he said.

According to Nobre, engineering will play an increasingly important role in prevention as natural disasters become more extreme as a consequence of climate change.

“The 21st-century engineer will need to be trained in sustainability engineering, a cross-cutting field of engineering that will gain more and more ground. Engineering, if well conducted, is central to solving some of today’s main problems,” he said.

According to Nobre, in addition to the new legislation, which will require planning based on municipal geotechnical charts, Brazil has several other disaster prevention initiatives under way. One of them will be announced this Wednesday (Aug. 8): the National Plan for Natural Disaster Prevention, which emphasizes works aimed at installing warning systems.

“Large-scale works are needed in Brazil, especially with regard to warning systems. One of the important elements of the new plan is early warning. International experience shows that a warning issued up to two hours before a landslide can save lives,” he said.

According to Nobre, the plan’s initiatives will be consistent with the new legislation. The federal government is expected to invest R$ 4.6 billion over the coming months in disaster prevention initiatives in the states of Rio de Janeiro, Minas Gerais and Santa Catarina.

But to qualify for federal funds, a municipality will have to meet a series of requirements, such as incorporating protection and civil defense actions into municipal planning, identifying and mapping natural disaster risk areas, preventing new occupation of those areas and inspecting buildings located in them.

According to Nobre, another disaster prevention measure was the creation of the MCTI’s National Center for Natural Disaster Monitoring and Early Warning (Cemaden), which began operating in December 2011 on the Inpe campus in Cachoeira Paulista (São Paulo state).

“That center already played an important role in weather forecasting, but it was restructured and hired 35 professionals. Cemaden is emerging as an emblem of the new warning systems: a design that brings together geologists, meteorologists and natural disaster specialists to identify vulnerabilities, something rare in the world,” he said.

According to him, this new structure already has a warning system in operation. “It is a system that will still need to be evaluated over time. But so far, since December 2011, more than 100 alerts have already been issued. It will take the country several years to reduce fatalities the way countries with good prevention systems have. But we are on the right track,” said Nobre.

Heatwave turns America’s waterways into rivers of death (The Independent)

Falling water levels are killing fish and harming exports

DAVID USBORNE

SUNDAY 05 AUGUST 2012

The cruel summer heat-wave that continues to scorch agricultural crops across much of the United States and which is prompting comparisons with the severe droughts of the 1930s and 1950s is also leading to record-breaking water temperatures in rivers and streams, including the Mississippi, as well as fast-falling navigation levels.

While in the northern reaches of the Mississippi, near Moline in Illinois, the temperature touched 90 degrees last week – warmer than the Gulf of Mexico around the Florida Keys – towards the river’s southern reaches the US Army Corps of Engineers is dredging around the clock to try to keep barges from grounding as water levels dive.

For scientists the impact of a long, hot summer that has plunged more than two-thirds of the country into drought conditions – sometimes extreme – has been particularly striking in the Great Lakes. According to the Great Lakes Environmental Research Laboratory, all are experiencing unusual spikes in water temperature this year. It is especially the case for Lake Superior, the northernmost, the deepest, and therefore the coolest.

“It’s pretty safe to say that what we’re seeing here is the warmest that we’ve seen in Lake Superior in a century,” said Jay Austin, a professor at the University of Minnesota at Duluth. The average temperature recorded for the lake last week was 68F (20C). That compares with 56F (13C) at this time last year.

It is a boon to shoreline residents who are finding normally chilly waters suddenly inviting for a dip. But the warming of the rivers, in particular, is taking a harsh toll on fish, which are dying in increasingly large numbers. Significant tolls of fresh-water species, from pike to trout, have been reported, most frequently in the Midwest.

“Most problems occur in ponds that are not deep enough for fish to retreat to cooler and more oxygen-rich water,” said Jake Allman of the Missouri Department of Conservation. “Hot water holds less oxygen than cool water. Shallow ponds get warmer than deeper ponds, and with little rain, area ponds are becoming shallower by the day. Evaporation rates are up to 11 inches per month in these conditions.”

In some instances, fish are simply left high and dry as rivers dry up entirely. That is the case of the normally rushing River Platte, which has simply petered out over a 100-mile stretch in Nebraska, large parts of which are now federal disaster areas contending with so-called “exceptional drought” conditions.

“This is the worst I’ve ever seen it, and I’ve been on the river since I was a pup,” Dan Kneifel, owner of Geno’s Bait and Tackle Shop, told TheOmahaChannel.com. “The river was full of fish, and to see them all die is a travesty.”

As water levels in the Mississippi ebb, so barge operators are forced to offload cargo to keep their vessels moving. About 60 per cent of exported US corn is conveyed by the Mississippi, which is now 12ft below normal levels in some stretches. Navigation on the Mississippi has not been so severely threatened since the 1988 drought in the US. Few forget, meanwhile, that last summer towns up and down the Mississippi were battling flooding.

One welcome side-effect, however, is data showing that the so-called “dead zone” in the Gulf of Mexico around the Mississippi estuary is far less extensive this summer because the lack of rain and the slow running of the water has led to much less nitrate being washed off farmland and into the system than in normal years. The phenomenon occurs because the nitrates feed blooms of algae in Gulf waters which then decompose, stripping the water of oxygen.

Chronic 2000-04 drought, worst in 800 years, may be the ‘new normal’ (Oregon State Univ)

Public release date: 29-Jul-2012

By Beverly Law

Oregon State University

CORVALLIS, Ore. – The chronic drought that hit western North America from 2000 to 2004 left dying forests and depleted river basins in its wake and was the strongest in 800 years, scientists have concluded, but they say those conditions will become the “new normal” for most of the coming century.

Such climatic extremes have increased as a result of global warming, a group of 10 researchers reported today in Nature Geoscience. And as bad as conditions were during the 2000-04 drought, they may eventually be seen as the good old days.

Climate models and precipitation projections indicate this period will actually be closer to the “wet end” of a drier hydroclimate during the last half of the 21st century, scientists said.

Aside from its impact on forests, crops, rivers and water tables, the drought also cut carbon sequestration by an average of 51 percent in a massive region of the western United States, Canada and Mexico, although some areas were hit much harder than others. As vegetation withered, this released more carbon dioxide into the atmosphere, with the effect of amplifying global warming.

“Climatic extremes such as this will cause more large-scale droughts and forest mortality, and the ability of vegetation to sequester carbon is going to decline,” said Beverly Law, a co-author of the study, professor of global change biology and terrestrial systems science at Oregon State University, and former science director of AmeriFlux, an ecosystem observation network.

“During this drought, carbon sequestration from this region was reduced by half,” Law said. “That’s a huge drop. And if global carbon emissions don’t come down, the future will be even worse.”

This research was supported by the National Science Foundation, NASA, U.S. Department of Energy, and other agencies. The lead author was Christopher Schwalm at Northern Arizona University. Other collaborators were from the University of Colorado, University of California at Berkeley, University of British Columbia, San Diego State University, and other institutions.

It’s not clear whether or not the current drought in the Midwest, now being called one of the worst since the Dust Bowl, is related to these same forces, Law said. This study did not address that, and there are some climate mechanisms in western North America that affect that region more than other parts of the country.

But in the West, this multi-year drought was unlike anything seen in many centuries, based on tree ring data. The last two periods with drought events of similar severity were in the Middle Ages, from 977-981 and 1146-1151. The 2000-04 drought affected precipitation, soil moisture, river levels, crops, forests and grasslands.

Ordinarily, Law said, the land sink in North America is able to sequester the equivalent of about 30 percent of the carbon emitted into the atmosphere by the use of fossil fuels in the same region. However, based on projected changes in precipitation and drought severity, scientists said that this carbon sink, at least in western North America, could disappear by the end of the century.

“Areas that are already dry in the West are expected to get drier,” Law said. “We expect more extremes. And it’s these extreme periods that can really cause ecosystem damage, lead to climate-induced mortality of forests, and may cause some areas to convert from forest into shrublands or grassland.”

During the 2000-04 drought, runoff in the upper Colorado River basin was cut in half. Crop productivity in much of the West fell 5 percent. The productivity of forests and grasslands declined, along with snowpacks. Evapotranspiration decreased the most in evergreen needleleaf forests, about 33 percent.

The effects are driven by human-caused increases in temperature, with associated lower soil moisture and decreased runoff in all major water basins of the western U.S., researchers said in the study.

Although regional precipitation patterns are difficult to forecast, researchers in this report said that climate models are underestimating the extent and severity of drought, compared to actual observations. They say the situation will continue to worsen, and that 80 of the 95 years from 2006 to 2100 will have precipitation levels as low as, or lower than, this “turn of the century” drought from 2000-04.

“Towards the latter half of the 21st century the precipitation regime associated with the turn of the century drought will represent an outlier of extreme wetness,” the scientists wrote in this study.

These long-term trends are consistent with a 21st century “megadrought,” they said.

Under the Guarani Sky (Jornal da Ciência)

JC e-mail 4555, August 6, 2012

A book launched at the 64th SBPC Meeting recovers techniques of indigenous astronomy in Mato Grosso do Sul.

Clarissa Vasconcellos – Jornal da Ciência

Launched at the 64th Annual Meeting of the Brazilian Society for the Advancement of Science (SBPC), in São Luís, the book ‘O Céu dos Índios de Dourados – Mato Grosso do Sul’ (Editora UEMS), by Germano Bruno Afonso and Paulo Souza da Silva, written in Guarani and Portuguese, grew out of the idea of recovering the indigenous tradition of observing the sky. It is a publication aimed at teaching students of indigenous culture (though not only them), used by Guarani teachers as a reference to show how these peoples sought the best use of natural resources.

The publication grew out of the project ‘Etnoastronomia dos Índios Guarani da Região da Grande Dourados – MS’, whose goal was to rebuild three solar observatories in Dourados, two of them at schools. “They were a kind of clock that the Guarani used for many purposes, such as festivities or marking the seasons, and with them they could make predictions and even draw up schedules for conceiving babies,” Paulo Souza da Silva, a professor in the Physics program at the Universidade Estadual do Mato Grosso do Sul (UEMS), tells Jornal da Ciência.

The indigenous techniques also help explain the tides and the behavior of fauna and flora (useful for hunting and farming), among other phenomena, showing that their astronomical system goes far beyond mere observation of celestial bodies. This ends up attracting the interest even of non-indigenous people.

That is what Germano Bruno Afonso, an astronomer at the Museu da Amazônia, found when he lectured on the subject at the SBPC Meeting. “More people came than we expected. The reception in São Luís caught my attention, even though I spoke at length about the Tupinambá of Maranhão,” the researcher observes. The Tupinambá, like the Tembé and the Guarani, belong to the Tupi-Guarani linguistic family, the largest in numbers and geographic reach within the Tupi linguistic trunk.

Differences and similarities – The Tupinambá of Maranhão, an ethnic group that is now extinct, are not the book’s main subject, but they are present because they have much in common with the southern Guarani when it comes to observing the sky. Germano says the Tupinambá and the Guarani have very similar techniques, drawing on the work of Claude d’Abbeville, a Capuchin monk who spent four months in Maranhão in 1612. His book ‘Histoire de la mission des Pères capucins en l’Isle de Maragnan et terres circonvoisines’ is considered one of the most important sources on Tupi ethnography.

“It is interesting to identify the same knowledge more than three thousand kilometers and 400 years apart, though Guarani and Tupinambá do belong to the same linguistic trunk,” Germano points out, recalling that the similarity of their languages made it easier for that knowledge to be passed on. Germano is of indigenous origin and lived in a Guarani village until the age of 17.

The book, originally a primer, could serve as a companion to ‘O Céu dos Índios Tembé’, which earned Germano the 2000 Prêmio Jabuti. “The Tembé are descendants of the Tupinambá, along the border between Pará and Maranhão, and they also keep this same astronomical system,” he says. After the Tembé book, he and Paulo Souza Silva won a CNPq research grant to work with the Guarani of Dourados on the project mentioned above.

“But we know this work can be adapted for every group in the Tupi-Guarani family. That is why we made a general book for teachers, for them to apply and modify according to the local culture. A Guarani from Rio Grande do Sul does not see the sky the same way as one from Espírito Santo. The basis is the same, but the sky is different,” Germano explains.

“You have to awaken the interest of the leadership, recover this culture,” Silva says of the importance of the book and the project. He points out that indigenous people are marginalized in cities such as Dourados, where the culture is being lost among young indigenous people. “Many don’t even speak Guarani,” he laments.

Exchange with astronomy – The investigation of this knowledge among ethnic or cultural groups that do not use so-called ‘Western’ (or official) astronomy, as is the case of Brazil’s indigenous peoples, gave rise to the discipline of ethnoastronomy, or anthropological astronomy. It requires specialists in fields such as astronomy, anthropology, biology and history. Germano says he sees little collaboration between ethnoastronomy and astronomy.

“I see no exchange at all, precisely because of prejudice and a lack of information on the part of ‘official’ astronomy, and because of ignorance about the indigenous peoples of Brazil itself. We know the culture of the Maya, the Aztecs and even the Australian Aborigines, but here there is a great deal of ignorance,” he laments, adding that he seeks acceptance not only from academia but also from the lay public. He would like recognition to come about as it did in botany and pharmacy, disciplines that drew heavily on the traditional knowledge of these peoples. For Silva, prejudice has diminished somewhat, even though some still say that ethnoastronomy “is culture, not science.” “As scientists, we have to be open to what others have to offer,” the physicist says.

Germano is currently in Manaus and plans to spend six months in São Gabriel da Cachoeira, in northwestern Amazonas, “where 95% of the population is indigenous, with 27 ethnic groups.” The idea is to produce another, similar book that takes regional differences into account. “While in the South it is temperature that governs the climate, there it is the rain. We will observe the rainy periods and the flooding of the rivers, climatic aspects that govern the fauna and flora,” he explains. Silva, for his part, intends to write a book about indigenous myths of the sky, addressing questions such as the formation of the world.

The authors want this knowledge to reach classrooms across the country, not only in schools that teach indigenous culture. “Indigenous mythology, compared with the Greco-Roman mythology [used in astronomy], is much easier to visualize in the sky,” Silva notes. “We explain, empirically, just as the indigenous people do, the seasons of the year, the cardinal points, the phases of the moon, the tides and the eclipses, purely through the observation of nature. Any child can begin to understand this without the mathematical complications, so it is also an alternative and enjoyable way to teach non-indigenous students, before formal science is applied,” Germano concludes.

Father of twins, one black and one white (Extra)

Bruno Cunha

Source: Extra

They have finally been recognized at soccer practice. While one is a defender with curly hair and a sweet tooth, the other is a striker with blond hair who prefers savory food. With such differences, it was hard to tell that David Evangelista de Oliveira, the white one, and Nícolas, the black one, are twin brothers.

“The parents of the kids at soccer thought only one of them was my son and that the other was a little friend of his. And mind you, the two have been training for a year and a half. Only now have they found out they are twin brothers,” says Luis Carlos de Oliveira Silva, 42, a laboratory parts assembler and the boys’ father.

Fame in the neighborhood

A resident of Campo Grande, Luis got a shock when he learned that his wife, Audicelia Evangelista, 45, was pregnant with twins. And another after the boys were born, one black like the father and the other white like the mother.

“At the time, my colleagues would joke: ‘ah, that one there isn’t your son!’. Once I walked into a maternity hospital and David called me dad. The security guard whispered: ‘he’s not his son.’ But the way I see it, the two of them take after both their father and their mother,” says Luis.

On the bedroom door, the phrase “gêmeos em ação” (“twins in action”). Photo: Nina Lima / Extra

Famous in the Santa Rosa sub-neighborhood, Nícolas and David, now 9, are already starting to reap the rewards of a fame that put them on a TV show while they were still newborns. Just the other day they were followed by two girls who had found out where they lived.

“I got home from work at around 7:30 pm and caught Nícolas putting gel in his hair and David getting ready. Right after that, two girls shouted their names here at the gate. They were working up the courage to ask them out,” explains the father, who is amused to see that his sons are already taking an interest in girls.

The twins. Photo: personal archive / handout

Estimate: less than a 1% chance of occurrence

The birth of twin brothers, one black and one white, still causes surprise. In 2006, for example, EXTRA covered the case of the brothers Pedro and Nathan Henrique Rodrigues, which intrigued the Costa Barros neighborhood.

A year later the father, hairdresser Carlos Henrique Fonseca, then 26, said that many people were still puzzled when they saw Pedro, black like him, next to Nathan, white like his mother, Valéria Gomes, then a 22-year-old gas-station attendant.

Different, but they root for the same team. Photo: Nina Lima / Extra

Mixed ancestry

The stork was also generous in Botafogo, home to the twins Beatriz and Maria Gaia Gerstner, now 8 years old. One is dark-skinned like her mother and the other is white like her father, a German.

“When I’m with the white one, people don’t think she’s my daughter. And when their father is with the darker one, it’s the same thing,” says their mother, Janaína Gaia, 35, now separated from the girls’ father.

Maria Cecília Erthal, director of the Centro Vida — Reprodução Humana Assistida clinic in Barra, in the West Zone, estimates that there is less than a 1% chance of the birth of twins with such different skin colors.

“It is racial mixing that brings together the genes of black and white parents,” she explains.

*   *   *

Jemima Pompeu sent in the following comment:

Twins with different skin colors surprise their parents, but not scientists. See some cases at the link below:

Renaissance Women Fought Men, and Won (Science Daily)

ScienceDaily (Aug. 14, 2012) — A three-year study into a set of manuscripts compiled and written by one of Britain’s earliest feminist figures has revealed new insights into how women challenged male authority in the 17th century.

Dr Jessica Malay has painstakingly transcribed Lady Anne Clifford’s 600,000-word Great Books of Record, which documents the trials and triumphs of the female aristocrat’s family dynasty over six centuries and her bitter battle to inherit castles and villages across northern England.

Lady Anne, who lived from 1590 to 1676, was, in her childhood, a favourite of Queen Elizabeth I. Her father died when she was 15, but the lands were handed over to her uncle, contrary to an agreement stretching back to the time of Edward II that the Cliffords’ vast estates in Cumbria and Yorkshire should pass to the eldest heir, whether male or female.

Following an epic legal struggle in which she defied her father, both her husbands, King James I and Oliver Cromwell, Lady Anne finally took possession of the estates at the age of 53. They included the five castles of Skipton, where she was born, Brougham, Brough, Pendragon and Appleby.

Malay, a Reader in English Literature at the University of Huddersfield, is set to publish a new, complete edition of Lady Anne’s Great Books of Record, which contains rich narrative evidence of how women circumvented male authority in order to participate more fully in society.

Malay said: “Lady Anne’s Great Books of Record challenge the notion that women in the 16th and 17th centuries lacked any power or control over their own lives.

“There is this misplaced idea that the feminist movement is predominantly a 1960s invention but debates and campaigns over women’s rights and equality stretch back to the Middle Ages.”

The Great Books of Record comprise three volumes, the last of which came up for auction in 2003. The Cumbria Archives bought the third set and now house all three. In 2010, Malay secured a £158,000 grant from the Leverhulme Trust to study the texts.

Malay said: “Virginia Woolf argued that a woman with Shakespeare’s gifts during the Renaissance Period would have been denied the opportunity to develop her talents due to the social barriers restricting women.

“But Lady Anne is regarded as a literary figure in her own right and when I started studying the Great Books of Record I realised there is a lot more to her writing than we were led to believe.

“I was struck by how much they revealed about the role of women, the importance of family networks and the interaction between lords and tenants over 500 years of social and political life in Britain.”

In her Great Books of Record, Lady Anne presents the case for women to be accepted as inheritors of wealth, by drawing on both documentary evidence and biographies of her female ancestors to reveal that the Clifford lands of the North were brought to them through marriage.

She argued that since many men in the 16th and 17th centuries had inherited their titles of honour from their mothers or grandmothers, it was only right that titles of honour could be passed down to female heirs.

She also contended that women were well suited to the title of Baron since a key duty of office was to provide counsel in Parliament, where women were not allowed. While men were better at fighting wars, women excelled in giving measured advice, she wrote.

Malay said: “Lady Anne appropriates historical texts, arranging and intervening in these in such a way as to prove her inevitable and just rights as heir.

“Her foregrounding of the key contributions of the female to the success of the Clifford dynasty work to support both her own claims to the lands of her inheritance and her decision to resist cultural imperatives that demanded female subservience to male authority.

“Elizabeth I was a strong role model for Lady Anne in her youth. While she was monarch, women had a level of access to the royal court that men could only dream of, which spawned a new sense of confidence among aristocratic women.”

Malay’s research into the Great Books of Record, which contain material from the early 12th century to the early 18th century, reveals the importance of family alliances in forming influential political networks.

It shows that women were integral to the construction of these networks, both regionally and nationally.

Malay said: “The Great Books explain the legal avenues open to women. Married women could call on male friends to act on their behalf. As part of marriage settlements many women had trusts set up to allow them access to their own money which they could in turn use in a variety of business enterprises or to help develop a wide network of social contacts.

“Men would often rely on their wives to access wider familial networks, leading to wives gaining higher prestige in the family.”

Lady Anne was married twice and widowed twice. After her second husband died she moved back to the North and, as hereditary High Sheriff of Westmorland, set about restoring dilapidated castles, almshouses and churches.

Malay said: “Widows enjoyed the same legal rights as men. While the husband was alive then the wife would require his permission to do anything. Widows were free to act on their own without any male guardianship.”

The Great Books also provide a valuable insight into Medieval and Renaissance society, with one document describing a six-year-old girl from the Clifford family being carried to the chapel at Skipton on her wedding day.

Lady Anne also recounted her father’s voyages to the Caribbean and she kept a diary of her own life, which includes summaries of each year from her birth until her death at the age of 86 in 1676.

Malay said: “The books are full of all sorts of life over 600 years, which is what is so exciting about them.”

Malay’s Anne Clifford Project was the catalyst for an exhibition of the Great Books of Record, which are being shown in public for the first time alongside The Great Picture at the Abbot Hall Art Gallery in Kendal.

The Great Picture is a huge (so huge a window of the gallery had to be removed to accommodate its arrival) triptych that marks Lady Anne’s succession to her inheritance.

The left panel depicts Lady Anne at 15, when she was disinherited. The right panel shows Lady Anne in middle age when she finally regained the Clifford estates. The central panel depicts Lady Anne’s parents with her older brothers shortly after Lady Anne had been conceived.

New Book Explores ‘Noah’s Flood’: Says Bible and Science Can Get Along (Science Daily)

ScienceDaily (Aug. 14, 2012) — David Montgomery is a geomorphologist, a geologist who studies changes to topography over time and how geological processes shape landscapes. He has seen firsthand evidence of how the forces that have shaped Earth run counter to some significant religious beliefs.

But the idea that scientific reason and religious faith are somehow at odds with each other, he said, “is, in my view, a false dichotomy.”

In a new book, “The Rocks Don’t Lie: A Geologist Investigates Noah’s Flood” (Aug. 27, 2012, W.W. Norton), Montgomery explores the long history of religious thinking — particularly among Christians — on matters of geological discovery, from the writings of St. Augustine some 1,600 years ago to the rise in the mid-20th century of the most recent rendering of creationism.

“The purpose is not to tweak people of faith but to remind everyone about the long history in the faith community of respecting what we can learn from observing the world,” he said.

Many of the earliest geologists were clergy, he said. Nicolas Steno, considered the founder of modern geology, was a 17th century Roman Catholic priest who has achieved three of the four steps to being declared a saint in the church.

“Though there are notable conflicts between religion and science — the famous case of Galileo Galilei, for example — there also is a church tradition of working to reconcile biblical stories with known scientific fact,” Montgomery said.

“What we hear today as the ‘Christian’ positions are really just one slice of a really rich pie,” he said.

For nearly two centuries there has been overwhelming geological evidence that a global flood, as depicted in the story of Noah in the biblical book of Genesis, could not have happened. Not only is there not enough water in the Earth system to account for water levels above the highest mountaintop, but uniformly rising levels would not allow the water to have the erosive capabilities attributed to Noah’s Flood, Montgomery said.

Some rock formations millions of years old show no evidence of such large-scale water erosion. Montgomery is convinced any such flood must have been, at best, a regional event, perhaps a catastrophic deluge in Mesopotamia. There are, in fact, Mesopotamian flood stories with details very similar to the biblical story of Noah’s Flood that predate it.

“If your world is small enough, all floods are global,” he said.

Perhaps the greatest influence in prompting him to write “The Rocks Don’t Lie” was a 2002 expedition to the Tsangpo River on the Tibetan Plateau. In the fertile river valley he found evidence in sediment layers that a great lake had formed in the valley many centuries ago, not once but numerous times. Downstream he found evidence that a glacier on several occasions advanced far enough to block the river, creating the huge lake.

But ice makes an unstable dam, and over time the ice thinned and finally gave way, unleashing a tremendous torrent of water down the deepest gorge in the world. It was only after piecing the story together from geological evidence that Montgomery learned that local oral traditions told of exactly this kind of great flood.

“To learn that the locals knew about it and talked about it for the last thousand years really jolted my thinking. Here was evidence that a folk tale might be reality based,” he said.

He has seen evidence of huge regional floods in the scablands of Eastern Washington, carved by torrents when glacial Lake Missoula breached its ice dam in Montana and raced across the landscape, and he found Native American stories that seem to tell of this catastrophic flood.

Other flood stories dating back to the early inhabitants of the Pacific Northwest and from various islands in the Pacific Ocean, for example, likely tell of inundation by tsunamis after large earthquakes.

But he noted that in some regions of the world — in Africa, for example — there are no flood stories in the oral traditions because there the annual floods help sustain life rather than bring destruction.

Floods are not always responsible for major geological features. Hiking a trail from the floor of the Grand Canyon to its rim, Montgomery saw unmistakable evidence of the canyon being carved over millions of years by the flow of the Colorado River, not by a global flood several thousand years ago as some people still believe.

He describes that hike in detail in “The Rocks Don’t Lie.” He also explores changes in the understanding of where fossils came from, how geologists read Earth history in layers of rock, and the writings of geologists and religious authorities through the centuries.

Montgomery hopes the book might increase science literacy. He noted that a 2001 National Science Foundation survey found that more than half of American adults didn’t realize that dinosaurs were extinct long before humans came along.

But he also would like to coax readers to make sense of the world through both what they believe and through what they can see for themselves, and to keep an open mind to new ideas.

“If you think you know everything, you’ll never learn anything,” he said.

Need an Expert? Try the Crowd (Science Daily)

ScienceDaily (Aug. 14, 2012) — “It’s potentially a new way to do science.”

In 1714, the British government held a contest. They offered a large cash prize to anyone who could solve the vexing “longitude problem” — how to determine a ship’s east/west position on the open ocean — since none of their naval experts had been able to do so.

Lots of people gave it a try. One of them, a self-educated carpenter named John Harrison, invented the marine chronometer — a rugged and highly precise clock — that did the trick. For the first time, sailors could accurately determine their location at sea.

A centuries-old problem was solved. And, arguably, crowdsourcing was born.

Crowdsourcing is basically what it sounds like: posing a question or asking for help from a large group of people. Coined as a term in 2006, crowdsourcing has taken off in the internet era. Think of Wikipedia, and its thousands of unpaid contributors, now vastly larger than the Encyclopedia Britannica.

Crowdsourcing has allowed many problems to be solved that would be impossible for experts alone. Astronomers rely on an army of volunteers to scan for new galaxies. At climateprediction.net, citizens have linked their home computers to yield more than a hundred million hours of climate modeling; it’s the world’s largest forecasting experiment.

But what if experts didn’t simply ask the crowd to donate time or answer questions? What if the crowd was asked to decide what questions to ask in the first place?

Could the crowd itself be the expert?

That’s what a team at the University of Vermont decided to explore — and the answer seems to be yes.

Prediction from the people

Josh Bongard and Paul Hines, professors in UVM’s College of Engineering and Mathematical Sciences, and their students set out to discover whether volunteers visiting two different websites could pose, refine, and answer one another’s questions – questions that could effectively predict the volunteers’ body weight and home electricity use.

The experiment, the first of its kind, was a success: the self-directed questions and answers by visitors to the websites led to computer models that effectively predict users’ monthly electricity consumption and body mass index.

Their results, “Crowdsourcing Predictors of Behavioral Outcomes,” were published in a recent edition of IEEE Transactions on Systems, Man, and Cybernetics, a journal of the Institute of Electrical and Electronics Engineers.

“It’s proof of concept that a crowd actually can come up with good questions that lead to good hypotheses,” says Bongard, an expert on machine science.

In other words, the wisdom of the crowd can be harnessed to determine which variables to study, the UVM project shows — and at the same time provide a pool of data by responding to the questions they ask of each other.

“The result is a crowdsourced predictive model,” the Vermont scientists write.
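
The article describes the method only in outline, but the core idea lends itself to a short sketch: crowd-posed questions become candidate predictor variables, the crowd’s own answers become the training data, and a model then ranks which questions actually carry predictive power. Below is a minimal illustration in Python under those assumptions; the question names, the synthetic data, and the choice of a plain least-squares fit with permutation importance are stand-ins for the demonstration, not the UVM team’s actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: rows are volunteers, columns are coded answers to
# three crowd-posed questions (the names below are invented for the demo).
questions = ["thinks_self_overweight", "meals_per_day", "exercise_hours_per_week"]
X = rng.normal(size=(200, len(questions)))
true_w = np.array([4.0, 1.5, -2.0])                 # hidden relationship, demo only
y = X @ true_w + rng.normal(scale=1.0, size=200)    # stand-in for self-reported BMI

# Fit ordinary least squares on a training split, then rank each question by the
# drop in test R^2 when its column is shuffled (a simple permutation importance).
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]
w, *_ = np.linalg.lstsq(np.c_[X_tr, np.ones(len(X_tr))], y_tr, rcond=None)

def r2(X_eval, y_eval):
    pred = np.c_[X_eval, np.ones(len(X_eval))] @ w
    return 1 - np.sum((y_eval - pred) ** 2) / np.sum((y_eval - y_eval.mean()) ** 2)

baseline = r2(X_te, y_te)
for j, q in enumerate(questions):
    X_perm = X_te.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    print(f"{q}: importance = {baseline - r2(X_perm, y_te):.3f}")
```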

Unexpected angles

Some of the questions the volunteers posed were obvious. For example, on the website dedicated to exploring body weight, visitors came up with the question: “Do you think of yourself as overweight?” And, no surprise, that proved to be the question with the most power to predict people’s body weight.

But some questions posed by the volunteers were less obvious. “We had some eye-openers,” Bongard says. “How often do you masturbate a month?” might not be the first question asked by weight-loss experts, but it proved to be the second-most-predictive question of the volunteers’ self-reported weights – more predictive than “how often do you eat during a day?”

“Sometimes the general public has intuition about stuff that experts miss — there’s a long literature on this,” Hines says.

“It’s those people who are very underweight or very overweight who might have an explanation for why they’re at these extremes — and some of those explanations might not be a simple combination of diet and exercise,” says Bongard. “There might be other things that experts missed.”

Cause and correlation

The researchers are quick to note that the variables revealed by the evolving Q&A on the experimental websites are simply correlated to outcomes — body weight and electricity use — not necessarily the cause.

“We’re not arguing that this study is actually predictive of the causes,” says Hines, “but improvements to this method may lead in that direction.”

Nor do the scientists claim to be experts on body weight or to be providing recommendations on health or diet (though Hines is an expert on electricity, and the EnergyMinder site he and his students developed for this project has a larger aim of helping citizens understand and reduce their household energy use).

“We’re simply investigating the question: could you involve participants in the hypothesis-generation part of the scientific process?” Bongard says. “Our paper is a demonstration of this methodology.”

“Going forward, this approach may allow us to involve the public in deciding what it is that is interesting to study,” says Hines. “It’s potentially a new way to do science.”

And there are many reasons why this new approach might be helpful. In addition to forces that experts might simply not know about — “can we elicit unexpected predictors that an expert would not have come up with sitting in his office?” Hines asks — experts often have deeply held biases.

Faster discoveries

But the UVM team primarily sees their new approach as potentially helping to accelerate the process of scientific discovery. The need for expert involvement — in shaping, say, what questions to ask on a survey or what variable to change to optimize an engineering design — “can become a bottleneck to new insights,” the scientists write.

“We’re looking for an experimental platform where, instead of waiting to read a journal article every year about what’s been learned about obesity,” Bongard says, “a research site could be changing and updating new findings constantly as people add their questions and insights.”

The goal: “exponential rises,” the UVM scientists write, in the discovery of what causes behaviors and patterns — probably driven by the people who care about them the most. For example, “it might be smokers or people suffering from various diseases,” says Bongard. The team thinks this new approach to science could “mirror the exponential growth found in other online collaborative communities,” they write.

“We’re all problem-solving animals,” says Bongard, “so can we exploit that? Instead of just exploiting the cycles of your computer or your ability to say ‘yes’ or ‘no’ on a survey — can we exploit your creative brain?”

Global Warming’s Terrifying New Math (Rolling Stone)

Three simple numbers that add up to global catastrophe – and that make clear who the real enemy is

by: Bill McKibben

Illustration by Edel Rodriguez

If the pictures of those towering wildfires in Colorado haven’t convinced you, or the size of your AC bill this summer, here are some hard numbers about climate change: June broke or tied 3,215 high-temperature records across the United States. That followed the warmest May on record for the Northern Hemisphere – the 327th consecutive month in which the temperature of the entire globe exceeded the 20th-century average, the odds of which occurring by simple chance were about one in 3.7 x 10^99, a number considerably larger than the number of stars in the universe.

Meteorologists reported that this spring was the warmest ever recorded for our nation – in fact, it crushed the old record by so much that it represented the “largest temperature departure from average of any season on record.” The same week, Saudi authorities reported that it had rained in Mecca despite a temperature of 109 degrees, the hottest downpour in the planet’s history.

Not that our leaders seemed to notice. Last month the world’s nations, meeting in Rio for the 20th-anniversary reprise of a massive 1992 environmental summit, accomplished nothing. Unlike George H.W. Bush, who flew in for the first conclave, Barack Obama didn’t even attend. It was “a ghost of the glad, confident meeting 20 years ago,” the British journalist George Monbiot wrote; no one paid it much attention, footsteps echoing through the halls “once thronged by multitudes.” Since I wrote one of the first books for a general audience about global warming way back in 1989, and since I’ve spent the intervening decades working ineffectively to slow that warming, I can say with some confidence that we’re losing the fight, badly and quickly – losing it because, most of all, we remain in denial about the peril that human civilization is in.

When we think about global warming at all, the arguments tend to be ideological, theological and economic. But to grasp the seriousness of our predicament, you just need to do a little math. For the past year, an easy and powerful bit of arithmetical analysis first published by financial analysts in the U.K. has been making the rounds of environmental conferences and journals, but it hasn’t yet broken through to the larger public. This analysis upends most of the conventional political thinking about climate change. And it allows us to understand our precarious – our almost-but-not-quite-finally hopeless – position with three simple numbers.

The First Number: 2° Celsius

If the movie had ended in Hollywood fashion, the Copenhagen climate conference in 2009 would have marked the culmination of the global fight to slow a changing climate. The world’s nations had gathered in the December gloom of the Danish capital for what a leading climate economist, Sir Nicholas Stern of Britain, called the “most important gathering since the Second World War, given what is at stake.” As Danish energy minister Connie Hedegaard, who presided over the conference, declared at the time: “This is our chance. If we miss it, it could take years before we get a new and better one. If ever.”

In the event, of course, we missed it. Copenhagen failed spectacularly. Neither China nor the United States, which between them are responsible for 40 percent of global carbon emissions, was prepared to offer dramatic concessions, and so the conference drifted aimlessly for two weeks until world leaders jetted in for the final day. Amid considerable chaos, President Obama took the lead in drafting a face-saving “Copenhagen Accord” that fooled very few. Its purely voluntary agreements committed no one to anything, and even if countries signaled their intentions to cut carbon emissions, there was no enforcement mechanism. “Copenhagen is a crime scene tonight,” an angry Greenpeace official declared, “with the guilty men and women fleeing to the airport.” Headline writers were equally brutal: COPENHAGEN: THE MUNICH OF OUR TIMES? asked one.

The accord did contain one important number, however. In Paragraph 1, it formally recognized “the scientific view that the increase in global temperature should be below two degrees Celsius.” And in the very next paragraph, it declared that “we agree that deep cuts in global emissions are required… so as to hold the increase in global temperature below two degrees Celsius.” By insisting on two degrees – about 3.6 degrees Fahrenheit – the accord ratified positions taken earlier in 2009 by the G8, and the so-called Major Economies Forum. It was as conventional as conventional wisdom gets. The number first gained prominence, in fact, at a 1995 climate conference chaired by Angela Merkel, then the German minister of the environment and now the center-right chancellor of the nation.

Some context: So far, we’ve raised the average temperature of the planet just under 0.8 degrees Celsius, and that has caused far more damage than most scientists expected. (A third of summer sea ice in the Arctic is gone, the oceans are 30 percent more acidic, and since warm air holds more water vapor than cold, the atmosphere over the oceans is a shocking five percent wetter, loading the dice for devastating floods.) Given those impacts, in fact, many scientists have come to think that two degrees is far too lenient a target. “Any number much above one degree involves a gamble,” writes Kerry Emanuel of MIT, a leading authority on hurricanes, “and the odds become less and less favorable as the temperature goes up.” Thomas Lovejoy, once the World Bank’s chief biodiversity adviser, puts it like this: “If we’re seeing what we’re seeing today at 0.8 degrees Celsius, two degrees is simply too much.” NASA scientist James Hansen, the planet’s most prominent climatologist, is even blunter: “The target that has been talked about in international negotiations for two degrees of warming is actually a prescription for long-term disaster.” At the Copenhagen summit, a spokesman for small island nations warned that many would not survive a two-degree rise: “Some countries will flat-out disappear.” When delegates from developing nations were warned that two degrees would represent a “suicide pact” for drought-stricken Africa, many of them started chanting, “One degree, one Africa.”

Despite such well-founded misgivings, political realism bested scientific data, and the world settled on the two-degree target – indeed, it’s fair to say that it’s the only thing about climate change the world has settled on. All told, 167 countries responsible for more than 87 percent of the world’s carbon emissions have signed on to the Copenhagen Accord, endorsing the two-degree target. Only a few dozen countries have rejected it, including Kuwait, Nicaragua and Venezuela. Even the United Arab Emirates, which makes most of its money exporting oil and gas, signed on. The official position of planet Earth at the moment is that we can’t raise the temperature more than two degrees Celsius – it’s become the bottomest of bottom lines. Two degrees.

The Second Number: 565 Gigatons

Scientists estimate that humans can pour roughly 565 more gigatons of carbon dioxide into the atmosphere by midcentury and still have some reasonable hope of staying below two degrees. (“Reasonable,” in this case, means four chances in five, or somewhat worse odds than playing Russian roulette with a six-shooter.)

This idea of a global “carbon budget” emerged about a decade ago, as scientists began to calculate how much oil, coal and gas could still safely be burned. Since we’ve increased the Earth’s temperature by 0.8 degrees so far, we’re currently less than halfway to the target. But, in fact, computer models calculate that even if we stopped increasing CO2 now, the temperature would likely still rise another 0.8 degrees, as previously released carbon continues to overheat the atmosphere. That means we’re already three-quarters of the way to the two-degree target.

How good are these numbers? No one is insisting that they’re exact, but few dispute that they’re generally right. The 565-gigaton figure was derived from one of the most sophisticated computer-simulation models that have been built by climate scientists around the world over the past few decades. And the number is being further confirmed by the latest climate-simulation models currently being finalized in advance of the next report by the Intergovernmental Panel on Climate Change. “Looking at them as they come in, they hardly differ at all,” says Tom Wigley, an Australian climatologist at the National Center for Atmospheric Research. “There’s maybe 40 models in the data set now, compared with 20 before. But so far the numbers are pretty much the same. We’re just fine-tuning things. I don’t think much has changed over the last decade.” William Collins, a senior climate scientist at the Lawrence Berkeley National Laboratory, agrees. “I think the results of this round of simulations will be quite similar,” he says. “We’re not getting any free lunch from additional understanding of the climate system.”

We’re not getting any free lunch from the world’s economies, either. With only a single year’s lull in 2009 at the height of the financial crisis, we’ve continued to pour record amounts of carbon into the atmosphere, year after year. In late May, the International Energy Agency published its latest figures – CO2 emissions last year rose to 31.6 gigatons, up 3.2 percent from the year before. America had a warm winter and converted more coal-fired power plants to natural gas, so its emissions fell slightly; China kept booming, so its carbon output (which recently surpassed the U.S.) rose 9.3 percent; the Japanese shut down their fleet of nukes post-Fukushima, so their emissions edged up 2.4 percent. “There have been efforts to use more renewable energy and improve energy efficiency,” said Corinne Le Quéré, who runs England’s Tyndall Centre for Climate Change Research. “But what this shows is that so far the effects have been marginal.” In fact, study after study predicts that carbon emissions will keep growing by roughly three percent a year – and at that rate, we’ll blow through our 565-gigaton allowance in 16 years, around the time today’s preschoolers will be graduating from high school. “The new data provide further evidence that the door to a two-degree trajectory is about to close,” said Fatih Birol, the IEA’s chief economist. In fact, he continued, “When I look at this data, the trend is perfectly in line with a temperature increase of about six degrees.” That’s almost 11 degrees Fahrenheit, which would create a planet straight out of science fiction.
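
The “16 years” claim is simple compounding, and it can be checked on the back of an envelope from the two figures quoted above (31.6 gigatons emitted last year, growing at roughly 3 percent annually):

```python
# Back-of-envelope check of the article's timeline: cumulative CO2 emitted, starting
# near 31.6 gigatons a year and growing ~3% annually, until the 565-gigaton budget is spent.
emissions = 31.6   # gigatons per year at the start
total = 0.0
years = 0
while total < 565:
    total += emissions
    emissions *= 1.03
    years += 1
print(years, round(total, 1))   # about 15 years under these assumptions, in line with the article's "16"
```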

So, new data in hand, everyone at the Rio conference renewed their ritual calls for serious international action to move us back to a two-degree trajectory. The charade will continue in November, when the next Conference of the Parties (COP) of the U.N. Framework Convention on Climate Change convenes in Qatar. This will be COP 18 – COP 1 was held in Berlin in 1995, and since then the process has accomplished essentially nothing. Even scientists, who are notoriously reluctant to speak out, are slowly overcoming their natural preference to simply provide data. “The message has been consistent for close to 30 years now,” Collins says with a wry laugh, “and we have the instrumentation and the computer power required to present the evidence in detail. If we choose to continue on our present course of action, it should be done with a full evaluation of the evidence the scientific community has presented.” He pauses, suddenly conscious of being on the record. “I should say, a fuller evaluation of the evidence.”

So far, though, such calls have had little effect. We’re in the same position we’ve been in for a quarter-century: scientific warning followed by political inaction. Among scientists speaking off the record, disgusted candor is the rule. One senior scientist told me, “You know those new cigarette packs, where governments make them put a picture of someone with a hole in their throats? Gas pumps should have something like that.”

The Third Number: 2,795 Gigatons

This number is the scariest of all – one that, for the first time, meshes the political and scientific dimensions of our dilemma. It was highlighted last summer by the Carbon Tracker Initiative, a team of London financial analysts and environmentalists who published a report in an effort to educate investors about the possible risks that climate change poses to their stock portfolios. The number describes the amount of carbon already contained in the proven coal and oil and gas reserves of the fossil-fuel companies, and the countries (think Venezuela or Kuwait) that act like fossil-fuel companies. In short, it’s the fossil fuel we’re currently planning to burn. And the key point is that this new number – 2,795 – is higher than 565. Five times higher.

The Carbon Tracker Initiative – led by James Leaton, an environmentalist who served as an adviser at the accounting giant PricewaterhouseCoopers – combed through proprietary databases to figure out how much oil, gas and coal the world’s major energy companies hold in reserve. The numbers aren’t perfect – they don’t fully reflect the recent surge in unconventional energy sources like shale gas, and they don’t accurately reflect coal reserves, which are subject to less stringent reporting requirements than oil and gas. But for the biggest companies, the figures are quite exact: If you burned everything in the inventories of Russia’s Lukoil and America’s ExxonMobil, for instance, which lead the list of oil and gas companies, each would release more than 40 gigatons of carbon dioxide into the atmosphere.

Which is exactly why this new number, 2,795 gigatons, is such a big deal. Think of two degrees Celsius as the legal drinking limit – equivalent to the 0.08 blood-alcohol level below which you might get away with driving home. The 565 gigatons is how many drinks you could have and still stay below that limit – the six beers, say, you might consume in an evening. And the 2,795 gigatons? That’s the three 12-packs the fossil-fuel industry has on the table, already opened and ready to pour.

We have five times as much oil and coal and gas on the books as climate scientists think is safe to burn. We’d have to keep 80 percent of those reserves locked away underground to avoid that fate. Before we knew those numbers, our fate had been likely. Now, barring some massive intervention, it seems certain.

Yes, this coal and gas and oil is still technically in the soil. But it’s already economically aboveground – it’s figured into share prices, companies are borrowing money against it, nations are basing their budgets on the presumed returns from their patrimony. It explains why the big fossil-fuel companies have fought so hard to prevent the regulation of carbon dioxide – those reserves are their primary asset, the holding that gives their companies their value. It’s why they’ve worked so hard these past years to figure out how to unlock the oil in Canada’s tar sands, or how to drill miles beneath the sea, or how to frack the Appalachians.

If you told Exxon or Lukoil that, in order to avoid wrecking the climate, they couldn’t pump out their reserves, the value of their companies would plummet. John Fullerton, a former managing director at JP Morgan who now runs the Capital Institute, calculates that at today’s market value, those 2,795 gigatons of carbon emissions are worth about $27 trillion. Which is to say, if you paid attention to the scientists and kept 80 percent of it underground, you’d be writing off $20 trillion in assets. The numbers aren’t exact, of course, but that carbon bubble makes the housing bubble look small by comparison. It won’t necessarily burst – we might well burn all that carbon, in which case investors will do fine. But if we do, the planet will crater. You can have a healthy fossil-fuel balance sheet, or a relatively healthy planet – but now that we know the numbers, it looks like you can’t have both. Do the math: 2,795 is five times 565. That’s how the story ends.
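
The ratios driving that conclusion can be recomputed directly from the figures quoted in this section (a 565-gigaton budget, 2,795 gigatons on the books, and Fullerton’s $27 trillion valuation):

```python
# The article's key ratios, recomputed from the quoted figures.
budget, reserves = 565, 2795              # gigatons of CO2
print(round(reserves / budget, 2))        # ~4.95: proven reserves are roughly five times the budget
print(round(1 - budget / reserves, 2))    # ~0.80: the share that would have to stay underground
print(round(0.8 * 27, 1))                 # ~21.6: trillions of dollars written off, close to the "$20 trillion"
```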

So far, as I said at the start, environmental efforts to tackle global warming have failed. The planet’s emissions of carbon dioxide continue to soar, especially as developing countries emulate (and supplant) the industries of the West. Even in rich countries, small reductions in emissions offer no sign of the real break with the status quo we’d need to upend the iron logic of these three numbers. Germany is one of the only big countries that has actually tried hard to change its energy mix; on one sunny Saturday in late May, that northern-latitude nation generated nearly half its power from solar panels within its borders. That’s a small miracle – and it demonstrates that we have the technology to solve our problems. But we lack the will. So far, Germany’s the exception; the rule is ever more carbon.

This record of failure means we know a lot about what strategies don’t work. Green groups, for instance, have spent a lot of time trying to change individual lifestyles: the iconic twisty light bulb has been installed by the millions, but so have a new generation of energy-sucking flatscreen TVs. Most of us are fundamentally ambivalent about going green: We like cheap flights to warm places, and we’re certainly not going to give them up if everyone else is still taking them. Since all of us are in some way the beneficiaries of cheap fossil fuel, tackling climate change has been like trying to build a movement against yourself – it’s as if the gay-rights movement had to be constructed entirely from evangelical preachers, or the abolition movement from slaveholders.

People perceive – correctly – that their individual actions will not make a decisive difference in the atmospheric concentration of CO2; by 2010, a poll found that “while recycling is widespread in America and 73 percent of those polled are paying bills online in order to save paper,” only four percent had reduced their utility use and only three percent had purchased hybrid cars. Given a hundred years, you could conceivably change lifestyles enough to matter – but time is precisely what we lack.

A more efficient method, of course, would be to work through the political system, and environmentalists have tried that, too, with the same limited success. They’ve patiently lobbied leaders, trying to convince them of our peril and assuming that politicians would heed the warnings. Sometimes it has seemed to work. Barack Obama, for instance, campaigned more aggressively about climate change than any president before him – the night he won the nomination, he told supporters that his election would mark the moment “the rise of the oceans began to slow and the planet began to heal.” And he has achieved one significant change: a steady increase in the fuel efficiency mandated for automobiles. It’s the kind of measure, adopted a quarter-century ago, that would have helped enormously. But in light of the numbers I’ve just described, it’s obviously a very small start indeed.

At this point, effective action would require actually keeping most of the carbon the fossil-fuel industry wants to burn safely in the soil, not just changing slightly the speed at which it’s burned. And there the president, apparently haunted by the still-echoing cry of “Drill, baby, drill,” has gone out of his way to frack and mine. His secretary of interior, for instance, opened up a huge swath of the Powder River Basin in Wyoming for coal extraction: The total basin contains some 67.5 gigatons worth of carbon (or more than 10 percent of the available atmospheric space). He’s doing the same thing with Arctic and offshore drilling; in fact, as he explained on the stump in March, “You have my word that we will keep drilling everywhere we can… That’s a commitment that I make.” The next day, in a yard full of oil pipe in Cushing, Oklahoma, the president promised to work on wind and solar energy but, at the same time, to speed up fossil-fuel development: “Producing more oil and gas here at home has been, and will continue to be, a critical part of an all-of-the-above energy strategy.” That is, he’s committed to finding even more stock to add to the 2,795-gigaton inventory of unburned carbon.

Sometimes the irony is almost Borat-scale obvious: In early June, Secretary of State Hillary Clinton traveled on a Norwegian research trawler to see firsthand the growing damage from climate change. “Many of the predictions about warming in the Arctic are being surpassed by the actual data,” she said, describing the sight as “sobering.” But the discussions she traveled to Scandinavia to have with other foreign ministers were mostly about how to make sure Western nations get their share of the estimated $9 trillion in oil (that’s more than 90 billion barrels, or 37 gigatons of carbon) that will become accessible as the Arctic ice melts. Last month, the Obama administration indicated that it would give Shell permission to start drilling in sections of the Arctic.

Almost every government with deposits of hydrocarbons straddles the same divide. Canada, for instance, is a liberal democracy renowned for its internationalism – no wonder, then, that it signed on to the Kyoto treaty, promising to cut its carbon emissions substantially by 2012. But the rising price of oil suddenly made the tar sands of Alberta economically attractive – and since, as NASA climatologist James Hansen pointed out in May, they contain as much as 240 gigatons of carbon (or almost half of the available space if we take the 565 limit seriously), that meant Canada’s commitment to Kyoto was nonsense. In December, the Canadian government withdrew from the treaty before it faced fines for failing to meet its commitments.

The same kind of hypocrisy applies across the ideological board: In his speech to the Copenhagen conference, Venezuela’s Hugo Chavez quoted Rosa Luxemburg, Jean-Jacques Rousseau and “Christ the Redeemer,” insisting that “climate change is undoubtedly the most devastating environmental problem of this century.” But the next spring, in the Simon Bolivar Hall of the state-run oil company, he signed an agreement with a consortium of international players to develop the vast Orinoco tar sands as “the most significant engine for a comprehensive development of the entire territory and Venezuelan population.” The Orinoco deposits are larger than Alberta’s – taken together, they’d fill up the whole available atmospheric space.

So: the approaches we have tried for tackling global warming have so far produced only gradual, halting shifts. A rapid, transformative change would require building a movement, and movements require enemies. As John F. Kennedy put it, “The civil rights movement should thank God for Bull Connor. He’s helped it as much as Abraham Lincoln.” And enemies are what climate change has lacked.

But what all these climate numbers make painfully, usefully clear is that the planet does indeed have an enemy – one far more committed to action than governments or individuals. Given this hard math, we need to view the fossil-fuel industry in a new light. It has become a rogue industry, reckless like no other force on Earth. It is Public Enemy Number One to the survival of our planetary civilization. “Lots of companies do rotten things in the course of their business – pay terrible wages, make people work in sweatshops – and we pressure them to change those practices,” says veteran anti-corporate leader Naomi Klein, who is at work on a book about the climate crisis. “But these numbers make clear that with the fossil-fuel industry, wrecking the planet is their business model. It’s what they do.”

According to the Carbon Tracker report, if Exxon burns its current reserves, it would use up more than seven percent of the available atmospheric space between us and the risk of two degrees. BP is just behind, followed by the Russian firm Gazprom, then Chevron, ConocoPhillips and Shell, each of which would fill between three and four percent. Taken together, just these six firms, of the 200 listed in the Carbon Tracker report, would use up more than a quarter of the remaining two-degree budget. Severstal, the Russian mining giant, leads the list of coal companies, followed by firms like BHP Billiton and Peabody. The numbers are simply staggering – this industry, and this industry alone, holds the power to change the physics and chemistry of our planet, and they’re planning to use it.
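As a rough check on that “more than a quarter” figure, the approximate shares quoted above can simply be stacked up. The percentages below are read off the text (BP’s share is an assumption, since the piece only says it is “just behind” Exxon) and are not the report’s exact reserve data; this is a minimal back-of-the-envelope sketch in Python.

# Back-of-the-envelope check of the six-firm shares quoted above.
# Percentages are approximations from the text, not the report's exact figures.
TWO_DEGREE_BUDGET_GT = 565          # the "565 limit" discussed earlier, in gigatons

shares = {                          # approximate share of the remaining budget, percent
    "Exxon": 7.0,                   # "more than seven percent"
    "BP": 6.0,                      # "just behind" Exxon (assumed)
    "Gazprom": 3.5,                 # each "between three and four percent"
    "Chevron": 3.5,
    "ConocoPhillips": 3.5,
    "Shell": 3.5,
}

total_pct = sum(shares.values())
print(total_pct)                                      # ~27 percent: "more than a quarter"
print(round(TWO_DEGREE_BUDGET_GT * total_pct / 100))  # roughly 150 gigatons of the budget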

They’re clearly cognizant of global warming – they employ some of the world’s best scientists, after all, and they’re bidding on all those oil leases made possible by the staggering melt of Arctic ice. And yet they relentlessly search for more hydrocarbons – in early March, Exxon CEO Rex Tillerson told Wall Street analysts that the company plans to spend $37 billion a year through 2016 (about $100 million a day) searching for yet more oil and gas.

There’s not a more reckless man on the planet than Tillerson. Late last month, on the same day the Colorado fires reached their height, he told a New York audience that global warming is real, but dismissed it as an “engineering problem” that has “engineering solutions.” Such as? “Changes to weather patterns that move crop-production areas around – we’ll adapt to that.” This in a week when Kentucky farmers were reporting that corn kernels were “aborting” in record heat, threatening a spike in global food prices. “The fear factor that people want to throw out there to say, ‘We just have to stop this,’ I do not accept,” Tillerson said. Of course not – if he did accept it, he’d have to keep his reserves in the ground. Which would cost him money. It’s not an engineering problem, in other words – it’s a greed problem.

You could argue that this is simply in the nature of these companies – that having found a profitable vein, they’re compelled to keep mining it, more like efficient automatons than people with free will. But as the Supreme Court has made clear, they are people of a sort. In fact, thanks to the size of its bankroll, the fossil-fuel industry has far more free will than the rest of us. These companies don’t simply exist in a world whose hungers they fulfill – they help create the boundaries of that world.

Left to our own devices, citizens might decide to regulate carbon and stop short of the brink; according to a recent poll, nearly two-thirds of Americans would back an international agreement that cut carbon emissions 90 percent by 2050. But we aren’t left to our own devices. The Koch brothers, for instance, have a combined wealth of $50 billion, meaning they trail only Bill Gates on the list of richest Americans. They’ve made most of their money in hydrocarbons, they know any system to regulate carbon would cut those profits, and they reportedly plan to lavish as much as $200 million on this year’s elections. In 2009, for the first time, the U.S. Chamber of Commerce surpassed both the Republican and Democratic National Committees on political spending; the following year, more than 90 percent of the Chamber’s cash went to GOP candidates, many of whom deny the existence of global warming. Not long ago, the Chamber even filed a brief with the EPA urging the agency not to regulate carbon – should the world’s scientists turn out to be right and the planet heats up, the Chamber advised, “populations can acclimatize to warmer climates via a range of behavioral, physiological and technological adaptations.” As radical goes, demanding that we change our physiology seems right up there.

Environmentalists, understandably, have been loath to make the fossil-fuel industry their enemy, respecting its political power and hoping instead to convince these giants that they should turn away from coal, oil and gas and transform themselves more broadly into “energy companies.” Sometimes that strategy appeared to be working – emphasis on appeared. Around the turn of the century, for instance, BP made a brief attempt to restyle itself as “Beyond Petroleum,” adopting a logo that looked like the sun and sticking solar panels on some of its gas stations. But its investments in alternative energy were never more than a tiny fraction of its budget for hydrocarbon exploration, and after a few years, many of those were wound down as new CEOs insisted on returning to the company’s “core business.” In December, BP finally closed its solar division. Shell shut down its solar and wind efforts in 2009. The five biggest oil companies have made more than $1 trillion in profits since the millennium – there’s simply too much money to be made on oil and gas and coal to go chasing after zephyrs and sunbeams.

Much of that profit stems from a single historical accident: Alone among businesses, the fossil-fuel industry is allowed to dump its main waste, carbon dioxide, for free. Nobody else gets that break – if you own a restaurant, you have to pay someone to cart away your trash, since piling it in the street would breed rats. But the fossil-fuel industry is different, and for sound historical reasons: Until a quarter-century ago, almost no one knew that CO2 was dangerous. But now that we understand that carbon is heating the planet and acidifying the oceans, its price becomes the central issue.

If you put a price on carbon, through a direct tax or other methods, it would enlist markets in the fight against global warming. Once Exxon has to pay for the damage its carbon is doing to the atmosphere, the price of its products would rise. Consumers would get a strong signal to use less fossil fuel – every time they stopped at the pump, they’d be reminded that you don’t need a semimilitary vehicle to go to the grocery store. The economic playing field would now be a level one for nonpolluting energy sources. And you could do it all without bankrupting citizens – a so-called “fee-and-dividend” scheme would put a hefty tax on coal and gas and oil, then simply divide up the proceeds, sending everyone in the country a check each month for their share of the added costs of carbon. By switching to cleaner energy sources, most people would actually come out ahead.
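To see how the arithmetic of such a fee-and-dividend scheme works out, here is a minimal sketch with purely illustrative numbers (the article specifies neither a fee level nor emission figures): a flat fee per ton of CO2 is collected and divided evenly, so anyone whose fossil-fuel footprint is below the national average comes out ahead.

# A minimal fee-and-dividend sketch with purely illustrative numbers;
# the article does not specify a fee level or emission figures.
FEE_PER_TON = 50.0            # hypothetical fee, dollars per ton of CO2
NATIONAL_EMISSIONS = 5.0e9    # hypothetical taxed emissions, tons of CO2 per year
POPULATION = 315e6            # rough 2012 U.S. population

monthly_dividend = FEE_PER_TON * NATIONAL_EMISSIONS / POPULATION / 12

def net_effect(personal_tons_per_year):
    """Positive means the household comes out ahead under the scheme."""
    monthly_fee_passed_through = FEE_PER_TON * personal_tons_per_year / 12
    return monthly_dividend - monthly_fee_passed_through

average = NATIONAL_EMISSIONS / POPULATION             # ~16 tons per person per year
print(round(monthly_dividend, 2))                     # everyone's monthly check
print(round(net_effect(0.7 * average), 2))            # below-average user: comes out ahead
print(round(net_effect(1.5 * average), 2))            # heavy user: pays more than the check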

There’s only one problem: Putting a price on carbon would reduce the profitability of the fossil-fuel industry. After all, the answer to the question “How high should the price of carbon be?” is “High enough to keep those carbon reserves that would take us past two degrees safely in the ground.” The higher the price on carbon, the more of those reserves would be worthless. The fight, in the end, is over whether the industry will succeed in keeping its special pollution break alive past the point of climate catastrophe, or whether, in the economists’ parlance, we’ll make them internalize those externalities.

It’s not clear, of course, that the power of the fossil-fuel industry can be broken. The U.K. analysts who wrote the Carbon Tracker report and drew attention to these numbers had a relatively modest goal – they simply wanted to remind investors that climate change poses a very real risk to the stock prices of energy companies. Say something so big finally happens (a giant hurricane swamps Manhattan, a megadrought wipes out Midwest agriculture) that even the political power of the industry is inadequate to restrain legislators, who manage to regulate carbon. Suddenly those Chevron reserves would be a lot less valuable, and the stock would tank. Given that risk, the Carbon Tracker report warned investors to lessen their exposure, hedge it with some big plays in alternative energy.

“The regular process of economic evolution is that businesses are left with stranded assets all the time,” says Nick Robins, who runs HSBC’s Climate Change Centre. “Think of film cameras, or typewriters. The question is not whether this will happen. It will. Pension systems have been hit by the dot-com and credit crunch. They’ll be hit by this.” Still, it hasn’t been easy to convince investors, who have shared in the oil industry’s record profits. “The reason you get bubbles,” sighs Leaton, “is that everyone thinks they’re the best analyst – that they’ll go to the edge of the cliff and then jump back when everyone else goes over.”

So pure self-interest probably won’t spark a transformative challenge to fossil fuel. But moral outrage just might – and that’s the real meaning of this new math. It could, plausibly, give rise to a real movement.

Once, in recent corporate history, anger forced an industry to make basic changes. That was the campaign in the 1980s demanding divestment from companies doing business in South Africa. It rose first on college campuses and then spread to municipal and state governments; 155 campuses eventually divested, and by the end of the decade, more than 80 cities, 25 states and 19 counties had taken some form of binding economic action against companies connected to the apartheid regime. “The end of apartheid stands as one of the crowning accomplishments of the past century,” as Archbishop Desmond Tutu put it, “but we would not have succeeded without the help of international pressure,” especially from “the divestment movement of the 1980s.”

The fossil-fuel industry is obviously a tougher opponent, and even if you could force the hand of particular companies, you’d still have to figure out a strategy for dealing with all the sovereign nations that, in effect, act as fossil-fuel companies. But the link for college students is even more obvious in this case. If their college’s endowment portfolio has fossil-fuel stock, then their educations are being subsidized by investments that guarantee they won’t have much of a planet on which to make use of their degree. (The same logic applies to the world’s largest investors, pension funds, which are also theoretically interested in the future – that’s when their members will “enjoy their retirement.”) “Given the severity of the climate crisis, a comparable demand that our institutions dump stock from companies that are destroying the planet would not only be appropriate but effective,” says Bob Massie, a former anti-apartheid activist who helped found the Investor Network on Climate Risk. “The message is simple: We have had enough. We must sever the ties with those who profit from climate change – now.”

Movements rarely have predictable outcomes. But any campaign that weakens the fossil-fuel industry’s political standing clearly increases the chances of retiring its special breaks. Consider President Obama’s signal achievement in the climate fight, the large increase he won in mileage requirements for cars. Scientists, environmentalists and engineers had advocated such policies for decades, but until Detroit came under severe financial pressure, it was politically powerful enough to fend them off. If people come to understand the cold, mathematical truth – that the fossil-fuel industry is systematically undermining the planet’s physical systems – it might weaken it enough to matter politically. Exxon and their ilk might drop their opposition to a fee-and-dividend solution; they might even decide to become true energy companies, this time for real.

Even if such a campaign is possible, however, we may have waited too long to start it. To make a real difference – to keep us under a temperature increase of two degrees – you’d need to change carbon pricing in Washington, and then use that victory to leverage similar shifts around the world. At this point, what happens in the U.S. is most important for how it will influence China and India, where emissions are growing fastest. (In early June, researchers concluded that China has probably under-reported its emissions by up to 20 percent.) The three numbers I’ve described are daunting – they may define an essentially impossible future. But at least they provide intellectual clarity about the greatest challenge humans have ever faced. We know how much we can burn, and we know who’s planning to burn more. Climate change operates on a geological scale and time frame, but it’s not an impersonal force of nature; the more carefully you do the math, the more thoroughly you realize that this is, at bottom, a moral issue; we have met the enemy and they is Shell.

Meanwhile the tide of numbers continues. The week after the Rio conference limped to its conclusion, Arctic sea ice hit the lowest level ever recorded for that date. Last month, on a single weekend, Tropical Storm Debby dumped more than 20 inches of rain on Florida – the earliest the season’s fourth-named cyclone has ever arrived. At the same time, the largest fire in New Mexico history burned on, and the most destructive fire in Colorado’s annals claimed 346 homes in Colorado Springs – breaking a record set the week before in Fort Collins. This month, scientists issued a new study concluding that global warming has dramatically increased the likelihood of severe heat and drought – days after a heat wave across the Plains and Midwest broke records that had stood since the Dust Bowl, threatening this year’s harvest. You want a big number? In the course of this month, a quadrillion kernels of corn need to pollinate across the grain belt, something they can’t do if temperatures remain off the charts. Just like us, our crops are adapted to the Holocene, the 11,000-year period of climatic stability we’re now leaving… in the dust.

This story is from the August 2nd, 2012 issue of Rolling Stone.

Climate models that predict more droughts win further scientific support (Washington Post)

The drought of 2012: It has been more than a half-century since a drought this extensive hit the United States, NOAA reported July 16. The effects are growing and may cost the U.S. economy $50 billion.

By Hristio Boytchev, Published: August 13

The United States will suffer a series of severe droughts in the next two decades, according to a new study published in the journal Nature Climate Change. Moreover, global warming will play an increasingly important role in their frequency and severity, claims Aiguo Dai, the study’s author.

His findings bolster conclusions from climate models used by researchers around the globe that have predicted severe and widespread droughts in coming decades over many land areas. Those models had been questioned because they did not fully reflect actual drought patterns when they were applied to conditions in the past. However, using a statistical method with data about sea surface temperatures, Dai, a climate researcher at the federally funded National Center for Atmospheric Research, found that the model accurately portrayed historic climate events.

“We can now be more confident that the models are correct,” Dai said, “but unfortunately, their predictions are dire.”

In the United States, the main culprit currently is a cold cycle in the surface temperature of the eastern Pacific Ocean. It decreases precipitation, especially over the western part of the country. “We had a similar situation in the Dust Bowl era of the 1930s,” said Dai, who works at the research center’s headquarters in Boulder, Colo.

While current models cannot predict the severity of a drought in a given year, they can assess its probability. “Considering the current trend, I was not surprised by the 2012 drought,” Dai said.

The Pacific cycle is expected to last for the next one or two decades, bringing more aridity. On top of that comes climate change. “Global warming has a subtle effect on drought at the moment,” Dai said, “but by the end of the cold cycle, global warming might take over and continue to cause dryness.”

While the variations in sea temperatures primarily influence precipitation, global warming is expected to bring droughts by increasing evaporation over land. Additionally, Dai predicts more dryness in South America, Southern Europe and Africa.

“The similarity between the observed droughts and the projections from climate models here is striking,” said Peter Cox, a professor of climate system dynamics at Britain’s University of Exeter, who was not involved in Dai’s research. He added that he agrees the increasing drought suggested by the latest models is consistent with man-made climate change.

Lost Letter Experiment Suggests Wealthy London Neighborhoods Are ‘More Altruistic’ (Science Daily)

ScienceDaily (Aug. 15, 2012) — Neighbourhood income deprivation has a strong negative effect on altruistic behaviour when measured by a ‘lost letter’ experiment, according to new UCL research published August 15 in PLoS One.

Researchers from UCL Anthropology used the lost letter technique to measure altruism across 20 London neighbourhoods by dropping 300 letters on the pavement and recording whether they arrived at their destination. The stamped letters were addressed by hand, with a gender-neutral name, to a study author’s home address, and were dropped face-up on rain-free weekdays.

The results show a strong negative effect of neighbourhood income deprivation on altruistic behaviour, with an average of 87% of letters dropped in the wealthier neighbourhoods being returned compared to only an average 37% return rate in poorer neighbourhoods.

Co-author Jo Holland said: “This is the first large scale study investigating cooperation in an urban environment using the lost letter technique. This technique, first used in the 1960s by the American social psychologist Stanley Milgram, remains one of the best ways of measuring truly altruistic behaviour, as returning the letter doesn’t benefit that person and actually incurs the small hassle of taking the letter to a post box.”

Co-author Professor Ruth Mace added: “Our study attempts to understand how the socio-economic characteristics of a neighbourhood affect the likelihood of people in a neighbourhood acting altruistically towards a stranger. The results show a clear trend, with letters dropped in the poorest neighbourhoods having 91% lower odds of being returned than letters dropped in the wealthiest neighbourhoods. This suggests that those living in poor neighbourhoods are less inclined to behave altruistically toward their neighbours.”
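For readers wondering how an 87 percent versus 37 percent return rate becomes “91% lower odds,” here is a quick back-of-the-envelope conversion in Python. Note that the paper’s figure comes from a model comparing the poorest with the wealthiest neighbourhoods, so the average return rates quoted above only approximate it.

# Rough check: converting return rates into odds and an odds ratio.
# Uses the average return rates quoted above; the paper's 91% figure
# compares the poorest with the wealthiest neighbourhoods, so this is
# only an approximation of that result.
def odds(p):
    """Odds corresponding to a probability p."""
    return p / (1 - p)

odds_wealthy = odds(0.87)                    # ~6.7 to 1 in favour of return
odds_poor = odds(0.37)                       # ~0.59 to 1
odds_ratio = odds_poor / odds_wealthy
print(round(odds_ratio, 2))                  # ~0.09
print(f"{(1 - odds_ratio):.0%} lower odds")  # ~91% lower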

As well as measuring the number of letters returned, the researchers also looked at how other neighbourhood characteristics may help to explain the variation in altruistic behaviour — including ethnic composition and population density — but did not find them to be good predictors of lost letter return.

Corresponding author Antonio Silva said: “The fact that ethnic composition does not play a role in the likelihood of a letter being returned is particularly interesting, as other studies have suggested that ethnic mixing negatively affects social cohesion, but in our sampled London neighbourhoods this does not appear to be true.

“The level of altruism observed in a population is likely to vary according to its context. Our hypothesis that area-level socio-economic characteristics could determine the levels of altruism found in individuals living in an area is confirmed by our results. Our overall findings replicate and expand on previous studies which use similar methodology.

“We show in this study that individuals living in poor neighbourhoods are less altruistic than individuals in wealthier neighbourhoods. However, the effect of income deprivation may be confounded by crime, as the poorer neighbourhoods tend to have higher rates of crime, which may lead to people in those neighbourhoods being generally more suspicious and therefore less likely to pick up a lost letter.

“Further research should focus on attempting to disentangle these two factors, possibly by comparing equally deprived neighbourhoods with different levels of crime. Although this study uses only one measure of altruism and therefore we should be careful in interpreting these findings, it does give us an interesting perspective on altruism in an urban context and provides a sound experimental model on which to base future studies.”

Computer program mimics human evolution (Fapesp)

Software developed at USP’s São Carlos campus creates and selects programs that generate Decision Trees, tools capable of making predictions. The research won awards in the United States at the world’s largest evolutionary computation conference.

August 16, 2012

By Karina Toledo

Agência FAPESP – Decision Trees are computational tools that give machines the ability to make predictions based on the analysis of historical data. The technique can, for example, support medical diagnosis or the risk analysis of financial investments.

But getting the best prediction requires the best Decision Tree-generating program. To achieve that goal, researchers at the Institute of Mathematics and Computer Sciences (ICMC) of the University of São Paulo (USP), in São Carlos, drew inspiration from Charles Darwin’s theory of evolution.

“We developed an evolutionary algorithm, that is, one that mimics the process of human evolution to generate solutions,” said Rodrigo Coelho Barros, a doctoral student at ICMC’s Bio-inspired Computing Laboratory (BioCom) and a FAPESP fellow.

Evolutionary computation, Barros explained, is one of several bio-inspired techniques, that is, techniques that look to nature for solutions to computational problems. “It is remarkable how nature finds solutions to extremely complicated problems. There is no doubt that we need to learn from it,” Barros said.

According to Barros, the software developed during his doctorate can automatically create Decision Tree-generating programs. To do so, it performs random crossovers between the code of existing programs, producing “offspring.”

“These ‘offspring’ may occasionally undergo mutations and evolve. After a while, the evolved Decision Tree-generating programs are expected to get better and better, and our algorithm selects the best of all,” Barros said.

But while natural selection in the human species takes hundreds or even thousands of years, in computing the process lasts only a few hours, depending on the problem to be solved. “We set one hundred generations as the limit of the evolutionary process,” Barros said.
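To make the idea concrete, here is a minimal sketch of an evolutionary loop of this kind in Python. It is not the system described in the article: where that software evolves the design of full Decision Tree-generating algorithms, this simplified illustration only evolves a few hyperparameters of an off-the-shelf decision-tree learner; all names and numbers are illustrative.

# A minimal, illustrative evolutionary loop. The authors' system evolves full
# decision-tree *algorithm designs*; here, as a simplification, we only evolve
# a few hyperparameters of scikit-learn's DecisionTreeClassifier.
import random
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
random.seed(0)

def random_individual():
    """One candidate 'recipe' for building a decision tree."""
    return {
        "criterion": random.choice(["gini", "entropy"]),
        "max_depth": random.randint(2, 10),
        "min_samples_split": random.randint(2, 20),
    }

def fitness(ind):
    """Cross-validated accuracy of the tree built from this recipe."""
    clf = DecisionTreeClassifier(**ind, random_state=0)
    return cross_val_score(clf, X, y, cv=5).mean()

def crossover(a, b):
    """Child inherits each gene from one parent at random."""
    return {k: random.choice([a[k], b[k]]) for k in a}

def mutate(ind, rate=0.2):
    """Occasionally replace a gene with a fresh random value."""
    fresh = random_individual()
    return {k: (fresh[k] if random.random() < rate else v) for k, v in ind.items()}

population = [random_individual() for _ in range(20)]
for generation in range(100):          # the article mentions a 100-generation limit
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:10]              # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print(best, round(fitness(best), 3))   # the "best of all", as Barros describes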

Artificial intelligence

In computer science, a heuristic is the name given to a system’s capacity to innovate and to develop techniques for reaching a given goal.

The software developed by Barros belongs to the field of hyper-heuristics, a recent topic in evolutionary computation whose goal is the automatic generation of heuristics tailored to a specific application or set of applications.

“It is a preliminary step toward the great goal of artificial intelligence: creating machines capable of developing solutions to problems without being explicitly programmed to do so,” Barros explained.

The work gave rise to the paper “A Hyper-Heuristic Evolutionary Algorithm for Automatically Designing Decision-Tree Algorithms,” which won awards in three categories at the Genetic and Evolutionary Computation Conference (GECCO), the world’s largest evolutionary computation conference, held in July in Philadelphia, United States.

In addition to Barros, the paper’s authors are professors André Carlos Ponce de Leon Ferreira de Carvalho, who supervises the research at ICMC, Márcio Porto Basgalupp of the Federal University of São Paulo (Unifesp), and Alex Freitas of the University of Kent, in the United Kingdom, who took on the co-supervision.

The authors were invited to submit the paper to the Evolutionary Computation Journal, published by the Massachusetts Institute of Technology (MIT). “The work will still go through review, but since it was submitted by invitation, it has a strong chance of being accepted,” Barros said.

The research, which is expected to be completed only in 2013, also gave rise to a paper published by invitation in the Journal of the Brazilian Computer Society, after being chosen as best paper at the 2011 Encontro Nacional de Inteligência Artificial (Brazilian National Meeting on Artificial Intelligence).

Another paper, presented at the 11th International Conference on Intelligent Systems Design and Applications, held in Spain in 2011, led to an invitation for publication in the journal Neurocomputing.

Why Are Elderly Duped? Area in Brain Where Doubt Arises Changes With Age (Science Daily)

ScienceDaily (Aug. 16, 2012) — Everyone knows the adage: “If something sounds too good to be true, then it probably is.” Why, then, do some people fall for scams and why are older folks especially prone to being duped?

An answer, it seems, is because a specific area of the brain has deteriorated or is damaged, according to researchers at the University of Iowa. By examining patients with various forms of brain damage, the researchers report they’ve pinpointed the precise location in the human brain, called the ventromedial prefrontal cortex, that controls belief and doubt, and which explains why some of us are more gullible than others.

“The current study provides the first direct evidence beyond anecdotal reports that damage to the vmPFC (ventromedial prefrontal cortex) increases credulity. Indeed, this specific deficit may explain why highly intelligent vmPFC patients can fall victim to seemingly obvious fraud schemes,” the researchers wrote in the paper published in a special issue of the journal Frontiers in Neuroscience.

A study conducted for the National Institute of Justice in 2009 concluded that nearly 12 percent of Americans 60 and older had been exploited financially by a family member or a stranger. And, a report last year by insurer MetLife Inc. estimated the annual loss by victims of elder financial abuse at $2.9 billion.

The authors point out their research can explain why the elderly are vulnerable.

“In our theory, the more effortful process of disbelief (to items initially believed) is mediated by the vmPFC, which, in old age, tends to disproportionately lose structural integrity and associated functionality,” they wrote. “Thus, we suggest that vulnerability to misleading information, outright deception and fraud in older adults is the specific result of a deficit in the doubt process that is mediated by the vmPFC.”

The ventromedial prefrontal cortex is an oval-shaped lobe about the size of a softball lodged in the front of the human head, right above the eyes. It’s part of a larger area known to scientists since the extraordinary case of Phineas Gage that controls a range of emotions and behaviors, from impulsivity to poor planning. But brain scientists have struggled to identify which regions of the prefrontal cortex govern specific emotions and behaviors, including the cognitive seesaw between belief and doubt.

The UI team drew from its Neurological Patient Registry, which was established in 1982 and has more than 500 active members with various forms of damage to one or more regions in the brain. From that pool, the researchers chose 18 patients with damage to the ventromedial prefrontal cortex and 21 patients with damage outside the prefrontal cortex. Those patients, along with people with no brain damage, were shown advertisements mimicking ones flagged as misleading by the Federal Trade Commission to test how much they believed or doubted the ads. The deception in the ads was subtle; for example, an ad for “Legacy Luggage” that trumpets the gear as “American Quality” turned on the consumer’s ability to distinguish whether the luggage was manufactured in the United States versus inspected in the country.

Each participant was asked to gauge how much he or she believed the deceptive ad and how likely he or she would buy the item if it were available. The researchers found that the patients with damage to the ventromedial prefrontal cortex were roughly twice as likely to believe a given ad, even when given disclaimer information pointing out it was misleading. And, they were more likely to buy the item, regardless of whether misleading information had been corrected.

“Behaviorally, they fail the test to the greatest extent,” says Natalie Denburg, assistant professor in neurology who devised the ad tests. “They believe the ads the most, and they demonstrate the highest purchase intention. Taken together, it makes them the most vulnerable to being deceived.” She added the sample size is small and further studies are warranted.

Apart from being damaged, the ventromedial prefrontal cortex begins to deteriorate as people reach age 60 and older, although the onset and the pace of deterioration varies, says Daniel Tranel, neurology and psychology professor at the UI and corresponding author on the paper. He thinks the finding will enable doctors, caregivers, and relatives to be more understanding of decision making by the elderly.

“And maybe protective,” Tranel adds. “Instead of saying, ‘How would you do something silly and transparently stupid,’ people may have a better appreciation of the fact that older people have lost the biological mechanism that allows them to see the disadvantageous nature of their decisions.”

The finding corroborates an idea studied by the paper’s first author, Erik Asp, who wondered why damage to the prefrontal cortex would impair the ability to doubt but not the initial belief as well. Asp created a model, which he called the False Tagging Theory, to separate the two notions and confirm that doubt is housed in the prefrontal cortex.

“This study is strong empirical evidence suggesting that the False Tagging Theory is correct,” says Asp, who earned his doctorate in neuroscience from the UI in May and is now at the University of Chicago.

Kenneth Manzel, Bryan Koestner, and Catherine Cole from the UI are contributing authors on the paper. The National Institute on Aging and the National Institute of Neurological Disorders and Stroke funded the research.