Tag archive: Mathematics

Tool Accurately Predicts Whether A Kickstarter Project Will Bomb (Popular Science)

At about 76 percent accuracy, a new prediction model is the best yet. “Your chances of success are at 8 percent. Commence panic.”

By Colin Lecher

Posted 10.16.2013 at 2:00 pm

 

Ouya, A Popular Kickstarter Project 

Well, here’s something either very discouraging or very exciting for crowdfunding hopefuls: a Swiss team can predict, with about 76 percent accuracy and within only four hours of launch, whether a Kickstarter project will succeed.

The team, from École Polytechnique Fédérale de Lausanne, laid out its system in a paper presented at the Conference on Online Social Networks. By mining data on more than 16,000 Kickstarter campaigns and more than 1.3 million users, the researchers built a prediction model based on a project’s popularity on Twitter, the rate at which it is raising money, how many first-time backers it has, and which projects its backers have previously supported.
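
A minimal sketch of how a dynamic predictor might combine those signals, assuming a simple logistic score; the feature names, weights and campaign snapshot below are made up for illustration, not the EPFL team's actual model or coefficients.

```python
import math

def success_probability(features, weights, bias=-1.0):
    """Logistic score combining early-campaign signals into a success probability."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights -- the real model is trained on ~16,000 campaigns.
weights = {
    "tweets_per_hour": 0.08,         # Twitter popularity
    "pledges_per_hour": 0.05,        # rate at which money is coming in
    "first_time_backer_share": -1.5, # a campaign carried by first-time backers is a weaker signal
    "backer_past_successes": 0.9,    # backers with a history of backing winners
}

# Snapshot of a campaign four hours after launch (invented numbers).
campaign = {
    "tweets_per_hour": 12,
    "pledges_per_hour": 30,
    "first_time_backer_share": 0.4,
    "backer_past_successes": 2.1,
}

print(f"Estimated chance of success: {success_probability(campaign, weights):.0%}")
```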

A previous, similar model built by Americans could predict a Kickstarter project’s success with 68 percent accuracy. That is impressive, but the Swiss model has another advantage: it is dynamic. While the American model could only make a prediction before a project launched, the Swiss one monitors campaigns in real time. The team has even built a tool, called Sidekick, that tracks live projects and displays their chances of success.

Other sites, like Kicktraq, offer similar services, but the predictions aren’t as accurate as the Swiss team claims theirs are. If you peruse Sidekick, you can see how confident the algorithm is in its pass/fail predictions: almost all of the projects are either above 90 percent or below 10 percent. Sort of scary, probably, if you’re launching a project. Although there’s always a chance you could pull yourself out of the hole, it’s like a genie asking if you want to know how you die: Do you really want that information?

[Guardian]

The Reasons Behind Crime (Science Daily)

Oct. 10, 2013 — More punishment does not necessarily lead to less crime, say researchers at ETH Zurich who have been studying the origins of crime with a computer model. In order to fight crime, more attention should be paid to the social and economic backgrounds that encourage crime.

Whether a person turns criminal and commits a robbery depends greatly on the socio-economic circumstances in which he lives. (Credit: © koszivu / Fotolia)

People have been stealing, betraying others and committing murder for ages. In fact, humans have never succeeded in eradicating crime, although — according to the rational choice theory in economics — this should be possible in principle. The theory states that humans turn criminal if it is worthwhile. Stealing or evading taxes, for instance, pays off if the prospects of unlawful gains outweigh the expected punishment. Therefore, if a state sets the penalties high enough and ensures that lawbreakers are brought to justice, it should be possible to eliminate crime completely.
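
To make that rational-choice calculus explicit, here is a one-function rendering of it; the numbers are illustrative and not taken from the study.

```python
def crime_pays(expected_gain, detection_probability, penalty):
    """Rational-choice rule: offend only if the expected gain exceeds the expected punishment."""
    return expected_gain > detection_probability * penalty

# Illustrative values: a 1,000-euro gain, a 10% chance of being caught, a 5,000-euro fine.
print(crime_pays(expected_gain=1000, detection_probability=0.1, penalty=5000))  # True: 1000 > 500
```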

This theory is largely oversimplified, says Dirk Helbing, a professor of sociology. The USA, for example, often has far more drastic penalties than European countries. But despite the death penalty in some American states, the homicide rate in the USA is five times higher than in Western Europe. Furthermore, ten times more people sit in American prisons than in many European countries. More repression, however, can sometimes even lead to more crime, says Helbing. Ever since the USA declared the “war on terror” around the globe, the number of terrorist attacks worldwide has increased, not fallen. “The classic approach, where criminals merely need to be pursued and punished more strictly to curb crime, often does not work.” Nonetheless, this approach dominates the public discussion.

More realistic model

In order to better understand the origins of crime, Helbing and his colleagues have developed a new agent-based model that takes the network of social interactions into account and is more realistic than previous models. Like many earlier models it includes criminals and law enforcers, but it adds honest citizens as a third group. Parameters such as the size of penalties and the cost of prosecution can be varied in the model. It also considers spatial dependencies: representatives of the three groups do not interact with one another randomly, but only if they encounter each other in space and time. In particular, individual agents imitate the behaviour of agents from other groups if that behaviour appears more successful.
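
A toy sketch in the spirit of that description, assuming agents on a ring, a simplified inspection-game payoff scheme and Fermi-rule imitation; it illustrates the agent-based modelling style rather than reproducing Helbing's actual model or parameters.

```python
import math
import random

random.seed(1)

STRATEGIES = ["honest", "criminal", "inspector"]
N = 200          # agents placed on a ring, so interactions are local
ROUNDS = 500

# Simplified, illustrative payoffs (not the paper's values):
GAIN = 1.0           # criminal's gain from an unpunished crime
LOSS = 1.0           # honest citizen's loss when victimised
FINE = 2.0           # fine paid by a caught criminal, collected by the inspector
INSPECT_COST = 0.5   # cost of carrying out an inspection

def payoff(me, other):
    if me == "criminal":
        return -FINE if other == "inspector" else GAIN
    if me == "inspector":
        return (FINE - INSPECT_COST) if other == "criminal" else -INSPECT_COST
    # honest citizen
    return -LOSS if other == "criminal" else 0.0

agents = [random.choice(STRATEGIES) for _ in range(N)]

for _ in range(ROUNDS):
    i = random.randrange(N)
    j = (i + random.choice([-1, 1])) % N          # interact with a spatial neighbour
    pi, pj = payoff(agents[i], agents[j]), payoff(agents[j], agents[i])
    # Fermi rule: imitate the neighbour with a probability that grows with the payoff gap.
    if random.random() < 1.0 / (1.0 + math.exp(-(pj - pi))):
        agents[i] = agents[j]

print({s: agents.count(s) for s in STRATEGIES})
```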

Cycles of crime

Using the model, the scientists were able to demonstrate that tougher punishments do not necessarily lead to less crime, and when they do, the reduction is not proportional to the extra punishment effort. The researchers were also able to simulate how crime can suddenly break out and calm down again. Like the pork cycle known from economics or predator-prey cycles from ecology, crime is cyclical as well. This explains observations made, for instance, in the USA: according to the FBI’s Uniform Crime Reporting Program, cyclical changes in the frequency of criminal offences can be found in several American states. “If a state increases the investments in its punitive system to an extent that is no longer cost-effective, politicians will cut the law enforcement budget,” says Helbing. “As a result, there is more room for crime to spread again.”

“Many crimes have a socio-economic background”

But is there a different way of combating crime, if not through repression? The focus should be on the socio-economic context, says Helbing. As we know from milieu theory in sociology, the environment plays a pivotal role in the behaviour of individuals. The majority of criminal acts have a social background, Helbing argues. For example, if people feel that all their friends and neighbours are cheating the state, they will inevitably wonder whether they should be the last honest ones to fill in their tax declarations correctly.

“If we want to reduce the crime rate, we have to keep an eye on the socio-economic circumstances under which people live,” says Helbing. We must not confuse this with soft justice. However, a state’s response to crime has to be differentiated: besides the police and court, economic and social institutions are relevant as well — and, in fact, every individual when it comes to the integration of others. “Improving social conditions and integrating people socially can probably combat crime much more effectively than building new prisons.”

Journal Reference:

  1. Matjaž Perc, Karsten Donnay, Dirk Helbing. Understanding Recurrent Crime as System-Immanent Collective Behavior. PLoS ONE, 2013; 8 (10): e76063. DOI: 10.1371/journal.pone.0076063

Unlocking Biology With Math (Science Daily)

Oct. 7, 2013 — Scientists at USC have created a mathematical model that explains and predicts the biological process that creates antibody diversity — the phenomenon that keeps us healthy by generating robust immune systems through hypermutation.

The work is a collaboration between Myron Goodman, professor of biological sciences and chemistry at the USC Dornsife College of Letters, Arts and Sciences; and Chi Mak, professor of chemistry at USC Dornsife.

“To me, it was the holy grail,” Goodman said. “We can now predict the motion of a key enzyme that initiates hypermutations in immunoglobulin (Ig) genes.”

Goodman first described the process that creates antibody diversity two years ago. In short, an enzyme called “activation-induced deoxycytidine deaminase” (or AID) moves up and down single-stranded DNA that encodes the pattern for antibodies and sporadically alters the strand by converting one nitrogen base to another, which is called “deamination.” The change creates DNA with a different pattern — a mutation.
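
A small simulation in the spirit of that description, assuming an unbiased random walk along the strand and a fixed deamination probability at each cytosine visited; the toy sequence and probability are invented, not taken from Goodman's data or from the Mak model.

```python
import random

random.seed(0)

dna = "ATCGGCTACCGTTACGCTAGCCGTATCGC"   # toy single-stranded DNA
P_DEAMINATE = 0.05                      # chance of converting a C on each visit (illustrative)

def scan(dna, steps=2000):
    """Random walk of an AID-like enzyme along ssDNA, sporadically deaminating C -> U."""
    strand = list(dna)
    pos = random.randrange(len(strand))
    for _ in range(steps):
        pos = max(0, min(len(strand) - 1, pos + random.choice([-1, 1])))
        if strand[pos] == "C" and random.random() < P_DEAMINATE:
            strand[pos] = "U"           # deamination changes the base, i.e. a mutation
    return "".join(strand)

mutated = scan(dna)
print(dna)
print(mutated)
print("mutated positions:", [i for i, (a, b) in enumerate(zip(dna, mutated)) if a != b])
```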

These mutations, which AID creates about a million-fold more often than they would otherwise occur, generate antibodies of all different sorts — giving you protection against germs that your body hasn’t even seen yet.

“It’s why when I sneeze, you don’t die,” Goodman said.

In studying the seemingly random motion of AID up and down DNA, Goodman wanted to understand why it moved the way it did, and why it deaminated in some places much more than others.

“We looked at the raw data and asked what the enzyme was doing to create that,” Goodman said. He and his team developed statistical models whose probabilities matched the data reasonably well, and they were even able to trace individual enzymes visually and watch them work. But these were all just approximations, albeit reasonable ones.

Collaborating with Mak, however, offered something better: a rigorous mathematical model that describes the enzyme’s motion and interaction with the DNA and an algorithm for directly reading out AID’s dynamics from the mutation patterns.

At the time, Mak was working on the mathematics of quantum mechanics. Using similar techniques, Mak was able to help generate the model, which has been shown through testing to be accurate.

“Mathematics is the universal language behind physical science, but its central role in interpreting biology is just beginning to be recognized,” Mak said. Goodman and Mak collaborated on the research with Phuong Pham, assistant research professor, and Samir Afif, a graduate student at USC Dornsife. An article on their work, which will appear in print in the Journal of Biological Chemistry on October 11, was selected by the journal as a “paper of the week.”

Next, the team will generalize the mathematical model to study the “real life” action of AID as it initiates mutations during the transcription of Ig variable and constant regions, which is the process needed to generate immunodiversity in human B-cells.

Journal Reference:

  1. C. H. Mak, P. Pham, S. A. Afif, M. F. Goodman. A Mathematical Model for Scanning and Catalysis on Single-stranded DNA, Illustrated with Activation-induced Deoxycytidine Deaminase. Journal of Biological Chemistry, 2013; DOI: 10.1074/jbc.M113.506550

Vikings May Have Been More Social Than Savage (Science Daily)

Oct. 1, 2013 — Academics at Coventry University have uncovered complex social networks within age-old Icelandic sagas, which challenge the stereotypical image of Vikings as unworldly, violent savages.

Replica of Viking ship. Academics have uncovered complex social networks within age-old Icelandic sagas, which challenge the stereotypical image of Vikings as unworldly, violent savages. (Credit: © pemabild / Fotolia)

Pádraig Mac Carron and Ralph Kenna from the University’s Applied Mathematics Research Centre have carried out a detailed analysis of the relationships described in ancient Icelandic manuscripts to shed new light on Viking society.

In a study published in the European Physical Journal, Mac Carron and Kenna have asked whether remnants of reality could lurk within the pages of the documents in which Viking sagas were preserved.

They applied methods from statistical physics to social networks — in which nodes (connection points) represent individuals and links represent interactions between them — to home in on the relationships between the characters and societies depicted therein.
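
A sketch of the kind of network statistics such a comparison rests on, computed here on a small synthetic graph rather than the saga data; the random-graph generator and the particular metrics are illustrative choices.

```python
import networkx as nx

# Stand-in for a character network: a scale-free random graph
# (the saga networks themselves are built from who-interacts-with-whom in the texts).
G = nx.barabasi_albert_graph(n=300, m=2, seed=42)

degrees = [d for _, d in G.degree()]
print("nodes:", G.number_of_nodes(), "links:", G.number_of_edges())
print("mean degree:", sum(degrees) / len(degrees))
print("clustering coefficient:", nx.average_clustering(G))
print("average path length:", nx.average_shortest_path_length(G))
print("degree assortativity:", nx.degree_assortativity_coefficient(G))
```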

The academics used the Sagas of Icelanders — a unique corpus of medieval literature from the period around the settlement of Iceland a thousand years ago — as the basis for their investigation.

Although the historicity of these tales is often questioned, some believe they may contain fictionalised distortions of real societies, and Mac Carron and Kenna’s research bolsters this hypothesis.

They mapped out the interactions between over 1,500 characters that appear in 18 sagas including five particularly famous epic tales. Their analyses show, for example, that although an ‘outlaw tale’ has similar properties to other European heroic epics, and the ‘family sagas’ of Icelandic literature are quite distinct, the overall network of saga society is consistent with real social networks.

Moreover, although it is acknowledged that J. R. R. Tolkien was strongly influenced by Nordic literature, the Viking sagas have a different network structure to the Lord of the Rings and other works of fiction.

Professor Ralph Kenna from Coventry University’s Applied Mathematics Research Centre said: “This quantitative investigation is very different to traditional approaches to comparative studies of ancient texts, which focus on qualitative aspects. Rather than individuals and events, the new approach looks at interactions and reveals new insights — that the Icelandic sagas have similar properties to those of real-world social networks.”

Journal Reference:

  1. P. Mac Carron, R. Kenna. Network analysis of the Íslendinga sögur – the Sagas of Icelanders. The European Physical Journal B, 2013; 86 (10). DOI: 10.1140/epjb/e2013-40583-3

Math Explains History: Simulation Accurately Captures the Evolution of Ancient Complex Societies (Science Daily)

Sep. 23, 2013 — The question of how human societies evolve from small groups to the huge, anonymous and complex societies of today has been answered mathematically, accurately matching the historical record on the emergence of complex states in the ancient world.

A section of over 8000 Terracotta Warriors in the mausoleum of the first Qin emperor outside Xian, China. Intense warfare is the evolutionary driver of large complex societies, according to a new mathematical model whose findings accurately match those of the historical record in the ancient world. (Credit: iStockphoto)

Intense warfare is the evolutionary driver of large complex societies, according to new research from a trans-disciplinary team at the University of Connecticut, the University of Exeter in England, and the National Institute for Mathematical and Biological Synthesis (NIMBioS). The study appears this week as an open-access article in the journal Proceedings of the National Academy of Sciences.

The study’s cultural evolutionary model predicts where and when the largest-scale complex societies arose in human history.

Simulated within a realistic landscape of the Afro-Eurasian landmass from 1500 BCE to 1500 CE, the mathematical model was tested against the historical record. During that period, horse-related military innovations, such as chariots and cavalry, dominated warfare within Afro-Eurasia. Geography also mattered, as nomads living on the Eurasian Steppe influenced nearby agrarian societies, thereby spreading intense forms of offensive warfare out from the steppe belt.

The study focuses on the interaction of ecology and geography with the spread of military innovations. It predicts that selection for ultrasocial institutions, which allow cooperation in huge groups of genetically unrelated individuals, and for large-scale complex states is strongest where warfare is most intense.

Existing theories of why there is so much variation in the ability of different human populations to construct viable states are usually formulated verbally. By contrast, the authors’ work leads to sharply defined quantitative predictions that can be tested empirically.

The model-predicted spread of large-scale societies was very similar to the observed one; the model was able to explain two-thirds of the variation in determining the rise of large-scale societies.

“What’s so exciting about this area of research is that instead of just telling stories or describing what occurred, we can now explain general historical patterns with quantitative accuracy. Explaining historical events helps us better understand the present, and ultimately may help us predict the future,” said the study’s co-author Sergey Gavrilets, NIMBioS director for scientific activities.

Journal Reference:

  1. Turchin P, Currie T, Turner E, Gavrilets S. War, space, and the evolution of Old World complex societies. PNAS, 2013. DOI: 10.1073/pnas.1308825110

Is War Really Disappearing? New Analysis Suggests Not (Science Daily)

Aug. 29, 2013 — While some researchers have claimed that war between nations is in decline, a new analysis suggests we shouldn’t be too quick to celebrate a more peaceful world.

The study finds that there is no clear trend indicating that nations are less eager to wage war, said Bear Braumoeller, author of the study and associate professor of political science at The Ohio State University.

Conflict does appear to be less common than it was in the past, he said. But that is due more to an inability to fight than to an unwillingness to do so.

“As empires fragment, the world has split up into countries that are smaller, weaker and farther apart, so they are less able to fight each other,” Braumoeller said.

“Once you control for their ability to fight each other, the proclivity to go to war hasn’t really changed over the last two centuries.”

Braumoeller presented his research Aug. 29 in Chicago at the annual meeting of the American Political Science Association.

Several researchers have claimed in recent years that war is in decline, most notably Steven Pinker in his 2011 book The Better Angels of Our Nature: Why Violence Has Declined.

As evidence, Pinker points to a decline in war deaths per capita. But Braumoeller said he believes that is a flawed measure.

“That accurately reflects the average citizen’s risk from death in war, but countries’ calculations in war are more complicated than that,” he said.

Moreover, since population grows exponentially, it would be hard for war deaths to keep up with the booming number of people in the world.

Because we cannot predict whether wars will be quick and easy or long and drawn-out (“Remember ‘Mission Accomplished’?” Braumoeller says), a better measure of how warlike we are is how often countries use force — such as missile strikes or armed border skirmishes — against other countries, he said.

“Any one of these uses of force could conceivably start a war, so their frequency is a good indication of how war prone we are at any particular time,” he said.

Braumoeller used the Correlates of War Militarized Interstate Dispute database, which scholars from around the world use to measure uses of force up to and including war.

The data shows that uses of force held more or less constant through World War I and then increased steadily.

This trend is consistent with the growth in the number of countries over the course of the last two centuries.

But just looking at the number of conflicts per pair of countries is misleading, he said, because countries won’t go to war if they aren’t “politically relevant” to each other.

Military power and geography play a big role in relevance; it is unlikely that a small, weak country in South America would start a war with a small, weak country in Africa.

Once Braumoeller took into account both the number of countries and their political relevance to one another, the results showed essentially no change to the trend of the use of force over the last 200 years.
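
A back-of-envelope sketch of that normalization: divide the count of militarized disputes by the number of politically relevant pairs of countries rather than by countries alone. The figures below are placeholders, not values from the Correlates of War data.

```python
def disputes_per_relevant_dyad(n_disputes, n_countries, relevance_fraction):
    """Rate of force used per politically relevant pair of countries."""
    all_dyads = n_countries * (n_countries - 1) / 2
    relevant_dyads = all_dyads * relevance_fraction
    return n_disputes / relevant_dyads

# Placeholder values: more countries and more raw disputes in the later era,
# yet a roughly similar rate once political relevance is taken into account.
print(disputes_per_relevant_dyad(n_disputes=30,  n_countries=50,  relevance_fraction=0.15))
print(disputes_per_relevant_dyad(n_disputes=240, n_countries=195, relevance_fraction=0.08))
```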

While researchers such as Pinker have suggested that countries are actually less inclined to fight than they once were, Braumoeller said these results suggest a different reason for the recent decline in war.

“With countries being smaller, weaker and more distant from each other, they certainly have less ability to fight. But we as humans shouldn’t get credit for being more peaceful just because we’re not as able to fight as we once were,” he said.

“There is no indication that we actually have less proclivity to wage war.”

They Finally Tested The ‘Prisoner’s Dilemma’ On Actual Prisoners — And The Results Were Not What You Would Expect (Business Insider Australia)

21 July 2013

Alcatraz Jail Prison

The “prisoner’s dilemma” is a familiar concept to just about anybody who took Econ 101.

The basic version goes like this. Two criminals are arrested, but police can’t convict either on the primary charge, so they plan to sentence both to a year in jail on a lesser charge. Each prisoner, unable to communicate with the other, is given the option of testifying against their partner. If one testifies and the partner remains silent, the partner gets three years and the testifier goes free. If both testify, each gets two years. If both remain silent, each gets one year.

In game theory, betraying your partner, or “defecting,” is always the dominant strategy, since it yields a slightly higher payoff no matter what the other player does in a simultaneous game. Mutual defection is therefore the game’s “Nash equilibrium,” named after Nobel Prize-winning mathematician and A Beautiful Mind subject John Nash.
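
The payoff structure described above, written out so the dominance argument can be checked directly; payoffs are years in jail, so lower is better.

```python
# Years of jail time for (my_choice, partner_choice); lower is better.
YEARS = {
    ("silent", "silent"):   (1, 1),
    ("silent", "testify"):  (3, 0),
    ("testify", "silent"):  (0, 3),
    ("testify", "testify"): (2, 2),
}

for partner in ("silent", "testify"):
    silent_years = YEARS[("silent", partner)][0]
    testify_years = YEARS[("testify", partner)][0]
    better = "testify" if testify_years < silent_years else "silent"
    print(f"If partner stays {partner}: silent -> {silent_years} yrs, "
          f"testify -> {testify_years} yrs; best reply: {better}")

# Testifying is better against either choice, so mutual defection is the Nash equilibrium,
# even though mutual silence (1 year each) would leave both better off.
```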

In sequential games, where players know each other’s previous behaviour and have the opportunity to punish each other, defection is the dominant strategy as well.

However, on a Pareto basis, the best outcome for both players is mutual cooperation.

Yet no one had ever actually run the experiment on real prisoners until two University of Hamburg economists tried it out in a recent study comparing the behaviour of inmates and students.

Surprisingly, in the classic version of the game, prisoners were far more cooperative than expected.

Menusch Khadjavi and Andreas Lange put the famous game to the test for the first time, running a group of prisoners in Lower Saxony’s primary women’s prison, as well as a group of students, through both simultaneous and sequential versions of the game. The payoffs obviously weren’t years off sentences: students played for euros, and prisoners for the equivalent value in coffee or cigarettes.

Building on game theory and behavioural economics research showing that humans are more cooperative than the purely rational model economists traditionally use, they expected a fair amount of first-mover cooperation, even in the simultaneous version, where there is no way to react to the other player’s decision.

They also expected that, even in the sequential game, where you get a higher payoff for betraying a cooperative first mover, a fair number of second movers would still reciprocate.

As for the difference between student and prisoner behaviour, you’d expect that a prison population might be more jaded and distrustful, and therefore more likely to defect.

The results went exactly the other way. In the simultaneous game, only 37% of students cooperated, while inmates cooperated 56% of the time.

On a pair basis, only 13% of student pairs managed to reach the best mutual outcome and cooperate, whereas 30% of prisoner pairs did.
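
Those pair rates are roughly what you would expect if the two members of a pair decided independently, a quick sanity check on the reported numbers rather than anything the study claims.

```python
students_cooperate = 0.37
inmates_cooperate = 0.56

# If each member of a pair decides independently, both cooperate with probability p * p.
print(f"students: {students_cooperate**2:.0%} of pairs")  # ~14%, close to the reported 13%
print(f"inmates:  {inmates_cooperate**2:.0%} of pairs")   # ~31%, close to the reported 30%
```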

In the sequential game, far more students (63%) cooperated, so their mutual cooperation rate jumped to 39%. For prisoners, the rates remained about the same.

What’s interesting is that the simultaneous game requires far more blind trust from both parties: you don’t get a chance to retaliate or make up for being betrayed later. Yet prisoners were still significantly more cooperative in that scenario.

Obviously the payoffs aren’t as serious as a year or three of your life, but the paper still demonstrates that prisoners aren’t necessarily as calculating, self-interested and untrusting as you might expect. And, as behavioural economists have argued for years, however mathematically interesting Nash equilibria might be, they don’t line up with real behaviour all that well.

Climate Researchers Discover New Rhythm for El Niño (Science Daily)

May 27, 2013 — El Niño wreaks havoc across the globe, shifting weather patterns that spawn droughts in some regions and floods in others. The impacts of this tropical Pacific climate phenomenon are well known and documented.

This is a schematic figure for the suggested generation mechanism of the combination tone: The annual cycle (Tone 1), together with the El Niño sea surface temperature anomalies (Tone 2) produce the combination tone. (Credit: Malte Stuecker)

A mystery, however, has remained despite decades of research: Why does El Niño always peak around Christmas and end quickly by February to April?

Now there is an answer: An unusual wind pattern that straddles the equatorial Pacific during strong El Niño events and swings back and forth with a period of 15 months explains El Niño’s close ties to the annual cycle. This finding is reported in the May 26, 2013, online issue of Nature Geoscience by scientists from the University of Hawai’i at Manoa Meteorology Department and International Pacific Research Center.

“This atmospheric pattern peaks in February and triggers some of the well-known El Niño impacts, such as droughts in the Philippines and across Micronesia and heavy rainfall over French Polynesia,” says lead author Malte Stuecker.

When anomalous trade winds shift south they can terminate an El Niño by generating eastward propagating equatorial Kelvin waves that eventually resume upwelling of cold water in the eastern equatorial Pacific. This wind shift is part of the larger, unusual atmospheric pattern accompanying El Niño events, in which a high-pressure system hovers over the Philippines and the major rain band of the South Pacific rapidly shifts equatorward.

With the help of numerical atmospheric models, the scientists discovered that this unusual pattern originates from an interaction between El Niño and the seasonal evolution of temperatures in the western tropical Pacific warm pool.

“Not all El Niño events are accompanied by this unusual wind pattern,” notes Malte Stuecker, “but once El Niño conditions reach a certain threshold amplitude during the right time of the year, it is like a jack-in-the-box whose lid pops open.”

A study of the evolution of the anomalous wind pattern in the model reveals a rhythm of about 15 months accompanying strong El Niño events, which is considerably faster than the three- to five-year timetable for El Niño events, but slower than the annual cycle.
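
A back-of-envelope check on that 15-month rhythm, assuming it is the difference tone between the 12-month annual cycle and an El Niño period of roughly five years; the five-year figure is an illustrative choice within the three-to-five-year range mentioned above.

```python
annual_period = 12.0      # months
enso_period = 60.0        # months, i.e. about five years

# A difference combination tone has frequency f_annual - f_enso.
combination_frequency = 1.0 / annual_period - 1.0 / enso_period
print(f"combination-tone period: {1.0 / combination_frequency:.1f} months")  # 15.0
```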

“This type of variability is known in physics as a combination tone,” says Fei-Fei Jin, professor of Meteorology and co-author of the study. Combination tones have been known for more than three centuries. They were discovered by the violinist Giuseppe Tartini, who realized that our ear can create a third tone even though only two tones are played on a violin.

“The unusual wind pattern straddling the equator during an El Niño is such a combination tone between El Niño events and the seasonal march of the sun across the equator,” says co-author Axel Timmermann, climate scientist at the International Pacific Research Center and professor at the Department of Oceanography, University of Hawai’i. He adds, “It turns out that many climate models have difficulties creating the correct combination tone, which is likely to impact their ability to simulate and predict El Niño events and their global impacts.”

The scientists are convinced that a better representation of the 15-month tropical Pacific wind pattern in climate models will improve El Niño forecasts. Moreover, they say the latest climate model projections suggest that El Niño events will be accompanied more often by this combination tone wind pattern, which will also change the characteristics of future El Niño rainfall patterns.

Journal Reference:

  1. Malte F. Stuecker, Axel Timmermann, Fei-Fei Jin, Shayne McGregor, Hong-Li Ren. A combination mode of the annual cycle and the El Niño/Southern Oscillation. Nature Geoscience, 2013; DOI: 10.1038/ngeo1826

Mathematical Models Out-Perform Doctors in Predicting Cancer Patients’ Responses to Treatment (Science Daily)

Apr. 19, 2013 — Mathematical prediction models are better than doctors at predicting the outcomes and responses of lung cancer patients to treatment, according to new research presented today (Saturday) at the 2nd Forum of the European Society for Radiotherapy and Oncology (ESTRO).

These differences hold even after the doctor has seen the patient (which can provide extra information) and knows what the treatment plan and radiation dose will be.

“The number of treatment options available for lung cancer patients are increasing, as well as the amount of information available to the individual patient. It is evident that this will complicate the task of the doctor in the future,” said the presenter, Dr Cary Oberije, a postdoctoral researcher at the MAASTRO Clinic, Maastricht University Medical Center, Maastricht, The Netherlands. “If models based on patient, tumour and treatment characteristics already out-perform the doctors, then it is unethical to make treatment decisions based solely on the doctors’ opinions. We believe models should be implemented in clinical practice to guide decisions.”

Dr Oberije and her colleagues in The Netherlands used mathematical prediction models that had already been tested and published. The models use information from previous patients to create a statistical formula that can be used to predict the probability of outcome and responses to treatment using radiotherapy with or without chemotherapy for future patients.

Having obtained predictions from the mathematical models, the researchers asked experienced radiation oncologists to predict the likelihood of lung cancer patients surviving for two years, or suffering from shortness of breath (dyspnea) and difficulty swallowing (dysphagia) at two points in time:

1) after they had seen the patient for the first time, and

2) after the treatment plan was made.

At the first time point, the doctors predicted two-year survival for 121 patients, dyspnea for 139 and dysphagia for 146 patients.

At the second time point, predictions were only available for 35, 39 and 41 patients respectively.

For all three predictions and at both time points, the mathematical models substantially outperformed the doctors’ predictions, with the doctors’ predictions being little better than those expected by chance.

The researchers plotted the results on a special graph [1] on which the area below the plotted line measures the accuracy of predictions: 1 represents a perfect prediction, while 0.5 represents predictions no better than chance. They found that the model predictions at the first time point were 0.71 for two-year survival, 0.76 for dyspnea and 0.72 for dysphagia. In contrast, the doctors’ predictions were 0.56, 0.59 and 0.52 respectively.
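
A tiny illustration of how such an area-under-the-curve score is computed from predicted risks and observed outcomes; the labels and scores below are synthetic, not patient data.

```python
from sklearn.metrics import roc_auc_score

# 1 = died within two years, 0 = survived (synthetic outcomes).
outcomes  = [1, 0, 1, 1, 0, 0, 1, 0]
# Predicted probability of dying within two years (synthetic model output).
predicted = [0.9, 0.2, 0.7, 0.6, 0.65, 0.1, 0.8, 0.3]

print("AUC:", roc_auc_score(outcomes, predicted))  # about 0.94 here; 1.0 = perfect, 0.5 = chance
```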

The models had a better positive predictive value (PPV) — a measure of the proportion of patients who were correctly assessed as being at risk of dying within two years or suffering from dyspnea and dysphagia — than the doctors. The negative predictive value (NPV) — a measure of the proportion of patients that would not die within two years or suffer from dyspnea and dysphagia — was comparable between the models and the doctors.

“This indicates that the models were better at identifying high risk patients that have a very low chance of surviving or a very high chance of developing severe dyspnea or dysphagia,” said Dr Oberije.

The researchers say it is important that further research is carried out into how prediction models can be integrated into standard clinical care. Further improvement of the models, by incorporating the latest advances in areas such as genetics and imaging, is also important. This will make it possible to tailor treatment to the individual patient’s biological make-up and tumour type.

“In our opinion, individualised treatment can only succeed if prediction models are used in clinical practice. We have shown that current models already outperform doctors. Therefore, this study can be used as a strong argument in favour of using prediction models and changing current clinical practice,” said Dr Oberije.

“Correct prediction of outcomes is important for several reasons,” she continued. “First, it offers the possibility to discuss treatment options with patients. If survival chances are very low, some patients might opt for a less aggressive treatment with fewer side-effects and better quality of life. Second, it could be used to assess which patients are eligible for a specific clinical trial. Third, correct predictions make it possible to improve and optimise the treatment. Currently, treatment guidelines are applied to the whole lung cancer population, but we know that some patients are cured while others are not and some patients suffer from severe side-effects while others don’t. We know that there are many factors that play a role in the prognosis of patients and prediction models can combine them all.”

At present, prediction models are not used as widely as they could be by doctors. Dr Oberije says there are a number of reasons: some models lack clinical credibility; others have not yet been tested; the models need to be available and easy to use by doctors; and many doctors still think that seeing a patient gives them information that cannot be captured in a model. “Our study shows that it is very unlikely that a doctor can outperform a model,” she concluded.

President of ESTRO, Professor Vincenzo Valentini, a radiation oncologist at the Policlinico Universitario A. Gemelli, Rome, Italy, commented: “The booming growth of biological, imaging and clinical information will challenge the decision capacity of every oncologist. The understanding of the knowledge management sciences is becoming a priority for radiation oncologists in order for them to tailor their choices to cure and care for individual patients.”

[1] For the mathematicians among you, the graph is known as a Receiver Operating Characteristic (ROC) curve, and the accuracy measure is the Area Under the Curve (AUC).

[2] This work was partially funded by grants from the Dutch Cancer Society (KWF), the European Fund for Regional Development (INTERREG/EFRO), and the Center for Translational Molecular Medicine (CTMM).

Mathematics Provides a Shortcut to Timely, Cost-Effective Interventions for HIV (Science Daily)

Apr. 15, 2013 — Mathematical estimates of treatment outcomes can cut costs and provide faster delivery of preventative measures.

South Africa is home to the largest HIV epidemic in the world with a total of 5.6 million people living with HIV. Large-scale clinical trials evaluating combination methods of prevention and treatment are often prohibitively expensive and take years to complete. In the absence of such trials, mathematical models can help assess the effectiveness of different HIV intervention combinations, as demonstrated in a new study by Elisa Long and Robert Stavert from Yale University in the US. Their findings appear in the Journal of General Internal Medicine, published by Springer.

Currently 60 percent of individuals in need of treatment for HIV in South Africa do not receive it. The allocation of scant resources to fight the HIV epidemic means each strategy must be measured in terms of cost versus benefit. A number of new clinical trials have presented evidence supporting a range of biomedical interventions that reduce transmission of HIV. These include voluntary male circumcision — now recommended by the World Health Organization and Joint United Nations Programme on HIV/AIDS as a preventive strategy — as well as vaginal microbicides and oral pre-exposure prophylaxis, all of which confer only partial protection against HIV. Long and Stavert show that a combination portfolio of multiple interventions could not only prevent up to two-thirds of future HIV infections, but is also cost-effective in a resource-limited setting such as South Africa.

The authors developed a mathematical model accounting for disease progression, mortality, morbidity and the heterosexual transmission of HIV to help forecast future trends in the disease. Using data specific for South Africa, the authors estimated the health benefits and cost-effectiveness of a “combination approach” using all three of the above methods in tandem with current levels of antiretroviral therapy, screening and counseling.
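
A stripped-down sketch of this kind of transmission model, assuming a simple susceptible-infected compartmental structure in which interventions scale down the transmission rate; the rates and efficacy are placeholders, not the parameters Long and Stavert used.

```python
def project_prevalence(years=10, population=50_000_000, prevalence=0.19,
                       transmission_rate=0.06, exit_rate=0.07, efficacy=0.0):
    """Euler-stepped susceptible-infected model; interventions reduce transmission by `efficacy`."""
    infected = population * prevalence
    susceptible = population - infected
    beta = transmission_rate * (1.0 - efficacy)
    for _ in range(years):
        new_infections = beta * susceptible * infected / population
        removals = exit_rate * infected
        susceptible -= new_infections
        infected += new_infections - removals
    return infected / population

print(f"baseline:            {project_prevalence():.1%}")
print(f"combination package: {project_prevalence(efficacy=0.6):.1%}")
```

With these made-up rates the baseline prevalence drifts down slowly while the intervention scenario falls further, which mirrors only the direction of the study's projections, not their actual numbers.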

For each intervention, they calculated the HIV incidence and prevalence over 10 years. At present rates of screening and treatment, the researchers predict that HIV prevalence will decline from 19 percent to 14 percent of the population in the next 10 years. However, they calculate that their combination approach, including male circumcision, vaginal microbicides and oral pre-exposure prophylaxis, could further reduce HIV prevalence to 10 percent over that time scale — preventing 1.5 million HIV infections over 10 years — even if screening and antiretroviral therapy are kept at current levels. Increasing antiretroviral therapy use and HIV screening frequency in addition could avert more than 2 million HIV infections over 10 years, or 60 percent of the projected total.

The researchers also determined a hierarchy of effectiveness versus cost for these intervention strategies. Where budgets are limited, they suggest money should be allocated first to increasing male circumcision, then to more frequent HIV screening, use of vaginal microbicides and increasing antiretroviral therapy. Additionally, they calculate that omitting pre-exposure prophylaxis from their combination strategy could offer 90 percent of the benefits of treatment for less than 25 percent of the costs.

The authors conclude: “In the absence of multi-intervention randomized clinical or observational trials, a mathematical HIV epidemic model provides useful insights about the aggregate benefit of implementing a portfolio of biomedical, diagnostic and treatment programs. Allocating limited available resources for HIV control in South Africa is a key priority, and our study indicates that a multi-intervention HIV portfolio could avert nearly two-thirds of projected new HIV infections, and is a cost-effective use of resources.”

Journal Reference:

  1. Long, E.F. and Stavert, R.R. Portfolios of biomedical HIV interventions in South Africa: a cost-effectiveness analysis. Journal of General Internal Medicine, 2013. DOI: 10.1007/s11606-013-2417-1

In Big Data, We Hope and Distrust (Huffington Post)

By Robert Hall

Posted: 04/03/2013 6:57 pm

“In God we trust. All others must bring data.” — W. Edwards Deming, statistician, quality guru

Big data helped reelect a president, helped find Osama bin Laden, and contributed to the meltdown of our financial system. We are in the midst of a data revolution where social media introduces new terms like Arab Spring, Facebook Depression and Twitter anxiety that reflect a new reality: Big data is changing the social and relationship fabric of our culture.

We spend hours installing and learning how to use the latest versions of our ever-expanding technology while enduring a never-ending battle to protect our information. Then we labor while developing practices to rid ourselves of technology — rules for turning devices off during meetings or movies, legislation to outlaw texting while driving, restrictions in classrooms to prevent cheating, and scheduling meals or family time where devices are turned off. Information and technology: We love it, hate it, can’t live with it, can’t live without it, use it voraciously, and distrust it immensely. I am schizophrenic and so am I.

Big data is not only big but growing rapidly. According to IBM, we create 2.5 quintillion bytes a day and that “ninety percent of the data in the world has been created in the last two years.” Vast new computing capacity can analyze Web-browsing trails that track our every click, sensor signals from every conceivable device, GPS tracking and social network traffic. It is now possible to measure and monitor people and machines to an astonishing degree. How exciting, how promising. And how scary.

This is not our first data rodeo. The early stages of the customer relationship management movement were filled with hope and with hype. Large data warehouses were going to provide the kind of information that would make companies masters of customer relationships. There were just two problems. First, getting the data out of the warehouse wasn’t nearly as hard as getting it into the person or device interacting with the customers in a way that added value, trust and expanded relationships. We seem to always underestimate the speed of technology and overestimate the speed at which we can absorb it and socialize around it.

Second, unfortunately the customers didn’t get the memo and mostly decided, in their own rich wisdom, that they did not need or want “masters.” In fact, as providers became masters of knowing all the details about our lives, consumers became more concerned. So while many organizations were trying to learn more about customer histories, behaviors and future needs, customers and even their governments were busy trying to protect privacy, security and access. Anyone attempting to help an adult friend or family member with mental health issues has probably run into well-intentioned HIPAA rules (regulations that ensure the privacy of medical records) that unfortunately also restrict the ways you can assist them. Big data gives and the fear of big data takes away.

Big data does not big relationships make. Over the last 20 years, as our data keeps getting stronger, our customer relationships keep getting weaker. Eighty-six percent of consumers trust corporations less than they did five years ago. Customer retention across industries has fallen about 30 percent in recent years. Is it actually possible that we have unwittingly contributed to the undermining of our customer relationships? How could that be? For one thing, companies keep getting better at targeting messages to specific groups, and those groups keep getting better at blocking those messages. As usual, the power to resist trumps the power to exert.

No matter how powerful big data becomes, if it is to realize its potential, it must build trust on three levels. First, customers must trust our intentions. Data that can be used for us can also be used against us. There is growing fear that institutions will become part of a “surveillance state.” While organizations have gone to great lengths to promote the protection of our data, the numbers reflect a fair amount of doubt. For example, according to MainStreet, “87 percent of Americans do not feel large banks are transparent and 68 percent do not feel their bank is on their side.”

Second, customers must trust our actions. Even if they trust our intentions, they might still fear that our actions put them at risk. Our private information can be hacked, then misused and disclosed in damaging and embarrassing ways. After the Sandy Hook tragedy, a New York newspaper published the names and addresses of over 33,000 licensed gun owners along with an interactive map that showed exactly where they lived. In response, the names and addresses of the newspaper’s editor and writers were published online, along with information about their children. No one, including retired judges, law enforcement officers and FBI agents, expected their private information to be published in the midst of a very high-decibel controversy.

Third, customers must trust the outcome — that sharing data will benefit them. Even with positive intentions and constructive actions, the results may range from disappointing to damaging. Most of us have provided email addresses or other contact data — around a customer service issue, say — and then started receiving email, phone or online solicitations. I know a retired executive who helps hard-to-hire people. She spent one evening surfing the Internet to research expunging criminal records for released felons. Years later, Amazon still greets her with books targeted to the felon it believes she is. Even with opt-out options, we feel used. Or we provide specific information, only to have to repeat it in the next transaction or interaction, never getting the hoped-for benefit of saving our time.

It will be challenging to grow the trust at anywhere near the rate we grow the data. Information develops rapidly, competence and trust develop slowly. Investing heavily in big data and scrimping on trust will have the opposite effect desired. To quote Dolly Parton who knows a thing or two about big: “It costs a lot of money to look this cheap.”

The Mathematics of Averting the Next Big Network Failure (Wired)

BY NATALIE WOLCHOVER, SIMONS SCIENCE NEWS

03.19.13 – 9:30 AM

Data: Courtesy of Marc Imhoff of NASA GSFC and Christopher Elvidge of NOAA NGDC; Image: Craig Mayhew and Robert Simmon of NASA GSFC

Gene Stanley never walks down stairs without holding the handrail. For a fit 71-year-old, he is deathly afraid of breaking his hip. In the elderly, such breaks can trigger fatal complications, and Stanley, a professor of physics at Boston University, thinks he knows why.

“Everything depends on everything else,” he said.

Original story reprinted with permission from Simons Science News, an editorially independent division of SimonsFoundation.org whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Three years ago, Stanley and his colleagues discovered the mathematics behind what he calls “the extreme fragility of interdependency.” In a system of interconnected networks like the economy, city infrastructure or the human body, their model indicates that a small outage in one network can cascade through the entire system, touching off a sudden, catastrophic failure.

First reported in 2010 in the journal Nature, the finding spawned more than 200 related studies, including analyses of the nationwide blackout in Italy in 2003, the global food-price crisis of 2007 and 2008, and the “flash crash” of the United States stock market on May 6, 2010.

“In isolated networks, a little damage will only lead to a little more,” said Shlomo Havlin, a physicist at Bar-Ilan University in Israel who co-authored the 2010 paper. “Now we know that because of dependency between networks, you can have an abrupt collapse.”

While scientists remain cautious about using the results of simplified mathematical models to re-engineer real-world systems, some recommendations are beginning to emerge. Based on data-driven refinements, new models suggest interconnected networks should have backups, mechanisms for severing their connections in times of crisis, and stricter regulations to forestall widespread failure.

“There’s hopefully some sweet spot where you benefit from all the things that networks of networks bring you without being overwhelmed by risk,” said Raissa D’Souza, a complex systems theorist at the University of California, Davis.

Power, gas, water, telecommunications and transportation networks are often interlinked. When nodes in one network depend on nodes in another, node failures in any of the networks can trigger a system-wide collapse. (Illustration: Leonardo Dueñas-Osorio)

To understand the vulnerability in having nodes in one network depend on nodes in another, consider the “smart grid,” an infrastructure system in which power stations are controlled by a telecommunications network that in turn requires power from the network of stations. In isolation, removing a few nodes from either network would do little harm, because signals could route around the outage and reach most of the remaining nodes. But in coupled networks, downed nodes in one automatically knock out dependent nodes in the other, which knock out other dependent nodes in the first, and so on. Scientists model this cascading process by calculating the size of the largest cluster of connected nodes in each network, where the answer depends on the size of the largest cluster in the other network. With the clusters interrelated in this way, a decrease in the size of one of them sets off a back-and-forth cascade of shrinking clusters.
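
A compact simulation of that back-and-forth cascade, assuming two interdependent random graphs with one-to-one couplings; it follows the general scheme described in this paragraph (only nodes in the largest connected cluster of each network survive each round), not the exact model of the 2010 paper.

```python
import networkx as nx

def giant_component(graph, alive):
    """Nodes of `alive` that lie in the largest connected cluster of the induced subgraph."""
    sub = graph.subgraph(alive)
    if sub.number_of_nodes() == 0:
        return set()
    return set(max(nx.connected_components(sub), key=len))

def cascade(net_a, net_b, initially_failed):
    """Iterate the mutual giant-component constraint until the surviving set stops changing."""
    alive = set(net_a.nodes()) - set(initially_failed)
    while True:
        new_alive = giant_component(net_b, giant_component(net_a, alive))
        if new_alive == alive:
            return alive
        alive = new_alive

n = 2000
net_a = nx.gnp_random_graph(n, 4.0 / n, seed=1)   # e.g. a power grid
net_b = nx.gnp_random_graph(n, 4.0 / n, seed=2)   # e.g. its control network; node i depends on node i

for failed_fraction in (0.05, 0.3, 0.5, 0.6):
    failed = range(int(failed_fraction * n))
    surviving = cascade(net_a, net_b, failed)
    print(f"initial failures {failed_fraction:.0%}: {len(surviving) / n:.0%} of nodes still connected")
```

Run as is, the surviving fraction typically stays substantial for the smaller attacks and drops to essentially zero once the initial damage is large enough, illustrating the abrupt collapse described above.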

When damage to a system reaches a “critical point,” Stanley, Havlin and their colleagues find that the failure of one more node drops all the network clusters to zero, instantly killing connectivity throughout the system. This critical point will vary depending on a system’s architecture. In one of the team’s most realistic coupled-network models, an outage of just 8 percent of the nodes in one network — a plausible level of damage in many real systems — brings the system to its critical point. “The fragility that’s implied by this interdependency is very frightening,” Stanley said.

However, in another model recently studied by D’Souza and her colleagues, sparse links between separate networks actually help suppress large-scale cascades, demonstrating that network models are not one-size-fits-all. To assess the behavior of smart grids, financial markets, transportation systems and other real interdependent networks, “we have to start from the data-driven, engineered world and come up with the mathematical models that capture the real systems instead of using models because they are pretty and analytically tractable,” D’Souza said.

In a series of papers in the March issue of Nature Physics, economists and physicists used the science of interconnected networks to pinpoint risk within the financial system. In one study, an interdisciplinary group of researchers including the Nobel Prize-winning economist Joseph Stiglitz found inherent instabilities within the highly complex, multitrillion-dollar derivatives market and suggested regulations that could help stabilize it.

Irena Vodenska, a professor of finance at Boston University who collaborates with Stanley, custom-fit a coupled network model to data from the 2008 financial crisis. The analysis she and her colleagues published in February in Scientific Reports showed that modeling the financial system as a network of two networks — banks and bank assets, where each bank is linked to the assets it held in 2007 — correctly predicted which banks would fail 78 percent of the time.

“We consider this model as potentially useful for systemic risk stress testing for financial systems,” said Vodenska, whose research is financially supported by the European Union’s Forecasting Financial Crisis program. As globalization further entangles financial networks, she said, regulatory agencies must monitor “sources of contagion” — concentrations in certain assets, for example — before they can cause epidemics of failure. To identify these sources, “it’s imperative to think in the sense of networks of networks,” she said.

Leonardo Dueñas-Osorio, a civil engineer at Rice, visited a damaged high-voltage substation in Chile after a major earthquake in 2010 to gather information about the power grid’s response to the crisis. (Photo: Courtesy of Leonardo Dueñas-Osorio)

Scientists are applying similar thinking to infrastructure assessment. Leonardo Dueñas-Osorio, a civil engineer at Rice University, is analyzing how lifeline systems responded to recent natural disasters. When a magnitude 8.8 earthquake struck Chile in 2010, for example, most of the power grid was restored after just two days, aiding emergency workers. The swift recovery, Dueñas-Osorio’s research suggests, occurred because Chile’s power stations immediately decoupled from the centralized telecommunications system that usually controlled the flow of electricity through the grid, but which was down in some areas. Power stations were operated locally until the damage in other parts of the system subsided.

“After an abnormal event, the majority of the detrimental effects occur in the very first cycles of mutual interaction,” said Dueñas-Osorio, who is also studying New York City’s response to Hurricane Sandy last October. “So when something goes wrong, we need to have the ability to decouple networks to prevent the back-and-forth effects between them.”

D’Souza and Dueñas-Osorio are collaborating to build accurate models of infrastructure systems in Houston, Memphis and other American cities in order to identify system weaknesses. “Models are useful for helping us explore alternative configurations that could be more effective,” Dueñas-Osorio explained. And as interdependency between networks naturally increases in many places, “we can model that higher integration and see what happens.”

Scientists are also looking to their models for answers on how to fix systems when they fail. “We are in the process of studying what is the optimal way to recover a network,” Havlin said. “When networks fail, which node do you fix first?”

The hope is that networks of networks might be unexpectedly resilient for the same reason that they are vulnerable. As Dueñas-Osorio put it, “By making strategic improvements, can we have what amounts to positive cascades, where a small improvement propagates much larger benefits?”

These open questions have the attention of governments around the world. In the U.S., the Defense Threat Reduction Agency, an organization tasked with safeguarding national infrastructure against weapons of mass destruction, considers the study of interdependent networks its “top mission priority” in the category of basic research. Some defense applications have emerged already, such as a new design for electrical network systems at military bases. But much of the research aims at sorting through the mathematical subtleties of network interaction.

“We’re not yet at the ‘let’s engineer the internet differently’ level,” said Robin Burk, an information scientist and former DTRA program manager who led the agency’s focus on interdependent networks research. “A fair amount of it is still basic science — desperately needed science.”


Treating Disease by the Numbers (Science Daily)

Sep. 20, 2012 — Mathematical modeling being tested by researchers at the School of Science at Indiana University-Purdue University Indianapolis (IUPUI) and the IU School of Medicine has the potential to impact the knowledge and treatment of several diseases that continue to challenge scientists across the world.

Mathematical modeling allows researchers to closely mirror patient data, which is helpful in determining the cause and effect of certain risk factors. (Credit: Image courtesy of Indiana University-Purdue University Indianapolis School of Science)

The National Science Foundation recently recognized the work led by Drs. Giovanna Guidoboni, associate professor of mathematics in the School of Science, and Alon Harris, professor of ophthalmology and director of clinical research at the Eugene and Marilyn Glick Eye Institute, for its new approach to understanding what actually causes debilitating diseases like glaucoma. Their research could translate to more efficient treatments for diseases like diabetes and hypertension as well.

Glaucoma is the second-leading cause of blindness in the world, yet the only established form of treatment is to reduce pressure in the patient’s eye. However, as many as one-third of glaucoma patients have no elevated eye pressure, and the current inability to understand which risk factors led to the disease can limit treatment options.

Mathematical modeling, which creates an abstract model using mathematical language to describe the behavior of a system, allows doctors to better measure things like blood flow and oxygen levels in fine detail in the eye, the easiest human organ to study without invasive procedures. Models also can be used to estimate what cannot be measured directly, such as the pressure in the ocular vessels.

Through simulations, the mathematical model can help doctors determine the cause and effect of reduced blood flow, cell death and ocular pressure and how those risk factors affect one another in the presence of glaucoma. A better understanding of these factors — and the ability to accurately measure their interaction — could greatly improve doctors’ ability to treat the root causes of disease, Harris said.

“This is a unique, fresh approach to research and treatment,” Harris said. “We’re talking about the ability to identify tailor-made treatments for individual patients for diseases that are multi-factorial and where it’s difficult to isolate the path and physicality of the disease.”

Harris and Guidoboni have worked together on the project for the past 18 months. Dr. Julia Arciero, assistant professor of mathematical sciences at IUPUI, is also a principal investigator on the project, with expertise in mathematical modeling of blood flow.

The preliminary findings have been published in the British Journal of Ophthalmology and the research currently is under review in the Journal of Mathematical Biosciences and Engineering and the European Journal of Ophthalmology. The NSF recognized their work on Aug. 30 with a three-year grant to continue their research.

The pair also presented their findings at the 2012 annual meeting of the Association for Research in Vision and Ophthalmology (ARVO). Harris suggested that, out of the 12,000 ARVO participants, their group might have been the only research group to include mathematicians, which speaks highly of the cross-disciplinary collaboration occurring regularly at IUPUI.

“We approached this as a pure math question, where you try to solve a certain problem with the data you have,” said Guidoboni, co-director of the School of Science Institute for Mathematical Modeling and Computational Science (iM2CS) at IUPUI, a research center dedicated to using modeling methods to solve problems in medicine, the environment and computer science.

Guidoboni has expertise in applied mathematics. She also has a background in engineering, which she said helps her to approach medical research from a tactical standpoint where the data and feedback determine the model. She previously used modeling to better understand blood flow from the heart.

Harris said the potential impact has created quite a stir in the ocular research community.

“The response among our peers has been unheard of. The scientific community has been accepting of this new method and they are embracing it,” Harris added.

The group will seek additional research funding through the National Institutes of Health, The Glaucoma Foundation and other medical entities that might benefit from the research. The initial success of their collaboration should lead to more cross-disciplinary projects in the future, Guidoboni said.

Also contributing are graduate students in mathematics, Lucia Carichino and Simone Cassani, and researchers in the department of ophthalmology, including Drs. Brent Siesky, Annahita Amireskandari and Leslie Tobe.

At Least 70% of Earth’s Species Are Unknown (Fapesp)

Opening the 2013 BIOTA-FAPESP Educação Conference Cycle, Thomas Lewinsohn (Unicamp) discussed the estimated time and cost of describing all of the planet’s species (photo: Léo Ramos)

February 25, 2013

By Karina Toledo

Agência FAPESP – Although knowledge of the planet’s biodiversity is still very fragmented, an estimated 1.75 million different species of living beings – including microorganisms, plants and animals – have already been described. The number may impress the unwary, but under the most optimistic assumptions it represents only 30% of the life forms existing on Earth.

“It is estimated that another 12 million species remain to be discovered,” said Thomas Lewinsohn, professor in the Department of Animal Biology of the Universidade Estadual de Campinas (Unicamp), during the presentation that opened the 2013 Conference Cycle organized by the BIOTA-FAPESP program with the aim of helping to improve science education.

But how can the extent of what we do not know about biodiversity be gauged? “For that, we make extrapolations, using the best-studied groups of organisms as a baseline to assess the less-studied ones, and regions or countries where the biota is well known to assess those where it is less known. By a simple rule of three we arrive at these estimates,” he explained.
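The “rule of three” Lewinsohn mentions is ordinary proportional scaling. A minimal sketch in Python, with invented numbers rather than his data:

# Proportional ("rule of three") extrapolation of species richness,
# with invented illustrative numbers, not Lewinsohn's actual data.
#
# Idea: in a well-studied reference group we know both the described and
# the (nearly) true number of species; in a poorly studied group we only
# know the described number, so we scale it by the same ratio.

described_ref = 10_000      # described species in a well-studied group (assumed)
true_ref      = 12_500      # best estimate of its true richness (assumed)

described_target = 40_000   # described species in a poorly studied group (assumed)

# rule of three:  true_target / described_target = true_ref / described_ref
true_target = described_target * true_ref / described_ref
print(f"estimated true richness of the poorly studied group: {true_target:,.0f}")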

More recent techniques, according to Lewinsohn, use sophisticated statistical formulas and are based on the rates at which new species are discovered and described. The values are adjusted for the available workforce, that is, the number of taxonomists in activity.

“The most important thing to say, however, is that there is no consensus. Estimates can reach more than 100 million unknown species. We do not even know the order of magnitude, and that is astonishing,” he said.

Lewinsohn estimates that describing all the species thought to exist in Brazil would take about 2,000 years. “To describe all the species in the world, the figure would be similar. But we do not have that much time,” he said.

Some recent molecular taxonomy techniques, such as DNA barcoding, can help speed up the work, since they allow organisms to be identified by analyzing their genetic material. In this method, different DNA sequences distinguish the species, whereas in classical taxonomy classification is based on the morphology of living beings, which is far more laborious.

“Can it be done? Yes, but at what cost?” Lewinsohn asked. An article recently published in the journal Science estimated that US$ 500 million to US$ 1 billion per year, over 50 years, would be needed to describe most of the planet’s species.

Again, the figure may frighten the unwary, but, according to Lewinsohn, it corresponds to what the world spends on weapons in just five days. “In 2011 alone, US$ 1.7 trillion was spent on arms purchases. We need to put things in perspective,” he argued.

Setting priorities

Many of these unknown species, however, may disappear from the planet before humankind has enough time and money to study them. According to data presented by Jean Paul Metzger, professor at the Instituto de Biociências of the Universidade de São Paulo (USP), more than 50% of the Earth’s land surface has already been transformed by human activity.

This change in the landscape has many consequences, and Metzger addressed two of them in the day’s second presentation: habitat loss and fragmentation.

“They are different concepts that are often confused. Fragmentation is the subdivision of a habitat, and it may not occur when degradation takes place at the edges of the forest. Building a road, by contrast, creates isolated fragments within the habitat,” he explained.

For Metzger, fragmentation is the main threat to biodiversity, because it alters the balance between the natural processes of species extinction and colonization. The smaller and more isolated the fragment, the higher the extinction rate and the lower the colonization rate.

“Each species needs a minimum amount of habitat to survive and reproduce. We do not know these extinction thresholds well,” he warned.

Metzger believes this threshold may vary with the configuration of the landscape: the more fragmented the habitat, the greater the risk of species extinction. As an example, he cited the remaining areas of Atlantic Forest in the state of São Paulo, where 95% of the fragments are smaller than 100 hectares.

“It is estimated that by losing 90% of the habitat we should lose 50% of the endemic species. In the Atlantic Forest, about 16% of the forest remains. A mass extinction would be expected, but our records show few cases. Either our theory is wrong, or we are not detecting the extinctions, because the species were never even known,” Metzger said.
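The article does not name the model behind the “90% of habitat lost, 50% of species lost” figure, but numbers of that kind typically come from the classical species–area relationship S = c·A^z, with an exponent z near 0.3 (an assumption here, not a value from the talk). A quick check in Python:

# Species-area relationship S = c * A**z: the fraction of species retained
# after habitat loss is (A_remaining / A_original)**z.  The exponent
# z ~ 0.3 is a commonly used value, assumed here for illustration.

z = 0.30
for remaining in (0.10, 0.16):   # 10% of habitat left (90% lost); 16% left (Atlantic Forest)
    retained = remaining ** z
    print(f"habitat remaining {remaining:.0%} -> species retained ~{retained:.0%}, "
          f"lost ~{1 - retained:.0%}")
# 10% of habitat left -> roughly 50% of species expected to be lost
# 16% of habitat left -> roughly 42% expected to be lost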

There is, however, a complicating factor: the time lag between a change in the structure of the landscape and the change in the structure of the community. While species with short life cycles can disappear quickly, those with long life cycles may respond to habitat loss on a scale of centuries.

“An extinction debt is created and, even if the alteration of the landscape is halted, some species are doomed to disappear over time,” Metzger said.

The good news is that landscapes also regenerate naturally, and alongside the extinction debt there is a recovery credit. The time lag therefore represents an opportunity for conservation.

“Today we have evidence that it is no use restoring just anywhere. We need to define priority areas for restoration that optimize connectivity and facilitate biological flow between the fragments,” Metzger argued.

Reaping the rewards

Over the 13 years of BIOTA-FAPESP’s existence, defining priority areas for conservation and restoration in the state of São Paulo has been one of the researchers’ main concerns.

The results of these studies were used by the São Paulo State Department of the Environment to underpin public policies, as the program’s coordinator, Carlos Alfredo Joly, professor at the Instituto de Biologia of Unicamp, recalled in the third and last presentation of the day.

“Currently, at least 20 legal instruments, including laws, decrees and resolutions, explicitly cite results of BIOTA-FAPESP,” Joly said.

Between 1999 and 2009, the coordinator said, the program received an annual investment of R$ 8 million. This helped fund 94 research projects and resulted in more than 700 articles published in 181 journals, among them Nature and Science.

The program’s team also published 16 books and two atlases, described more than 2,000 new species, produced and stored information on 12,000 species, and made 35 biological collections from São Paulo digitally available and interconnected.

“Since FAPESP renewed its support for the program in 2009, education has become a priority in our strategic plan. The goal of this conference cycle is precisely to broaden communication with audiences beyond the scientific community, especially teachers and students,” Joly said.

The second stage of the lecture cycle is scheduled for March 21 and will focus on the Pampa biome. On April 18 it will be the turn of the Pantanal biome; on May 16, the Cerrado; and on June 20, the Caatinga.

On August 22 the topic will be the Atlantic Forest biome; on September 19, the Amazon; and on October 24, Marine and Coastal Environments. Closing the cycle on November 21, the theme will be “Biodiversity in Anthropic Environments – Urban and Rural”.

Cycle program: www.fapesp.br/7487

Flap Over Study Linking Poverty to Biology Exposes Gulfs Among Disciplines (Chronicle of Higher Education)

February 1, 2013


A study by two economists that used genetic diversity as a proxy for ethnic and cultural diversity has drawn fierce rebuttals from anthropologists and geneticists.

By Paul Voosen

Oded Galor and Quamrul Ashraf once thought their research into the causes of societal wealth would be seen as a celebration of diversity. However it has been described, it has certainly not been celebrated. Instead, it has sparked a dispute among scholars in several disciplines, many of whom are dubious of any work linking societal behavior to genetics. In the latest installment of the debate, 18 Harvard University scientists have called their work “seriously flawed on both factual and methodological grounds.”

Mr. Galor and Mr. Ashraf, economists at Brown University and Williams College, respectively, have long been fascinated by the historical roots of poverty. Six years ago, they began to wonder if a society’s diversity, in any way, could explain its wealth. They probed tracts of interdisciplinary data and decided they could use records of genetic diversity as a proxy for ethnic and cultural diversity. And after doing so, they found that, yes, a bit of genetic diversity did seem to help a society’s economic growth.

Since last fall, when the pair’s work began to filter out into the broader scientific world, their study has exposed deep rifts in how economists, anthropologists, and geneticists talk—and think. It has provoked calls for caution in how economists use genetic data, and calls of persecution in response. And all of this happened before the study was finally published, in the American Economic Review this month.

“Through this analysis, we’re getting a better understanding of how the world operates in order to alleviate poverty,” Mr. Ashraf said. Any other characterization, he added, is a “gross misunderstanding.”

‘Ethical Quagmires’

A barrage of criticism has been aimed at the study since last fall by a team of anthropologists and geneticists at Harvard. The critique began with a short, stern letter, followed by a rejoinder from the economists; now an expanded version of the Harvard critique will appear in February in Current Anthropology.

Fundamentally, the dispute comes down to issues of data selection and statistical power. The paper is a case of “garbage in, garbage out,” the Harvard group says. The indicators of genetic diversity that the economists use stem from only four or five independent points. All the regression analysis in the world can’t change that, said Nick Patterson, a computational biologist at Harvard and MIT’s Broad Institute.

“The data just won’t stand for what you’re claiming,” Mr. Patterson said. “Technical statistical analysis can only do so much for you. … I will bet you that they can’t find a single geneticist in the world who will tell them what they did was right.”
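Patterson’s point about “four or five independent points” is easy to reproduce in a toy setting: when many country-level observations are really driven by a handful of continent-level values, a naive regression looks far more certain than the data warrant. A sketch with purely synthetic numbers, unrelated to the actual study:

# Toy illustration of the "few independent points" problem: 150 country-level
# observations that are really driven by 5 continent-level values typically
# give a naive regression far too much apparent certainty.  Synthetic data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_continents, countries_per = 5, 30

continent_x = rng.normal(size=n_continents)   # 5 independent predictor values
continent_y = rng.normal(size=n_continents)   # outcome unrelated to x by construction

# Each country inherits its continent's values plus a little noise.
x = np.repeat(continent_x, countries_per) + 0.05 * rng.normal(size=n_continents * countries_per)
y = np.repeat(continent_y, countries_per) + 0.05 * rng.normal(size=n_continents * countries_per)

naive = stats.linregress(x, y)                        # treats 150 points as independent
cluster = stats.linregress(continent_x, continent_y)  # honest n = 5

print(f"naive country-level p-value:   {naive.pvalue:.2g}")
print(f"continent-level p-value (n=5): {cluster.pvalue:.2g}")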

In some respects, the study has become an exemplar for how the nascent field of “genoeconomics,” a discipline that seeks to twin the power of gene sequencing and economics, can go awry. Connections between behavior and genetics rightly need to clear high bars of evidence, said Daniel Benjamin, an economist at Cornell University and a leader in the field who has frequently called for improved rigor.

“It’s an area that’s fraught with an unfortunate history and ethical quagmires,” he said. Mr. Galor and Mr. Ashraf had a creative idea, he added, even if all their analysis doesn’t pass muster.

“I’d like to see more data before I’m convinced that their [theory] is true,” said Mr. Benjamin, who was not affiliated with the study or the critique. The Harvard critics make all sorts of complaints, many of which are valid, he said. “But fundamentally the issue is that there’s just not that much independent data.”

Claims of ‘Outsiders’

The dispute also exposes issues inside anthropology, added Carl Lipo, an anthropologist at California State University at Long Beach who is known for his study of Easter Island. “Anthropologists have long tried to walk the line whereby we argue that there are biological origins to much of what makes us human, without putting much weight that any particular attribute has its origins in genetics [or] biology,” he said.

The debate often erupts in lower-profile ways and ends with a flurry of anthropologists’ putting down claims by “outsiders,” Mr. Lipo said. (Mr. Ashraf and Mr. Galor are “out on a limb” with their conclusions, he added.) The angry reaction speaks to the limits of anthropology, which has been unable to delineate how genetics reaches up through the idiosyncratic circumstances of culture and history to influence human behavior, he said.

Certainly, that reaction has been painful for the newest pair of outsiders.

Mr. Galor is well known for studying the connections between history and economic development. And like much scientific work, his recent research began in reaction to claims made by Jared Diamond, the famed geographer at the University of California at Los Angeles, that the development of agriculture gave some societies a head start. What other factors could help explain that distribution of wealth? Mr. Galor wondered.

Since records of ethnic or cultural diversity do not exist for the distant past, they chose to use genetic diversity as a proxy. (There is little evidence that it can, or can’t, serve as such a proxy, however.) Teasing out the connection to economics was difficult—diversity could follow growth, or vice versa—but they gave it a shot, Mr. Galor said.

“We had to find some root causes of the [economic] diversity we see across the globe,” he said.

They were acquainted with the “Out of Africa” hypothesis, which explains how modern human beings migrated from Africa in several waves to Asia and, eventually, the Americas. Due to simple genetic laws, those serial waves meant that people in Africa have a higher genetic diversity than those in the Americas. It’s an idea that found support in genetic sequencing of native populations, if only at the continental scale.

Combining the genetics with population-density estimates—data the Harvard group says are outdated—along with deep statistical analysis, the economists found that the low and high diversity found among Native Americans and Africans, respectively, was detrimental to development. Meanwhile, they found a sweet spot of diversity in Europe and Asia. And they stated the link in sometimes strong, causal language, prompting another bitter discussion with the Harvard group over correlation and causation.

An ‘Artifact’ of the Data?

The list of flaws found by the Harvard group is long, but it boils down to the fact that no one has ever made a solid connection between genes and poverty before, even if genetics are used only as a proxy, said Jade d’Alpoim Guedes, a graduate student in anthropology at Harvard and the critique’s lead author.

“If my research comes up with findings that change everything we know,” Ms. d’Alpoim Guedes said, “I’d really check all of my input sources. … Can I honestly say that this pattern that I see is true and not an artifact of the input data?”

Mr. Ashraf and Mr. Galor found the response to their study, which they had previewed many times over the years to other economists, to be puzzling and emotionally charged. Their critics refused to engage, they said. They would have loved to present their work to a lecture hall full of anthropologists at Harvard. (Mr. Ashraf, who’s married to an anthropologist, is a visiting scholar this year at Harvard’s Kennedy School.) Their gestures were spurned, they said.

“We really felt like it was an inquisition,” Mr. Galor said. “The tone and level of these arguments were really so unscientific.”

Mr. Patterson, the computational biologist, doesn’t quite agree. The conflict has many roots but derives in large part from differing standards for publication. Submit the same paper to a leading genetics journal, he said, and it would not have even reached review.

“They’d laugh at you,” Mr. Patterson said. “This doesn’t even remotely meet the cut.”

In the end, it’s unfortunate the economists chose genetic diversity as their proxy for ethnic diversity, added Mr. Benjamin, the Cornell economist. They’re trying to get at an interesting point. “The genetics is really secondary, and not really that important,” he said. “It’s just something that they’re using as a measure of the amount of ethnic diversity.”

Mr. Benjamin also wishes they had used more care in their language and presentation.

“It’s not enough to be careful in the way we use genetic data,” he said. “We need to bend over backwards being careful in the way we talk about what the data means; how we interpret findings that relate to genetic data; and how we communicate those findings to readers and the public.”

Mr. Ashraf and Mr. Galor have not decided whether to respond to the Harvard critique. They say they can, point by point, but that ultimately, the American Economic Review’s decision to publish the paper as its lead study validates their work. They want to push forward on their research. They’ve just released a draft study that probes deeper into the connections between genetic diversity and cultural fragmentation, Mr. Ashraf said.

“There is much more to learn from this data,” he said. “It is certainly not the final word.”

New Research Shows Complexity of Global Warming (Science Daily)

Jan. 30, 2013 — Global warming from greenhouse gases affects rainfall patterns in the world differently than that from solar heating, according to a study by an international team of scientists in the January 31 issue of Nature. Using computer model simulations, the scientists, led by Jian Liu (Chinese Academy of Sciences) and Bin Wang (International Pacific Research Center, University of Hawaii at Manoa), showed that global rainfall has increased less over the present-day warming period than during the Medieval Warm Period, even though temperatures are higher today than they were then.

Clouds over the Pacific Ocean. (Credit: Shang-Ping Xie)

The team examined global precipitation changes over the last millennium and future projections to the end of the 21st century, comparing natural changes from solar heating and volcanism with changes from human-made greenhouse gas emissions. Using an atmosphere-ocean coupled climate model that realistically simulates both past and present-day climate conditions, the scientists found that, for every degree rise in global temperature, global rainfall has increased about 40% less during the warming since the Industrial Revolution than it did during past warming phases of Earth.

Why does warming from solar heating and from greenhouse gases have such different effects on global precipitation?

“Our climate model simulations show that this difference results from different sea surface temperature patterns. When warming is due to increased greenhouse gases, the gradient of sea surface temperature (SST) across the tropical Pacific weakens, but when it is due to increased solar radiation, the gradient increases. For the same average global surface temperature increase, the weaker SST gradient produces less rainfall, especially over tropical land,” says co-author Bin Wang, professor of meteorology.

But why does warming from greenhouse gases and from solar heating affect the tropical Pacific SST gradient differently?

“Adding long-wave absorbers, that is heat-trapping greenhouse gases, to the atmosphere decreases the usual temperature difference between the surface and the top of the atmosphere, making the atmosphere more stable,” explains lead-author Jian Liu. “The increased atmospheric stability weakens the trade winds, resulting in stronger warming in the eastern than the western Pacific, thus reducing the usual SST gradient — a situation similar to El Niño.”

Solar radiation, on the other hand, heats Earth’s surface, increasing the usual temperature difference between the surface and the top of the atmosphere without weakening the trade winds. The result is that heating warms the western Pacific, while the eastern Pacific remains cool from the usual ocean upwelling.

“While during past global warming from solar heating the steeper tropical east-west SST pattern has won out, we suggest that with future warming from greenhouse gases, the weaker gradient and smaller increase in yearly rainfall rate will win out,” concludes Wang.

Journal Reference:

  1. Jian Liu, Bin Wang, Mark A. Cane, So-Young Yim, June-Yi Lee. Divergent global precipitation changes induced by natural versus anthropogenic forcing. Nature, 2013; 493 (7434): 656. DOI: 10.1038/nature11784

Understanding the Historical Probability of Drought (Science Daily)

Jan. 30, 2013 — Droughts can severely limit crop growth, causing yearly losses of around $8 billion in the United States. But it may be possible to minimize those losses if farmers can synchronize the growth of crops with periods of time when drought is less likely to occur. Researchers from Oklahoma State University are working to create a reliable “calendar” of seasonal drought patterns that could help farmers optimize crop production by avoiding days prone to drought.

Historical probabilities of drought, which can point to days on which crop water stress is likely, are often calculated using atmospheric data such as rainfall and temperatures. However, those measurements do not consider the soil properties of individual fields or sites.

“Atmospheric variables do not take into account soil moisture,” explains Tyson Ochsner, lead author of the study. “And soil moisture can provide an important buffer against short-term precipitation deficits.”

In an attempt to more accurately assess drought probabilities, Ochsner and co-authors Guilherme Torres and Romulo Lollato used 15 years of soil moisture measurements from eight locations across Oklahoma to calculate soil water deficits and determine the days on which dry conditions would be likely. Results of the study, which began as a student-led class research project, were published online Jan. 29 in Agronomy Journal. The researchers found that soil water deficits identified periods during which plants were likely to be water stressed more successfully than traditional atmospheric measurements did when used as proposed by previous research.

Soil water deficit is defined in the study as the difference between the capacity of the soil to hold water and the actual water content calculated from long-term soil moisture measurements. Researchers then compared that soil water deficit to a threshold at which plants would experience water stress and, therefore, drought conditions. The threshold was determined for each study site since available water, a factor used to calculate threshold, is affected by specific soil characteristics.

“The soil water contents differ across sites and depths depending on the sand, silt, and clay contents,” says Ochsner. “Readily available water is a site- and depth-specific parameter.”

Upon calculating soil water deficits and stress thresholds for the study sites, the research team compared their assessment of drought probability to assessments made using atmospheric data. They found that a previously developed method using atmospheric data often underestimated drought conditions, while soil water deficit measurements assessed drought probabilities more accurately and consistently. The researchers therefore suggest that soil water data be used whenever available to build a picture of the days on which drought conditions are likely.
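The paper’s exact formulas are not reproduced here, but the bookkeeping described above is straightforward. A hedged sketch of the idea, with invented capacity, threshold and soil-moisture readings rather than the Oklahoma data:

# Sketch of the soil-water-deficit bookkeeping described above
# (all numbers invented; not the Oklahoma data or the paper's exact method).
# Deficit = water-holding capacity - measured water content; a day counts as
# drought-prone when the deficit exceeds a site-specific stress threshold,
# and the historical probability for that calendar day is the fraction of
# years in which that happened.

capacity_mm  = 120.0   # plant-available water capacity of the profile (assumed)
threshold_mm = 70.0    # deficit at which plants are taken to be stressed (assumed)

# measured soil water content (mm) on the same calendar day in different years
water_content_by_year = [60.0, 45.0, 80.0, 38.0, 55.0, 72.0, 40.0, 65.0]

deficits = [capacity_mm - w for w in water_content_by_year]
stressed_years = sum(d > threshold_mm for d in deficits)
probability = stressed_years / len(deficits)

print("deficits (mm):", [round(d, 1) for d in deficits])
print(f"historical drought probability for this day: {probability:.0%}")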

If soil measurements are not available, however, the researchers recommend that the calculations used for atmospheric assessments be reconfigured to be more accurate. The authors made two such changes in their study. First, they decreased the threshold at which plants were deemed stressed, thus allowing a smaller deficit to be considered a drought condition. They also increased the number of days over which atmospheric deficits were summed. Those two changes provided estimates that better agreed with soil water deficit probabilities.

Further research is needed, says Ochsner, to optimize atmospheric calculations and provide accurate estimations for those without soil water data. “We are in a time of rapid increase in the availability of soil moisture data, but many users will still have to rely on the atmospheric water deficit method for locations where soil moisture data are insufficient.”

Regardless of the method used, Ochsner and his team hope that their research will help farmers better plan the cultivation of their crops and avoid costly losses to drought conditions.

Journal Reference:

  1. Guilherme M. Torres, Romulo P. Lollato, Tyson E. Ochsner. Comparison of Drought Probability Assessments Based on Atmospheric Water Deficit and Soil Water Deficit. Agronomy Journal, 2013; DOI: 10.2134/agronj2012.0295

The Storm That Never Was: Why Meteorologists Are Often Wrong (Science Daily)

Jan. 24, 2013 — Have you ever woken up to a sunny forecast only to get soaked on your way to the office? On days like that it’s easy to blame the weatherman.

BYU engineering professor Julie Crockett studies waves in the ocean and the atmosphere. (Credit: Image courtesy of Brigham Young University)

But BYU mechanical engineering professor Julie Crockett doesn’t get mad at meteorologists. She understands something that very few people know: it’s not the weatherman’s fault he’s wrong so often.

According to Crockett, forecasters make mistakes because the models they use for predicting weather can’t accurately track highly influential elements called internal waves.

Atmospheric internal waves are waves that propagate between layers of low-density and high-density air. Although hard to describe, almost everyone has seen or felt these waves. Cloud patterns made up of repeating lines are the result of internal waves, and airplane turbulence happens when internal waves run into each other and break.

“Internal waves are difficult to capture and quantify as they propagate, deposit energy and move energy around,” Crockett said. “When forecasters don’t account for them on a small scale, then the large scale picture becomes a little bit off, and sometimes being just a bit off is enough to be completely wrong about the weather.”

One such example may have happened in 2011, when Utah meteorologists predicted an enormous winter storm prior to Thanksgiving. Schools across the state cancelled classes and sent people home early to avoid the storm. Though it’s impossible to say for sure, internal waves may have been driving stronger circulations, breaking up the storm and causing it to never materialize.

“When internal waves deposit their energy it can force the wind faster or slow the wind down such that it can enhance large scale weather patterns or extreme kinds of events,” Crockett said. “We are trying to get a better feel for where that wave energy is going.”

Internal waves also exist in oceans between layers of low-density and high-density water. These waves, often visible from space, affect the general circulation of the ocean and phenomena like the Gulf Stream and Jet Stream.

Both oceanic and atmospheric internal waves carry a significant amount of energy that can alter climates.

Crockett’s latest wave research, which appears in a recent issue of the International Journal of Geophysics, details how the relationship between large-scale and small-scale internal waves influences the altitude where wave energy is ultimately deposited.

To track wave energy, Crockett and her students generate waves in a tank in her lab and study every aspect of their behavior. She and her colleagues are trying to pinpoint exactly how climate changes affect waves and how those waves then affect weather.

Based on this work, Crockett can then develop a better linear wave model, using both 2D and 3D modeling, that will allow forecasters to improve their predictions.

“Understanding how waves move energy around is very important to large scale climate events,” Crockett said. “Our research is very important to this problem, but it hasn’t solved it completely.”

Journal Reference:

  1. B. Casaday, J. Crockett. Investigation of High-Frequency Internal Wave Interactions with an Enveloped Inertia Wave. International Journal of Geophysics, 2012; 2012: 1. DOI: 10.1155/2012/863792

Physicist Happens Upon Rain Data Breakthrough (Science Daily)

John Lane looks over data recorded from his laser system as he refines his process and formula to calibrate measurements of raindrops. (Credit: NASA/Jim Grossmann)

Dec. 3, 2012 — A physicist and researcher who set out to develop a formula to protect Apollo sites on the moon from rocket exhaust may have happened upon a way to improve weather forecasting on Earth.

Working in his backyard during rain showers and storms, John Lane, a physicist at NASA’s Kennedy Space Center in Florida, found that the laser and reflector he was developing to track lunar dust also could determine accurately the size of raindrops, something weather radar and other meteorological systems estimate, but don’t measure.

The special quantity measured by the laser system is called the “second moment of the size distribution,” which yields the average cross-sectional area of the raindrops passing through the laser beam.
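For readers who want the arithmetic: if the drop diameters crossing the beam are D_i, the n-th moment of the size distribution can be taken as the average of D_i^n, and since a drop of diameter D presents a cross-section of pi·D²/4, the second moment fixes the mean cross-sectional area. A small numerical sketch with invented diameters (not Lane’s measurements):

# The "second moment of the size distribution": for drop diameters D_i,
# M_n = mean(D_i**n), and because a drop of diameter D presents a
# cross-section of pi*D**2/4 to the beam, the mean cross-sectional area is
# (pi/4) * M_2.  Diameters below are invented for illustration.
import math

diameters_mm = [0.6, 1.1, 1.8, 2.3, 0.9, 1.5]   # hypothetical raindrop diameters

m2 = sum(d ** 2 for d in diameters_mm) / len(diameters_mm)   # second moment, mm^2
mean_cross_section = math.pi / 4.0 * m2                      # mm^2 per drop

print(f"second moment M2 = {m2:.2f} mm^2")
print(f"mean cross-sectional area = {mean_cross_section:.2f} mm^2")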

“It’s not often that you’re studying lunar dust and it ends up producing benefits in weather forecasting,” said Phil Metzger, a physicist who leads the Granular Mechanics and Regolith Operations Lab, part of the Surface Systems Office at Kennedy.

Lane said the additional piece of information would be useful in filling out the complex computer calculations used to determine the current conditions and forecast the weather.

“We may be able to refine (computer weather) models to make them more accurate,” Lane said. “Weather radar data analysis makes assumptions about raindrop size, so I think this could improve the overall drop size distribution estimates.”

The breakthrough came because Metzger and Lane were looking for a way to calibrate a laser sensor to pick up the fine particles of blowing lunar dust and soil. It turns out that rain is a good stand-in for flying lunar soil.

“I was pretty skeptical in the beginning that the numbers would come out anywhere close,” Lane said. “Anytime you do something new, it’s a risk that you’re just wasting your time.”

The genesis of the research was the need to find out how much damage would be done by robotic landers getting too close to the six places on the moon where Apollo astronauts landed, lived and worked.

NASA fears that dust and soil particles thrown up by the rocket exhaust of a lander will scour and perhaps puncture the metal skin of the lunar module descent stages and experiment hardware left behind by the astronauts from 1969 to 1972.

“It’s like sandblasting, if you have something coming down like a rocket engine, and it lifts up this dust, there’s not air, so it just keeps going fast,” Lane said. “Some of the stuff can actually reach escape velocity and go into orbit.”

Such impacts to those materials could ruin their scientific value to researchers on Earth who want to know what happens to human-made materials left on another world for more than 40 years.

“The Apollo sites have value scientifically and from an engineering perspective because they are a record of how these materials on the moon have interacted with the solar system over 40 years,” Metzger said. “They are witness plates to the environment.”

There also are numerous bags of waste from the astronauts lying up there that biologists want to examine simply to see if living organisms can survive on the moon for almost five decades where there is no air and there is a constant bombardment of cosmic radiation.

“If anybody goes back and sprays stuff on the bags or touches the bags, they ruin the experiment,” Metzger said. “It’s not just the scientific and engineering value. They believe the Apollo sites are the most important archaeological sites in the human sphere, more important than the pyramids because it’s the first place humans stepped off the planet. And from a national point of view, these are symbols of our country and we don’t want them to be damaged by wanton ransacking.”

Current thinking anticipates placing a laser sensor on the bottom of one of the landers taking part in the Google X-Prize competition. The sensor should be able to pick up the blowing dust and soil and give researchers a clear set of results so they can formulate restrictions for other landers, such as how far away from the Apollo sites new landers can touch down.

As research continues into the laser sensor, Lane expects the work to continue on the weather forecasting side of the equation, too. Lane already presented some of his findings at a meteorological conference and is working on a research paper to detail the work. “This is one of those topics that span a lot of areas of science,” Lane said.

When data prediction is a game, the experts lose out (New Scientist)

Specialist Knowledge Is Useless and Unhelpful

By Peter Aldhous | Posted Saturday, Dec. 8, 2012, at 7:45 AM ET

Airplanes at an airport. iStockphoto/Thinkstock.

Jeremy Howard founded email company FastMail and the Optimal Decisions Group, which helps insurance companies set premiums. He is now president and chief scientist of Kaggle, which has turned data prediction into sport.

Peter Aldhous: Kaggle has been described as “an online marketplace for brains.” Tell me about it.
Jeremy Howard: It’s a website that hosts competitions for data prediction. We’ve run a whole bunch of amazing competitions. One asked competitors to develop algorithms to mark students’ essays. One that finished recently challenged competitors to develop a gesture-learning system for the Microsoft Kinect. The idea was to show the controller a gesture just once, and the algorithm would recognize it in future. Another competition predicted the biological properties of small molecules being screened as potential drugs.

PA: How exactly do these competitions work?
JH: They rely on techniques like data mining and machine learning to predict future trends from current data. Companies, governments, and researchers present data sets and problems, and offer prize money for the best solutions. Anyone can enter: We have nearly 64,000 registered users. We’ve discovered that creative data scientists can solve problems in every field better than experts in those fields can.

PA: These competitions deal with very specialized subjects. Do experts enter?
JH: Oh yes. Every time a new competition comes out, the experts say: “We’ve built a whole industry around this. We know the answers.” And after a couple of weeks, they get blown out of the water.

PA: So who does well in the competitions?
JH: People who can just see what the data is actually telling them without being distracted by industry assumptions or specialist knowledge. Jason Tigg, who runs a pretty big hedge fund in London, has done well again and again. So has Xavier Conort, who runs a predictive analytics consultancy in Singapore.

PA: You were once on the leader board yourself. How did you get involved?
JH: It was a long and strange path. I majored in philosophy in Australia, worked in management consultancy for eight years, and then in 1999 I founded two start-ups—one an email company, the other helping insurers optimize risks and profits. By 2010, I had sold them both. I started learning Chinese and building amplifiers and speakers because I hadn’t made anything with my hands. I travelled. But it wasn’t intellectually challenging enough. Then, at a meeting of statistics users in Melbourne, somebody told me about Kaggle. I thought: “That looks intimidating and really interesting.”

PA: How did your first competition go?
JH: Setting my expectations low, my goal was to not come last. But I actually won it. It was on forecasting tourist arrivals and departures at different destinations. By the time I went to the next statistics meeting I had won two out of the three competitions I entered. Anthony Goldbloom, the founder of Kaggle, was there. He said: “You’re not Jeremy Howard, are you? We’ve never had anybody win two out of three competitions before.”

PA: How did you become Kaggle’s chief scientist?
JH: I offered to become an angel investor. But I just couldn’t keep my hands off the business. I told Anthony that the site was running slowly and rewrote all the code from scratch. Then Anthony and I spent three months in America last year, trying to raise money. That was where things got really serious, because we raised $11 million. I had to move to San Francisco and commit to doing this full-time.

PA: Do you still compete?
JH: I am allowed to compete, but I can’t win prizes. In practice, I’ve been too busy.

PA: What explains Kaggle’s success in solving problems in predictive analytics?
JH: The competitive aspect is important. The more people who take part in these competitions, the better they get at predictive modeling. There is no other place in the world I’m aware of, outside professional sport, where you get such raw, harsh, unfettered feedback about how well you’re doing. It’s clear what’s working and what’s not. It’s a kind of evolutionary process, accelerating the survival of the fittest, and we’re watching it happen right in front of us. More and more, our top competitors are also teaming up with each other.

PA: Which statistical methods work best?
JH: One that crops up again and again is called the random forest. This takes multiple small random samples of the data and makes a “decision tree” for each one, which branches according to the questions asked about the data. Each tree, by itself, has little predictive power. But take an “average” of all of them and you end up with a powerful model. It’s a totally black-box, brainless approach. You don’t have to think—it just works.
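For readers who want to see the “average of many small trees” idea in running code, here is a generic scikit-learn sketch on synthetic data; it is not any particular Kaggle entry, just the textbook version of the technique Howard describes:

# Minimal illustration of the "average many small decision trees" idea,
# using scikit-learn on a synthetic dataset (generic example, not any
# particular Kaggle solution).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           random_state=0)

stump  = DecisionTreeClassifier(max_depth=3, random_state=0)      # one shallow tree
forest = RandomForestClassifier(n_estimators=300, max_depth=3,
                                random_state=0)                   # many such trees, averaged

print("single shallow tree:", cross_val_score(stump, X, y, cv=5).mean())
print("random forest:      ", cross_val_score(forest, X, y, cv=5).mean())

# The forest also reports which inputs mattered most -- the kind of feedback
# about "what's important" that Howard describes.
forest.fit(X, y)
top = sorted(enumerate(forest.feature_importances_), key=lambda t: -t[1])[:5]
print("top features by importance:", top)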

PA: What separates the winners from the also-rans?
JH: The difference between the good participants and the bad is the information they feed to the algorithms. You have to decide what to abstract from the data. Winners of Kaggle competitions tend to be curious and creative people. They come up with a dozen totally new ways to think about the problem. The nice thing about algorithms like the random forest is that you can chuck as many crazy ideas at them as you like, and the algorithms figure out which ones work.

PA: That sounds very different from the traditional approach to building predictive models. How have experts reacted?
JH: The messages are uncomfortable for a lot of people. It’s controversial because we’re telling them: “Your decades of specialist knowledge are not only useless, they’re actually unhelpful; your sophisticated techniques are worse than generic methods.” It’s difficult for people who are used to that old type of science. They spend so much time discussing whether an idea makes sense. They check the visualizations and noodle over it. That is all actively unhelpful.

PA: Is there any role for expert knowledge?
JH: Some kinds of experts are required early on, for when you’re trying to work out what problem you’re trying to solve. The expertise you need is strategy expertise in answering these questions.

PA: Can you see any downsides to the data-driven, black-box approach that dominates on Kaggle?
JH: Some people take the view that you don’t end up with a richer understanding of the problem. But that’s just not true: The algorithms tell you what’s important and what’s not. You might ask why those things are important, but I think that’s less interesting. You end up with a predictive model that works. There’s not too much to argue about there.

Mathematical Counseling for All Who Wonder Why Their Relationship Is Like a Sine Wave (Science Daily)

ScienceDaily (Nov. 15, 2012) — Neuroinformaticians from Radboud University Nijmegen provide a mathematical model for efficient communication in relationships. Love affair dynamics can look like a sine wave: a smooth repetitive oscillation of highs and lows. For some couples these waves grow out of control, leading to breakup, while for others they smooth into a state of peace and quiet. Natalia Bielczyk and her colleagues show that this relationship sine wave depends on the time partners take to form their emotional reactions towards each other.

The publication in Applied Mathematics and Computation is now available online.

An example of a modeled relationship, in this case between Romeo (solid lines) and Juliet (dashed lines). The tau (τ) above the individual figures indicates the delay in reactivity. Delays that are too short (<0.83) cause instability, just like delays that are too long (>2.364). Delays in the range of 0.83–2.364 cause stability in Romeo and Juliet’s relationship. (Credit: Image courtesy of Radboud University Nijmegen)

In 1988, Steven Strogatz was the first to describe romantic relationships with mathematical dynamical systems. He constructed a two-dimensional model describing two hypothetical partners that interact emotionally. He used a well known example: the changes of Romeo’s and Juliet’s love (and hate) over time. His model became famous and inspired others to analyze (fictional) relationship case studies like Jack and Rose in the Titanic movie. However, the Strogatz model does not include delays in the partner’s responses to one another. Therefore it is only a good start for fruitful studies on human emotions and relationships.

That is why Natalia Bielczyk adjusted Strogatz’s model into a more lifelike one by considering the time necessary for processing and forming the complex emotions in relationships. Reactivity in the relationship model is based on four parameters: both partners have a personal history (their ‘past’) and a certain reactivity to their partner and his or her history. Depending on these parameters, different classes of relationships can be found: some seem doomed to break regardless of the partners’ promptness with one another, while others are solid enough to always be stable. In the calculated models, stability occurs when both partners reach a stable level of satisfaction and the sine wave disappears. The paper concludes that, for a broad class of relationships, delays in reactivity can bring stability to couples that are originally unstable.

These results are fairly intuitive: responses that are too prompt or too delayed cause trouble. Below a certain value, delays caused instability, and above this value they caused stability, showing that some minimum level of sloth can be beneficial for a relationship. The fact that overly fast emotional reactivity can lead to destabilization shows that mirroring each other’s moods is not enough for a stable relationship: a certain time range is necessary for compound emotions to form. In summary, the publication offers mathematical justification for intuitive phenomena in social psychology. Working on good communication, studying each other’s emotions and working out the right timing can improve your relationship, even without trying to change your partner’s traits (which is harder and takes more time).
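The press release does not reproduce Bielczyk and colleagues’ equations or parameter values, so the following is only a sketch of how a Strogatz-style two-person model with a reaction delay τ can be integrated numerically (a crude Euler scheme with invented coefficients). Whether a given delay stabilizes or destabilizes the pair depends on those coefficients; the stability window quoted in the figure caption (0.83–2.364) belongs to the paper’s own parameter choice.

# Sketch of a Strogatz-style two-person model with delayed reactions,
#   dR/dt = a*R(t) + b*J(t - tau),   dJ/dt = c*R(t - tau) + d*J(t),
# integrated with a simple Euler scheme.  Coefficients and tau values are
# invented for illustration; they are not the parameters of Bielczyk et al.

def simulate(tau, a=0.1, b=1.0, c=-1.0, d=0.1, dt=0.01, t_end=80.0):
    n_delay = max(1, int(round(tau / dt)))
    R = [1.0] * (n_delay + 1)    # constant history of feelings before t = 0
    J = [0.5] * (n_delay + 1)
    for _ in range(int(t_end / dt)):
        R_del = R[-1 - n_delay]  # R(t - tau)
        J_del = J[-1 - n_delay]  # J(t - tau)
        R.append(R[-1] + dt * (a * R[-1] + b * J_del))
        J.append(J[-1] + dt * (c * R_del + d * J[-1]))
    return R, J

for tau in (0.2, 1.0, 2.0, 4.0):
    R, _ = simulate(tau)
    early = max(abs(x) for x in R[:2000])     # amplitude near the start
    late  = max(abs(x) for x in R[-2000:])    # amplitude near the end
    trend = "decaying" if late < early else "growing"
    print(f"tau = {tau:4.1f}: early amplitude {early:6.2f}, late amplitude {late:10.2f} -> {trend}")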

Journal Reference:

  1. Natalia Bielczyk, Marek Bodnar, Urszula Foryś. Delay can stabilize: Love affairs dynamics. Applied Mathematics and Computation, 2012; DOI: 10.1016/j.amc.2012.10.028

Nate Silver’s ‘Signal and the Noise’ Examines Predictions (N.Y.Times)

Mining Truth From Data Babel

By LEONARD MLODINOW

Published: October 23, 2012

A friend who was a pioneer in the computer games business used to marvel at how her company handled its projections of costs and revenue. “We performed exhaustive calculations, analyses and revisions,” she would tell me. “And we somehow always ended with numbers that justified our hiring the people and producing the games we had wanted to all along.” Those forecasts rarely proved accurate, but as long as the games were reasonably profitable, she said, you’d keep your job and get to create more unfounded projections for the next endeavor.

Alessandra Montalto/The New York Times

THE SIGNAL AND THE NOISE

Why So Many Predictions Fail — but Some Don’t

By Nate Silver

Illustrated. 534 pages. The Penguin Press. $27.95.

This doesn’t seem like any way to run a business — or a country. Yet, as Nate Silver, a blogger for The New York Times, points out in his book, “The Signal and the Noise,” studies show that from the stock pickers on Wall Street to the political pundits on our news channels, predictions offered with great certainty and voluminous justification prove, when evaluated later, to have had no predictive power at all. They are the equivalent of monkeys tossing darts.

As one who has both taught and written about such phenomena, I have long felt like leaning out my window to shout, “Network”-style, “I’m as mad as hell and I’m not going to take this anymore!” Judging by Mr. Silver’s lively prose — from energetic to outraged — I think he feels the same way.

Nate Silver. Robert Gauldin

The book’s title comes from electrical engineering, where a signal is something that conveys information, while noise is an unwanted, unmeaningful or random addition to the signal. Problems arise when the noise is as strong as, or stronger than, the signal. How do you recognize which is which?

Today the data we have available to make predictions has grown almost unimaginably large: it represents 2.5 quintillion bytes of data each day, Mr. Silver tells us, enough zeros and ones to fill a billion books of 10 million pages each. Our ability to tease the signal from the noise has not grown nearly as fast. As a result, we have plenty of data but lack the ability to extract truth from it and to build models that accurately predict the future that data portends.

Mr. Silver, just 34, is an expert at finding signal in noise. He is modest about his accomplishments, but he achieved a high profile when he created a brilliant and innovative computer program for forecasting the performance of baseball players, and later a system for predicting the outcome of political races. His political work had such success in the 2008 presidential election that it brought him extensive media coverage as well as a home at The Times for his blog, FiveThirtyEight.com, though some conservatives have been critical of his methods during this election cycle.

His knack wasn’t lost on book publishers, who, as he puts it, approached him “to capitalize on the success of books such as ‘Moneyball’ and ‘Freakonomics.’ ” Publishers are notorious for pronouncing that Book A will sell just a thousand copies, while Book B will sell a million, and then proving to have gotten everything right except for which was A and which was B. In this case, to judge by early sales, they forecast Mr. Silver’s potential correctly, and to judge by the friendly tone of the book, it couldn’t have happened to a nicer guy.

Healthily peppered throughout the book are answers to its subtitle, “Why So Many Predictions Fail — but Some Don’t”: we are fooled into thinking that random patterns are meaningful; we build models that are far more sensitive to our initial assumptions than we realize; we make approximations that are cruder than we realize; we focus on what is easiest to measure rather than on what is important; we are overconfident; we build models that rely too heavily on statistics, without enough theoretical understanding; and we unconsciously let biases based on expectation or self-interest affect our analysis.

Regarding why models do succeed, Mr. Silver provides just bits of advice (other than to avoid the failings listed above). Mostly he stresses an approach to statistics named after the British mathematician Thomas Bayes, who created a theory of how to adjust a subjective degree of belief rationally when new evidence presents itself.

Suppose that after reading a review, you initially believe that there is a 75 percent chance that you will like a certain book. Then, in a bookstore, you read the book’s first 10 pages. What, then, are the chances that you will like the book, given the additional information that you liked (or did not like) what you read? Bayes’s theory tells you how to update your initial guess in light of that new data. This may sound like an exercise that only a character in “The Big Bang Theory” would engage in, but neuroscientists have found that, on an unconscious level, our brains do naturally use Bayesian prediction.
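Worked through with Bayes’ rule, the book example looks like this; the prior is the 75 percent from the review, while the two likelihoods are invented purely for illustration:

# Bayes' rule applied to the book example.  The prior (75%) comes from the
# review; the two likelihoods below are invented for illustration only.

prior_like = 0.75                   # P(will like the book), before sampling it
p_enjoy_given_like    = 0.90        # P(enjoy first 10 pages | will like book)  - assumed
p_enjoy_given_dislike = 0.30        # P(enjoy first 10 pages | won't like book) - assumed

# P(enjoy first 10 pages), by the law of total probability
p_enjoy = p_enjoy_given_like * prior_like + p_enjoy_given_dislike * (1 - prior_like)

# Posterior after actually enjoying the first 10 pages
posterior_like = p_enjoy_given_like * prior_like / p_enjoy
print(f"updated chance of liking the book: {posterior_like:.0%}")   # ~90% with these numbers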

Mr. Silver illustrates his dos and don’ts through a series of interesting essays that examine how predictions are made in fields including chess, baseball, weather forecasting, earthquake analysis and politics. A chapter on poker reveals a strange world in which a small number of inept but big-spending “fish” feed a much larger community of highly skilled sharks competing to make their living off the fish; a chapter on global warming is one of the most objective and honest analyses I’ve seen. (Mr. Silver concludes that the greenhouse effect almost certainly exists and will be exacerbated by man-made CO2 emissions.)

So with all this going for the book, as my mother would say, what’s not to like?

The main problem emerges immediately, in the introduction, where I found my innately Bayesian brain wondering: Where is this going? The same question came to mind in later essays: I wondered how what I was reading related to the larger thesis. At times Mr. Silver reports in depth on a topic of lesser importance, or he skates over an important topic only to return to it in a later chapter, where it is again discussed only briefly.

As a result, I found myself losing the signal for the noise. Fortunately, you will not be tested on whether you have properly grasped the signal, and even the noise makes for a good read.

Leonard Mlodinow is the author of “Subliminal: How Your Unconscious Mind Rules Your Behavior” and “The Drunkard’s Walk: How Randomness Rules Our Lives.”

UN Wants to Ensure Global Temperature Does Not Rise More Than 2°C (Globo Natureza)

JC e-mail 4582, September 13, 2012

The United Nations (UN) climate negotiations must keep pushing for more ambitious action to ensure that global warming does not exceed 2 degrees, a European Union negotiator said this week, a month after the US was accused of backsliding on the target.

Nearly 200 countries agreed in 2010 to limit the rise in temperatures to less than 2 degrees Celsius above the pre-industrial era, in order to avoid dangerous impacts of climate change such as floods, droughts and rising sea levels.

To slow the pace of global warming, the UN climate talks in South Africa agreed to develop a legally binding climate agreement by 2015, which could enter into force by 2020 at the latest.

Experts warn, however, that the chance of limiting the rise in global temperature to less than 2 degrees is shrinking as emissions of greenhouse gases from the burning of fossil fuels continue to grow.

“It is very clear that we must press, in the negotiations, the point that the 2-degree target is not enough. The reason we are not doing enough is the political situation in some parts of the world,” Peter Betts, Britain’s director for international climate change and a senior EU negotiator, told a climate change group in the British Parliament.

Last week, scientists and diplomats met in Bangkok for a session of the UN Framework Convention on Climate Change (UNFCCC), the last before the annual conference to be held between November and December in Doha, Qatar.

Flexibility on targets – Last month, the US was criticized for saying it supported a more flexible approach to a new climate agreement – one that would not necessarily keep the 2-degree limit – but later added that flexibility would give the world a better chance of reaching a new deal.

Several countries, including some of the most vulnerable to climate change, say the 2-degree limit is not enough and that a 1.5-degree limit would be safer. Emissions of the main greenhouse gas, carbon dioxide, rose 3.1% in 2011, a record high. China was the world’s largest emitter, followed by the US.

Negotiations to create a new global climate agreement along the lines of Kyoto have already begun. The last climate conference approved a series of measures establishing targets for developed and developing countries.

The document, called the “Durban Platform for Enhanced Action,” lays out a series of measures to be implemented but, in practice, contains no urgent, effective measures to contain rising pollution levels across the planet over the next eight years.

Obligations for everyone in the future – It provides for the creation of a global climate agreement that will encompass all countries belonging to the UNFCCC and will replace the Kyoto Protocol. The countries will draw up “a protocol, another legal instrument or an agreed outcome with legal force” to combat climate change.

This means that emission reduction targets will be set for all nations, including the United States and China, which would not accept any kind of negotiation if one of the two were left out of the reduction obligations.

The outline of this new plan will begin to take shape at the next UN negotiations, including COP 18, to be held in 2012 in Qatar. The document states that a working group will be created and must conclude the new plan by 2015.

The pollution-control measures will only have to be implemented by countries from 2020, the deadline set in the Durban Platform, and should take into account the recommendations of the report of the Intergovernmental Panel on Climate Change (IPCC), to be released between 2014 and 2015.

In 2007, the panel released a document pointing to an average global temperature increase of between 1.8°C and 4.0°C by 2100, with a possible rise of up to 6.4°C if population and the economy keep growing rapidly and intensive consumption of fossil fuels is maintained.

The most reliable estimate, however, is an average increase of 3°C, assuming that carbon dioxide levels stabilize at 45% above current levels. The report also states, with more than 90% confidence, that most of the temperature increase observed over the last 50 years was caused by human activities.