Tag archive: Modelagem

How Mathematicians Used A Pump-Action Shotgun to Estimate Pi (The Physics arXiv Blog)


If you’ve ever wondered how to estimate pi using a Mossberg 500 pump-action shotgun, a sheet of aluminium foil and some clever mathematics, look no further

Imagine the following scenario. The end of civilisation has occurred, zombies have taken over the Earth and all access to modern technology has ended. The few survivors suddenly need to know the value of π and, being a mathematician, they turn to you. What do you do?

If ever you find yourself in this situation, you’ll be glad of the work of Vincent Dumoulin and Félix Thouin at the Université de Montréal in Canada. These guys have worked out how to calculate an approximate value of π using the distribution of pellets from a Mossberg 500 pump-action shotgun, which they assume would be widely available in the event of a zombie apocalypse.

The principle is straightforward. Imagine a square with sides of length 1, containing an arc drawn between two opposite corners to form a quarter circle. The area of the square is 1, while the area of the quarter circle is π/4.

Next, sprinkle sand or rice over the square so that it is covered with a random distribution of grains. Then count the number of grains inside the quarter circle and the total number that cover the entire square.

The ratio of these two numbers is an estimate of the ratio between the area of the quarter circle and the square, in other words π/4.

So multiplying this ratio by 4 gives you π, or at least an estimate of it. And that’s it.
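As a concrete illustration, here is a minimal sketch of the grains-on-a-square idea in Python. The point count and random seed are arbitrary choices for the example, not values taken from the paper.

```python
# Sketch: estimate pi by scattering uniform random points ("grains") in the unit square
# and counting how many land inside the quarter circle of radius 1.
import random

def estimate_pi(n_points=30_000, seed=42):
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_points):
        x, y = rng.random(), rng.random()   # a grain landing somewhere in the square
        if x * x + y * y <= 1.0:            # does it fall inside the quarter circle?
            inside += 1
    return 4.0 * inside / n_points          # the ratio of counts estimates pi/4

print(estimate_pi())   # typically lands within a fraction of a percent of 3.14159...
```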

This technique is known as a Monte Carlo approximation (named after the casino where the uncle of the mathematician Stanisław Ulam, one of the method's developers, used to gamble). And it is hugely useful in all kinds of simulations.

Of course, the accuracy of the technique depends on the distribution of the grains on the square. If they are truly random, then a mere 30,000 grains can give you an estimate of π which is within 0.07 per cent of the actual value.

Dumoulin and Thouin’s idea is to use the distribution of shotgun pellets rather than sand or rice (which would presumably be in short supply in the post-apocalyptic world). So these guys set up an experiment consisting of a 28-inch barrel Mossberg 500 pump-action shotgun aimed at a sheet of aluminium foil some 20 metres away.

They loaded the gun with cartridges composed of 3 dram equivalent of powder and 32 grams of #8 lead pellets. When fired from the gun, these pellets have an average muzzle velocity of around 366 metres per second.

Dumoulin and Thouin then fired 200 shots at the aluminium foil, peppering it with 30,857 holes. Finally, they used the position of these holes in the same way as the grains of sand or rice in the earlier example, to calculate the value of π.

They immediately have a problem, however. The distribution of pellets is influenced by all kinds of factors, such as the height of the gun, the distance to the target, wind direction and so on. So this distribution is not random.

To get around this, they are able to fall back on a technique known as importance sampling. This is a trick that allows mathematicians to estimate the properties of one type of distribution while using samples generated by a different distribution.

Of their roughly 30,000 pellet holes, they chose 10,000 at random to perform this estimation trick. They then used the remaining 20,000 pellet holes to compute an estimate of π, safe in the knowledge that importance sampling allows the calculation to proceed as if the distribution of pellets had been random.
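The reweighting step can be sketched in a few lines. This is not the authors' procedure (they first had to estimate the pellet distribution from one subset of holes); it only illustrates the identity importance sampling relies on. The samples come from a known non-uniform density q over the square, here a hypothetical truncated-normal "aim pattern," and each sample is weighted by 1/q so that the average behaves as if the samples had been uniform.

```python
# Sketch of importance sampling for the quarter-circle estimate, assuming a known
# non-uniform sampling density q (a truncated normal chosen purely for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20_000
mu, sigma = 0.35, 0.25                        # hypothetical aim point and spread
a, b = (0.0 - mu) / sigma, (1.0 - mu) / sigma
aim = stats.truncnorm(a, b, loc=mu, scale=sigma)

x = aim.rvs(n, random_state=rng)              # non-uniform "pellet" positions in [0, 1]
y = aim.rvs(n, random_state=rng)
q = aim.pdf(x) * aim.pdf(y)                   # density the samples actually came from

inside = (x**2 + y**2 <= 1.0).astype(float)   # indicator: hole inside the quarter circle
pi_estimate = 4.0 * np.mean(inside / q)       # reweight by uniform density (1) over q
print(pi_estimate)
```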

The result? Their value of π is 3.131, which is just 0.33 per cent off the true value. “We feel confident that ballistic Monte Carlo methods constitute reliable ways of computing mathematical constants should a tremendous civilization collapse occur,” they conclude.

Quite! Other methods are also available.

Ref: arxiv.org/abs/1404.1499: A Ballistic Monte Carlo Approximation of π

Modeling the past to understand the future of a stronger El Niño (Science Daily)

Date: November 26, 2014

Source: University of Wisconsin-Madison

Summary: El Nino is not a contemporary phenomenon; it’s long been the Earth’s dominant source of year-to-year climate fluctuation. But as the climate warms and the feedbacks that drive the cycle change, researchers want to know how El Nino will respond.


Using state-of-the-art computer models maintained at the National Center for Atmospheric Research, researchers determined that El Niño has intensified over the last 6,000 years. This pier and cafe are in Ocean Beach, California. Credit: Jon Sullivan

It was fishermen off the coast of Peru who first recognized the anomaly, hundreds of years ago. Every so often, their usually cold, nutrient-rich water would turn warm and the fish they depended on would disappear. Then there was the ceaseless rain.

They called it “El Niño,” The Boy — or the Christ Child — because of its timing near Christmas each time it returned, every three to seven years.

El Nino is not a contemporary phenomenon; it’s long been Earth’s dominant source of year-to-year climate fluctuation. But as the climate warms and the feedbacks that drive the cycle change, researchers want to know how El Nino will respond. A team of researchers led by the University of Wisconsin’s Zhengyu Liu published the latest findings in this quest Nov. 27, 2014 in Nature.

“We can’t see the future; the only thing we can do is examine the past,” says Liu, a professor in the Department of Atmospheric and Oceanic Sciences. “The question people are interested in now is whether it’s going to be stronger or weaker, and this requires us to first check if our model can simulate its past history.”

The study examines what has influenced El Nino over the last 21,000 years in order to understand its future and to prepare for the consequences. It is valuable knowledge for scientists, land managers, policymakers and many others, as people across the globe focus on adapting to a changing climate.

Using state-of-the-art computer models maintained at the National Center for Atmospheric Research in Colorado, the researchers — also from Peking University in China, the University of Hawaii at Manoa, and Georgia Institute of Technology — determined that El Nino has intensified over the last 6,000 years.

The findings corroborate data from previous studies, which relied on observations like historical sediments off the Central American coast and changes in fossilized coral. During warm, rainy El Nino years, the coastal sediments consist of larger mixed deposits of lighter color, and the coral provides a unique signature, akin to rings on a tree.

“There have been some observations that El Nino has been changing,” says Liu, also a professor in the Nelson Institute for Environmental Studies Center for Climatic Research. “Previous studies seem to indicate El Nino has increased over the last 5,000 to 7,000 years.”

But unlike previous studies, the new model provides a continuous look at the long history of El Nino, rather than a snapshot in time.

It examines the large-scale influences that have impacted the strength of El Nino over the last 21,000 years, such as atmospheric carbon dioxide, ice sheet melting and changes to Earth’s orbit.

El Nino is driven by an intricate tango between the ocean and Earth’s atmosphere. In non-El Nino years, trade winds over the tropical Pacific Ocean drive the seas westward, from the coast of Central America toward Indonesia, adding a thick, warm layer to the surface of the western part of the ocean while cooler water rises to the surface in the east. This brings rain to the west and dry conditions to the east.

During El Nino, the trade winds relax and the sea surface temperature differences between the Western and Eastern Pacific Ocean are diminished. This alters the heat distribution in both the water and the air in each region, forcing a cascade of global climate-related changes.

“It has an impact on Madison winter temperatures — when Peru is warm, it’s warm here,” says Liu. “It has global impact. If there are changes in the future, will it change the pattern?”

Before the start of the Holocene — which began roughly 12,000 years ago — pulses of melting water during deglaciation most strongly influenced El Nino, the study found. But since that time, changes in Earth’s orbit have played the greatest role in intensifying it.

Like an uptick in tempo, the feedbacks between ocean and atmosphere — such as how wind and seas interact — have grown stronger.

However, even with the best data available, some features of the simulated El Nino — especially prior to 6,000 years ago — can’t be tested unambiguously, Liu says. The current observational data feeding the model is sparse and the resolution too low to pick up subtle shifts in El Nino over the millennia.

The study findings indicate better observational data is needed to refine the science, like more coral samples and sediment measurements from different locations in the Central Pacific. Like all science, better understanding what drives El Nino and how it might change is a process, and one that will continue to evolve over time.

“It’s really an open door; we need more data to get a more significant model,” he says. “With this study, we are providing the first benchmark for the next five, 10, 20 years into the future.”

Story Source:

The above story is based on materials provided by University of Wisconsin-Madison. The original article was written by Kelly April Tyrrell.

Journal Reference:

  1. Zhengyu Liu, Zhengyao Lu, Xinyu Wen, B. L. Otto-Bliesner, A. Timmermann, K. M. Cobb. Evolution and forcing mechanisms of El Niño over the past 21,000 years. Nature, 2014; 515 (7528): 550 DOI: 10.1038/nature13963

Climate change caused by ocean, not just atmosphere (Science Daily)

Date: October 25, 2014

Source: Rutgers University

Summary: Most of the concerns about climate change have focused on the amount of greenhouse gases that have been released into the atmosphere. A new study reveals another equally important factor in regulating Earth’s climate. Researchers say the major cooling of Earth and continental ice build-up in the Northern Hemisphere 2.7 million years ago coincided with a shift in the circulation of the ocean.

The ocean conveyor moves heat and water between the hemispheres, along the ocean bottom. It also moves carbon dioxide. Credit: NASA

Most of the concerns about climate change have focused on the amount of greenhouse gases that have been released into the atmosphere.

But in a new study published in Science, a group of Rutgers researchers have found that circulation of the ocean plays an equally important role in regulating Earth’s climate.

In their study, the researchers say the major cooling of Earth and continental ice build-up in the Northern Hemisphere 2.7 million years ago coincided with a shift in the circulation of the ocean — which pulls in heat and carbon dioxide in the Atlantic and moves them through the deep ocean from north to south until they are released in the Pacific.

The ocean conveyor system, Rutgers scientists believe, changed at the same time as a major expansion in the volume of the glaciers in the northern hemisphere as well as a substantial fall in sea levels. It was the Antarctic ice, they argue, that cut off heat exchange at the ocean’s surface and forced it into deep water. They believe this caused global climate change at that time, not carbon dioxide in the atmosphere.

“We argue that it was the establishment of the modern deep ocean circulation — the ocean conveyor — about 2.7 million years ago, and not a major change in carbon dioxide concentration in the atmosphere that triggered an expansion of the ice sheets in the northern hemisphere,” says Stella Woodard, lead author and a post-doctoral researcher in the Department of Marine and Coastal Sciences. Their findings, based on ocean sediment core samples between 2.5 million to 3.3 million years old, provide scientists with a deeper understanding of the mechanisms of climate change today.

The study shows that changes in heat distribution between the ocean basins are important for understanding future climate change. However, scientists can’t predict precisely what effect the carbon dioxide currently being pulled into the ocean from the atmosphere will have on climate. Still, they argue that since more carbon dioxide has been released in the past 200 years than in any recent period in geological history, interactions between carbon dioxide, temperature changes and precipitation, and ocean circulation will result in profound changes.

Scientists believe that the different pattern of deep ocean circulation was responsible for the elevated temperatures 3 million years ago, when the carbon dioxide level in the atmosphere was arguably similar to what it is now and the temperature was 4 degrees Fahrenheit higher. They say the formation of the ocean conveyor cooled Earth and created the climate we live in now.

“Our study suggests that changes in the storage of heat in the deep ocean could be as important to climate change as other hypotheses — tectonic activity or a drop in the carbon dioxide level — and likely led to one of the major climate transitions of the past 30 million years,” says Yair Rosenthal, co-author and professor of marine and coastal sciences at Rutgers.

The paper’s co-authors are Woodard, Rosenthal, Kenneth Miller and James Wright, both professors of earth and planetary sciences at Rutgers; Beverly Chiu, a Rutgers undergraduate majoring in earth and planetary sciences; and Kira Lawrence, associate professor of geology at Lafayette College in Easton, Pennsylvania.


Journal Reference:

  1. S. C. Woodard, Y. Rosenthal, K. G. Miller, J. D. Wright, B. K. Chiu, K. T. Lawrence. Antarctic role in Northern Hemisphere glaciation. Science, 2014; DOI: 10.1126/science.1255586

What were they thinking? Study examines Federal Reserve prior to 2008 financial crisis (Science Daily)

Date: September 15, 2014

Source: Swarthmore College

Summary: A new study illustrates how the Federal Reserve was aware of potential problems in the financial markets prior to 2008, but did not take the threats seriously.


Six years after the start of the Great Recession, a new study from three Swarthmore College professors illustrates how the Federal Reserve was aware of potential problems in the financial markets, but did not take the threats seriously.

Published in the Review of International Political Economy, the study is the result of a collaboration between Swarthmore College economist Stephen Golub, political scientist Ayse Kaya, and sociologist Michael Reay.

The team looked at pre-crisis Federal Reserve documents to come to its conclusion, focusing particularly on the transcripts of meetings of the Federal Open Market Committee. The meeting transcripts indicate that policymakers and staff were aware of troubling developments but remained largely unconcerned.

Drawing on literatures in economics, political science and sociology, the study demonstrates that the Federal Reserve’s intellectual paradigm in the years before the crisis focused on ‘post hoc interventionism’ — the institution’s ability to limit the fallout should a problem arise. Additionally, the study argues that institutional routines and a “silo mentality” contributed to the Federal Reserve’s lack of attention to the serious warning signals in the pre-crisis period.



Journal Reference:

  1. Stephen Golub, Ayse Kaya, Michael Reay. What were they thinking? The Federal Reserve in the run-up to the 2008 financial crisis. Review of International Political Economy, 2014; 1 DOI: 10.1080/09692290.2014.932829

Weather history ‘time machine’ created (Science Daily)

Date: October 15, 2014

Source: San Diego State University

Summary: A software program that allows climate researchers to access historical climate data for the entire global surface (excluding the poles) has been developed. The software includes the oceans and is based on statistical research into historical climates.

During the 1930s, North America endured the Dust Bowl, a prolonged era of dryness that withered crops and dramatically altered where the population settled. Land-based precipitation records from the years leading up to the Dust Bowl are consistent with the telltale drying-out period associated with a persistent dry weather pattern, but they can’t explain why the drought was so pronounced and long-lasting.

The mystery lies in the fact that land-based precipitation tells only part of the climate story. Building accurate computer reconstructions of historical global precipitation is tricky business. The statistical models are very complicated, the historical data is often full of holes, and researchers invariably have to make educated guesses to correct for sampling errors.

Hard science

The high degree of difficulty and expertise required means that relatively few climate scientists have been able to base their research on accurate models of historical precipitation. Now, a new software program developed by a research team including San Diego State University Distinguished Professor of Mathematics and Statistics Samuel Shen will democratize this ability, allowing far more researchers access to these models.

“In the past, only a couple dozen scientists could do these reconstructions,” Shen said. “Now, anybody can play with this user-friendly software, use it to inform their research, and develop new models and hypotheses. This new tool brings historical precipitation reconstruction from a ‘rocket science’ to a ‘toy science.'”

The National Science Foundation-funded project is a collaboration between Shen, University of Maryland atmospheric scientist Phillip A. Arkin and National Oceanic and Atmospheric Administration climatologist Thomas M. Smith.

Predicting past patterns

Oceanic precipitation patterns are useful for predicting large weather anomalies. Whether a region such as North America will undergo prolonged dry or wet spells, for instance, can be reliably tied to oceanic weather patterns such as El Nino or La Nina. The problem for historical models is that reliable data exists for only a small percentage of Earth’s surface. About eighty-four percent of all rain falls in the middle of the ocean with no one to record it. Satellite weather tracking is only a few decades old, so for historical models, researchers must fill in the gaps based on the data that does exist.

Shen, who co-directs SDSU’s Center for Climate and Sustainability Studies Area of Excellence, is an expert in minimizing error size inside model simulations. In the case of climate science, that means making the historical fill-in-the-gap guesses as accurate as possible.

Shen and his SDSU graduate students Nancy Tafolla and Barbara Sperberg produced a user-friendly, technologically advanced piece of software that does the statistical heavy lifting for researchers. The program, known as SOGP 1.0, is based on research published last month in the Journal of the Atmospheric Sciences. The group released SOGP 1.0 to the public last week, available by request.

SOGP 1.0, which stands for a statistical technique known as spectral optimal gridding of precipitation, is based on the MATLAB programming language, commonly used in science and engineering. It reconstructs precipitation records for the entire globe (excluding the Polar Regions) between the years 1900 and 2011 and allows researchers to zoom in on particular regions and timeframes.

New tool for climate change models

For example, Shen referenced a region in the middle of the Pacific Ocean that sometimes glows bright red on the computer model, indicating extreme dryness, and sometimes dark blue, indicating an unusually wet year. When either of these climate events occur, he said, it’s almost certain that North American weather will respond to these patterns, sometimes in a way that lasts several years.

“The tropical Pacific is the engine of climate,” Shen explained.

In the Dust Bowl example, the SOGP program shows extreme dryness in the tropical Pacific in the late 1920s and early 1930s — a harbinger of a prolonged dry weather event in North America. Combining this data with land-record data, the model can retroactively demonstrate the Dust Bowl’s especially brutal dry spell.

“If you include the ocean’s precipitation signal, the drought signal is amplified,” Shen said. “We can understand the 1930s Dust Bowl better by knowing the oceanic conditions.”

The program isn’t a tool meant to look exclusively at the past, though. Shen hopes that its ease of use will encourage climate scientists to incorporate this historical data into their own models, improving our future predictions of climate change.

Researchers interested in using SOGP 1.0 can request the software package as well as the digital datasets used by the program by e-mailing sogp.precip@gmail.com with the subject line, “SOGP precipitation product request,” followed by your name, affiliation, position, and the purpose for which you intend to use the program.

Journal Reference:

  1. Samuel S. P. Shen, Nancy Tafolla, Thomas M. Smith, Phillip A. Arkin. Multivariate Regression Reconstruction and Its Sampling Error for the Quasi-Global Annual Precipitation from 1900 to 2011. Journal of the Atmospheric Sciences, 2014; 71 (9): 3250 DOI: 10.1175/JAS-D-13-0301.1

Adding uncertainty to improve mathematical models (Science Daily)

Date: September 29, 2014

Source: Brown University

Summary: Mathematicians have introduced a new element of uncertainty into an equation used to describe the behavior of fluid flows. While being as certain as possible is generally the stock-in-trade of mathematics, the researchers hope this new formulation might ultimately lead to mathematical models that better reflect the inherent uncertainties of the natural world.

Burgers’ equation. Named for Johannes Martinus Burgers (1895–1981), the equation describes fluid flows, as when two air masses meet and create a front. A new development accounts for many more complexities and uncertainties, making predictions more robust, less sterile. Credit: Image courtesy of Brown University

Ironically, allowing uncertainty into a mathematical equation that models fluid flows makes the equation much more capable of correctly reflecting the natural world — like the formation, strength, and position of air masses and fronts in the atmosphere.

Mathematicians from Brown University have introduced a new element of uncertainty into an equation used to describe the behavior of fluid flows. While being as certain as possible is generally the stock-in-trade of mathematics, the researchers hope this new formulation might ultimately lead to mathematical models that better reflect the inherent uncertainties of the natural world.

The research, published in Proceedings of the Royal Society A, deals with Burgers’ equation, which is used to describe turbulence and shocks in fluid flows. The equation can be used, for example, to model the formation of a front when airflows run into each other in the atmosphere.

“Say you have a wave that’s moving very fast in the atmosphere,” said George Karniadakis, the Charles Pitts Robinson and John Palmer Barstow Professor of Applied Mathematics at Brown and senior author of the new research. “If the rest of the air in the domain is at rest, then one flow goes over the other. That creates a very stiff front or a shock, and that’s what Burgers’ equation describes.”

It does so, however, in what Karniadakis describes as “a very sterilized” way, meaning the flows are modeled in the absence of external influences.

For example, when modeling turbulence in the atmosphere, the equations don’t take into consideration the fact that the airflows are interacting not just with each other, but also with whatever terrain may be below — be it a mountain, a valley or a plain. In a general model designed to capture any random point of the atmosphere, it’s impossible to know what landforms might lie underneath. But the effects of whatever those landforms might be can still be accounted for in the equation by adding a new term — one that treats those effects as a “random forcing.”

In this latest research, Karniadakis and his colleagues showed that Burgers’ equation can indeed be solved in the presence of this additional random term. The new term produces a range of solutions that accounts for uncertain external conditions that could be acting on the model system.
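As a rough illustration of what "Burgers' equation plus a random forcing term" looks like numerically (this is a toy discretization, not the authors' analysis), the viscous Burgers' equation u_t + u·u_x = ν·u_xx + f can be stepped forward with simple finite differences on a periodic domain, with f taken as a crude white-noise forcing of hypothetical amplitude.

```python
# Toy sketch: viscous Burgers' equation with a small random forcing term,
# advanced with an explicit finite-difference scheme on a periodic domain.
# The forcing is a crude stand-in, not a rigorous stochastic discretization.
import numpy as np

nx, L = 256, 2 * np.pi
dx = L / nx
x = np.linspace(0.0, L, nx, endpoint=False)
nu = 0.05                       # viscosity
dt = 0.2 * dx**2 / nu           # conservative explicit time step
steps = 2000
forcing_amplitude = 0.1         # hypothetical strength of the random forcing

u = np.sin(x)                   # a smooth wave that steepens into a shock-like front
rng = np.random.default_rng(0)

for _ in range(steps):
    u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)         # centered first derivative
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2   # centered second derivative
    f = forcing_amplitude * rng.standard_normal(nx)           # random forcing term
    u = u + dt * (-u * u_x + nu * u_xx + f)

print("mean:", u.mean(), "max |u|:", np.abs(u).max())
```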

The work is part of a larger effort and a burgeoning field in mathematics called uncertainty quantification (UQ). Karniadakis is leading a Multidisciplinary University Research Initiative centered at Brown to lay out the mathematical foundations of UQ.

“The general idea in UQ,” Karniadakis said, “is that when we model a system, we have to simplify it. When we simplify it, we throw out important degrees of freedom. So in UQ, we account for the fact that we committed a crime with our simplification and we try to reintroduce some of those degrees of freedom as a random forcing. It allows us to get more realism from our simulations and our predictions.”

Solving these equations is computationally expensive, and only in recent years has computing power reached a level that makes such calculations possible.

“This is something people have thought about for years,” Karniadakis said. “During my career, computing power has increased by a factor of a billion, so now we can think about harnessing that power.”

The aim, ultimately, is to make the mathematical models describing all kinds of phenomena — from atmospheric currents to the cardiovascular system to gene expression — better reflect the uncertainties of the natural world.

Heyrim Cho and Daniele Venturi were co-authors on the paper.

Journal Reference:

  1. H. Cho, D. Venturi, G. E. Karniadakis. Statistical analysis and simulation of random shocks in stochastic Burgers equation. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2014; 470 (2171): 20140080 DOI: 10.1098/rspa.2014.0080

Strays under control (Fapesp)

September 22, 2014

By Yuri Vasconcelos

Software estimates the population of abandoned dogs and cats and simulates strategies that benefit animal and human health (photo: Wikimedia)

Revista Pesquisa FAPESP – No one knows for certain the size of Brazil's dog and cat populations, whether of supervised animals (those that have owners and live in households) or of strays.

The demographic characterization of dogs and cats is an important step toward defining population-management strategies for these animals, and it also contributes to the control of zoonoses such as rabies and visceral leishmaniasis, which cause 55,000 deaths and 500,000 cases worldwide, respectively.

To better address this problem, a group of researchers at the School of Veterinary Medicine (FMVZ) of the University of São Paulo (USP), in the city of São Paulo, has created software capable of estimating with a high degree of precision how many owned dogs and cats live in Brazilian cities. The program will soon be freely available to agencies of the Ministry of Health and to municipal governments.

"Knowing the street population is essential. It is the result of animal abandonment," says veterinarian Fernando Ferreira, professor and coordinator of FMVZ's graduate program.

Brazil leads Latin America in the incidence of visceral leishmaniasis, with about 3,000 people infected per year, or 90% of the continent's total. Rabies, although it can be controlled by vaccination, still occurs in the country: there were 50 human cases in 1990, a figure that varied between zero and two cases between 2007 and 2013.

Abandoned animals are a public health problem because they are the main reservoirs and transmitters of these diseases. At the same time, they are victims of traffic accidents, abuse and cruelty.

The most reliable technique for sizing and classifying the street dog population was created by the Pasteur Institute in 2002 and indicates that these animals amount to roughly 5% of the owned population.

"So, knowing how many supervised dogs live in a given region, it is possible to estimate how many live on the streets of that same place," says Ferreira. "Since there is a direct relationship between these two populations, strategies for controlling abandoned dogs involve reproductive control of owned animals," explains the researcher, who worked on the project with Professor Marcos Amaku, also of FMVZ.
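A quick worked example of that rule of thumb (the owned-dog count below is invented for illustration):

```python
# Pasteur Institute rule of thumb cited above: street dogs amount to about 5%
# of the owned (supervised) dog population of the same area.
owned_dogs = 120_000                                   # hypothetical owned-dog estimate
street_dogs = 0.05 * owned_dogs
print(f"Estimated street dogs: {street_dogs:,.0f}")    # 6,000
```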

Dubbed capm, after the initials of "companion animal population management," the software was developed by doctoral student Oswaldo Santos Baquero, a FAPESP fellow.

"In my study, I evaluate the validity of a complex sampling design for estimating the size of the owned dog population in Brazilian municipalities. I also built a mathematical model of population dynamics to simulate scenarios and define intervention priorities," says Baquero.

For him, mathematical modeling makes it easier to understand, for example, that the main expected effect of sterilization is an increase in the infertile population, not a reduction in the size of the population as a whole.

"Mathematical models of rabies transmission in China suggest that the best way to control the disease is to reduce the canine birth rate and increase immunization. These two actions combined proved more effective than culling animals."

Read the full story at: http://revistapesquisa.fapesp.br/2014/09/16/vira-latas-sob-controle

Forming consensus in social networks (Science Daily)

Date: September 3, 2014

Source: University of Miami

Summary: To understand the process through which we operate as a group, and to explain why we do what we do, researchers have developed a novel computational model and the corresponding conditions for reaching consensus in a wide range of situations.


Social networks have become a dominant force in society. Family, friends, peers, community leaders and media communicators are all part of people’s social networks. Individuals within a network may have different opinions on important issues, but it’s their collective actions that determine the path society takes.

To understand the process through which we operate as a group, and to explain why we do what we do, researchers have developed a novel computational model and the corresponding conditions for reaching consensus in a wide range of situations. The findings are published in the August 2014 issue on Signal Processing for Social Networks of the IEEE Journal of Selected Topics in Signal Processing.

“We wanted to provide a new method for studying the exchange of opinions and evidence in networks,” said Kamal Premaratne, professor of electrical and computer engineering at the University of Miami (UM) and principal investigator of the study. “The new model helps us understand the collective behavior of adaptive agents – people, sensors, databases or abstract entities – by analyzing communication patterns that are characteristic of social networks.”

The model addresses some fundamental questions: what makes a good way to model opinions, how those opinions are updated, and when consensus is reached.

One key feature of the new model is its capacity to handle the uncertainties associated with soft data (such as opinions of people) in combination with hard data (facts and numbers).

“Human-generated opinions are more nuanced than physical data and require rich models to capture them,” said Manohar N. Murthi, associate professor of electrical and computer engineering at UM and co-author of the study. “Our study takes into account the difficulties associated with the unstructured nature of the network,” he adds. “By using a new ‘belief updating mechanism,’ our work establishes the conditions under which agents can reach a consensus, even in the presence of these difficulties.”

The agents exchange and revise their beliefs through their interaction with other agents. The interaction is usually local, in the sense that only neighboring agents in the network exchange information, for the purpose of updating one’s belief or opinion. The goal is for the group of agents in a network to arrive at a consensus that is somehow ‘similar’ to the ground truth — what has been confirmed by the gathering of objective data.

In previous works, consensus achieved by the agents was completely dependent on how agents update their beliefs. In other words, depending on the updating scheme being utilized, one can get different consensus states. The consensus in the current model is more rational or meaningful.

“In our work, the consensus is consistent with a reliable estimate of the ground truth, if it is available,” Premaratne said. “This consistency is very important, because it allows us to estimate how credible each agent is.”

According to the model, if the consensus opinion is closer to an agent’s opinion, then one can say that this agent is more credible. On the other hand, if the consensus opinion is very different from an agent’s opinion, then it can be inferred that this agent is less credible.
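The flavor of the idea can be sketched with a classic DeGroot-style averaging model. This is not the paper's belief-updating mechanism, and the network and opinions below are made up; it only shows how repeated local exchanges drive a network toward consensus and how credibility can then be read off as closeness to that consensus.

```python
# Sketch: iterative neighbour averaging (DeGroot-style) on a small random network,
# followed by a credibility score based on distance from the final consensus.
import numpy as np

rng = np.random.default_rng(1)
n_agents = 8
initial = rng.uniform(0.0, 1.0, n_agents)        # initial scalar opinions

# Hypothetical symmetric "who listens to whom" network, with self-loops.
A = (rng.random((n_agents, n_agents)) < 0.4).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)
W = A / A.sum(axis=1, keepdims=True)             # row-normalised influence weights

opinions = initial.copy()
for _ in range(200):                             # each agent repeatedly averages its neighbours
    opinions = W @ opinions

consensus = opinions.mean()                      # all entries are (nearly) equal by now
credibility = 1.0 - np.abs(initial - consensus)  # closer initial opinion -> more credible
print(round(float(consensus), 3), credibility.round(3))
```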

“The fact that the same strategy can be used even in the absence of a ground truth is of immense importance because, in practice, we often have to determine if an agent is credible or not when we don’t have knowledge of the ground truth,” Murthi said.

In the future, the researchers would like to expand their model to include the formation of opinion clusters, where each cluster of agents share similar opinions. Clustering can be seen in the emergence of extremism, minority opinion spreading, the appearance of political affiliations, or affinity for a particular product, for example.

 

Journal Reference:

  1. Thanuka L. Wickramarathne, Kamal Premaratne, Manohar N. Murthi, Nitesh V. Chawla. Convergence Analysis of Iterated Belief Revision in Complex Fusion Environments. IEEE Journal of Selected Topics in Signal Processing, 2014; 8 (4): 598 DOI: 10.1109/JSTSP.2014.2314854

City and rural super-dialects exposed via Twitter (New Scientist)

11 August 2014 by Aviva Rutkin

Magazine issue 2981.

WHAT do two Twitter users who live halfway around the world from each other have in common? They might speak the same “super-dialect”. An analysis of millions of Spanish tweets found two popular speaking styles: one favoured by people living in cities, another by those in small rural towns.

Bruno Gonçalves at Aix-Marseille University in France and David Sánchez at the Institute for Cross-Disciplinary Physics and Complex Systems in Palma, Majorca, Spain, analysed more than 50 million tweets sent over a two-year period. Each tweet was tagged with a GPS marker showing whether the message came from a user somewhere in Spain, Latin America, or Spanish-speaking pockets of Europe and the US.

The team then searched the tweets for variations on common words. Someone tweeting about their socks might use the word calcetas, medias, or soquetes, for example. Another person referring to their car might call it their coche, auto, movi, or one of three other variations with roughly the same meaning. By comparing these word choices to where they came from, the researchers were able to map preferences across continents (arxiv.org/abs/1407.7094).
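The tallying step itself is straightforward to sketch; the mini-dataset below is invented (the real study worked with more than 50 million geotagged tweets). Each tweet is reduced to the lexical variant it uses, and variant shares are then compared across locations.

```python
# Sketch: count which variant of a concept ("socks") each location favours.
from collections import Counter, defaultdict

tweets = [  # (location, variant used) -- hypothetical examples
    ("Quito", "medias"), ("Quito", "medias"), ("Quito", "calcetas"),
    ("San Diego", "calcetas"), ("San Diego", "calcetas"),
    ("rural town", "medias"), ("rural town", "soquetes"), ("rural town", "medias"),
]

counts = defaultdict(Counter)
for place, variant in tweets:
    counts[place][variant] += 1

for place, counter in counts.items():
    total = sum(counter.values())
    shares = {word: round(c / total, 2) for word, c in counter.items()}
    print(place, shares)
```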

According to their data, Twitter users in major cities thousands of miles apart, like Quito in Ecuador and San Diego in California, tend to have more language in common with each other than with a person tweeting from the nearby countryside, probably due to the influence of mass media.

Studies like these may allow us to dig deeper into how language varies across place, time and culture, says Eric Holt at the University of South Carolina in Columbia.

This article appeared in print under the headline “Super-dialects exposed via millions of tweets”

Researchers treat incarceration as a disease epidemic, discover small changes help (Science Daily)

Date: June 25, 2014

Source: Virginia Tech

Summary: By treating incarceration as an infectious disease, researchers show that small differences in prison sentences can lead to large differences in incarceration rates. The incarceration rate has nearly quadrupled since the U.S. declared a war on drugs, researchers say. Along with that, racial disparities abound. Incarceration rates for black Americans are more than six times higher than those for white Americans, according to the U.S. Bureau of Justice Statistics.

The incarceration rate has nearly quadrupled since the U.S. declared a war on drugs, researchers say. Along with that, racial disparities abound. Incarceration rates for black Americans are more than six times higher than those for white Americans, according to the U.S. Bureau of Justice Statistics.

To explain these growing racial disparities, researchers at Virginia Tech are using the same modeling techniques used for infectious disease outbreaks to take on the mass incarceration problem.

By treating incarceration as an infectious disease, the scientists demonstrated that small but significant differences in prison sentences can lead to large differences in incarceration rates. The research was published in June in the Journal of the Royal Society Interface.

Incarceration can be “transmitted” to others, the researchers say. For instance, incarceration can increase family members’ emotional and economic stress or expose family and friends to a network of criminals, and these factors can lead to criminal activity.

Alternatively, “official bias” leads police and the courts to pay more attention to the incarcerated person’s family and friends, thereby increasing the probability they will be caught, prosecuted and processed by the criminal justice system, researchers said.

“Regardless of the specific mechanisms involved,” said Kristian Lum, a former statistician at the Virginia Bioinformatics Institute now working for DataPad, “the incarceration of one family member increases the likelihood of other family members and friends being incarcerated.”

Building on this insight, incarceration is treated like a disease in the model and the incarcerated are infectious to their social contacts — their family members and friends most likely affected by their incarceration.

“Criminologists have long recognized that social networks play an important role in criminal behavior, the control of criminal behavior, and the re-entry of prisoners into society,” said James Hawdon, a professor of sociology in the College of Liberal Arts and Human Sciences. “We therefore thought we should test if networks also played a role in the incarceration epidemic. Our model suggests they do.”

Synthesizing publicly available data from a variety of sources, the researchers generated a realistic, multi-generational synthetic population with contact networks, sentence lengths, and transmission probabilities.
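A stripped-down sketch of that kind of model (not the authors' synthetic population; every parameter below is hypothetical): incarceration spreads over a random contact network with a small per-step transmission probability plus a background rate, and people are released after a fixed sentence length. The point of the toy run is that a modest change in sentence length shifts the long-run incarceration rate more than proportionally.

```python
# Toy contagion model of incarceration on a random contact network.
import numpy as np

def simulate(sentence_length, n=2000, k=6, p_transmit=0.002, p_baseline=0.0002,
             steps=3000, seed=0):
    rng = np.random.default_rng(seed)
    contacts = [rng.choice(n, size=k, replace=False) for _ in range(n)]  # k contacts each
    remaining = np.zeros(n, dtype=int)            # time steps left to serve; 0 = free
    history = []
    for _ in range(steps):
        incarcerated = remaining > 0
        newly = np.zeros(n, dtype=bool)
        for i in np.flatnonzero(incarcerated):    # each incarcerated person "exposes" contacts
            newly[contacts[i]] |= rng.random(k) < p_transmit
        newly |= rng.random(n) < p_baseline       # background incarceration risk
        newly &= ~incarcerated
        remaining[incarcerated] -= 1              # serve time
        remaining[newly] = sentence_length        # new sentences begin
        history.append(incarcerated.mean())
    return float(np.mean(history[-1000:]))        # long-run incarceration rate

# A 50 percent longer sentence raises the equilibrium rate by noticeably more than 50 percent.
print(simulate(sentence_length=12), simulate(sentence_length=18))
```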

The researchers’ model produces results comparable to real-world incarceration rates, reproducing many facets of incarceration in the United States.

Both the model and actual statistics show large discrepancies in incarceration rates between black and white Americans, along with a high likelihood of becoming a repeat offender.

Comparisons such as these can be used to validate the assumption that incarceration is infectious.

“Research clearly shows that this epidemic has had devastating effects on individuals, families, and entire communities,” Lum said. “Since our model captures the emergent properties of the incarceration epidemic, we can use it to test policy options designed to reverse it.”

Harsher sentencing may actually result in higher levels of criminality. Examining the role of social influence is an important step in reducing the growing incarceration epidemic.

Journal Reference:

  1. K. Lum, S. Swarup, S. Eubank, J. Hawdon. The contagious nature of imprisonment: an agent-based model to explain racial disparities in incarceration rates. Journal of The Royal Society Interface, 2014; 11 (98): 20140409 DOI: 10.1098/rsif.2014.0409

We Have a Weather Forecast For Every World Cup Match, Even the Ones a Month Away (Five Thirty Eight)

It’s the moment every soccer fan’s been waiting for. The teams are out on the field and the match is about to begin. Then comes the rain. And then the thunder. And then the lightning. Enough of it that the match is delayed.

With the World Cup taking place in a country comprising several different ecosystems — a rain forest among them — you’re going to be hearing a lot about the weather in Brazil over the next month.

But we don’t have to wait until the day of — or even five days before — any given match to get a sense of what the weather will be. We already know the broad outlines of the next month of weather in Brazil — June and July have happened before, after all, and somebody kept track of whether it rained.

I did something like this for the Super Bowl in New York, when I provided a climatological forecast based on years worth of historical data. This isn’t the most accurate way to predict the weather — seven days before a match there will be far better forecasts — but it is a solid way to do it many weeks in advance.

I collected past weather data for the World Cup’s timespan (mid-June through mid-July) from WeatherSpark and Weather Underground for the observation stations closest to the 12 different World Cup sites. Keep in mind, the data for the different areas of Brazil hasn’t been collected for as long as it has in the United States. In some cases, we only have records since the late 1990s, which is about half as many years as I’d like to make the best climatological assessment. Still, history can give us an idea of the variability of the weather in Brazil.

You can see what high temperatures have looked like for the 12 World Cup sites in the table below. I’ve taken the average, as well as the 10th, 25th, 75th and 90th percentiles for past high temperatures. This gives us a better idea of the range of what could occur than just the average. Remember, 20 percent of high temperatures have fallen outside this range. (For games starting in the early evening, knock off a few degrees to get the expected average.)

[Table: average and 10th/25th/75th/90th percentile high temperatures for the 12 World Cup host cities]
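The percentile summary itself is a simple calculation; here is a minimal sketch with made-up daily highs (the real analysis drew on WeatherSpark and Weather Underground station records).

```python
# Sketch: climatological summary of past daily high temperatures for one site.
import numpy as np

daily_highs_f = np.array([82, 85, 88, 79, 84, 90, 86, 83, 81, 87,
                          89, 84, 80, 85, 91, 78, 86, 88, 83, 85])  # hypothetical mid-June highs

summary = {
    "average": daily_highs_f.mean(),
    "p10": np.percentile(daily_highs_f, 10),
    "p25": np.percentile(daily_highs_f, 25),
    "p75": np.percentile(daily_highs_f, 75),
    "p90": np.percentile(daily_highs_f, 90),
}
print(summary)   # 20 percent of past highs fall outside the p10-p90 range
```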

What we see is that the weather can be quite comfortable or hot, depending on the site. In the southern coastal region, we see high temperatures that average below 70 degrees Fahrenheit in the cities of Curitiba and Porto Alegre. (I’ve presented all temperatures in Fahrenheit.) It may seem odd to you that southern areas are actually coolest, but remember that this is the southern hemisphere, so everything’s topsy-turvy for a Northerner. It’s winter in Brazil, and climatology suggests that we shouldn’t be surprised if the high temperature is below 60 degrees at one of these sites.

Host sites for the 2014 World Cup. Credit: Wikimedia Commons

But most of the country is not like these two sites. Belo Horizonte and Brasilia reach the mid- to high 70s usually, but don’t go too much higher because of their elevation (2,720 feet for the former and 3,500 feet for the latter). From Rio de Janeiro northward, temperatures average 80 degrees or greater, but winds from the ocean will often keep them from getting out of hand.

The site tied for the highest median temperature is Manaus, which is also surrounded by the Amazonian rainforest, making it the most interesting site climatologically. There’s a 15 percent chance that it will rain in Manaus on any given day during the tournament. In small quantities, rain can help a passing game by making the grass slick, but if there’s too much precipitation, it can slow the ball significantly as the pitch gets waterlogged. And that doesn’t even get to the threat of lightning, which can halt a game completely.

But Manaus isn’t the site with the highest chance of rain. (Just the highest chance of thunderstorms.) To figure out what is, I looked at the average rainfall and thunderstorm tallies during the 1 p.m. to 6 p.m. hours during June and July in past years. From there I estimated the chance of rain during two-hour stretches in the afternoon and early evening, rather than for the entire day.

So here are approximations for each site on rain and thunderstorms during the games:

[Table: estimated chances of rain and thunderstorms during match hours at each host site]

It probably won't rain during any given match, but if it does it’s likely to be in the sites closest to the tropics in the north and the humid subtropical climate in the south. Recife, for example, has the best chance of rain of any site in the country, in part because it’s right where a lot of different air masses combine, which makes the weather there somewhat more unpredictable.

Thunderstorms, on the other hand, rarely occur anywhere besides Manaus, where the chance of a thunderstorm in a given afternoon hour is in the double digits. Manaus is also where the United States will be playing against Portugal in its second match; climatology suggests it should be a muggy game.

The Americans’ other games are likely to be hot but dry. The United States’ first match, against Ghana, is in Natal on Monday, a city that normally is expected to offer a high temperature around 84 degrees, with a slightly cooler temperature by the evening game time. The current forecasts (based on meteorological data, rather than climatology) are calling for something around normal with around a 15 percent chance of rain, as we’d expect. The weather for the U.S. team’s third match, on the coast in Recife, should be about the same. Thunderstorms probably won’t interrupt the game, but rain is possible.

Most likely, though, the weather will hold up just fine. The optimistic U.S. fan can safely engage in blue-sky thinking — for the team’s chances, and for the skies above it, even if our coach is finding another way to rain on the parade.

Ants are more efficient at searching than Google, study says (O Globo)

JC e-mail 4960, May 27, 2014

The study showed that the insects develop complex information systems to find food

We all learn as children that ants are prudent: while the grasshopper sings and plays guitar through the summer, these small insects work to gather enough food for the whole winter. According to a study published in the journal Proceedings of the National Academy of Sciences, however, they are not only far-sighted but also "much more efficient than Google itself."

To reach this unusual conclusion, Chinese and German scientists used mathematical algorithms that look for order in an apparently chaotic scenario by building complex information networks. Through formulas and equations, they found that ants develop ingenious routes when searching for food, dividing themselves into groups of "scouts" and "aggregators."

That solitary ant you find wandering around the house in an apparently random way is, in fact, a scout, which releases pheromones along its path so that the aggregators can later follow the route in larger numbers. Building on that first route, new, shorter and more efficient routes are refined. If the effort is repeated persistently, the distance between the insects and the food is drastically reduced.

"While single ants appear to walk in a chaotic way, they quickly become a line of ants crossing the floor in search of food," study co-author Professor Jurgen Kurths explained to The Independent.

That is why, according to Kurths, the insects' food-search process is "much more efficient" than Google's search tool.
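A toy model of the pheromone reinforcement described above makes the mechanism concrete (the routes, ant counts and evaporation rate are invented, not taken from the study): two routes of different length connect nest and food, ants pick a route in proportion to its pheromone level, shorter trips deposit more pheromone, and evaporation erases stale trails, so the shorter route quickly comes to dominate.

```python
# Toy pheromone-reinforcement model with two routes between nest and food.
import random

random.seed(3)
lengths = {"short": 1.0, "long": 2.0}      # relative route lengths
pheromone = {"short": 1.0, "long": 1.0}    # both routes start out equally attractive
evaporation = 0.05

for _ in range(200):
    for _ in range(10):                    # ten ants leave the nest each time step
        route = random.choices(["short", "long"],
                               weights=[pheromone["short"], pheromone["long"]])[0]
        pheromone[route] += 1.0 / lengths[route]   # shorter trip -> more pheromone per trip
    for r in pheromone:
        pheromone[r] *= 1.0 - evaporation          # trails fade unless reinforced

share = pheromone["short"] / sum(pheromone.values())
print(f"Share of pheromone on the short route: {share:.2f}")   # approaches 1
```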

The study's mathematical models can equally be applied to other collective movements of animals, and even of humans. The tool could be useful, for example, for understanding how people behave on social networks and even in crowded public transport.

(O Globo, with news agencies)
http://oglobo.globo.com/sociedade/ciencia/formigas-sao-mais-eficientes-em-busca-do-que-google-diz-pesquisa-12614920#ixzz32vCQx2oB

Important and complex systems, from the global financial market to groups of friends, may be highly controllable (Science Daily)

Date: March 20, 2014

Source: McGill University

Summary: Scientists have discovered that all complex systems, whether they are found in the body, in international finance, or in social situations, actually fall into just three basic categories, in terms of how they can be controlled.

All complex systems, whether they are found in the body, in international finance, or in social situations, actually fall into just three basic categories, in terms of how they can be controlled, researchers say. Credit: © Artur Marciniec / Fotolia

We don’t often think of them in these terms, but our brains, global financial markets and groups of friends are all examples of different kinds of complex networks or systems. And unlike the kind of system that exists in your car, which has been intentionally engineered for humans to use, these systems are convoluted, and it is not obvious how to control them. Economic collapse, disease, and miserable dinner parties may result from a breakdown in such systems, which is why researchers have recently been putting so much energy into trying to discover how best to control these large and important systems.

But now two brothers, Profs. Justin and Derek Ruths, from Singapore University of Technology and Design and McGill University respectively, have suggested, in an article published in Science, that all complex systems, whether they are found in the body, in international finance, or in social situations, actually fall into just three basic categories, in terms of how they can be controlled.

They reached this conclusion by surveying the inputs and outputs and the critical control points in a wide range of systems that appear to function in completely different ways. (The critical control points are the parts of a system that you have to control in order to make it do whatever you want — not dissimilar to the strings you use to control a puppet).

“When controlling a cell in the body, for example, these control points might correspond to proteins that we can regulate using specific drugs,” said Justin Ruths. “But in the case of a national or international economic system, the critical control points could be certain companies whose financial activity needs to be directly regulated.”

One grouping, for example, put organizational hierarchies, gene regulation, and human purchasing behaviour together, in part because in each, it is hard to control individual parts of the system in isolation. Another grouping includes social networks such as groups of friends (whether virtual or real), and neural networks (in the brain), where the systems allow for relatively independent behaviour. The final group includes things like food systems, electrical circuits and the internet, all of which function basically as closed systems where resources circulate internally.

Referring to these groupings, Derek Ruths commented, “While our framework does provide insights into the nature of control in these systems, we’re also intrigued by what these groupings tell us about how very different parts of the world share deep and fundamental attributes in common — which may help unify our understanding of complexity and of control.”

“What we really want people to take away from the research at this point is that we can control these complex and important systems in the same way that we can control a car,” says Justin Ruths. “And that our work is giving us insight into which parts of the system we need to control and why. Ultimately, at this point we have developed some new theory that helps to advance the field in important ways, but it may still be another five to ten years before we see how this will play out in concrete terms.”

Journal Reference:

  1. Justin Ruths and Derek Ruths. Control Profiles of Complex Networks. Science, 2014 DOI: 10.1126/science.1242063

Dream reports may help diagnose mental illness (Fapesp)

Brazilian researchers develop a technique for the mathematical analysis of dream reports that can help identify symptoms of schizophrenia and bipolar disorder (image: publicity handout)

March 17, 2014

By Elton Alisson

Agência FAPESP – The clue left by Sigmund Freud (1856-1939) in his 1899 book "The Interpretation of Dreams," that "dreams are the royal road to the unconscious," a key idea for psychoanalysis, may also be useful in psychiatry, in the clinical diagnosis of mental disorders such as schizophrenia and bipolar disorder.

The finding comes from a group of researchers at the Brain Institute of the Federal University of Rio Grande do Norte (UFRN), in collaboration with colleagues from the Physics Department of the Federal University of Pernambuco (UFPE) and from the Research, Innovation and Dissemination Center for Neuromathematics (Neuromat), one of the CEPIDs funded by FAPESP.

They developed a technique for the mathematical analysis of dream reports that may, in the future, assist in the diagnosis of psychoses.

The technique was described in an article published in January in Scientific Reports, an open-access journal from the Nature group.

"The idea is for the technique, which is relatively simple and inexpensive, to be used as a tool to help psychiatrists make more precise clinical diagnoses of patients with mental disorders," Mauro Copelli, a professor at UFPE and one of the authors of the study, told Agência FAPESP.

According to Copelli, who completed his master's and doctoral degrees partly with FAPESP fellowships, despite centuries of effort to make the classification of mental disorders more precise, the current method of diagnosing psychoses has been harshly criticized.

This is because it still suffers from a lack of objectivity and from the fact that most mental disorders have no biomarkers (biometric indicators) capable of helping psychiatrists diagnose them more accurately.

In addition, patients with schizophrenia or bipolar disorder often present common psychotic symptoms, such as hallucinations, delusions, hyperactivity and aggressive behavior, which can compromise the accuracy of the diagnosis.

"The diagnosis of psychotic symptoms is highly subjective," Copelli said. "That is precisely why the latest version of the Diagnostic and Statistical Manual of Mental Disorders [published by the American Psychiatric Association in 2013] came under heavy attack."

To develop a quantitative method for assessing psychiatric symptoms, the researchers recorded, with the participants' consent, the dream reports of 60 volunteer patients seen at the psychiatric outpatient clinic of a public hospital in Natal, Rio Grande do Norte.

Some of the patients had already been diagnosed with schizophrenia, others with bipolar disorder, and the rest, who formed the control group, showed no symptoms of mental disorders.

The patients' dream reports, given to psychiatrist Natália Bezerra Mota, a doctoral student at UFRN and first author of the study, were transcribed.

The sentences in the patients' speech were then transformed, by software developed by researchers at the Brain Institute, into graphs: mathematical structures similar to diagrams in which each word spoken by the patient is represented by a point, or node, like the stitches in a line of crochet.

When they analyzed the graphs of the dream reports from the three groups of patients, the researchers observed very clear differences between them.

The size of the graphs, in terms of the number of edges or links, and the connectivity (the relationships) between their nodes varied among patients diagnosed with schizophrenia, patients diagnosed with bipolar disorder and patients without mental disorders, the researchers reported.

"Patients with schizophrenia, for example, give reports that, when represented as graphs, have fewer connections than those of the other groups of patients," Mota said.
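A minimal sketch of the word-graph idea, assuming the simplest construction in which each word is linked to the word that follows it (the group's software may build its graphs differently), shows how the size and connectivity measures are read off.

```python
# Sketch: turn a transcribed report into a directed word graph and measure its size.
import networkx as nx

report = "i was walking in a garden and the garden kept changing and i was lost"
words = report.split()

G = nx.DiGraph()
G.add_nodes_from(set(words))              # each distinct word becomes a node
G.add_edges_from(zip(words, words[1:]))   # link each word to the word that follows it

print("nodes:", G.number_of_nodes(),
      "edges:", G.number_of_edges(),
      "average degree:", 2 * G.number_of_edges() / G.number_of_nodes())
```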

Diferenças de discursos

Segundo os pesquisadores, a diferenciação de pacientes a partir da análise dos grafos de relatos dos sonhos foi possível porque suas características de fala também são bastante diversificadas.

Os pacientes esquizofrênicos costumam falar de forma lacônica e com pouca digressão (desvio de assunto) – o que explica por que a conectividade e a quantidade de arestas dos grafos de seus relatos são menores em comparação às dos bipolares.

Por sua vez, pacientes com transtorno bipolar tendem a apresentar um sintoma oposto ao da digressão, chamado logorreia ou verborragia, falando atabalhoadamente frases sem sentido – chamado na Psiquiatria de “fuga de ideias”.

“Encontramos uma correlação importante dessas medidas feitas por meio das análises dos grafos com os sintomas negativos e cognitivos medidos por escalas psicométricas utilizadas na prática clínica da Psiquiatria”, afirmou Mota.

Ao transformar essas características marcantes de fala dos pacientes em grafos é possível dar origem a um classificador computacional capaz de auxiliar os psiquiatras no diagnóstico de transtornos mentais, indicou Copelli.

“Todas as ocorrências no discurso dos pacientes com transtornos mentais que no grafo têm um significado aparentemente geométrico podem ser quantificadas matematicamente e ajudar a classificar se um paciente é esquizofrênico ou bipolar, com uma taxa de sucesso comparável ou até mesmo melhor do que as escalas psiquiátricas subjetivas utilizadas para essa finalidade”, avaliou.
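
The article does not specify the authors’ classifier, so the following is only a hypothetical sketch of the idea of classifying reports from graph metrics, using scikit-learn; the feature set, the toy numbers and the choice of a random forest are all assumptions for illustration:

    # Hypothetical sketch, not the authors' classifier: predict a diagnosis
    # from per-report graph metrics (edges, nodes, connected-component sizes).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Illustrative feature rows: [edges, nodes, largest strongly connected
    # component, largest weakly connected component] for six fictitious reports.
    X = np.array([
        [18, 15,  3, 15],   # schizophrenia-like: fewer links
        [20, 16,  4, 16],
        [45, 30, 12, 30],   # bipolar-like: more links
        [48, 31, 14, 31],
        [35, 28,  9, 28],   # control-like
        [33, 27,  8, 27],
    ])
    y = ["schizophrenia", "schizophrenia", "bipolar", "bipolar",
         "control", "control"]

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    new_report = [[21, 17, 4, 17]]   # graph metrics of a new, fictitious report
    print(clf.predict(new_report))   # likely ['schizophrenia']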

The researchers’ goal is to evaluate a larger number of patients and to calibrate the algorithm (the sequence of instructions) of the software that converts dream reports into graphs, so that it can be used on a large scale in clinical psychiatric practice.

Although initially applied to the diagnosis of psychoses, the technique could be extended to many other purposes, Mota said.

“It could be used, for example, to gather more information about language structure in the analysis of reports not only from people with psychotic symptoms, but also in various situations of cognitive decline, such as dementia, or of cognitive gain, as during learning and the development of speech and writing,” the researcher said.

The role of dreams

During the study, the researchers also built and analyzed graphs of reports about the activities the volunteer patients had carried out on the day before the dream.

The graphs of these everyday accounts, called “waking reports,” were not as indicative of the patient’s type of mental disorder as the dream reports were, Copelli said.

“We were able to distinguish schizophrenics from the other groups using graph analysis of the waking reports, but we could not distinguish bipolar patients from the control group that way,” he said.

The researchers still do not know why graphs of dream reports are more informative about psychosis than graphs of waking reports.

Some of the hypotheses explored in Mota’s doctoral research are related to the physiological mechanisms of memory formation.

“We believe that, because they are more transient memories, dreams may be more cognitively demanding and have a greater affective impact than memories of everyday life, and this may make their reports more complex,” the researcher said.

“Another hypothesis is that a dream is an event experienced exclusively by one person, not shared with anyone else, and for that reason it may be more complex to explain than an everyday activity,” she said.

To test these hypotheses, the researchers plan to expand their data collection by administering questionnaires to patients who have had a first psychotic episode, in order to clarify whether other kinds of reports, such as reports of old memories, can match dreams in terms of psychiatric information. They also want to verify whether the method can be used to identify early signs or clusters of symptoms (prodromes) and to track the effects of medication.

“We intend to investigate in the laboratory, with high-density electroencephalography and several techniques for measuring semantic distances and analyzing graph structure, how the stimuli received immediately before sleep influence the dream reports produced upon waking,” said Sidarta Ribeiro, a researcher at the Brain Institute of UFRN.

“We are particularly interested in the distinct effects of images with affective value,” said Ribeiro, who is also an associate researcher at Neuromat.

The article “Graph analysis of dream reports is especially informative about psychosis” (doi: 10.1038/srep03691), by Mota and colleagues, can be read in Scientific Reports at www.nature.com/srep/2014/140115/srep03691/full/srep03691.html.

Soap Bubbles for Predicting Cyclone Intensity? (Science Daily)

Jan. 8, 2014 — Could soap bubbles be used to predict the strength of hurricanes and typhoons? However unexpected it may sound, this question prompted physicists at the Laboratoire Ondes et Matière d’Aquitaine (CNRS/université de Bordeaux) to perform a highly novel experiment: they used soap bubbles to model atmospheric flow. A detailed study of the rotation rates of the bubble vortices enabled the scientists to obtain a relationship that accurately describes the evolution of their intensity, and propose a simple model to predict that of tropical cyclones.

Vortices in a soap bubble. (Credit: © Hamid Kellay)

The work, carried out in collaboration with researchers from the Institut de Mathématiques de Bordeaux (CNRS/université de Bordeaux/Institut Polytechnique de Bordeaux) and a team from Université de la Réunion, has just been published in the journal Scientific Reports (Nature Publishing Group).

Predicting wind intensity or strength in tropical cyclones, typhoons and hurricanes is a key objective in meteorology: the lives of hundreds of thousands of people may depend on it. However, despite recent progress, such forecasts remain difficult since they involve many factors related to the complexity of these giant vortices and their interaction with the environment. A new research avenue has now been opened up by physicists at the Laboratoire Ondes et Matière d’Aquitaine (CNRS/Université Bordeaux 1), who have performed a highly novel experiment using, of all things, soap bubbles.

The researchers carried out simulations of flow on soap bubbles, reproducing the curvature of the atmosphere and approximating as closely as possible a simple model of atmospheric flow. The experiment allowed them to obtain vortices that resemble tropical cyclones and whose rotation rate and intensity exhibit striking dynamics: intensity is weak initially, just after the birth of the vortex, then increases significantly over time. Following this intensification phase, the vortex attains its maximum intensity before entering a phase of decline.

A detailed study of the rotation rate of the vortices enabled the researchers to obtain a simple relationship that accurately describes the evolution of their intensity. For instance, the relationship can be used to determine the maximum intensity of the vortex and the time it takes to reach it, on the basis of its initial evolution. This prediction can begin around fifty hours after the formation of the vortex, a period corresponding to approximately one quarter of its lifetime and during which wind speeds intensify. The team then set out to verify that these results could be applied to real tropical cyclones. By applying the same analysis to approximately 150 tropical cyclones in the Pacific and Atlantic oceans, they showed that the relationship held true for such low-pressure systems. This study therefore provides a simple model that could help meteorologists to better predict the strength of tropical cyclones in the future.
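
The press release does not give the relationship itself, so the following is only an illustrative sketch of the forecasting idea: fit a parametric intensification curve to the early part of an intensity record and read off the predicted peak. The functional form I(t) = A·t·exp(-t/τ), the synthetic data and the 50-hour fitting window are all assumptions, not the published relation (Python with NumPy/SciPy):

    # Illustrative only: the published soap-bubble/cyclone relation is not
    # reproduced here. Fit an assumed intensification curve to early data,
    # then predict the time and value of the intensity peak.
    import numpy as np
    from scipy.optimize import curve_fit

    def intensity(t, A, tau):
        return A * t * np.exp(-t / tau)

    rng = np.random.default_rng(0)
    t = np.linspace(0, 200, 200)                    # hours since vortex formation
    observed = intensity(t, A=2.0, tau=60.0) + rng.normal(0, 1.0, t.size)

    early = t <= 50                                 # use only the first ~50 hours
    popt, _ = curve_fit(intensity, t[early], observed[early], p0=(1.0, 30.0))
    A_fit, tau_fit = popt

    t_peak = tau_fit                  # dI/dt = 0 at t = tau for this curve
    print(f"predicted peak {intensity(t_peak, *popt):.1f} at t = {t_peak:.0f} h")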

Journal Reference:

  1. T. Meuel, Y. L. Xiong, P. Fischer, C. H. Bruneau, M. Bessafi, H. Kellay. Intensity of vortices: from soap bubbles to hurricanes. Scientific Reports, 2013; 3 DOI: 10.1038/srep03455

Democracy Pays (Science Daily)

Dec. 23, 2013 — In relatively large communities, individuals do not always obey the rules and often exploit the willingness of others to cooperate. Institutions such as the police are there to provide protection from misconduct such as tax fraud. But such institutions do not simply arise spontaneously, because they cost money to which each individual must contribute.

An interdisciplinary team of researchers led by Manfred Milinski from the Max Planck Institute for Evolutionary Biology in Plön has now used an experimental game to investigate the conditions under which institutions of this kind can nevertheless arise. The study shows that a group of players does particularly well if it has first used its own “tax money” to set up a central institution which punishes both free riders and tax evaders. However, the groups only set up institutions to penalize tax evasion if they have decided to do so by a democratic majority decision. Democracy thus enables the creation of rules and institutions which, while demanding individual sacrifice, are best for the group. The chances of agreeing on common climate protection measures around the globe are thus greater under democratic conditions.

In most modern states, central institutions are funded by public taxation. This means, however, that tax evaders must also be punished. Once such a system has been established, it is also good for the community: it makes co-existence easier and it helps maintain common standards. However, such advantageous institutions do not come about by themselves. The community must first agree that such a common punishment authority makes sense and decide what powers it should be given. Climate protection is a case in point, demonstrating that this cannot always be achieved. But how can a community agree on sensible institutions and self-limitations?

The Max Planck researchers allowed participants in a modified public goods game to decide whether to pay taxes towards a policing institution with their starting capital. They were additionally able to pay money into a common pot. The total paid in was then tripled and paid out to all participants. If taxes had been paid beforehand, free riders who did not contribute to the group pot were punished by the police. In the absence of taxation, however, there would be no police and the group would run the risk that no-one would pay into the common pot.
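
The press release does not give the experiment’s exact payoffs, so the following is only a rough sketch of how a single round of such a game might be scored; the endowment, tax, contribution, multiplier and fine below are all assumed values:

    # Hypothetical single round of the modified public goods game; all
    # parameter values are assumptions, not the experiment's actual payoffs.
    def play_round(pays_tax, contributes, endowment=20, tax=2,
                   contribution=10, multiplier=3, fine=15):
        """pays_tax, contributes: one boolean per player."""
        n = len(contributes)
        police_funded = any(pays_tax)        # any tax revenue funds the police
        pot = multiplier * contribution * sum(contributes)
        share = pot / n                      # multiplied pot split among everyone
        payoffs = []
        for taxed, contributed in zip(pays_tax, contributes):
            p = endowment + share
            if taxed:
                p -= tax
            if contributed:
                p -= contribution
            elif police_funded:              # free riders punished only if police exist
                p -= fine
            payoffs.append(p)
        return payoffs

    # Three tax-paying contributors and one free rider: with funded police,
    # free riding becomes the worst-paying strategy.
    print(play_round(pays_tax=[True, True, True, False],
                     contributes=[True, True, True, False]))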

Police punishment of both free riders and tax evaders quickly established cooperative behavior in the experiment. If, however, tax evaders were not punished, the opposite happened and the participants avoided paying taxes. Without policing, there was no longer any incentive to pay into the group pot, so reducing the profits for the group members. Ultimately, each individual thus benefits if tax evaders are punished.

But can participants foresee this development? To find out, the scientists gave the participants a choice: they were now able to choose individually whether they joined a group in which the police also punish tax evaders. Alternatively, they could choose a group in which only those participants who did not pay into the common pot were penalized. Faced with this choice, the majority preferred a community without punishment for tax evaders — with the result that virtually no taxes were paid and, subsequently, that contributions to the group pot also fell.

In a second experimental scenario, the players were instead able to decide by democratic vote whether, for all subsequent rounds, the police should be authorized to punish tax evaders as well as free riders or only free riders. In this case, the players clearly voted for institutions in which tax evaders were also punished. “People are often prepared to impose rules on themselves, but only if they know that these rules apply to everyone,” summarizes Christian Hilbe, the lead author of the study. A majority decision ensures that all participants are equally affected by the outcome of the vote. This makes it easier to introduce rules and institutions which, while demanding individual sacrifice, are best for the group.

The participants’ profits also demonstrate that majority decisions are better: those groups which were able to choose democratically were more cooperative and so also made greater profits. “Democracy pays — in the truest sense of the word,” says Manfred Milinski. “More democracy would certainly not go amiss when it comes to the problem of global warming.”

The Oracle of the T Cell (Science Daily)

Dec. 5, 2013 — A platform that simulates how the body defends itself: The T cells of the immune system decide whether to trigger an immune response against foreign substances.

The virtual T cell allows an online simulation of the response of this immune cell to external signals. (Credit: University of Freiburg)

Since December 2013, scientists from around the world can use the “virtual T cell” to test for themselves what happens in the blood cell when receptor proteins are activated on the surface. Prof. Dr. Wolfgang Schamel from the Institute of Biology III, Faculty of Biology, the Cluster of Excellence BIOSS Centre for Biological Signalling Studies and the Center of Chronic Immunodeficiency of the University of Freiburg is coordinating the European Union-funded project SYBILLA, “Systems Biology of T-Cell Activation in Health and Disease.” This consortium of 17 partners from science and industry has been working since 2008 to understand the T cell as a system. Now the findings of the project are available to the public on an interactive platform. Simulating the signaling pathways in the cell enables researchers to develop new therapeutic approaches for cancer, autoimmune diseases, and infectious diseases.

The T cell is activated by vaccines, allergens, bacteria, or viruses. The T cell receptor identifies these foreign substances and sets off intracellular signaling cascades. This response is then modified by many further receptors. In the end, the network of signaling proteins results in cell division, growth, or the release of messengers that guide other cells of the immune system. The network initiates the attack on the foreign substances. Sometimes, however, the process of activation goes awry: The T cells mistakenly attack the body’s own cells, as in autoimmune diseases, or they ignore harmful cells like cancer cells.

The online platform, developed by Dr. Utz-Uwe Haus and Prof. Dr. Robert Weismantel from the Department of Mathematics of ETH Zurich in collaboration with Dr. Jonathan Lindquist and Prof. Dr. Burkhart Schraven from the Institute of Molecular and Clinical Immunology of the University of Magdeburg and the Helmholtz Center for Infection Research in Braunschweig, allows researchers to click through the signaling network of the T cell: users can switch on twelve receptors, including the T cell receptor, identify the signals on the surface of other cells, or bind messengers.

The mathematical model then calculates the behavior of the network of 403 elements in the system. The result is a combination of the activity of 52 proteins that predicts what will happen to the cell: these proteins change the way the DNA is read and therefore what the cell produces. Researchers can now look for weak points where active substances could be used to treat immune diseases or cancer, by switching particular signals on and off in the model. Every protein and every interaction between proteins is described in detail in the network, backed up with references to publications. In addition, users can even extend the model themselves to include further signaling proteins.
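
The platform itself encodes a large logical interaction network; purely as an illustration of how such a model can be queried, here is a toy Boolean signalling network in Python. The node names, update rules and network size are invented and bear no relation to the 403-element model:

    # Toy Boolean signalling network, for illustration only; the nodes and
    # rules are invented and far smaller than the platform's 403 elements.
    rules = {
        "TCR":   lambda s: s["antigen"],                  # receptor sees antigen
        "LCK":   lambda s: s["TCR"] and not s["inhibitor"],
        "ZAP70": lambda s: s["LCK"],
        "ERK":   lambda s: s["ZAP70"],
        "IL2":   lambda s: s["ERK"],                      # readout: cytokine output
    }

    def simulate(inputs, steps=10):
        state = {node: False for node in rules}
        state.update(inputs)                              # fixed external inputs
        for _ in range(steps):                            # synchronous updates
            state.update({n: bool(f(state)) for n, f in rules.items()})
            state.update(inputs)
        return state

    print(simulate({"antigen": True, "inhibitor": False})["IL2"])   # True
    print(simulate({"antigen": True, "inhibitor": True})["IL2"])    # False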

No Qualms About Quantum Theory (Science Daily)

Nov. 26, 2013 — A colloquium paper published in The European Physical Journal D looks into the alleged issues associated with quantum theory. Berthold-Georg Englert from the National University of Singapore reviews a selection of the potential problems of the theory. In particular, he discusses cases in which mathematical tools are confused with the actual observed sub-atomic scale phenomena they describe. Such tools are essential for interpreting the observations, but they must not be confused with the actual object of study.

The author sets out to demystify a selected set of objections targeted against quantum theory in the literature. He takes the example of Schrödinger’s infamous cat, whose vital state serves as the indicator of the occurrence of radioactive decay, whereby the decay triggers a hammer mechanism designed to release a lethal substance. The term ‘Schrödinger’s cat state’ is routinely applied to superpositions of quantum states of a particle. However, this imagined superposition of a dead and a live cat has no reality. Indeed, it confuses a physical object with its description. Something as abstract as the wave function − which is a mathematical tool describing the quantum state − cannot be considered a material entity embodied by a cat, regardless of whether it is dead or alive.

Other myths are debunked as well: the paper argues that quantum theory is well defined, has a clear interpretation, is a local theory, is not reversible, and does not feature any instantaneous action at a distance. It also maintains that there is no measurement problem, despite the fact that a measurement is commonly known to disturb the system being measured. Hence, since the establishment of quantum theory in the 1920s, its concepts have become clearer, but its foundations remain unchanged.

Journal Reference:

  1. Berthold-Georg Englert. On quantum theory. The European Physical Journal D, 2013; 67 (11) DOI: 10.1140/epjd/e2013-40486-5

Selecting Mathematical Models With Greatest Predictive Power: Finding Occam’s Razor in an Era of Information Overload (Science Daily)

Nov. 20, 2013 — How can the actions and reactions of proteins so small or stars so distant they are invisible to the human eye be accurately predicted? How can blurry images be brought into focus and reconstructed?

A new study led by physicist Steve Pressé, Ph.D., of the School of Science at Indiana University-Purdue University Indianapolis, shows that there may be a preferred strategy for selecting mathematical models with the greatest predictive power. Picking the best model is about sticking to the simplest line of reasoning, according to Pressé. His paper explaining his theory is published online this month in Physical Review Letters.

“Building mathematical models from observation is challenging, especially when there is, as is quite common, a ton of noisy data available,” said Pressé, an assistant professor of physics who specializes in statistical physics. “There are many models out there that may fit the data we do have. How do you pick the most effective model to ensure accurate predictions? Our study guides us towards a specific mathematical statement of Occam’s razor.”

Occam’s razor is an oft-cited 14th-century adage that “plurality should not be posited without necessity,” sometimes translated as “entities should not be multiplied unnecessarily.” Today it is interpreted as meaning that, all things being equal, the simpler theory is more likely to be correct.

A principle for picking the simplest model to answer complex questions of science and nature, originally postulated in the 19th century by the Austrian physicist Ludwig Boltzmann, has long been embraced by the physics community throughout the world. Then, in 1988, an alternative strategy for picking models was developed by the Brazilian physicist Constantino Tsallis. This strategy has been widely used in business (such as option pricing and modeling stock swings) as well as in scientific applications (such as evaluating population distributions). The new study finds that Boltzmann’s strategy, not the 20th-century alternative, ensures that the models picked are the simplest and most consistent with the data.
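
For reference (these are standard definitions, not formulas quoted from the study), the two strategies maximize different entropy functionals over candidate probability distributions {p_i}; the Tsallis form depends on a parameter q and reduces to the Boltzmann-Gibbs form in the limit q → 1:

    S_{BG} = -k \sum_i p_i \ln p_i                      % Boltzmann-Gibbs (additive)
    S_q    = k \, \frac{1 - \sum_i p_i^{\,q}}{q - 1}    % Tsallis (nonadditive); S_q -> S_{BG} as q -> 1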

“For almost three decades in physics we have had two main competing strategies for picking the best model. We needed some resolution,” Pressé said. “Even as simple an experiment as flipping a coin or as complex an enterprise as understanding functions of proteins or groups of proteins in human disease need a model to describe them. Simply put, we need one Occam’s razor, not two, when selecting models.”

In addition to Pressé, co-authors of “Nonadditive entropies yield probability distributions with biases not warranted by the data” are Kingshuk Ghosh of the University of Denver, Julian Lee of Soongsil University, and Ken A. Dill of Stony Brook University.

Pressé is also the first author of a companion paper, “Principles of maximum entropy and maximum caliber in statistical physics” published in the July-September issue of the Reviews of Modern Physics.

Scientists Pioneer Method to Predict Environmental Collapse (Science Daily)

Researcher Enlou Zhang takes a core sample from the bed of Lake Erhai in China. (Credit: University of Southampton)

Nov. 19, 2012 — Scientists at the University of Southampton are pioneering a technique to predict when an ecosystem is likely to collapse, which may also have potential for foretelling crises in agriculture, fisheries or even social systems.

The researchers have applied a mathematical model to a real world situation, the environmental collapse of a lake in China, to help prove a theory which suggests an ecosystem ‘flickers’, or fluctuates dramatically between healthy and unhealthy states, shortly before its eventual collapse.

Head of Geography at Southampton, Professor John Dearing explains: “We wanted to prove that this ‘flickering’ occurs just ahead of a dramatic change in a system — be it a social, ecological or climatic one — and that this method could potentially be used to predict future critical changes in other impacted systems in the world around us.”

A team led by Dr Rong Wang extracted core samples from sediment at the bottom of Lake Erhai in Yunnan province, China, and charted the levels and variation of fossilised algae (diatoms) over a 125-year period. Analysis of the core sample data showed the algae communities remained relatively stable up until about 30 years before the lake’s collapse into a turbid or polluted state. However, the core samples for these last three decades showed much fluctuation, indicating there had been numerous dramatic changes in the types and concentrations of algae present in the water — evidence of the ‘flickering’ before the lake’s final definitive change of state.

Rong Wang comments: “By using the algae as a measure of the lake’s health, we have shown that its eco-system ‘wobbled’ before making a critical transition — in this instance, to a turbid state.

“Dramatic swings can be seen in other data, suggesting large external impacts on the lake over a long time period — for example, pollution from fertilisers, sewage from fields and changes in water levels — caused the system to switch back and forth rapidly between alternate states. Eventually, the lake’s ecosystem could no longer cope or recover — losing resilience and reaching what is called a ‘tipping point’ and collapsing altogether.”
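
The press release does not describe the statistic the team actually computed; as a purely illustrative sketch, one common way to look for this kind of ‘flickering’ in a time series is to track the variance in a rolling window and flag the period where it rises sharply (the synthetic data, window length and threshold below are assumptions):

    # Generic early-warning sketch: rolling-window variance of a time series,
    # with synthetic "diatom abundance" data; not the paper's actual analysis.
    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(1880, 2005)
    # Stable at first, then much noisier in the last ~30 years before collapse.
    signal = np.where(years < 1971,
                      rng.normal(100, 3, years.size),
                      rng.normal(100, 15, years.size))

    window = 10
    rolling_var = np.array([signal[max(0, i - window):i + 1].var()
                            for i in range(signal.size)])

    # A sustained rise in rolling variance flags the "flickering" period.
    flagged = years[rolling_var > 3 * np.median(rolling_var)]
    print("flickering first flagged around:", flagged.min() if flagged.size else None)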

The researchers hope the method they have trialled in China could be applied to other regions and landscapes.

Co-author Dr Pete Langdon comments: “In this case, we used algae as a marker of how the lake’s ecosystem was holding up against external impacts — but who’s to say we couldn’t use this method in other ways? For example, perhaps we should look for ‘flickering’ signals in climate data to try and foretell impending crises?”

Journal Reference:

  1. Rong Wang, John A. Dearing, Peter G. Langdon, Enlou Zhang, Xiangdong Yang, Vasilis Dakos, Marten Scheffer. Flickering gives early warning signals of a critical transition to a eutrophic lake state. Nature, 2012; DOI: 10.1038/nature11655

Do We Live in a Computer Simulation Run by Our Descendants? Researchers Say Idea Can Be Tested (Science Daily)

The conical (red) surface shows the relationship between energy and momentum in special relativity, a fundamental theory concerning space and time developed by Albert Einstein, and is the expected result if our universe is not a simulation. The flat (blue) surface illustrates the relationship between energy and momentum that would be expected if the universe is a simulation with an underlying cubic lattice. (Credit: Martin Savage)

Dec. 10, 2012 — A decade ago, a British philosopher put forth the notion that the universe we live in might in fact be a computer simulation run by our descendants. While that seems far-fetched, perhaps even incomprehensible, a team of physicists at the University of Washington has come up with a potential test to see if the idea holds water.

The concept that current humanity could possibly be living in a computer simulation comes from a 2003 paper published in Philosophical Quarterly by Nick Bostrom, a philosophy professor at the University of Oxford. In the paper, he argued that at least one of three possibilities is true:

  • The human species is likely to go extinct before reaching a “posthuman” stage.
  • Any posthuman civilization is very unlikely to run a significant number of simulations of its evolutionary history.
  • We are almost certainly living in a computer simulation.

He also held that “the belief that there is a significant chance that we will one day become posthumans who run ancestor simulations is false, unless we are currently living in a simulation.”

With current limitations and trends in computing, it will be decades before researchers will be able to run even primitive simulations of the universe. But the UW team has suggested tests that can be performed now, or in the near future, that are sensitive to constraints imposed on future simulations by limited resources.

Currently, supercomputers using a technique called lattice quantum chromodynamics and starting from the fundamental physical laws that govern the universe can simulate only a very small portion of the universe, on the scale of one 100-trillionth of a meter, a little larger than the nucleus of an atom, said Martin Savage, a UW physics professor.

Eventually, more powerful simulations will be able to model on the scale of a molecule, then a cell and even a human being. But it will take many generations of growth in computing power to be able to simulate a large enough chunk of the universe to understand the constraints on physical processes that would indicate we are living in a computer model.

However, Savage said, there are signatures of resource constraints in present-day simulations that are likely to exist as well in simulations in the distant future, including the imprint of an underlying lattice if one is used to model the space-time continuum.

The supercomputers performing lattice quantum chromodynamics calculations essentially divide space-time into a four-dimensional grid. That allows researchers to examine what is called the strong force, one of the four fundamental forces of nature and the one that binds subatomic particles called quarks and gluons together into neutrons and protons at the core of atoms.

“If you make the simulations big enough, something like our universe should emerge,” Savage said. Then it would be a matter of looking for a “signature” in our universe that has an analog in the current small-scale simulations.

Savage and colleagues Silas Beane of the University of New Hampshire, who collaborated while at the UW’s Institute for Nuclear Theory, and Zohreh Davoudi, a UW physics graduate student, suggest that the signature could show up as a limitation in the energy of cosmic rays.

In a paper they have posted on arXiv, an online archive for preprints of scientific papers in a number of fields, including physics, they say that the highest-energy cosmic rays would not travel along the edges of the lattice in the model but would travel diagonally, and they would not interact equally in all directions as they otherwise would be expected to do.
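
As background only (this is textbook lattice field theory, not a formula quoted from the UW paper), discretising space-time with lattice spacing a modifies the continuum energy-momentum relation of a free particle and singles out the grid directions, which is the kind of directional signature being discussed:

    E^2 = \vec{p}^{\,2} + m^2                                            % continuum (c = 1)
    \frac{4}{a^2}\sinh^2\!\frac{aE}{2}
        = \sum_{i=1}^{3} \frac{4}{a^2}\sin^2\!\frac{a p_i}{2} + m^2      % cubic lattice

Each momentum component is bounded by |p_i| ≤ π/a, so the lattice spacing sets a maximum energy, and the relation depends on the orientation of the momentum with respect to the lattice axes rather than only on its magnitude.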

“This is the first testable signature of such an idea,” Savage said.

If such a concept turned out to be reality, it would raise other possibilities as well. For example, Davoudi suggests that if our universe is a simulation, then those running it could be running other simulations as well, essentially creating other universes parallel to our own.

“Then the question is, ‘Can you communicate with those other universes if they are running on the same platform?'” she said.

Journal References:

  1. Silas R. Beane, Zohreh Davoudi, Martin J. Savage. Constraints on the Universe as a Numerical Simulation. arXiv, 2012 [link]
  2. Nick Bostrom. Are You Living in a Computer Simulation? Philosophical Quarterly, 2003; Vol. 53, No. 211, pp. 243-255 [link]

‘Missing’ Polar Weather Systems Could Impact Climate Predictions (Science Daily)

Intense but small-scale polar storms could make a big difference to climate predictions according to new research. (Credit: NEODAAS / University of Dundee)

Dec. 16, 2012 — Intense but small-scale polar storms could make a big difference to climate predictions, according to new research from the University of East Anglia and the University of Massachusetts.

Difficult-to-forecast polar mesoscale storms occur frequently over the polar seas; however, they are missing in most climate models.

Research published Dec. 16 in Nature Geoscience shows that their inclusion could paint a different picture of climate change in years to come.

Polar mesoscale storms are capable of producing hurricane-strength winds which cool the ocean and lead to changes in its circulation.

Prof Ian Renfrew, from UEA’s School of Environmental Sciences, said: “These polar lows are typically under 500 km in diameter and over within 24-36 hours. They’re difficult to predict, but we have shown they play an important role in driving large-scale ocean circulation.

“There are hundreds of them a year in the North Atlantic, and dozens of strong ones. They create a lot of stormy weather, strong winds and snowfall — particularly over Norway, Iceland, and Canada, and occasionally over Britain, such as in 2003 when a massive dump of snow brought the M11 to a standstill for 24 hours.

“We have shown that adding polar storms into computer-generated models of the ocean results in significant changes in ocean circulation — including an increase in heat travelling north in the Atlantic Ocean and more overturning in the Sub-polar seas.

“At present, climate models don’t have a high enough resolution to account for these small-scale polar lows.

“As Arctic Sea ice continues to retreat, polar lows are likely to migrate further north, which could have consequences for the ‘thermohaline’ or northward ocean circulation — potentially leading to it weakening.”

Alan Condron from the University of Massachusetts said: “By simulating polar lows, we find that the area of the ocean that becomes denser and sinks each year increases and causes the amount of heat being transported towards Europe to intensify.

“The fact that climate models are not simulating these storms is a real problem because these models will incorrectly predict how much heat is being moved northward towards the poles. This will make it very difficult to reliably predict how the climate of Europe and North America will change in the near-future.”

Prof Renfrew added: “Climate models are always improving, and there is a trade-off between the resolution of the model, the complexity of the model, and the number of simulations you can carry out. Our work suggests we should put some more effort into resolving such storms.”

‘The impact of polar mesoscale storms on Northeast Atlantic ocean circulation’ by Alan Condron from the University of Massachusetts (US) and Ian Renfrew from UEA (UK), is published in Nature Geoscience on December 16, 2012.

Journal Reference:

  1. Alan Condron, Ian A. Renfrew. The impact of polar mesoscale storms on northeast Atlantic Ocean circulation. Nature Geoscience, 2012; DOI: 10.1038/ngeo1661

Physicist Happens Upon Rain Data Breakthrough (Science Daily)

John Lane looks over data recorded from his laser system as he refines his process and formula to calibrate measurements of raindrops. (Credit: NASA/Jim Grossmann)

Dec. 3, 2012 — A physicist and researcher who set out to develop a formula to protect Apollo sites on the moon from rocket exhaust may have happened upon a way to improve weather forecasting on Earth.

Working in his backyard during rain showers and storms, John Lane, a physicist at NASA’s Kennedy Space Center in Florida, found that the laser and reflector he was developing to track lunar dust also could determine accurately the size of raindrops, something weather radar and other meteorological systems estimate, but don’t measure.

The special quantity measured by the laser system is called the “second moment of the size distribution,” which yields the average cross-sectional area of the raindrops passing through the laser beam.
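
In the standard notation for a drop size distribution N(D), the number of drops per unit volume and per unit diameter interval, the n-th moment is defined as below; the second moment is what fixes the mean cross-sectional area, because a drop of diameter D presents an area πD²/4 (these are textbook definitions, not taken from Lane’s analysis):

    M_n = \int_0^{\infty} D^{\,n}\, N(D)\, \mathrm{d}D,
    \qquad
    \bar{A} = \frac{\pi}{4}\,\frac{M_2}{M_0}

where M_0 is the total number concentration of drops, so \bar{A} is the average cross-sectional area per drop.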

“It’s not often that you’re studying lunar dust and it ends up producing benefits in weather forecasting,” said Phil Metzger, a physicist who leads the Granular Mechanics and Regolith Operations Lab, part of the Surface Systems Office at Kennedy.

Lane said the additional piece of information would be useful in filling out the complex computer calculations used to determine the current conditions and forecast the weather.

“We may be able to refine (computer weather) models to make them more accurate,” Lane said. “Weather radar data analysis makes assumptions about raindrop size, so I think this could improve the overall drop size distribution estimates.”

The breakthrough came because Metzger and Lane were looking for a way to calibrate a laser sensor to pick up the fine particles of blowing lunar dust and soil. It turns out that rain is a good stand-in for flying lunar soil.

“I was pretty skeptical in the beginning that the numbers would come out anywhere close,” Lane said. “Anytime you do something new, it’s a risk that you’re just wasting your time.”

The genesis of the research was the need to find out how much damage would be done by robotic landers getting too close to the six places on the moon where Apollo astronauts landed, lived and worked.

NASA fears that dust and soil particles thrown up by the rocket exhaust of a lander will scour and perhaps puncture the metal skin of the lunar module descent stages and experiment hardware left behind by the astronauts from 1969 to 1972.

“It’s like sandblasting: if you have something coming down like a rocket engine, and it lifts up this dust, there’s no air, so it just keeps going fast,” Lane said. “Some of the stuff can actually reach escape velocity and go into orbit.”

Such impacts to those materials could ruin their scientific value to researchers on Earth who want to know what happens to human-made materials left on another world for more than 40 years.

“The Apollo sites have value scientifically and from an engineering perspective because they are a record of how these materials on the moon have interacted with the solar system over 40 years,” Metzger said. “They are witness plates to the environment.”

There also are numerous bags of waste from the astronauts lying up there that biologists want to examine simply to see if living organisms can survive on the moon for almost five decades where there is no air and there is a constant bombardment of cosmic radiation.

“If anybody goes back and sprays stuff on the bags or touches the bags, they ruin the experiment,” Metzger said. “It’s not just the scientific and engineering value. They believe the Apollo sites are the most important archaeological sites in the human sphere, more important than the pyramids because it’s the first place humans stepped off the planet. And from a national point of view, these are symbols of our country and we don’t want them to be damaged by wanton ransacking.”

Current thinking anticipates placing a laser sensor on the bottom of one of the landers taking part in the Google Lunar X Prize competition. The sensor should be able to pick up the blowing dust and soil and give researchers a clear set of results so they can formulate restrictions for other landers, such as how far away from the Apollo sites new landers can touch down.

As research continues into the laser sensor, Lane expects the work to continue on the weather forecasting side of the equation, too. Lane already presented some of his findings at a meteorological conference and is working on a research paper to detail the work. “This is one of those topics that span a lot of areas of science,” Lane said.

Water Resources Management and Policy in a Changing World: Where Do We Go from Here? (Science Daily)

Nov. 26, 2012 — Visualize a dusty place where stream beds are sand and lakes are flats of dried mud. Are we on Mars? In fact, we’re on arid parts of Earth, a planet where water covers some 70 percent of the surface.

How long will water be readily available to nourish life here?

Scientists funded by the National Science Foundation’s (NSF) Dynamics of Coupled Natural and Human Systems (CNH) program are finding new answers.

NSF-supported CNH researchers will address water resources management and policy in a changing world at the fall meeting of the American Geophysical Union (AGU), held in San Francisco from Dec. 3-7, 2012.

In the United States, more than 36 states face water shortages. Other parts of the world are faring no better.

What are the causes? Do the reasons lie in climate change, population growth or still other factors?

Among the topics to be covered at AGU are sociohydrology, patterns in coupled human-water resource systems and the resilience of coupled natural and human systems to global change.

Researchers will report, for example, that human population growth in the Andes outweighs climate change as the culprit in the region’s dwindling water supplies. Does the finding apply in other places, and perhaps around the globe?

Scientists presenting results are affiliated with CHANS-Net, an international network of researchers who study coupled natural and human systems.

NSF’s CNH program supports CHANS-Net, with coordination from the Center for Systems Integration and Sustainability at Michigan State University.

CHANS-Net facilitates communication and collaboration among scientists, engineers and educators striving to find sustainable solutions that benefit the environment while enabling people to thrive.

“For more than a decade, NSF’s CNH program has supported projects that explore the complex ways people and natural systems interact with each other,” says Tom Baerwald, NSF CNH program director.

“CHANS-Net and its investigators represent a broad range of projects. They’re developing a new, better understanding of how our planet works. CHANS-Net researchers are finding practical answers for how people can prosper while maintaining environmental quality.”

CNH and CHANS-Net are part of NSF’s Science, Engineering and Education for Sustainability (SEES) investment. NSF’s Directorates for Geosciences; Social, Behavioral and Economic Sciences; and Biological Sciences support the CNH program.

“CHANS-Net has grown to more than 1,000 members who span generations of natural and social scientists from around the world,” says Jianguo “Jack” Liu, principal investigator of CHANS-Net and Rachel Carson Chair in Sustainability at Michigan State University.

“CHANS-Net is very happy to support another 10 CHANS Fellows–outstanding young scientists–to attend AGU, give presentations there, and learn from leaders in CHANS research and build professional networks. We’re looking forward to these exciting annual CHANS-Net events.”

Speakers at AGU sessions organized by CHANS-Net will discuss such subjects as the importance of water conservation in the 21st century; the Gila River and whether its flows might reduce the risk of water shortages in the Colorado River Basin; and historical evolution of the hydrological functioning of the old Lake Xochimilco in the southern Mexico Basin.

Other topics to be addressed include water conflicts in a changing world; system modeling of the Great Salt Lake in Utah to improve the hydro-ecological performance of diked wetlands; and integrating economics into water resources systems analysis.

“Of all our natural resources, water has become the most precious,” wrote Rachel Carson in 1962 in Silent Spring. “By a strange paradox, most of the Earth’s abundant water is not usable for agriculture, industry, or human consumption because of its heavy load of sea salts, and so most of the world’s population is either experiencing or is threatened with critical shortages.”

Fifty years later, more than 100 scientists will present research reflecting Rachel Carson’s conviction that “seldom if ever does nature operate in closed and separate compartments, and she has not done so in distributing Earth’s water supply.”