Tag archive: Mathematics

The new astrology (Aeon)

By fetishising mathematical models, economists turned economics into a highly paid pseudoscience

04 April, 2016

Alan Jay Levinovitz is an assistant professor of philosophy and religion at James Madison University in Virginia. His most recent book is The Gluten Lie: And Other Myths About What You Eat (2015). Edited by Sam Haselby.

 

What would make economics a better discipline?

Since the 2008 financial crisis, colleges and universities have faced increased pressure to identify essential disciplines, and cut the rest. In 2009, Washington State University announced it would eliminate the department of theatre and dance, the department of community and rural sociology, and the German major – the same year that the University of Louisiana at Lafayette ended its philosophy major. In 2012, Emory University in Atlanta did away with the visual arts department and its journalism programme. The cutbacks aren’t restricted to the humanities: in 2011, the state of Texas announced it would eliminate nearly half of its public undergraduate physics programmes. Even when there’s no downsizing, faculty salaries have been frozen and departmental budgets have shrunk.

But despite the funding crunch, it’s a bull market for academic economists. According to a 2015 sociological study in the Journal of Economic Perspectives, the median salary of economics teachers in 2012 increased to $103,000 – nearly $30,000 more than sociologists. For the top 10 per cent of economists, that figure jumps to $160,000, higher than the next most lucrative academic discipline – engineering. These figures, stress the study’s authors, do not include other sources of income such as consulting fees for banks and hedge funds, which, as many learned from the documentary Inside Job (2010), are often substantial. (Ben Bernanke, a former academic economist and ex-chairman of the Federal Reserve, earns $200,000-$400,000 for a single appearance.)

Unlike engineers and chemists, economists cannot point to concrete objects – cell phones, plastic – to justify the high valuation of their discipline. Nor, in the case of financial economics and macroeconomics, can they point to the predictive power of their theories. Hedge funds employ cutting-edge economists who command princely fees, but routinely underperform index funds. Eight years ago, Warren Buffett made a 10-year, $1 million bet that a portfolio of hedge funds would lose to the S&P 500, and it looks like he’s going to collect. In 1998, a fund that boasted two Nobel Laureates as advisors collapsed, nearly causing a global financial crisis.

The failure of the field to predict the 2008 crisis has also been well-documented. In 2003, for example, only five years before the Great Recession, the Nobel Laureate Robert E Lucas Jr told the American Economic Association that ‘macroeconomics […] has succeeded: its central problem of depression prevention has been solved’. Short-term predictions fare little better – in April 2014, for instance, a survey of 67 economists yielded 100 per cent consensus: interest rates would rise over the next six months. Instead, they fell. A lot.

Nonetheless, surveys indicate that economists see their discipline as ‘the most scientific of the social sciences’. What is the basis of this collective faith, shared by universities, presidents and billionaires? Shouldn’t successful and powerful people be the first to spot the exaggerated worth of a discipline, and the least likely to pay for it?

In the hypothetical worlds of rational markets, where much of economic theory is set, perhaps. But real-world history tells a different story, of mathematical models masquerading as science and a public eager to buy them, mistaking elegant equations for empirical accuracy.

As an extreme example, take the extraordinary success of Evangeline Adams, a turn-of-the-20th-century astrologer whose clients included the president of Prudential Insurance, two presidents of the New York Stock Exchange, the steel magnate Charles M Schwab, and the banker J P Morgan. To understand why titans of finance would consult Adams about the market, it is essential to recall that astrology used to be a technical discipline, requiring reams of astronomical data and mastery of specialised mathematical formulas. ‘An astrologer’ is, in fact, the Oxford English Dictionary’s second definition of ‘mathematician’. For centuries, mapping stars was the job of mathematicians, a job motivated and funded by the widespread belief that star-maps were good guides to earthly affairs. The best astrology required the best astronomy, and the best astronomy was done by mathematicians – exactly the kind of person whose authority might appeal to bankers and financiers.

In fact, when Adams was arrested in 1914 for violating a New York law against astrology, it was mathematics that eventually exonerated her. During the trial, her lawyer Clark L Jordan emphasised mathematics in order to distinguish his client’s practice from superstition, calling astrology ‘a mathematical or exact science’. Adams herself demonstrated this ‘scientific’ method by reading the astrological chart of the judge’s son. The judge was impressed: the plaintiff, he observed, went through a ‘mathematical process to get at her conclusions… I am satisfied that the element of fraud… is absent here.’


The enchanting force of mathematics blinded the judge – and Adams’s prestigious clients – to the fact that astrology relies upon a highly unscientific premise, that the position of stars predicts personality traits and human affairs such as the economy. It is this enchanting force that explains the enduring popularity of financial astrology, even today. The historian Caley Horan at the Massachusetts Institute of Technology described to me how computing technology made financial astrology explode in the 1970s and ’80s. ‘Within the world of finance, there’s always a superstitious, quasi-spiritual trend to find meaning in markets,’ said Horan. ‘Technical analysts at big banks, they’re trying to find patterns in past market behaviour, so it’s not a leap for them to go to astrology.’ In 2000, USA Today quoted Robin Griffiths, the chief technical analyst at HSBC, the world’s third largest bank, saying that ‘most astrology stuff doesn’t check out, but some of it does’.

Ultimately, the problem isn’t with worshipping models of the stars, but rather with uncritical worship of the language used to model them, and nowhere is this more prevalent than in economics. The economist Paul Romer at New York University has recently begun calling attention to an issue he dubs ‘mathiness’ – first in the paper ‘Mathiness in the Theory of Economic Growth’ (2015) and then in a series of blog posts. Romer believes that macroeconomics, plagued by mathiness, is failing to progress as a true science should, and compares debates among economists to those between 16th-century advocates of heliocentrism and geocentrism. Mathematics, he acknowledges, can help economists to clarify their thinking and reasoning. But the ubiquity of mathematical theory in economics also has serious downsides: it creates a high barrier to entry for those who want to participate in the professional dialogue, and makes checking someone’s work excessively laborious. Worst of all, it imbues economic theory with unearned empirical authority.

‘I’ve come to the position that there should be a stronger bias against the use of math,’ Romer explained to me. ‘If somebody came and said: “Look, I have this Earth-changing insight about economics, but the only way I can express it is by making use of the quirks of the Latin language”, we’d say go to hell, unless they could convince us it was really essential. The burden of proof is on them.’

Right now, however, there is widespread bias in favour of using mathematics. The success of math-heavy disciplines such as physics and chemistry has endowed mathematical formulas with decisive authoritative force. Lord Kelvin, the 19th-century mathematical physicist, expressed this quantitative obsession:

When you can measure what you are speaking about and express it in numbers you know something about it; but when you cannot measure it… in numbers, your knowledge is of a meagre and unsatisfactory kind.

The trouble with Kelvin’s statement is that measurement and mathematics do not guarantee the status of science – they guarantee only the semblance of science. When the presumptions or conclusions of a scientific theory are absurd or simply false, the theory ought to be questioned and, eventually, rejected. The discipline of economics, however, is presently so blinkered by the talismanic authority of mathematics that theories go overvalued and unchecked.

Romer is not the first to elaborate the mathiness critique. In 1886, an article in Science accused economics of misusing the language of the physical sciences to conceal ‘emptiness behind a breastwork of mathematical formulas’. More recently, Deirdre N McCloskey’s The Rhetoric of Economics (1998) and Robert H Nelson’s Economics as Religion (2001) both argued that mathematics in economic theory serves, in McCloskey’s words, primarily to deliver the message ‘Look at how very scientific I am.’

After the Great Recession, the failure of economic science to protect our economy was once again impossible to ignore. In 2009, the Nobel Laureate Paul Krugman tried to explain it in The New York Times with a version of the mathiness diagnosis. ‘As I see it,’ he wrote, ‘the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.’ Krugman named economists’ ‘desire… to show off their mathematical prowess’ as the ‘central cause of the profession’s failure’.

The mathiness critique isn’t limited to macroeconomics. In 2014, the Stanford financial economist Paul Pfleiderer published the paper ‘Chameleons: The Misuse of Theoretical Models in Finance and Economics’, which helped to inspire Romer’s understanding of mathiness. Pfleiderer called attention to the prevalence of ‘chameleons’ – economic models ‘with dubious connections to the real world’ that substitute ‘mathematical elegance’ for empirical accuracy. Like Romer, Pfleiderer wants economists to be transparent about this sleight of hand. ‘Modelling,’ he told me, ‘is now elevated to the point where things have validity just because you can come up with a model.’

The notion that an entire culture – not just a few eccentric financiers – could be bewitched by empty, extravagant theories might seem absurd. How could all those people, all that math, be mistaken? This was my own feeling as I began investigating mathiness and the shaky foundations of modern economic science. Yet, as a scholar of Chinese religion, it struck me that I’d seen this kind of mistake before, in ancient Chinese attitudes towards the astral sciences. Back then, governments invested incredible amounts of money in mathematical models of the stars. To evaluate those models, government officials had to rely on a small cadre of experts who actually understood the mathematics – experts riven by ideological differences, who couldn’t even agree on how to test their models. And, of course, despite collective faith that these models would improve the fate of the Chinese people, they did not.

Astral Science in Early Imperial China, a forthcoming book by the historian Daniel P Morgan, shows that in ancient China, as in the Western world, the most valuable type of mathematics was devoted to the realm of divinity – to the sky, in their case (and to the market, in ours). Just as astrology and mathematics were once synonymous in the West, the Chinese spoke of li, the science of calendrics, which early dictionaries also glossed as ‘calculation’, ‘numbers’ and ‘order’. Li models, like macroeconomic theories, were considered essential to good governance. In the classic Book of Documents, the legendary sage king Yao transfers the throne to his successor with mention of a single duty: ‘Yao said: “Oh thou, Shun! The li numbers of heaven rest in thy person.”’

China’s oldest mathematical text invokes astronomy and divine kingship in its very title – The Arithmetical Classic of the Gnomon of the Zhou. The title’s inclusion of ‘Zhou’ recalls the mythic Eden of the Western Zhou dynasty (1045–771 BCE), implying that paradise on Earth can be realised through proper calculation. The book’s introduction to the Pythagorean theorem asserts that ‘the methods used by Yu the Great in governing the world were derived from these numbers’. It was an unquestioned article of faith: the mathematical patterns that govern the stars also govern the world. Faith in a divine, invisible hand, made visible by mathematics. No wonder that a newly discovered text fragment from 200 BCE extolls the virtues of mathematics over the humanities. In it, a student asks his teacher whether he should spend more time learning speech or numbers. His teacher replies: ‘If my good sir cannot fathom both at once, then abandon speech and fathom numbers, [for] numbers can speak, [but] speech cannot number.’

Modern governments, universities and businesses underwrite the production of economic theory with huge amounts of capital. The same was true for li production in ancient China. The emperor – the ‘Son of Heaven’ – spent astronomical sums refining mathematical models of the stars. Take the armillary sphere, such as the two-metre cage of graduated bronze rings in Nanjing, made to represent the celestial sphere and used to visualise data in three dimensions. As Morgan emphasises, the sphere was literally made of money. Bronze being the basis of the currency, governments were smelting cash by the metric ton to pour it into li. A divine, mathematical world-engine, built of cash, sanctifying the powers that be.

The enormous investment in li depended on a huge assumption: that good government, successful rituals and agricultural productivity all depended upon the accuracy of li. But there were, in fact, no practical advantages to the continued refinement of li models. The calendar rounded off decimal points such that the difference between two models, hotly contested in theory, didn’t matter to the final product. The work of selecting auspicious days for imperial ceremonies thus benefited only in appearance from mathematical rigour. And of course the comets, plagues and earthquakes that these ceremonies promised to avert kept on coming. Farmers, for their part, went about business as usual. Occasional governmental efforts to scientifically micromanage farm life in different climes using li ended in famine and mass migration.

Like many economic models today, li models were less important to practical affairs than their creators (and consumers) thought them to be. And, like today, only a few people could understand them. In 101 BCE, Emperor Wudi tasked high-level bureaucrats – including the Great Director of the Stars – with creating a new li that would glorify the beginning of his path to immortality. The bureaucrats refused the task because ‘they couldn’t do the math’, and recommended the emperor outsource it to experts.


The debates of these ancient li experts bear a striking resemblance to those of present-day economists. In 223 CE, a petition was submitted to the emperor asking him to approve tests of a new li model developed by the assistant director of the astronomical office, a man named Han Yi.

At the time of the petition, Han Yi’s model, and its competitor, the so-called Supernal Icon, had already been subjected to three years of ‘reference’, ‘comparison’ and ‘exchange’. Still, no one could agree which one was better. Nor, for that matter, was there any agreement on how they should be tested.

In the end, a live trial involving the prediction of eclipses and heliacal risings was used to settle the debate. With the benefit of hindsight, we can see this trial was seriously flawed. The heliacal rising (first visibility) of planets depends on non-mathematical factors such as eyesight and atmospheric conditions. That’s not to mention the scoring of the trial, which was modelled on archery competitions. Archers scored points for proximity to the bullseye, with no consideration for overall accuracy. The equivalent in economic theory might be to grant a model high points for success in predicting short-term markets, while failing to deduct for missing the Great Recession.

None of this is to say that li models were useless or inherently unscientific. For the most part, li experts were genuine mathematical virtuosos who valued the integrity of their discipline. Despite being based on inaccurate assumptions – that the Earth was at the centre of the cosmos – their models really did work to predict celestial motions. Imperfect though the live trial might have been, it indicates that superior predictive power was a theory’s most important virtue. All of this is consistent with real science, and Chinese astronomy progressed as a science, until it reached the limits imposed by its assumptions.

However, there was no science to the belief that accurate li would improve the outcome of rituals, agriculture or government policy. No science to the Hall of Light, a temple for the emperor built on the model of a magic square. There, by numeric ritual gesture, the Son of Heaven was thought to channel the invisible order of heaven for the prosperity of man. This was quasi-theology, the belief that heavenly patterns – mathematical patterns – could be used to model every event in the natural world, in politics, even the body. Macro- and microcosm were scaled reflections of one another, yin and yang in a unifying, salvific mathematical vision. The expensive gadgets, the personnel, the bureaucracy, the debates, the competition – all of this testified to the divinely authoritative power of mathematics. The result, then as now, was overvaluation of mathematical models based on unscientific exaggerations of their utility.

In ancient China it would have been unfair to blame li experts for the pseudoscientific exploitation of their theories. These men had no way to evaluate the scientific merits of assumptions and theories – ‘science’, in a formalised, post-Enlightenment sense, didn’t really exist. But today it is possible to distinguish, albeit roughly, science from pseudoscience, astronomy from astrology. Hypothetical theories, whether those of economists or conspiracists, aren’t inherently pseudoscientific. Conspiracy theories can be diverting – even instructive – flights of fancy. They become pseudoscience only when promoted from fiction to fact without sufficient evidence.

Romer believes that fellow economists know the truth about their discipline, but don’t want to admit it. ‘If you get people to lower their shield, they’ll tell you it’s a big game they’re playing,’ he told me. ‘They’ll say: “Paul, you may be right, but this makes us look really bad, and it’s going to make it hard for us to recruit young people.”’

Demanding more honesty seems reasonable, but it presumes that economists understand the tenuous relationship between mathematical models and scientific legitimacy. In fact, many assume the connection is obvious – just as in ancient China, the connection between li and the world was taken for granted. When reflecting in 1999 on what makes economics more scientific than the other social sciences, the Harvard economist Richard B Freeman explained that economics ‘attracts stronger students than [political science or sociology], and our courses are more mathematically demanding’. In Lives of the Laureates (2004), Robert E Lucas Jr writes rhapsodically about the importance of mathematics: ‘Economic theory is mathematical analysis. Everything else is just pictures and talk.’ Lucas’s veneration of mathematics leads him to adopt a method that can only be described as a subversion of empirical science:

The construction of theoretical models is our way to bring order to the way we think about the world, but the process necessarily involves ignoring some evidence or alternative theories – setting them aside. That can be hard to do – facts are facts – and sometimes my unconscious mind carries out the abstraction for me: I simply fail to see some of the data or some alternative theory.

Even for those who agree with Romer, conflict of interest still poses a problem. Why would skeptical astronomers question the emperor’s faith in their models? In a phone conversation, Daniel Hausman, a philosopher of economics at the University of Wisconsin, put it bluntly: ‘If you reject the power of theory, you demote economists from their thrones. They don’t want to become like sociologists.’

George F DeMartino, an economist and an ethicist at the University of Denver, frames the issue in economic terms. ‘The interest of the profession is in pursuing its analysis in a language that’s inaccessible to laypeople and even some economists,’ he explained to me. ‘What we’ve done is monopolise this kind of expertise, and we of all people know how that gives us power.’

Every economist I interviewed agreed that conflicts of interest were highly problematic for the scientific integrity of their field – but only tenured ones were willing to go on the record. ‘In economics and finance, if I’m trying to decide whether I’m going to write something favourable or unfavourable to bankers, well, if it’s favourable that might get me a dinner in Manhattan with movers and shakers,’ Pfleiderer said to me. ‘I’ve written articles that wouldn’t curry favour with bankers but I did that when I had tenure.’


Then there’s the additional problem of sunk-cost bias. If you’ve invested in an armillary sphere, it’s painful to admit that it doesn’t perform as advertised. When confronted with their profession’s lack of predictive accuracy, some economists find it difficult to admit the truth. Easier, instead, to double down, like the economist John H Cochrane at the University of Chicago. The problem isn’t too much mathematics, he writes in response to Krugman’s 2009 post-Great-Recession mea culpa for the field, but rather ‘that we don’t have enough math’. Astrology doesn’t work, sure, but only because the armillary sphere isn’t big enough and the equations aren’t good enough.

If overhauling economics depended solely on economists, then mathiness, conflict of interest and sunk-cost bias could easily prove insurmountable. Fortunately, non-experts also participate in the market for economic theory. If people remain enchanted by PhDs and Nobel Prizes awarded for the production of complicated mathematical theories, those theories will remain valuable. If they become disenchanted, the value will drop.

Economists who rationalise their discipline’s value can be convincing, especially with prestige and mathiness on their side. But there’s no reason to keep believing them. The pejorative verb ‘rationalise’ itself warns of mathiness, reminding us that we often deceive each other by making prior convictions, biases and ideological positions look ‘rational’, a word that confuses truth with mathematical reasoning. To be rational is, simply, to think in ratios, like the ratios that govern the geometry of the stars. Yet when mathematical theory is the ultimate arbiter of truth, it becomes difficult to see the difference between science and pseudoscience. The result is people like the judge in Evangeline Adams’s trial, or the Son of Heaven in ancient China, who trust the mathematical exactitude of theories without considering their performance – that is, who confuse math with science, rationality with reality.

There is no longer any excuse for making the same mistake with economic theory. For more than a century, the public has been warned, and the way forward is clear. It’s time to stop wasting our money and recognise the high priests for what they really are: gifted social scientists who excel at producing mathematical explanations of economies, but who fail, like astrologers before them, at prophecy.


What Did Neanderthals Leave to Modern Humans? Some Surprises (New York Times)

Geneticists tell us that somewhere between 1 and 5 percent of the genome of modern Europeans and Asians consists of DNA inherited from Neanderthals, our prehistoric cousins.

At Vanderbilt University, John Anthony Capra, an evolutionary genomics professor, has been combining high-powered computation and a medical records databank to learn what a Neanderthal heritage — even a fractional one — might mean for people today.

We spoke for two hours when Dr. Capra, 35, recently passed through New York City. An edited and condensed version of the conversation follows.

Q. Let’s begin with an indiscreet question. How did contemporary people come to have Neanderthal DNA on their genomes?

A. We hypothesize that roughly 50,000 years ago, when the ancestors of modern humans migrated out of Africa and into Eurasia, they encountered Neanderthals. Matings must have occurred then. And later.

One reason we deduce this is because the descendants of those who remained in Africa — present day Africans — don’t have Neanderthal DNA.

What does that mean for people who have it? 

At my lab, we’ve been doing genetic testing on the blood samples of 28,000 patients at Vanderbilt and eight other medical centers across the country. Computers help us pinpoint where on the human genome this Neanderthal DNA is, and we run that against information from the patients’ anonymized medical records. We’re looking for associations.

What we’ve been finding is that Neanderthal DNA has a subtle influence on risk for disease. It affects our immune system and how we respond to different immune challenges. It affects our skin. You’re slightly more prone to a condition where you can get scaly lesions after extreme sun exposure. There’s an increased risk for blood clots and tobacco addiction.

To our surprise, it appears that some Neanderthal DNA can increase the risk for depression; however, there are other Neanderthal bits that decrease the risk. Roughly 1 to 2 percent of one’s risk for depression is determined by Neanderthal DNA. It all depends on where on the genome it’s located.
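
The association step Capra sketches above – tagging which genome positions carry a Neanderthal-derived variant and testing them against phenotypes pulled from anonymised medical records – can be illustrated with a toy example. The sketch below uses entirely simulated data and a simple chi-squared test of a 2×2 table; it is a stand-in for, not a description of, the lab’s actual statistical pipeline.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)

# Hypothetical inputs: for each patient, whether they carry a given
# Neanderthal-derived variant, and whether a phenotype (say, a skin
# condition) appears in their medical record. All values are simulated.
n_patients = 28_000
carries_variant = rng.random(n_patients) < 0.15
has_phenotype = rng.random(n_patients) < np.where(carries_variant, 0.06, 0.05)

# 2x2 contingency table: carrier status vs. phenotype status
table = np.array([
    [np.sum(carries_variant & has_phenotype), np.sum(carries_variant & ~has_phenotype)],
    [np.sum(~carries_variant & has_phenotype), np.sum(~carries_variant & ~has_phenotype)],
])

chi2, p_value, _, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3g}")
```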

Was there ever an upside to having Neanderthal DNA?

It probably helped our ancestors survive in prehistoric Europe. When humans migrated into Eurasia, they encountered unfamiliar hazards and pathogens. By mating with Neanderthals, they gave their offspring needed defenses and immunities.

That trait for blood clotting helped wounds close up quickly. In the modern world, however, this trait means greater risk for stroke and pregnancy complications. What helped us then doesn’t necessarily now.

Did you say earlier that Neanderthal DNA increases susceptibility to nicotine addiction?

Yes. Neanderthal DNA can mean you’re more likely to get hooked on nicotine, even though there were no tobacco plants in archaic Europe.

We think this might be because there’s a bit of Neanderthal DNA right next to a human gene that’s a neurotransmitter implicated in a generalized risk for addiction. In this case and probably others, we think the Neanderthal bits on the genome may serve as switches that turn human genes on or off.

Aside from the Neanderthals, do we know if our ancestors mated with other hominids?

We think they did. Sometimes when we’re examining genomes, we can see the genetic afterimages of hominids who haven’t even been identified yet.

A few years ago, the Swedish geneticist Svante Paabo received an unusual fossilized bone fragment from Siberia. He extracted the DNA, sequenced it and realized it was neither human nor Neanderthal. What Paabo found was a previously unknown hominid he named Denisovan, after the cave where it had been discovered. It turned out that Denisovan DNA can be found on the genomes of modern Southeast Asians and New Guineans.

Have you long been interested in genetics?

Growing up, I was very interested in history, but I also loved computers. I ended up majoring in computer science at college and going to graduate school in it; however, during my first year in graduate school, I realized I wasn’t very motivated by the problems that computer scientists worked on.

Fortunately, around that time — the early 2000s — it was becoming clear that people with computational skills could have a big impact in biology and genetics. The human genome had just been mapped. What an accomplishment! We now had the code to what makes you, you, and me, me. I wanted to be part of that kind of work.

So I switched over to biology. And it was there that I heard about a new field where you used computation and genetics research to look back in time — evolutionary genomics.

There may be no written records from prehistory, but genomes are a living record. If we can find ways to read them, we can discover things we couldn’t know any other way.

Not long ago, the two top editors of The New England Journal of Medicine published an editorial questioning “data sharing,” a common practice where scientists recycle raw data other researchers have collected for their own studies. They labeled some of the recycling researchers, “data parasites.” How did you feel when you read that?

I was upset. The data sets we used were not originally collected to specifically study Neanderthal DNA in modern humans. Thousands of patients at Vanderbilt consented to have their blood and their medical records deposited in a “biobank” to find genetic diseases.

Three years ago, when I set up my lab at Vanderbilt, I saw the potential of the biobank for studying both genetic diseases and human evolution. I wrote special computer programs so that we could mine existing data for these purposes.

That’s not being a “parasite.” That’s moving knowledge forward. I suspect that most of the patients who contributed their information are pleased to see it used in a wider way.

What has been the response to your Neanderthal research since you published it last year in the journal Science?

Some of it’s very touching. People are interested in learning about where they came from. Some of it is a little silly. “I have a lot of hair on my legs — is that from Neanderthals?”

But I received racist inquiries, too. I got calls from all over the world from people who thought that since Africans didn’t interbreed with Neanderthals, this somehow justified their ideas of white superiority.

It was illogical. Actually, Neanderthal DNA is mostly bad for us — though that didn’t bother them.

As you do your studies, do you ever wonder about what the lives of the Neanderthals were like?

It’s hard not to. Genetics has taught us a tremendous amount about that, and there’s a lot of evidence that they were much more human than apelike.

They’ve gotten a bad rap. We tend to think of them as dumb and brutish. There’s no reason to believe that. Maybe those of us of European heritage should be thinking, “Let’s improve their standing in the popular imagination. They’re our ancestors, too.’”

Researchers model how ‘publication bias’ does, and doesn’t, affect the ‘canonization’ of facts in science (Science Daily)

Date:
December 20, 2016
Source:
University of Washington
Summary:
Researchers present a mathematical model that explores whether “publication bias” — the tendency of journals to publish mostly positive experimental results — influences how scientists canonize facts.

Arguing in a Boston courtroom in 1770, John Adams famously pronounced, “Facts are stubborn things,” which cannot be altered by “our wishes, our inclinations or the dictates of our passion.”

But facts, however stubborn, must pass through the trials of human perception before being acknowledged — or “canonized” — as facts. Given this, some may be forgiven for looking at passionate debates over the color of a dress and wondering if facts are up to the challenge.

Carl Bergstrom believes facts stand a fighting chance, especially if science has their back. A professor of biology at the University of Washington, he has used mathematical modeling to investigate the practice of science, and how science could be shaped by the biases and incentives inherent to human institutions.

“Science is a process of revealing facts through experimentation,” said Bergstrom. “But science is also a human endeavor, built on human institutions. Scientists seek status and respond to incentives just like anyone else does. So it is worth asking — with precise, answerable questions — if, when and how these incentives affect the practice of science.”

In an article published Dec. 20 in the journal eLife, Bergstrom and co-authors present a mathematical model that explores whether “publication bias” — the tendency of journals to publish mostly positive experimental results — influences how scientists canonize facts. Their results offer a warning that sharing positive results comes with the risk that a false claim could be canonized as fact. But their findings also offer hope by suggesting that simple changes to publication practices can minimize the risk of false canonization.

These issues have become particularly relevant over the past decade, as prominent articles have questioned the reproducibility of scientific experiments — a hallmark of validity for discoveries made using the scientific method. But neither Bergstrom nor most of the scientists engaged in these debates are questioning the validity of heavily studied and thoroughly demonstrated scientific truths, such as evolution, anthropogenic climate change or the general safety of vaccination.

“We’re modeling the chances of ‘false canonization’ of facts on lower levels of the scientific method,” said Bergstrom. “Evolution happens, and explains the diversity of life. Climate change is real. But we wanted to model if publication bias increases the risk of false canonization at the lowest levels of fact acquisition.”

Bergstrom cites a historical example of false canonization in science that lies close to our hearts — or specifically, below them. Biologists once postulated that bacteria caused stomach ulcers. But in the 1950s, gastroenterologist E.D. Palmer reported evidence that bacteria could not survive in the human gut.

“These findings, supported by the efficacy of antacids, supported the alternative ‘chemical theory of ulcer development,’ which was subsequently canonized,” said Bergstrom. “The problem was that Palmer was using experimental protocols that would not have detected Helicobacter pylori, the bacteria that we know today causes ulcers. It took about a half century to correct this falsehood.”

While the idea of false canonization itself may cause dyspepsia, Bergstrom and his team — lead author Silas Nissen of the Niels Bohr Institute in Denmark and co-authors Kevin Gross of North Carolina State University and UW undergraduate student Tali Magidson — set out to model the risks of false canonization given the fact that scientists have incentives to publish only their best, positive results. The so-called “negative results,” which show no clear, definitive conclusions or simply do not affirm a hypothesis, are much less likely to be published in peer-reviewed journals.

“The net effect of publication bias is that negative results are less likely to be seen, read and processed by scientific peers,” said Bergstrom. “Is this misleading the canonization process?”

For their model, Bergstrom’s team incorporated variables such as the rates of error in experiments, how much evidence is needed to canonize a claim as fact and the frequency with which negative results are published. Their mathematical model showed that the lower the publication rate is for negative results, the higher the risk for false canonization. And according to their model, one possible solution — raising the bar for canonization — didn’t help alleviate this risk.
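
The eLife model itself is more elaborate, but the qualitative effect can be seen in a toy simulation: false claims accumulate published evidence subject to an experimental error rate, negative results are published with some lower probability, and a claim is canonized once published support crosses a threshold. Everything below – parameter values, thresholds, the stopping rule – is an illustrative sketch, not the authors’ actual model.

```python
import numpy as np

rng = np.random.default_rng(42)

def fraction_falsely_canonized(p_publish_negative,
                               n_claims=2_000,
                               false_positive_rate=0.2,
                               threshold=10,
                               max_experiments=500):
    """Simulate false claims only and count how many get canonized as fact.

    Each experiment on a false claim comes out positive with probability
    `false_positive_rate`. Positive results are always published; negative
    results are published with probability `p_publish_negative`. A claim is
    canonized once published positives outnumber published negatives by
    `threshold`, and rejected once the reverse margin is reached.
    """
    canonized = 0
    for _ in range(n_claims):
        support = 0
        for _ in range(max_experiments):
            if rng.random() < false_positive_rate:
                support += 1                      # positive result, published
            elif rng.random() < p_publish_negative:
                support -= 1                      # negative result, published
            if support >= threshold:
                canonized += 1
                break
            if support <= -threshold:
                break
    return canonized / n_claims

for p in (0.05, 0.2, 0.5):
    print(f"P(publish negative) = {p:.2f}: "
          f"{fraction_falsely_canonized(p):.1%} of false claims canonized")
```

Running the sketch reproduces the paper’s qualitative warning: the lower the publication rate for negative results, the larger the fraction of false claims that end up canonized.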

“It turns out that requiring more evidence before canonizing a claim as fact did not help,” said Bergstrom. “Instead, our model showed that you need to publish more negative results — at least more than we probably are now.”

Since most negative results live out their obscurity in the pages of laboratory notebooks, it is difficult to quantify the ratio that are published. But clinical trials, which must be registered with the U.S. Food and Drug Administration before they begin, offer a window into how often negative results make it into the peer-reviewed literature. A 2008 analysis of 74 clinical trials for antidepressant drugs showed that scarcely more than 10 percent of negative results were published, compared to over 90 percent for positive results.

“Negative results are probably published at different rates in other fields of science,” said Bergstrom. “And new options today, such as self-publishing papers online and the rise of journals that accept some negative results, may affect this. But in general, we need to share negative results more than we are doing today.”

Their model also indicated that negative results had the biggest impact as a claim approached the point of canonization. That finding may offer scientists an easy way to prevent false canonization.

“By more closely scrutinizing claims as they achieve broader acceptance, we could identify false claims and keep them from being canonized,” said Bergstrom.

To Bergstrom, the model raises valid questions about how scientists choose to publish and share their findings — both positive and negative. He hopes that their findings pave the way for more detailed exploration of bias in scientific institutions, including the effects of funding sources and the different effects of incentives on different fields of science. But he believes a cultural shift is needed to avoid the risks of publication bias.

“As a community, we tend to say, ‘Damn it, this didn’t work, and I’m not going to write it up,'” said Bergstrom. “But I’d like scientists to reconsider that tendency, because science is only efficient if we publish a reasonable fraction of our negative findings.”


Journal Reference:

  1. Silas Boye Nissen, Tali Magidson, Kevin Gross, Carl T Bergstrom. Publication bias and the canonization of false facts. eLife, 2016; 5. DOI: 10.7554/eLife.21451

Global climate models do not easily downscale for regional predictions (Science Daily)

Date:
August 24, 2016
Source:
Penn State
Summary:
One size does not always fit all, especially when it comes to global climate models, according to climate researchers who caution users of climate model projections to take into account the increased uncertainties in assessing local climate scenarios.

One size does not always fit all, especially when it comes to global climate models, according to Penn State climate researchers.

“The impacts of climate change rightfully concern policy makers and stakeholders who need to make decisions about how to cope with a changing climate,” said Fuqing Zhang, professor of meteorology and director, Center for Advanced Data Assimilation and Predictability Techniques, Penn State. “They often rely upon climate model projections at regional and local scales in their decision making.”

Zhang and Michael Mann, Distinguished professor of atmospheric science and director, Earth System Science Center, were concerned that the direct use of climate model output at local or even regional scales could produce inaccurate information. They focused on two key climate variables, temperature and precipitation.

They found that projections of temperature changes with global climate models became increasingly uncertain at scales below roughly 600 horizontal miles, a distance equivalent to the combined widths of Pennsylvania, Ohio and Indiana. While climate models might provide useful information about the overall warming expected for, say, the Midwest, predicting the difference between the warming of Indianapolis and Pittsburgh might prove futile.

Regional changes in precipitation were even more challenging to predict, with estimates becoming highly uncertain at scales below roughly 1200 miles, equivalent to the combined width of all the states from the Atlantic Ocean through New Jersey across Nebraska. The difference between changing rainfall totals in Philadelphia and Omaha due to global warming, for example, would be difficult to assess. The researchers report the results of their study in the August issue of Advances in Atmospheric Sciences.

“Policy makers and stakeholders use information from these models to inform their decisions,” said Mann. “It is crucial they understand the limitation in the information the model projections can provide at local scales.”

Climate models provide useful predictions of the overall warming of the globe and the largest-scale shifts in patterns of rainfall and drought, but are considerably harder pressed to predict, for example, whether New York City will become wetter or drier, or to deal with the effects of mountain ranges like the Rocky Mountains on regional weather patterns.

“Climate models can meaningfully project the overall global increase in warmth, rises in sea level and very large-scale changes in rainfall patterns,” said Zhang. “But they are uncertain about the potential significant ramifications on society in any specific location.”

The researchers believe that further research may lead to a reduction in the uncertainties. They caution users of climate model projections to take into account the increased uncertainties in assessing local climate scenarios.

“Uncertainty is hardly a reason for inaction,” said Mann. “Moreover, uncertainty can cut both ways, and we must be cognizant of the possibility that impacts in many regions could be considerably greater and more costly than climate model projections suggest.”

An Ancient Mayan Copernicus (The Current/UC Santa Barbara)

In a new paper, UCSB scholar says ancient hieroglyphic texts reveal Mayans made a major discovery in math, astronomy

By Jim Logan

Tuesday, August 16, 2016 – 09:00 – Santa Barbara, CA

[Images: “The Observatory” at Chich’en Itza, the building where a Mayan astronomer would have worked (photo: Gerardo Aldana); the Preface of the Venus Table of the Dresden Codex, first panel on left, and the first three pages of the Table; Gerardo Aldana (photo: LeRoy Laverman)]

For more than 120 years the Venus Table of the Dresden Codex — an ancient Mayan book containing astronomical data — has been of great interest to scholars around the world. The accuracy of its observations, especially the calculation of a kind of ‘leap year’ in the Mayan Calendar, was deemed an impressive curiosity used primarily for astrology.

But UC Santa Barbara’s Gerardo Aldana, a professor of anthropology and of Chicana and Chicano studies, believes the Venus Table has been misunderstood and vastly underappreciated. In a new journal article, Aldana makes the case that the Venus Table represents a remarkable innovation in mathematics and astronomy — and a distinctly Mayan accomplishment. “That’s why I’m calling it ‘discovering discovery,’ ” he explained, “because it’s not just their discovery, it’s all the blinders that we have, that we’ve constructed and put in place that prevent us from seeing that this was their own actual scientific discovery made by Mayan people at a Mayan city.”

Multitasking science

Aldana’s paper, “Discovering Discovery: Chich’en Itza, the Dresden Codex Venus Table and 10th Century Mayan Astronomical Innovation,” in the Journal of Astronomy in Culture, blends the study of Mayan hieroglyphics (epigraphy), archaeology and astronomy to present a new interpretation of the Venus Table, which tracks the observable phases of the second planet from the Sun. Using this multidisciplinary approach, he said, a new reading of the table demonstrates that the mathematical correction of their “Venus calendar” — a sophisticated innovation — was likely developed at the city of Chich’en Itza during the Terminal Classic period (AD 800-1000). What’s more, the calculations may have been done under the patronage of K’ak’ U Pakal K’awiil, one of the city’s most prominent historical figures.

“This is the part that I find to be most rewarding, that when we get in here, we’re looking at the work of an individual Mayan, and we could call him or her a scientist, an astronomer,” Aldana said. “This person, who’s witnessing events at this one city during this very specific period of time, created, through their own creativity, this mathematical innovation.”

The Venus Table

Scholars have long known that the Preface to the Venus Table, Page 24 of the Dresden Codex, contained what Aldana called a “mathematical subtlety” in its hieroglyphic text. They even knew what it was for: to serve as a correction for Venus’s irregular cycle, which is 583.92 days. “So that means if you do anything on a calendar that’s based on days as a basic unit, there is going to be an error that accrues,” Aldana explained. It’s the same principle used for Leap Years in the Gregorian calendar. Scholars figured out the math for the Venus Table’s leap in the 1930s, Aldana said, “but the question is, what does it mean? Did they discover it way back in the 1st century BC? Did they discover it in the 16th? When did they discover it and what did it mean to them? And that’s where I come in.”
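
The size of that accruing error is easy to make concrete. Standard readings of the codex give the tabulated cycle as 584 whole days – a figure the article itself does not quote, so treat it as an assumption here – which means the count drifts from the true 583.92-day period by 0.08 days per cycle:

```python
# Drift of a whole-day Venus count against the true synodic period.
TRUE_CYCLE = 583.92       # days, the figure quoted in the article
TABULATED_CYCLE = 584     # days, assuming the codex rounds to whole days

error_per_cycle = TABULATED_CYCLE - TRUE_CYCLE    # 0.08 days per cycle

for cycles in (1, 5, 65):   # 5 cycles = 8 calendar years; 65 cycles ~ 104 years
    drift = cycles * error_per_cycle
    years = cycles * TABULATED_CYCLE / 365.25
    print(f"{cycles:3d} cycles (~{years:5.1f} yr): count is off by {drift:.2f} days")
```

Over the roughly 104-year span of the Table, that amounts to about five days of drift – small enough to ignore for a while, large enough that any calendar keeper consulting long records would eventually need a correction.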

Unraveling the mystery demanded Aldana employ a unique set of skills. The first involved epigraphy, and it led to an important development: In poring over the Table’s hieroglyphics, he came to realize that a key verb, k’al, had a different meaning than traditionally interpreted. Used throughout the Table, k’al means “to enclose” and, in Aldana’s reading, had a historical and cosmological purpose.

Rethinking assumptions

That breakthrough led him to question the assumptions of what the Mayan scribe who authored the text was doing in the Table. Archaeologists and other scholars could see its observations of Venus were accurate, but insisted it was based in numerology. “They [the Maya] knew it was wrong, but the numerology was more important. And that’s what scholars have been saying for the last 70 years,” Aldana said.

“So what I’m saying is, let’s step back and make a different assumption,” he continued. “Let’s assume that they had historical records and they were keeping historical records of astronomical events and they were consulting them in the future — exactly what the Greeks did and the Egyptians and everybody else. That’s what they did. They kept these over a long period of time and then they found patterns within them. The history of Western astronomy is based entirely on this premise.”

To test his new assumption, Aldana turned to another Mayan archaeological site, Copán in Honduras. The former city-state has its own record of Venus, which matched as a historical record the observations in the Dresden Codex. “Now we’re just saying, let’s take these as historical records rather than numerology,” he said. “And when you do that, when you see it as historical record, it changes the interpretation.”

Putting the pieces together

The final piece of the puzzle was what Aldana, whose undergraduate degree was in mechanical engineering, calls “the machinery,” or how the pieces fit together. Scholars know the Mayans had accurate observations of Venus, and Aldana could see that they were historical, not numerological. The question was, Why? One hint lay more than 500 years in the future: Nicolaus Copernicus.

The great Polish astronomer stumbled into the heliocentric universe while trying to figure out the predictions for future dates of Easter, a challenging feat that requires good mathematical models. That’s what Aldana saw in the Venus Table. “They’re using Venus not just to strictly chart when it was going to appear, but they were using it for their ritual cycles,” he explained. “They had ritual activities when the whole city would come together and they would do certain events based on the observation of Venus. And that has to have a degree of accuracy, but it doesn’t have to have overwhelming accuracy. When you change that perspective of, ‘What are you putting these cycles together for?’ that’s the third component.”

Putting those pieces together, Aldana found there was a unique period of time during the occupation of Chich’en Itza when an ancient astronomer in the temple that was used to observe Venus would have seen the progressions of the planet and discovered it was a viable way to correct the calendar and to set their ritual events.

“If you say it’s just numerology that this date corresponds to; it’s not based on anything you can see. And if you say, ‘We’re just going to manipulate them [the corrections written] until they give us the most accurate trajectory,’ you’re not confining that whole thing in any historical time,” he said. “If, on the other hand, you say, ‘This is based on a historical record,’ that’s going to nail down the range of possibilities. And if you say that they were correcting it for a certain kind of purpose, then all of a sudden you have a very small window of when this discovery could have occurred.”

A Mayan achievement

By reinterpreting the work, Aldana said it puts the Venus Table into cultural context. It was an achievement of Mayan science, and not a numerological oddity. We might never know exactly who made that discovery, he noted, but recasting it as a historical work of science returns it to the Mayans.

“I don’t have a name for this person, but I have a name for the person who is probably one of the authority figures at the time,” Aldana said. “It’s the kind of thing where you know who the pope was, but you don’t know Copernicus’s name. You know the pope was giving him this charge, but the person who did it? You don’t know his or her name.”

Theoretical tiger chases statistical sheep to probe immune system behavior (Science Daily)

Physicists update predator-prey model for more clues on how bacteria evade attack from killer cells

Date:
April 29, 2016
Source:
IOP Publishing
Summary:
Studying the way that solitary hunters such as tigers, bears or sea turtles chase down their prey turns out to be very useful in understanding the interaction between individual white blood cells and colonies of bacteria. Researchers have created a numerical model that explores this behavior in more detail.

Studying the way that solitary hunters such as tigers, bears or sea turtles chase down their prey turns out to be very useful in understanding the interaction between individual white blood cells and colonies of bacteria. Reporting their results in the Journal of Physics A: Mathematical and Theoretical, researchers in Europe have created a numerical model that explores this behaviour in more detail.

Using mathematical expressions, the group can examine the dynamics of a single predator hunting a herd of prey. The routine splits the hunter’s motion into a diffusive part and a ballistic part, which represent the search for prey and then the direct chase that follows.

“We would expect this to be a fairly good approximation for many animals,” explained Ralf Metzler, who led the work and is based at the University of Potsdam in Germany.

Obstructions included

To further improve its analysis, the group, which includes scientists from the National Institute of Chemistry in Slovenia, and Sorbonne University in France, has incorporated volume effects into the latest version of its model. The addition means that prey can now inadvertently get in each other’s way and endanger their survival by blocking potential escape routes.

Thanks to this update, the team can study not just animal behaviour, but also gain greater insight into the way that killer cells such as macrophages (large white blood cells patrolling the body) attack colonies of bacteria.

One of the key parameters determining the life expectancy of the prey is the so-called ‘sighting range’ — the distance at which the prey is able to spot the predator. Examining this in more detail, the researchers found that the hunter profits more from the poor eyesight of the prey than from the strength of its own vision.
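
Only the broad ingredients of the model are described here: a lone hunter that diffuses while searching, switches to straight-line pursuit once prey comes within the sighting range, and removes prey from the herd on capture. The sketch below implements just that two-mode rule with invented parameter values and stationary prey; the published model (including its volume-exclusion effects) is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters, not taken from the paper.
SIGHTING_RANGE = 2.0    # distance at which predator and prey see each other
DIFFUSION_STEP = 0.5    # step size of the random (diffusive) search
CHASE_SPEED = 0.8       # distance covered per step in the ballistic chase
CAPTURE_DIST = 0.1      # predator-prey distance counted as a capture

predator = np.zeros(2)
prey = rng.uniform(-10.0, 10.0, size=(20, 2))   # a small, stationary "herd"

for t in range(10_000):
    dists = np.linalg.norm(prey - predator, axis=1)
    nearest = dists.argmin()
    if dists[nearest] < SIGHTING_RANGE:
        # Ballistic phase: head straight for the nearest visible prey.
        direction = (prey[nearest] - predator) / dists[nearest]
        predator = predator + min(CHASE_SPEED, dists[nearest]) * direction
        if np.linalg.norm(prey[nearest] - predator) < CAPTURE_DIST:
            prey = np.delete(prey, nearest, axis=0)     # prey captured
    else:
        # Diffusive phase: random search for the herd.
        predator = predator + DIFFUSION_STEP * rng.normal(size=2)
    if len(prey) == 0:
        print(f"herd eliminated after {t + 1} steps")
        break
else:
    print(f"{len(prey)} prey remain after 10,000 steps")
```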

Long tradition with a new dimension

The analysis of predator-prey systems has a long tradition in statistical physics and today offers many opportunities for cooperative research, particularly in fields such as biology, biochemistry and movement ecology.

“With the ever more detailed experimental study of systems ranging from molecular processes in living biological cells to the motion patterns of animal herds and humans, the need for cross-fertilisation between the life sciences and the quantitative mathematical approaches of the physical sciences has reached a new dimension,” Metzler comments.

To help support this cross-fertilisation, he heads up a new section of the Journal of Physics A: Mathematical and Theoretical that is dedicated to biological modelling and examines the use of numerical techniques to study problems in the interdisciplinary field connecting biology, biochemistry and physics.


Journal Reference:

  1. Maria Schwarzl, Aljaz Godec, Gleb Oshanin, Ralf Metzler. A single predator charging a herd of prey: effects of self volume and predator–prey decision-making. Journal of Physics A: Mathematical and Theoretical, 2016; 49 (22): 225601. DOI: 10.1088/1751-8113/49/22/225601

Mathematical model helps plan the operation of water reservoirs (Fapesp)

A computational system developed by researchers at USP and Unicamp sets water-supply rationing rules for drought periods

Researchers at the Polytechnic School of the University of São Paulo (Poli-USP) and the School of Civil Engineering, Architecture and Urban Design of the University of Campinas (FEC-Unicamp) have developed new mathematical and computational models designed to optimise the management and operation of complex water-supply and electric-power systems such as those found in Brazil.

The models, whose development began in the early 2000s, were refined through the Thematic Project “HidroRisco: Risk management technologies applied to water supply and electric power systems”, carried out with Fapesp’s support.

“The idea is for the mathematical and computational models we have developed to help the managers of water and electricity distribution and supply systems make decisions with enormous social and economic impacts, such as declaring rationing,” Paulo Sérgio Franco Barbosa, a professor at FEC-Unicamp and the project’s coordinator, told Agência Fapesp.

According to Barbosa, many of the technologies used today in Brazil’s water and energy sectors to manage supply, demand and the risk of water and power shortages during extreme climate events, such as severe drought, were developed in the 1970s, when Brazilian cities were smaller and the country’s water and hydropower systems were far less complex than they are today.

As a result, he says, these management systems have shortcomings: they do not account for the connections between different basins, and when planning the operation of a reservoir and distribution system they do not allow for climate events more extreme than any already seen in the past.

“The supply capacity of the Cantareira reservoir was misjudged, for example, because no one imagined a drought worse than the one that hit the basin in 1953, considered the driest year in the reservoir’s history before 2014,” Barbosa said.

To improve on today’s risk-management systems, the researchers developed new mathematical and computational models that simulate the operation of a water or energy supply system in an integrated way and under different scenarios of growing water supply and demand.

“Using statistical and computational techniques, the models we developed can run better simulations and give a water or electric-power system greater protection against climate risks,” Barbosa said.

Sisagua

Um dos modelos desenvolvidos pelos pesquisadores em colaboração com colegas da University of California em Los Angeles, nos Estados Unidos, é a plataforma de modelagem de otimização e simulação de sistemas de suprimento hídrico Sisagua.

A plataforma computacional integra e representa todas as fontes de abastecimento de um sistema de reservatórios e distribuição de água de cidades de grande porte, como São Paulo, incluindo os reservatórios, canais, dutos, estações de tratamento e de bombeamento.

“O Sisagua possibilita planejar a operação, estudar a capacidade de suprimento e avaliar alternativas de expansão ou de diminuição do fornecimento de um sistema de abastecimento de água de forma integrada”, apontou Barbosa.

Um dos diferenciais do modelo computacional, segundo o pesquisador, é estabelecer regras de racionamento de um sistema de reservatórios e distribuição de água de grande porte em períodos de seca, como o que São Paulo passou em 2014, de modo a minimizar os danos à população e à economia causados por um eventual racionamento.

Quando um dos reservatórios do sistema atinge um volume abaixo dos níveis normais e próximo do volume mínimo de operação, o modelo computacional indica um primeiro estágio de racionamento, reduzindo a oferta da água armazenada em 10%, por exemplo.

Se a crise de abastecimento do reservatório prolongar, o modelo matemático indica alternativas para minimizar a intensidade do racionamento distribuindo o corte de água de forma mais uniforme ao longo do período de escassez de água e entre os outros reservatórios do sistema.
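To make the staged logic concrete, here is a minimal sketch of a rationing rule of this kind. It is purely illustrative: the thresholds, cut percentages and reservoir figures are invented, and Sisagua itself optimises cuts across an entire network of reservoirs rather than applying a fixed lookup table.

```python
# Toy staged-rationing rule of the kind described above. All thresholds,
# cut sizes and volumes are invented for illustration.

def rationing_stage(volume, min_operational, full_volume):
    """Fraction by which supply is cut, given current storage."""
    headroom = (volume - min_operational) / (full_volume - min_operational)
    if headroom > 0.50:      # comfortably above the minimum: no cut
        return 0.00
    elif headroom > 0.25:    # first stage: mild cut
        return 0.10
    elif headroom > 0.10:    # second stage: deeper cut
        return 0.25
    else:                    # close to the minimum operational volume
        return 0.40

def simulate(inflows, demand, volume, min_operational, full_volume):
    """Step a single reservoir forward month by month, applying the cuts."""
    history = []
    for inflow in inflows:
        cut = rationing_stage(volume, min_operational, full_volume)
        delivered = demand * (1.0 - cut)
        volume = min(full_volume, max(min_operational, volume + inflow - delivered))
        history.append((volume, cut, delivered))
    return history

# Twelve months of below-average inflows (all numbers invented, in hm3/month).
for month, (vol, cut, out) in enumerate(
        simulate(inflows=[40] * 12, demand=80, volume=600,
                 min_operational=200, full_volume=1000), start=1):
    print(f"month {month:2d}: volume {vol:6.1f}, cut {cut:.0%}, delivered {out:.1f}")
```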

“Sisagua has a computational intelligence that indicates where and when to cut the water delivered by a supply system, so as to minimise the damage to the system and to a city’s population and economy,” said Barbosa.

The Cantareira system

The researchers used Sisagua to simulate the operation and management of the water distribution system of metropolitan São Paulo, which serves about 18 million people and is considered one of the largest in the world, with an average flow of 67 cubic metres per second (m³/s).

São Paulo’s distribution system comprises eight supply subsystems, the largest being the Cantareira, which provides water to 5.3 million people at an average flow of 33 m³/s.

To assess the Cantareira’s supply capacity in a scenario of water scarcity combined with rising demand, the researchers ran a ten-year planning simulation of the subsystem using Sisagua.

For this, they used inflow data for the Cantareira from 1950 to 1960, provided by Companhia de Saneamento Básico do Estado de São Paulo (Sabesp).

“This period was chosen as the basis for Sisagua’s projections because it included severe droughts, with inflows significantly below average for four consecutive years, between 1952 and 1956,” Barbosa explained.

Using the inflow data from this historical series, the model analysed scenarios with Cantareira demand varying between 30 and 40 m³/s.

Among the model’s findings: the Cantareira can meet a demand of up to 34 m³/s under a scarcity scenario like that of 1950 to 1960 with a negligible risk of shortage. Above that value, scarcity, and with it the risk of rationing, grows exponentially.

For the Cantareira to meet a demand of 38 m³/s during a period of scarcity, the model indicated that rationing would have to begin 40 months (3 years and 4 months) before the basin reached its critical point, below normal volume and close to the minimum operational limit.

That way, 85% to 90% of the reservoir’s water demand could be met during the drought until it recovered its ideal volume, avoiding more severe rationing than would occur if full supply were maintained.

“The earlier rationing begins in a water supply system, the better the losses are spread over time,” said Barbosa. “The population can cope better with 15% water rationing over two years, for example, than with a 40% cut over just two months.”

Integrated systems

In another study, the researchers used Sisagua to assess whether the Cantareira, Guarapiranga, Alto Tietê and Alto Cotia subsystems could meet current water demand in a scarcity scenario.

For this, they again used inflow data for the four subsystems for the period from 1950 to 1960.

The analyses indicated that the Cotia subsystem hit a critical rationing limit several times over the simulated ten-year period.

The Alto Tietê subsystem, by contrast, frequently held water above its target volume.

Based on these findings, the researchers suggest new interconnections for transfers among the four supply subsystems.

Part of the Cotia subsystem’s demand could be supplied by the Guarapiranga and Cantareira subsystems. These two, in turn, could receive water from the Alto Tietê subsystem, Sisagua’s projections indicated.

“Transferring water between subsystems would provide greater flexibility and result in better distribution, efficiency and reliability for the metropolitan São Paulo water supply system,” Barbosa said.

According to the researcher, Sisagua’s projections also pointed to the need for investment in new sources of water for metropolitan São Paulo.

He notes that the main basins supplying São Paulo suffer from problems such as urban concentration.

Around the Alto Tietê basin, for example, which occupies only 2.7% of São Paulo state’s territory, lives nearly 50% of the state’s population, five times the population density of countries such as Japan, South Korea and the Netherlands.

The Piracicaba, Paraíba do Sul, Sorocaba and Baixada Santista basins, which account for 20% of the state’s area, hold 73% of its population, with a population density higher than that of countries such as Japan, the Netherlands and the United Kingdom, the researchers note.

“It will be inevitable to consider other sources of water for metropolitan São Paulo, such as the Juquiá system, in the interior of the state, which has water of excellent quality and in large volumes,” said Barbosa.

“Because of the distance, this will be an expensive project, and it has been postponed. But it can no longer be put off,” he said.

Besides São Paulo, Sisagua has also been used to model the water supply systems of Los Angeles, in the United States, and of Taiwan.

The article “Planning and operation of large-scale water distribution systems with preemptive priorities” (doi: 10.1061/(ASCE)0733-9496(2008)134:3(247)), by Barros et al., is available to subscribers of the Journal of Water Resources Planning and Management at ascelibrary.org/doi/abs/10.1061/%28ASCE%290733-9496%282008%29134%3A3%28247%29.

Agência Fapesp

The Water Data Drought (N.Y.Times)

Then there is water.

Water may be the most important item in our lives, our economy and our landscape about which we know the least. We not only don’t tabulate our water use every hour or every day, we don’t do it every month, or even every year.

The official analysis of water use in the United States is done every five years. It takes a tiny team of people four years to collect, tabulate and release the data. In November 2014, the United States Geological Survey issued its most current comprehensive analysis of United States water use — for the year 2010.

The 2010 report runs 64 pages of small type, reporting water use in each state by quality and quantity, by source, and by whether it’s used on farms, in factories or in homes.

It doesn’t take four years to get five years of data. All we get every five years is one year of data.

The data system is ridiculously primitive. It was an embarrassment even two decades ago. The vast gaps — we start out missing 80 percent of the picture — mean that from one side of the continent to the other, we’re making decisions blindly.

In just the past 27 months, there has been a string of high-profile water crises — poisoned water in Flint, Mich.; polluted water in Toledo, Ohio, and Charleston, W. Va.; the continued drying of the Colorado River basin — that has undermined confidence in our ability to manage water.

In the time it took to compile the 2010 report, Texas endured a four-year drought. California settled into what has become a five-year drought. The most authoritative water-use data from across the West couldn’t be less helpful: It’s from the year before the droughts began.

In the last year of the Obama presidency, the administration has decided to grab hold of this country’s water problems, water policy and water innovation. Next Tuesday, the White House is hosting a Water Summit, where it promises to unveil new ideas to galvanize the sleepy world of water.

The question White House officials are asking is simple: What could the federal government do that wouldn’t cost much but that would change how we think about water?

The best and simplest answer: Fix water data.

More than any other single step, modernizing water data would unleash an era of water innovation unlike anything in a century.

We have a brilliant model for what water data could be: the Energy Information Administration, which has every imaginable data point about energy use — solar, wind, biodiesel, the state of the heating oil market during the winter we’re living through right now — all available, free, to anyone. It’s not just authoritative, it’s indispensable. Congress created the agency in the wake of the 1970s energy crisis, when it became clear we didn’t have the information about energy use necessary to make good public policy.

That’s exactly the state of water — we’ve got crises percolating all over, but lack the data necessary to make smart policy decisions.

Congress and President Obama should pass updated legislation creating inside the United States Geological Survey a vigorous water data agency with the explicit charge to gather and quickly release water data of every kind — what utilities provide, what fracking companies and strawberry growers use, what comes from rivers and reservoirs, the state of aquifers.

Good information does three things.

First, it creates the demand for more good information. Once you know what you can know, you want to know more.

Second, good data changes behavior. The real-time miles-per-gallon gauges in our cars are a great example. Who doesn’t want to edge the M.P.G. number a little higher? Any company, community or family that starts measuring how much water it uses immediately sees ways to use less.

Finally, data ignites innovation. Who imagined that when most everyone started carrying a smartphone, we’d have instant, nationwide traffic data? The phones make the traffic data possible, and they also deliver it to us.

The truth is, we don’t have any idea what detailed water use data for the United States will reveal. But we can be certain it will create an era of water transformation. If we had monthly data on three big water users — power plants, farmers and water utilities — we’d instantly see which communities use water well, and which ones don’t.

We’d see whether tomato farmers in California or Florida do a better job. We’d have the information to make smart decisions about conservation, about innovation and about investing in new kinds of water systems.

Water’s biggest problem, in this country and around the world, is its invisibility. You don’t tackle problems that are out of sight. We need a new relationship with water, and that has to start with understanding it.

Statisticians Found One Thing They Can Agree On: It’s Time To Stop Misusing P-Values (FiveThirtyEight)

Footnotes

  1. Even the Supreme Court has weighed in, unanimously ruling in 2011 that statistical significance does not automatically equate to scientific or policy importance.

Christie Aschwanden is FiveThirtyEight’s lead writer for science.

Semantically speaking: Does meaning structure unite languages? (Eureka/Santa Fe Institute)

1-FEB-2016

Humans’ common cognitive abilities and language dependence may provide an underlying semantic order to the world’s languages

SANTA FE INSTITUTE

We create words to label people, places, actions, thoughts, and more so we can express ourselves meaningfully to others. Do humans’ shared cognitive abilities and dependence on languages naturally provide a universal means of organizing certain concepts? Or do environment and culture influence each language uniquely?

Using a new methodology that measures how closely words’ meanings are related within and between languages, an international team of researchers has revealed that for many universal concepts, the world’s languages feature a common structure of semantic relatedness.

“Before this work, little was known about how to measure [a culture’s sense of] the semantic nearness between concepts,” says co-author and Santa Fe Institute Professor Tanmoy Bhattacharya. “For example, are the concepts of sun and moon close to each other, as they are both bright blobs in the sky? How about sand and sea, as they occur close by? Which of these pairs is the closer? How do we know?”

Translation, the mapping of relative word meanings across languages, would provide clues. But examining the problem with scientific rigor called for an empirical means to denote the degree of semantic relatedness between concepts.

To get reliable answers, Bhattacharya needed to fully quantify a comparative method that is commonly used to infer linguistic history qualitatively. (He and collaborators had previously developed this quantitative method to study changes in sounds of words as languages evolve.)

“Translation uncovers a disagreement between two languages on how concepts are grouped under a single word,” says co-author and Santa Fe Institute and Oxford researcher Hyejin Youn. “Spanish, for example, groups ‘fire’ and ‘passion’ under ‘incendio,’ whereas Swahili groups ‘fire’ with ‘anger’ (but not ‘passion’).”

To quantify the problem, the researchers chose a few basic concepts that we see in nature (sun, moon, mountain, fire, and so on). Each concept was translated from English into 81 diverse languages, then back into English. Based on these translations, a weighted network was created. The structure of the network was used to compare languages’ ways of partitioning concepts.
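The construction can be sketched roughly as follows. This is a toy illustration of the idea, not the study’s code or data: the word lists are invented stand-ins (only the Spanish “incendio” example comes from the article), and real cluster detection would run on the full 81-language network.

```python
# Sketch of the weighted "semantic network" construction described above:
# two concepts get a link whenever some language covers both with one word,
# and the link weight counts how many languages do so. The tiny dictionary
# below is an invented stand-in for the study's dataset; word spellings are
# illustrative only.
from collections import defaultdict
from itertools import combinations

# language -> word -> set of English concepts that word translates back to
polysemy = {
    "spanish": {"incendio": {"fire", "passion"}},
    "swahili": {"moto": {"fire", "anger"}},
    "lang_x":  {"w1": {"sun", "moon"}, "w2": {"sea", "salt"}},
    "lang_y":  {"w3": {"sun", "moon"}},
}

weights = defaultdict(int)
for words in polysemy.values():
    for concepts in words.values():
        for a, b in combinations(sorted(concepts), 2):
            weights[(a, b)] += 1          # one more language links a and b

for (a, b), w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{a} -- {b}: weight {w}")
# Clusters (water / earth-and-sky / solid materials, in the real data) would
# then be found by community detection on this weighted graph.
```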

The team found that the translated concepts consistently formed three theme clusters in a network, densely connected within themselves and weakly to one another: water, solid natural materials, and earth and sky.

“For the first time, we now have a method to quantify how universal these relations are,” says Bhattacharya. “What is universal – and what is not – about how we group clusters of meanings teaches us a lot about psycholinguistics, the conceptual structures that underlie language use.”

The researchers hope to expand this study’s domain, adding more concepts, then investigating how the universal structure they reveal underlies meaning shift.

Their research was published today in PNAS.

The world’s greatest literature reveals multi fractals and cascades of consciousness (Science Daily)

Date: January 21, 2016

Source: The Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences

Summary: James Joyce, Julio Cortazar, Marcel Proust, Henryk Sienkiewicz and Umberto Eco. Regardless of the language they were working in, some of the world’s greatest writers appear to be, in some respects, constructing fractals. Statistical analysis, however, revealed something even more intriguing. The composition of works from within a particular genre was characterized by the exceptional dynamics of a cascading (avalanche) narrative structure.


Sequences of sentence lengths (as measured by number of words) in four literary works representative of various degrees of cascading character. Credit: IFJ PAN

James Joyce, Julio Cortazar, Marcel Proust, Henryk Sienkiewicz and Umberto Eco. Regardless of the language they were working in, some of the world’s greatest writers appear to be, in some respects, constructing fractals. Statistical analysis carried out at the Institute of Nuclear Physics of the Polish Academy of Sciences, however, revealed something even more intriguing. The composition of works from within a particular genre was characterized by the exceptional dynamics of a cascading (avalanche) narrative structure. This type of narrative turns out to be multifractal. That is, fractals of fractals are created.

As far as many bookworms are concerned, advanced equations and graphs are the last things which would hold their interest, but there’s no escape from the math. Physicists from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow, Poland, performed a detailed statistical analysis of more than one hundred famous works of world literature, written in several languages and representing various literary genres. The books, tested for revealing correlations in variations of sentence length, proved to be governed by the dynamics of a cascade. This means that the construction of these books is in fact a fractal. In the case of several works their mathematical complexity proved to be exceptional, comparable to the structure of complex mathematical objects considered to be multifractal. Interestingly, in the analyzed pool of all the works, one genre turned out to be exceptionally multifractal in nature.

Fractals are self-similar mathematical objects: when we begin to expand one fragment or another, what eventually emerges is a structure that resembles the original object. Typical fractals, especially those widely known as the Sierpinski triangle and the Mandelbrot set, are monofractals, meaning that the pace of enlargement is the same everywhere and linear: if a fragment has to be rescaled x times to reveal a structure similar to the original, the same magnification anywhere else in the fractal will also reveal a similar structure.

Multifractals are more advanced mathematical structures: fractals of fractals. They arise from fractals ‘interwoven’ with each other in an appropriate manner and in appropriate proportions. Multifractals are not simply the sum of fractals and cannot simply be decomposed back into their original components, because the way they are woven together is itself fractal in nature. The result is that, in order to see a structure similar to the original, different portions of a multifractal need to be expanded at different rates. A multifractal is therefore non-linear in nature.

“Analyses on multiple scales, carried out using fractals, allow us to neatly grasp information on correlations among data at various levels of complexity of tested systems. As a result, they point to the hierarchical organization of phenomena and structures found in nature. So we can expect natural language, which represents a major evolutionary leap of the natural world, to show such correlations as well. Their existence in literary works, however, had not yet been convincingly documented. Meanwhile, it turned out that when you look at these works from the proper perspective, these correlations appear to be not only common, but in some works they take on a particularly sophisticated mathematical complexity,” says Prof. Stanislaw Drozdz (IFJ PAN, Cracow University of Technology).

The study involved 113 literary works written in English, French, German, Italian, Polish, Russian and Spanish by such famous figures as Honore de Balzac, Arthur Conan Doyle, Julio Cortazar, Charles Dickens, Fyodor Dostoevsky, Alexandre Dumas, Umberto Eco, George Eliot, Victor Hugo, James Joyce, Thomas Mann, Marcel Proust, Wladyslaw Reymont, William Shakespeare, Henryk Sienkiewicz, JRR Tolkien, Leo Tolstoy and Virginia Woolf, among others. The selected works were no less than 5,000 sentences long, in order to ensure statistical reliability.

To convert the texts to numerical sequences, sentence length was measured by the number of words (an alternative method of counting characters in the sentence turned out to have no major impact on the conclusions). The dependences were then searched for in the data, beginning with the simplest, i.e. linear: if a sentence of a given length is x times as long as sentences of other lengths, is the same ratio preserved when looking at sentences that are correspondingly longer or shorter?
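For readers who want a feel for the procedure, here is a minimal sketch under two simplifying assumptions: sentences are split naively on punctuation, and the scaling is summarised by a single exponent using ordinary detrended fluctuation analysis, whereas the IFJ PAN study used a multifractal generalisation over a whole spectrum of moments. The input file name is hypothetical.

```python
# Sketch: sentence-length series and a basic (monofractal) DFA exponent.
# The IFJ PAN study used multifractal DFA; this simplified version
# illustrates only the single-exponent case.
import re
import numpy as np

def sentence_lengths(text):
    """Number of words in each sentence (crude splitting on . ! ?)."""
    sentences = re.split(r"[.!?]+", text)
    return np.array([len(s.split()) for s in sentences if s.split()])

def dfa_exponent(x, scales=(4, 8, 16, 32, 64)):
    """Slope of log F(s) vs log s for the integrated, mean-removed series."""
    profile = np.cumsum(x - np.mean(x))
    fluctuations = []
    for s in scales:
        n_seg = len(profile) // s
        rms = []
        for i in range(n_seg):
            segment = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, segment, 1), t)
            rms.append(np.sqrt(np.mean((segment - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return slope

text = open("some_novel.txt", encoding="utf-8").read()   # hypothetical file
lengths = sentence_lengths(text)
print("sentences:", len(lengths), "DFA exponent:", round(dfa_exponent(lengths), 3))
```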

“All of the examined works showed self-similarity in terms of the organization of sentence lengths. In some it was more pronounced — here The Ambassadors by Henry James stood out — in others far less so, as in the case of the French seventeenth-century romance Artamene ou le Grand Cyrus. In every case, however, the correlations were evident, and therefore these texts had the construction of a fractal,” comments Dr. Pawel Oswiecimka (IFJ PAN), who also notes that the fractality of a literary text will in practice never be as perfect as in the world of mathematics. Mathematical fractals can be magnified to infinity, while the number of sentences in any book is finite; at some stage of scaling there will always be a cut-off at the end of the dataset.

Things took a particularly interesting turn when physicists from the IFJ PAN began tracking non-linear dependence, which in most of the studied works was present to a slight or moderate degree. However, more than a dozen works revealed a very clear multifractal structure, and almost all of these proved to be representative of one genre, that of stream of consciousness. The only exception was the Bible, specifically the Old Testament, which has so far never been associated with this literary genre.

“The absolute record in terms of multifractality turned out to be Finnegans Wake by James Joyce. The results of our analysis of this text are virtually indistinguishable from ideal, purely mathematical multifractals,” says Prof. Drozdz.

The most multifractal works also included A Heartbreaking Work of Staggering Genius by Dave Eggers, Rayuela by Julio Cortazar, the U.S.A. trilogy by John Dos Passos, The Waves by Virginia Woolf, 2666 by Roberto Bolano, and Joyce’s Ulysses. At the same time, many works usually regarded as stream of consciousness turned out to show little multifractality; it was hardly noticeable in books such as Atlas Shrugged by Ayn Rand and A la recherche du temps perdu by Marcel Proust.

“It is not entirely clear whether stream of consciousness writing actually reveals the deeper qualities of our consciousness, or rather the imagination of the writers. It is hardly surprising that ascribing a work to a particular genre is, for whatever reason, sometimes subjective. We see, moreover, the possibility of an interesting application of our methodology: it may someday help in a more objective assignment of books to one genre or another,” notes Prof. Drozdz.

Multifractal analyses of literary texts carried out by IFJ PAN have been published in Information Sciences, a computer science journal. The publication underwent rigorous verification: given the interdisciplinary nature of the subject, the editors appointed as many as six reviewers.


Journal Reference:

  1. Stanisław Drożdż, Paweł Oświȩcimka, Andrzej Kulig, Jarosław Kwapień, Katarzyna Bazarnik, Iwona Grabska-Gradzińska, Jan Rybicki, Marek Stanuszek. Quantifying origin and character of long-range correlations in narrative texts. Information Sciences, 2016; 331: 32. DOI: 10.1016/j.ins.2015.10.023

The One Weird Trait That Predicts Whether You’re a Trump Supporter (Politico Magazine)

And it’s not gender, age, income, race or religion.

1/17/2016

 

If I asked you what most defines Donald Trump supporters, what would you say? They’re white? They’re poor? They’re uneducated?

You’d be wrong.

In fact, I’ve found that a single statistically significant variable predicts whether a voter supports Trump—and it’s not race, income or education levels: it’s authoritarianism.

That’s right, Trump’s electoral strength—and his staying power—have been buoyed, above all, by Americans with authoritarian inclinations. And because of the prevalence of authoritarians in the American electorate, among Democrats as well as Republicans, it’s very possible that Trump’s fan base will continue to grow.

My finding is the result of a national poll I conducted in the last five days of December under the auspices of the University of Massachusetts, Amherst, sampling 1,800 registered voters across the country and the political spectrum. Running a standard statistical analysis, I found that education, income, gender, age, ideology and religiosity had no significant bearing on a Republican voter’s preferred candidate. Only two of the variables I looked at were statistically significant: authoritarianism, followed by fear of terrorism, though the former was far more significant than the latter.

Authoritarianism is not a new, untested concept in the American electorate. Since the rise of Nazi Germany, it has been one of the most widely studied ideas in social science. While its causes are still debated, the political behavior of authoritarians is not. Authoritarians obey. They rally to and follow strong leaders. And they respond aggressively to outsiders, especially when they feel threatened. From pledging to “make America great again” by building a wall on the border to promising to close mosques and ban Muslims from visiting the United States, Trump is playing directly to authoritarian inclinations.

Not all authoritarians are Republicans by any means; in national surveys since 1992, many authoritarians have also self-identified as independents and Democrats. And in the 2008 Democratic primary, the political scientist Marc Hetherington found that authoritarianism mattered more than income, ideology, gender, age and education in predicting whether voters preferred Hillary Clinton over Barack Obama. But Hetherington has also found, based on 14 years of polling, that authoritarians have steadily moved from the Democratic to the Republican Party over time. He hypothesizes that the trend began decades ago, as Democrats embraced civil rights, gay rights, employment protections and other political positions valuing freedom and equality. In my poll results, authoritarianism was not a statistically significant factor in the Democratic primary race, at least not so far, but it does appear to be playing an important role on the Republican side. Indeed, 49 percent of likely Republican primary voters I surveyed score in the top quarter of the authoritarian scale—more than twice as many as Democratic voters.

Political pollsters have missed this key component of Trump’s support because they simply don’t include questions about authoritarianism in their polls. In addition to the typical battery of demographic, horse race, thermometer-scale and policy questions, my poll asked a set of four simple survey questions that political scientists have employed since 1992 to measure inclination toward authoritarianism. These questions pertain to child-rearing: whether it is more important for the voter to have a child who is respectful or independent; obedient or self-reliant; well-behaved or considerate; and well-mannered or curious. Respondents who pick the first option in each of these questions are strongly authoritarian.
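Scoring such a battery is straightforward; the sketch below is illustrative only, since the article does not say how the poll weighted the items or where it drew the line for “strong” authoritarians.

```python
# Sketch of scoring the four child-rearing items described above: picking the
# first option in each pair counts toward the authoritarianism score. The
# threshold for "strong" used in the actual poll is not given in the article;
# here we simply report the raw 0-4 count.
ITEMS = [
    ("respectful", "independent"),
    ("obedient", "self-reliant"),
    ("well-behaved", "considerate"),
    ("well-mannered", "curious"),
]

def authoritarianism_score(answers):
    """answers: list of 4 chosen words, one per item; returns count of first-option picks."""
    score = 0
    for (first, _second), choice in zip(ITEMS, answers):
        if choice == first:
            score += 1
    return score

print(authoritarianism_score(["respectful", "self-reliant", "well-behaved", "curious"]))  # 2
```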

Based on these questions, Trump was the only candidate—Republican or Democrat—whose support among authoritarians was statistically significant.

So what does this mean for the election? It doesn’t just help us understand what motivates Trump’s backers—it suggests that his support isn’t capped. In a statistical analysis of the polling results, I found that Trump has already captured 43 percent of Republican primary voters who are strong authoritarians, and 37 percent of Republican authoritarians overall. A majority of Republican authoritarians in my poll also strongly supported Trump’s proposals to deport 11 million illegal immigrants, prohibit Muslims from entering the United States, shutter mosques and establish a nationwide database that tracks Muslims.

And in a general election, Trump’s strongman rhetoric will surely appeal to some of the 39 percent of independents in my poll who identify as authoritarians and the 17 percent of self-identified Democrats who are strong authoritarians.

What’s more, the number of Americans worried about the threat of terrorism is growing. In 2011, Hetherington published research finding that non-authoritarians respond to the perception of threat by behaving more like authoritarians. More fear and more threats—of the kind we’ve seen recently in the San Bernardino and Paris terrorist attacks—mean more voters are susceptible to Trump’s message about protecting Americans. In my survey, 52 percent of those voters expressing the most fear that another terrorist attack will occur in the United States in the next 12 months were non-authoritarians—ripe targets for Trump’s message.

Take activated authoritarians from across the partisan spectrum and the growing cadre of threatened non-authoritarians, then add them to the base of Republican general election voters, and the potential electoral path to a Trump presidency becomes clearer.

So, those who say a Trump presidency “can’t happen here” should check their conventional wisdom at the door. The candidate has confounded conventional expectations this primary season because those expectations are based on an oversimplified caricature of the electorate in general and his supporters in particular. Conditions are ripe for an authoritarian leader to emerge. Trump is seizing the opportunity. And the institutions—from the Republican Party to the press—that are supposed to guard against what James Madison called “the infection of violent passions” among the people have either been cowed by Trump’s bluster or are asleep on the job.

It is time for those who would appeal to our better angels to take his insurgency seriously and stop dismissing his supporters as a small band of the dispossessed. Trump support is firmly rooted in American authoritarianism and, once awakened, it is a force to be reckoned with. That means it’s also time for political pollsters to take authoritarianism seriously and begin measuring it in their polls.

Matthew MacWilliams is founder of MacWilliams Sanders, a political communications firm, and a Ph.D. candidate in political science at the University of Massachusetts, Amherst, where he is writing his dissertation about authoritarianism.

Read more: http://www.politico.com/magazine/story/2016/01/donald-trump-2016-authoritarian-213533#ixzz3xj06TM2n

Quantum algorithm proves more efficient than any classical counterpart (Revista Fapesp)

11 December 2015

José Tadeu Arantes | Agência FAPESP – The quantum computer may stop being a dream and become reality within the next 10 years. The expectation is that this will bring a drastic reduction in processing time, since quantum algorithms offer more efficient solutions to certain computational tasks than any corresponding classical algorithms.

Until now, the key to quantum computing was thought to lie in correlations between two or more systems. An example of quantum correlation is “entanglement,” which occurs when pairs or groups of particles are generated or interact in such a way that the quantum state of each particle cannot be described independently, since it depends on the whole (for more information see agencia.fapesp.br/20553/).

A recent study showed, however, that even an isolated quantum system, with no correlations to other systems, is enough to implement a quantum algorithm faster than its classical counterpart. An article describing the study, “Computational speed-up with a single qudit,” was published in early October of this year in Scientific Reports, a Nature group journal.

The work, at once theoretical and experimental, grew out of an idea put forward by the physicist Mehmet Zafer Gedik of Sabanci Üniversitesi, in Istanbul, Turkey, and was carried out as a collaboration between Turkish and Brazilian researchers. Felipe Fernandes Fanchini, of the Faculdade de Ciências of the Universidade Estadual Paulista (Unesp) at the Bauru campus, is one of the article’s authors. His participation in the study took place under the project “Quantum control in dissipative systems,” supported by FAPESP.

“This work makes an important contribution to the debate over which resource is responsible for the superior processing power of quantum computers,” Fanchini told Agência FAPESP.

“Starting from Gedik’s idea, we carried out an experiment in Brazil using the nuclear magnetic resonance (NMR) facility of the Universidade de São Paulo (USP) in São Carlos. Researchers from three universities collaborated: Sabanci, Unesp and USP. We demonstrated that a quantum circuit built on a single physical system, with three or more energy levels, can determine the parity of a numerical permutation by evaluating the function only once. That is unthinkable in a classical protocol.”

According to Fanchini, what Gedik proposed was a very simple quantum algorithm that basically determines the parity of a sequence. Parity indicates whether a sequence is in a given order or not. For example, if we take the digits 1, 2 and 3 and establish that the sequence 1-2-3 is in order, then the sequences 2-3-1 and 3-1-2, obtained by cyclic permutations of the digits, are in the same order.

This is easy to see if we imagine the digits arranged around a circle. Given the first sequence, one rotation in a given direction produces the next sequence, and one more rotation produces the third. The sequences 1-3-2, 3-2-1 and 2-1-3, however, require acyclic permutations to be created. So if we agree to call the first three sequences “even,” the other three are “odd.”

“In classical terms, observing a single digit, that is, making a single measurement, does not tell us whether the sequence is even or odd. At least two observations are needed. What Gedik showed is that, in quantum terms, a single measurement is enough to determine the parity. That is why the quantum algorithm is faster than any classical equivalent. And this algorithm can be implemented with a single particle, which means its efficiency does not depend on any kind of quantum correlation,” Fanchini said.
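The classical half of that claim, that one observation can never reveal the parity while two always can, is easy to verify by brute force. The sketch below is ordinary classical code and says nothing about how the single-qudit quantum query itself works; it only checks the counting argument.

```python
# Brute-force check of the classical claim above: for permutations of (1,2,3),
# seeing a single position never determines parity, but seeing two always does.
from itertools import permutations

EVEN = {(1, 2, 3), (2, 3, 1), (3, 1, 2)}   # cyclic shifts of 1-2-3

def parity(p):
    return "even" if p in EVEN else "odd"

def parities_consistent_with(observed):
    """observed: dict position -> value. Which parities remain possible?"""
    return {parity(p) for p in permutations((1, 2, 3))
            if all(p[i] == v for i, v in observed.items())}

# One observation: always ambiguous (both parities remain possible).
print(all(len(parities_consistent_with({i: v})) == 2
          for i in range(3) for v in (1, 2, 3)))

# Two observations: always decisive (exactly one parity remains).
print(all(len(parities_consistent_with({i: p[i], j: p[j]})) == 1
          for p in permutations((1, 2, 3))
          for i in range(3) for j in range(3) if i != j))
```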

The algorithm in question does not reveal the sequence itself; it only says whether it is even or odd. This is only possible when there are three or more levels, because with only two levels, something like 1-2 or 2-1, an even or odd sequence cannot be defined. “Lately, the quantum computing community has been exploring a key concept of quantum theory, the concept of ‘contextuality.’ Since contextuality likewise only operates with three or more levels, we suspect it may be behind the effectiveness of our algorithm,” the researcher added.

The concept of contextuality

“The concept of ‘contextuality’ is best understood by comparing the notions of measurement in classical and quantum physics. In classical physics, a measurement is assumed to do nothing more than reveal characteristics the measured system already possessed, such as a given length or mass. In quantum physics, the result of a measurement depends not only on the characteristic being measured but also on how the measurement was set up and on all previous measurements. In other words, the result depends on the context of the experiment. ‘Contextuality’ is the quantity that describes this context,” Fanchini explained.

In the history of physics, contextuality was recognised as a necessary feature of quantum theory through the famous Bell’s theorem. According to this theorem, published in 1964 by the Irish physicist John Stewart Bell (1928–1990), no physical theory based on local variables can reproduce all the predictions of quantum mechanics. In other words, physical phenomena cannot be described in strictly local terms, because they express the whole.

“It is important to stress that another article [Contextuality supplies the ‘magic’ for quantum computation], published in Nature in June 2014, points to contextuality as the possible source of the power of quantum computation. Our study goes in the same direction, presenting a concrete algorithm that is more efficient than anything imaginable along classical lines.”

Full-scale architecture for a quantum computer in silicon (Science Daily)

Scalable 3-D silicon chip architecture based on single atom quantum bits provides a blueprint to build operational quantum computers

Date:
October 30, 2015
Source:
University of New South Wales
Summary:
Researchers have designed a full-scale architecture for a quantum computer in silicon. The new concept provides a pathway for building an operational quantum computer with error correction.

This picture shows, from left to right, Dr Matthew House, Sam Hile (seated), Scientia Professor Sven Rogge and Scientia Professor Michelle Simmons of the ARC Centre of Excellence for Quantum Computation and Communication Technology at UNSW. Credit: Deb Smith, UNSW Australia

Australian scientists have designed a 3D silicon chip architecture based on single atom quantum bits, which is compatible with atomic-scale fabrication techniques — providing a blueprint to build a large-scale quantum computer.

Scientists and engineers from the Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology (CQC2T), headquartered at the University of New South Wales (UNSW), are leading the world in the race to develop a scalable quantum computer in silicon — a material well-understood and favoured by the trillion-dollar computing and microelectronics industry.

Teams led by UNSW researchers have already demonstrated a unique fabrication strategy for realising atomic-scale devices and have developed the world’s most efficient quantum bits in silicon using either the electron or nuclear spins of single phosphorus atoms. Quantum bits — or qubits — are the fundamental data components of quantum computers.

One of the final hurdles to scaling up to an operational quantum computer is the architecture. Here it is necessary to figure out how to precisely control multiple qubits in parallel, across an array of many thousands of qubits, and constantly correct for ‘quantum’ errors in calculations.

Now, the CQC2T collaboration, involving theoretical and experimental researchers from the University of Melbourne and UNSW, has designed such a device. In a study published today in Science Advances, the CQC2T team describes a new silicon architecture, which uses atomic-scale qubits aligned to control lines — which are essentially very narrow wires — inside a 3D design.

“We have demonstrated we can build devices in silicon at the atomic-scale and have been working towards a full-scale architecture where we can perform error correction protocols — providing a practical system that can be scaled up to larger numbers of qubits,” says UNSW Scientia Professor Michelle Simmons, study co-author and Director of the CQC2T.

“The great thing about this work, and architecture, is that it gives us an endpoint. We now know exactly what we need to do in the international race to get there.”

In the team’s conceptual design, they have moved from a one-dimensional array of qubits, positioned along a single line, to a two-dimensional array, positioned on a plane that is far more tolerant to errors. This qubit layer is “sandwiched” in a three-dimensional architecture, between two layers of wires arranged in a grid.

By applying voltages to a sub-set of these wires, multiple qubits can be controlled in parallel, performing a series of operations using far fewer controls. Importantly, with their design, they can perform the 2D surface code error correction protocols in which any computational errors that creep into the calculation can be corrected faster than they occur.
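The row-and-column addressing idea can be pictured with a generic crossbar sketch. This is an illustration of the addressing principle only, not the CQC2T control scheme, which involves carefully designed voltage pulses and error-correction protocols.

```python
# Generic crossbar-addressing toy for the idea above: a qubit in the 2D layer
# is selected when both its row wire (upper layer) and column wire (lower
# layer) are activated, so a few control lines can address many qubits in
# parallel. Illustrative only; not the CQC2T protocol.

def addressed_qubits(active_rows, active_cols):
    """Return the (row, col) sites selected by the active control wires."""
    return [(r, c) for r in active_rows for c in active_cols]

# Activating 2 row wires and 3 column wires addresses 6 qubits at once,
# using only 5 control lines.
print(addressed_qubits(active_rows=[1, 4], active_cols=[0, 2, 7]))
```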

“Our Australian team has developed the world’s best qubits in silicon,” says University of Melbourne Professor Lloyd Hollenberg, Deputy Director of the CQC2T who led the work with colleague Dr Charles Hill. “However, to scale up to a full operational quantum computer we need more than just many of these qubits — we need to be able to control and arrange them in such a way that we can correct errors quantum mechanically.”

“In our work, we’ve developed a blueprint that is unique to our system of qubits in silicon, for building a full-scale quantum computer.”

In their paper, the team proposes a strategy to build the device, which leverages the CQC2T’s internationally unique capability of atomic-scale device fabrication. They have also modelled the required voltages applied to the grid wires, needed to address individual qubits, and make the processor work.

“This architecture gives us the dense packing and parallel operation essential for scaling up the size of the quantum processor,” says Scientia Professor Sven Rogge, Head of the UNSW School of Physics. “Ultimately, the structure is scalable to millions of qubits, required for a full-scale quantum processor.”

Background

In classical computers, data is rendered as binary bits, which are always in one of two states: 0 or 1. However, a qubit can exist in both of these states at once, a condition known as a superposition. A qubit operation exploits this quantum weirdness by allowing many computations to be performed in parallel (a two-qubit system performs the operation on 4 values, a three-qubit system on 8, and so on).
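The “many values at once” remark corresponds to the state-vector picture below, which is generic textbook linear algebra rather than anything specific to the UNSW work: a two-qubit register is a vector of four amplitudes, and each gate is a single linear operation acting on all of them.

```python
# State-vector picture of the "4 values at once" remark above (generic
# textbook linear algebra, not code from the UNSW/CQC2T work).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate on one qubit
state = np.array([1, 0, 0, 0], dtype=float)    # two qubits in |00>

# Applying H to each qubit = one 4x4 operation acting on all 4 amplitudes.
state = np.kron(H, H) @ state
print(state)            # [0.5 0.5 0.5 0.5]: equal superposition of 00,01,10,11

# A subsequent gate, e.g. a phase flip on |11>, again updates all amplitudes
# in a single linear operation, which is the source of quantum parallelism.
CZ = np.diag([1, 1, 1, -1])
print(CZ @ state)       # [ 0.5  0.5  0.5 -0.5]
```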

As a result, quantum computers will far exceed today’s most powerful supercomputers, and offer enormous advantages for a range of complex problems, such as rapidly scouring vast databases, modelling financial markets, optimising huge metropolitan transport networks, and modelling complex biological molecules.

How to build a quantum computer in silicon https://youtu.be/zo1q06F2sbY

Warming could triple drought in the Amazon (Observatório do Clima)

15/10/2015

Drought in Silves (AM) in 2005. Photo: Ana Cintia Gazzelli/WWF

Computer models suggest the eastern Amazon, which contains most of the forest, would see more droughts, fires and tree mortality, while the west would become wetter.

Climate change may increase the frequency of both droughts and extreme rainfall in the Amazon before mid-century, combining with deforestation to cause massive tree mortality, fires and carbon emissions. That is the conclusion of an assessment of 35 climate models applied to the region, carried out by researchers from the US and Brazil.

According to the study, led by Philip Duffy of the WHRC (Woods Hole Research Center, in the US) and Stanford University, the area affected by extreme droughts in the eastern Amazon, the region encompassing most of the forest, could triple by 2100. Paradoxically, the frequency of extremely wet periods and the area subject to extreme rainfall tend to grow across the whole region after 2040, even in places where average annual precipitation decreases.

The western Amazon, especially Peru and Colombia, should see an increase in average annual precipitation.

A shift in rainfall patterns is a long-theorised effect of global warming. With more energy in the atmosphere and more water vapour from increased ocean evaporation, climate extremes tend to be amplified. Rainy seasons (in the Amazon, the southern-hemisphere summer, which locals call “winter”) become shorter, but the rain falls harder.

How the forest responds to these changes, however, has been a matter of controversy among scientists. Studies in the 1990s proposed that the Amazon’s reaction would be widespread “savannization”: the death of large trees and the transformation of vast stretches of forest into an impoverished savanna.

Other studies, however, suggested that the heat and the extra CO2 would have the opposite effect, making trees grow more and fix more carbon, offsetting any losses from drought. On average, therefore, the impact of global warming on the Amazon would be relatively small.

The Amazon itself, however, has given scientists hints of how it would react. In 2005, 2007 and 2010, the forest went through historic droughts. The result was widespread tree mortality and fires in primary forest across more than 85,000 square kilometres. Duffy’s group, which also includes Paulo Brando of Ipam (Instituto de Pesquisa Ambiental da Amazônia), estimates that 1% to 2% of the Amazon’s carbon was released into the atmosphere as a result of the droughts of the 2000s. Brando and colleagues at Ipam had already shown that the Amazon is becoming more flammable, probably because of the combined effects of climate and deforestation.

The researchers simulated the region’s future climate using models from the CMIP5 project, used by the IPCC (Intergovernmental Panel on Climate Change) in its latest assessment of the global climate. One member of the group, Chris Field of Stanford, was a coordinator of that report; he was also a candidate for the IPCC presidency in the election held last week, losing to South Korea’s Hoesung Lee.

The computer models were run under the worst-case emissions scenario, known as RCP 8.5, which assumes that little will be done to curb greenhouse gas emissions.

Not only did the models capture the influence of Atlantic and Pacific ocean temperatures on Amazon rainfall patterns (differences between the two oceans explain why the eastern Amazon will become drier and the west wetter), they also reproduced, in their simulations of future drought, a feature of the record droughts of 2005 and 2010: the far north of the Amazon saw a large increase in rainfall while the centre and south baked.

According to the researchers, the study may even be conservative, since it only considered variations in precipitation. “For example, rainfall in the eastern Amazon depends strongly on evapotranspiration, so a reduction in tree cover could reduce precipitation,” wrote Duffy and Brando. “This suggests that if processes related to land-use change were better represented in the CMIP5 models, drought intensity could be greater than projected here.”

The study was published in PNAS, the journal of the US National Academy of Sciences. (Observatório do Clima/ #Envolverde)

* Originally published on the Observatório do Clima website.

‘Targeted punishments’ against countries could tackle climate change (Science Daily)

Date:
August 25, 2015
Source:
University of Warwick
Summary:
Targeted punishments could provide a path to international climate change cooperation, new research in game theory has found.

This is a diagram of two possible strategies of targeted punishment studied in the paper. Credit: Royal Society Open Science

Targeted punishments could provide a path to international climate change cooperation, new research in game theory has found.

Conducted at the University of Warwick, the research suggests that in situations such as climate change, where everyone would be better off if everyone cooperated but it may not be individually advantageous to do so, the use of a strategy called ‘targeted punishment’ could help shift society towards global cooperation.

Despite the name, the ‘targeted punishment’ mechanism can apply to positive or negative incentives. The research argues that the key factor is that these incentives are not necessarily applied to everyone who may seem to deserve them. Rather, rules should be devised according to which only a small number of players are considered responsible at any one time.

The study’s author Dr Samuel Johnson, from the University of Warwick’s Mathematics Institute, explains: “It is well known that some form of punishment, or positive incentives, can help maintain cooperation in situations where almost everyone is already cooperating, such as in a country with very little crime. But when there are only a few people cooperating and many more not doing so punishment can be too dilute to have any effect. In this regard, the international community is a bit like a failed state.”

The paper, published in Royal Society Open Science, shows that in situations of entrenched defection (non-cooperation), there exist strategies of ‘targeted punishment’ available to would-be punishers which can allow them to move a community towards global cooperation.

“The idea,” said Dr Johnson, “is not to punish everyone who is defecting, but rather to devise a rule whereby only a small number of defectors are considered at fault at any one time. For example, if you want to get a group of people to cooperate on something, you might arrange them on an imaginary line and declare that a person is liable to be punished if and only if the person to their left is cooperating while they are not. This way, those people considered at fault will find themselves under a lot more pressure than if responsibility were distributed, and cooperation can build up gradually as each person decides to fall in line when the spotlight reaches them.”
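A toy simulation gives a feel for how the line rule can un-stick entrenched defection. It is only a sketch: the switching probability and update rule here are invented, and the paper’s game-theoretic model and payoffs are richer than this.

```python
# Toy simulation of the line rule described above (illustrative only).
import random

def step(states, switch_prob=0.8):
    """states: list of booleans (True = cooperating). Player i is 'at fault'
    only if player i-1 cooperates while i defects; at-fault players switch
    with probability switch_prob. Player 0's left neighbour is treated as a
    cooperating 'punisher' to seed the process."""
    new = states[:]
    for i, cooperating in enumerate(states):
        left_cooperates = states[i - 1] if i > 0 else True
        if not cooperating and left_cooperates and random.random() < switch_prob:
            new[i] = True
    return new

random.seed(0)
states = [False] * 20            # entrenched defection
for t in range(40):
    states = step(states)
print(sum(states), "of", len(states), "cooperating after 40 rounds")
```

Because only the player next to the cooperating frontier is ever “at fault”, pressure stays concentrated and cooperation spreads along the line round by round, which is the intuition in the quote above.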

For the case of climate change, the paper suggests that countries should be divided into groups, and these groups placed in some order — ideally, according roughly to their natural tendencies to cooperate. Governments would make commitments (to reduce emissions or leave fossil fuels in the ground, for instance) conditional on the performance of the group before them. This way, any combination of sanctions and positive incentives that other countries might be willing to impose would have a much greater effect.

“In the mathematical model,” said Dr Johnson, “the mechanism works best if the players are somewhat irrational. It seems a reasonable assumption that this might apply to the international community.”


Journal Reference:

  1. Samuel Johnson. Escaping the Tragedy of the Commons through Targeted Punishment. Royal Society Open Science, 2015 [link]

Stop burning fossil fuels now: there is no CO2 ‘technofix’, scientists warn (The Guardian)

Researchers have demonstrated that even if a geoengineering solution to CO2 emissions could be found, it wouldn’t be enough to save the oceans

“The chemical echo of this century’s CO2 pollution will reverberate for thousands of years,” said the report’s co-author, Hans Joachim Schellnhuber. Photograph: Doug Perrine/Design Pics/Corbis

German researchers have demonstrated once again that the best way to limit climate change is to stop burning fossil fuels now.

In a “thought experiment” they tried another option: the future dramatic removal of huge volumes of carbon dioxide from the atmosphere. This would, they concluded, return the atmosphere to the greenhouse gas concentrations that existed for most of human history – but it wouldn’t save the oceans.

That is, the oceans would stay warmer, and more acidic, for thousands of years, and the consequences for marine life could be catastrophic.

The research, published in Nature Climate Change today, delivers yet another demonstration that there is so far no feasible “technofix” that would allow humans to go on mining and drilling for coal, oil and gas (known as the “business as usual” scenario), and then geoengineer a solution when climate change becomes calamitous.

Sabine Mathesius (of the Helmholtz Centre for Ocean Research in Kiel and the Potsdam Institute for Climate Impact Research) and colleagues decided to model what could be done with an as-yet-unproven technology called carbon dioxide removal. One example would be to grow huge numbers of trees, burn them, trap the carbon dioxide, compress it and bury it somewhere. Nobody knows if this can be done, but Dr Mathesius and her fellow scientists didn’t worry about that.

They calculated that it might plausibly be possible to remove carbon dioxide from the atmosphere at the rate of 90 billion tons a year. This is twice what is spilled into the air from factory chimneys and motor exhausts right now.

The scientists hypothesised a world that went on burning fossil fuels at an accelerating rate – and then adopted an as-yet-unproven high technology carbon dioxide removal technique.

“Interestingly, it turns out that after ‘business as usual’ until 2150, even taking such enormous amounts of CO2 from the atmosphere wouldn’t help the deep ocean that much – after the acidified water has been transported by large-scale ocean circulation to great depths, it is out of reach for many centuries, no matter how much CO2 is removed from the atmosphere,” said a co-author, Ken Caldeira, who is normally based at the Carnegie Institution in the US.

The oceans cover 70% of the globe. By 2500, ocean surface temperatures would have increased by 5C (9F) and the chemistry of the ocean waters would have shifted towards levels of acidity that would make it difficult for fish and shellfish to flourish. Warmer waters hold less dissolved oxygen. Ocean currents, too, would probably change.

But while change happens in the atmosphere over tens of years, change in the ocean surface takes centuries, and in the deep oceans, millennia. So even if atmospheric temperatures were restored to pre-Industrial Revolution levels, the oceans would continue to experience climatic catastrophe.

“In the deep ocean, the chemical echo of this century’s CO2 pollution will reverberate for thousands of years,” said co-author Hans Joachim Schellnhuber, who directs the Potsdam Institute. “If we do not implement emissions reductions measures in line with the 2C (3.6F) target in time, we will not be able to preserve ocean life as we know it.”

Climate models are even more accurate than you thought (The Guardian)

The difference between modeled and observed global surface temperature changes is 38% smaller than previously thought

Looking across the frozen sea of Ullsfjord in Norway. Melting Arctic sea ice is one complicating factor in comparing modeled and observed surface temperatures. Photograph: Neale Clark/Robert Harding World Imagery/Corbis

Global climate models aren’t given nearly enough credit for their accurate global temperature change projections. As the 2014 IPCC report showed, observed global surface temperature changes have been within the range of climate model simulations.

Now a new study shows that the models were even more accurate than previously thought. In previous evaluations like the one done by the IPCC, climate model simulations of global surface air temperature were compared to global surface temperature observational records like HadCRUT4. However, over the oceans, HadCRUT4 uses sea surface temperatures rather than air temperatures.

A depiction of how global temperatures calculated from models use air temperatures above the ocean surface (right frame), while observations are based on the water temperature in the top few metres (left frame). Created by Kevin Cowtan.

Thus looking at modeled air temperatures and HadCRUT4 observations isn’t quite an apples-to-apples comparison for the oceans. As it turns out, sea surface temperatures haven’t been warming as fast as marine air temperatures, so this comparison introduces a bias that makes the observations look cooler than the model simulations. In other words, the previous comparisons weren’t quite correct. As lead author Kevin Cowtan told me,

We have highlighted the fact that the planet does not warm uniformly. Air temperatures warm faster than the oceans, air temperatures over land warm faster than global air temperatures. When you put a number on global warming, that number always depends on what you are measuring. And when you do a comparison, you need to ensure you are comparing the same things.

The model projections have generally reported global air temperatures. That’s quite helpful, because we generally live in the air rather than the water. The observations, by mixing air and water temperatures, are expected to slightly underestimate the warming of the atmosphere.

The new study addresses this problem by instead blending the modeled air temperatures over land with the modeled sea surface temperatures to allow for an apples-to-apples comparison. The authors also identified another challenging issue for these model-data comparisons in the Arctic. Over sea ice, surface air temperature measurements are used, but for open ocean, sea surface temperatures are used. As co-author Michael Mann notes, as Arctic sea ice continues to melt away, this is another factor that accurate model-data comparisons must account for.

One key complication that arises is that the observations typically extrapolate land temperatures over sea ice covered regions since the sea surface temperature is not accessible in that case. But the distribution of sea ice changes seasonally, and there is a long-term trend toward decreasing sea ice in many regions. So the observations actually represent a moving target.
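
To make the blending procedure concrete, here is a minimal sketch (in Python, not the study's actual code) of the idea: over land and sea-ice cells take the modeled near-surface air temperature, over open-ocean cells take the modeled sea surface temperature, then form an area-weighted mean. All arrays and masks below are invented for illustration.

```python
import numpy as np

np.random.seed(0)
n_cells = 1000                                   # toy one-dimensional "grid"
tas = np.random.normal(0.8, 0.3, n_cells)        # modeled near-surface air temperature anomalies (degrees C)
tos = tas - 0.1                                  # modeled SST anomalies, assumed to warm slightly less
is_land = np.random.rand(n_cells) < 0.3          # invented land mask
is_ice = (~is_land) & (np.random.rand(n_cells) < 0.1)   # invented sea-ice mask
lat = np.random.uniform(-90, 90, n_cells)
weights = np.cos(np.radians(lat))                # crude area weighting by latitude

blended = np.where(is_land | is_ice, tas, tos)   # HadCRUT4-style blend: air over land/ice, SST elsewhere
air_only = tas                                   # what model projections usually report

print("air-only global mean:", round(np.average(air_only, weights=weights), 3))
print("blended global mean :", round(np.average(blended, weights=weights), 3))  # slightly cooler
```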

A depiction of how as sea ice retreats, some grid cells change from taking air temperatures to taking water temperatures. If the two are not on the same scale, this introduces a bias. Created by Kevin Cowtan.

When accounting for these factors, the study finds that the difference between observed and modeled temperatures since 1975 is smaller than previously believed. The models had projected a 0.226°C per decade global surface air warming trend for 1975–2014 (and 0.212°C per decade over the geographic area covered by the HadCRUT4 record). However, when matching the HadCRUT4 methods for measuring sea surface temperatures, the modeled trend is reduced to 0.196°C per decade. The observed HadCRUT4 trend is 0.170°C per decade.
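
A quick back-of-the-envelope check, using only the trends quoted above, reproduces the "38% smaller" figure cited below:

```python
# Decadal warming trends quoted above, in degrees C per decade (1975-2014)
observed_hadcrut4 = 0.170
modeled_air_temps = 0.212     # modeled air temperatures over the HadCRUT4 coverage area
modeled_blended   = 0.196     # model output blended to match the HadCRUT4 methods

old_gap = modeled_air_temps - observed_hadcrut4   # 0.042
new_gap = modeled_blended - observed_hadcrut4     # 0.026
print(f"model-data gap shrinks by {100 * (1 - new_gap / old_gap):.0f}%")  # about 38%
```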

So when doing an apples-to-apples comparison, the difference between modeled global temperature simulations and observations is 38% smaller than previous estimates. Additionally, as noted in a 2014 paper led by NASA GISS director Gavin Schmidt, less energy from the sun has reached the Earth’s surface than anticipated in these model simulations, both because solar activity declined more than expected, and volcanic activity was higher than expected. Ed Hawkins, another co-author of this study, wrote about this effect.

Combined, the apparent discrepancy between observations and simulations of global temperature over the past 15 years can be partly explained by the way the comparison is done (about a third), by the incorrect radiative forcings (about a third) and the rest is either due to climate variability or because the models are slightly over sensitive on average. But, the room for the latter effect is now much smaller.

Comparison of 84 climate model simulations (using RCP8.5) against HadCRUT4 observations (black), using either air temperatures (red line and shading) or blended temperatures using the HadCRUT4 method (blue line and shading). The upper panel shows anomalies derived from the unmodified climate model results, the lower shows the results adjusted to include the effect of updated forcings from Schmidt et al. (2014).

As Hawkins notes, the remaining discrepancy between modeled and observed temperatures may come down to climate variability; namely the fact that there has been a preponderance of La Niña events over the past decade, which have a short-term cooling influence on global surface temperatures. When there are more La Niñas, we expect temperatures to fall below the average model projection, and when there are more El Niños, we expect temperatures to be above the projection, as may be the case when 2015 breaks the temperature record.

We can’t predict changes in solar activity, volcanic eruptions, or natural ocean cycles ahead of time. If we want to evaluate the accuracy of long-term global warming model projections, we have to account for the difference between the simulated and observed changes in these factors. When the authors of this study did so, they found that climate models have very accurately projected the observed global surface warming trend.

In other words, as I discussed in my book and Denial101x lecture, climate models have proven themselves reliable in predicting long-term global surface temperature changes. In fact, even more reliable than I realized.

Denial101x climate science success stories lecture by Dana Nuccitelli.

There’s a common myth that models are unreliable, often based on apples-to-oranges comparisons, like looking at satellite estimates of temperatures higher in the atmosphere versus modeled surface air temperatures. Or, some contrarians like John Christy will only consider the temperature high in the atmosphere, where satellite estimates are less reliable, and where people don’t live.

This new study has shown that when we do an apples-to-apples comparison, climate models have done a good job projecting the observed temperatures where humans live. And those models predict that unless we take serious and immediate action to reduce human carbon pollution, global warming will continue to accelerate into dangerous territory.

New technique estimates crowd sizes by analysing mobile phone activity (BBC Brasil)

3 June 2015

Crowd at an airport | Photo: Getty

Researchers are looking for more efficient ways to measure crowd sizes without relying on images

A study from a British university has developed a new way of estimating crowd sizes at protests and other mass events: by analysing geographic data from mobile phones and Twitter.

Researchers at Warwick University, in England, analysed the geolocation of mobile phones and of Twitter messages over a two-month period in Milan, Italy.

At two locations with known visitor numbers – a football stadium and an airport – activity on social networks and on mobile phones rose and fell in step with the flow of people.

The team said that, using this technique, it could take measurements at events such as protests.

Other researchers stressed that this kind of data has limitations – for example, only part of the population uses smartphones and Twitter, and not every part of a given space is well served by phone masts.

But the study’s authors say the results were “an excellent starting point” for further – and more precise – estimates of this kind in the future.

“These figures are calibration examples we can build on,” said study co-author Tobias Preis.

“Obviously it would be better to have examples from other countries, other settings, other moments. Human behaviour is not uniform across the world, but this is a very good basis for arriving at initial estimates.”

The study, published in the journal Royal Society Open Science, is part of an expanding field of research exploring what online activity can reveal about human behaviour and other real-world phenomena.

Photo: F. Botta et al

Scientists compared official visitor figures for an airport and a stadium with Twitter and mobile phone activity

Federico Botta, the PhD student who led the analysis, said that the phone-based methodology has important advantages over other methods of estimating crowd sizes, which usually rely on on-the-ground observation or on images.

“This method is very fast and does not depend on human judgement. It depends only on the data that come from mobile phones or from Twitter activity,” he told the BBC.

Margin of error

With two months of mobile phone data provided by Telecom Italia, Botta and his colleagues focused on Linate airport and the San Siro football stadium in Milan.

They compared the number of people known to be at those locations at any given time – based on flight schedules and on ticket sales for the football matches – with three types of mobile phone activity: the number of calls made and text messages sent, the amount of internet data used, and the volume of tweets posted.

“What we saw is that these activities really did track the number of people at the site very closely,” says Botta.

That may not sound all that surprising but, especially at the football stadium, the patterns the team observed were so reliable that they could even make predictions.

There were ten football matches during the period of the experiment. Using the data from nine of them, it was possible to estimate how many people would be at the tenth match from the mobile phone data alone.

“Our mean absolute percentage error is about 13%. That means that our estimates and the real number of people differ, in absolute terms, by about 13%,” says Botta.

According to the researchers, this margin of error compares well with traditional techniques based on images and on human judgement.
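
As a rough illustration of the leave-one-out calibration and the mean absolute percentage error (MAPE) described above, here is a small sketch with invented numbers; the study's own calibration may well be more sophisticated.

```python
import numpy as np

# Invented calibration data: phone/Twitter activity vs known attendance for ten matches
activity   = np.array([12, 18, 25, 31, 40, 47, 55, 63, 70, 78], dtype=float)  # e.g. thousands of calls/tweets
attendance = np.array([15, 22, 30, 39, 48, 58, 65, 74, 83, 92], dtype=float)  # thousands of people

errors = []
for i in range(len(activity)):                        # hold out one match at a time
    train = np.delete(np.arange(len(activity)), i)
    slope, intercept = np.polyfit(activity[train], attendance[train], 1)
    predicted = slope * activity[i] + intercept
    errors.append(abs(predicted - attendance[i]) / attendance[i])

print(f"mean absolute percentage error: {100 * np.mean(errors):.1f}%")
```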

They cite the example of the demonstration in Washington DC known as the “Million Man March” in 1995, where even the most careful analyses produced estimates with a 20% margin of error – after initial counts had ranged from 400,000 to two million people.

Crowd in an Italian stadium | Photo: Getty

The accuracy of the data collected at the football stadium surprised even the research team

According to Ed Manley, of the Centre for Advanced Spatial Analysis at University College London, the technique has potential, and people should be “optimistic but cautious” about using mobile phone data for these estimates.

“We have these enormous datasets and there is a lot that can be done with them… But we need to be careful about how much we ask of the data,” he said.

He also points out that such information does not reflect a population evenly.

“There are important biases here. Who exactly are we measuring with these datasets?” Twitter, for example, Manley says, has a relatively young and relatively affluent user base.

Beyond these difficulties, the activities to be measured have to be chosen carefully, because people use their phones differently in different places – more calls at the airport and more tweets at the football, for example.

Another important caveat is that the whole analytical approach Botta advocates depends on phone and internet signal – which varies greatly from place to place, when it is available at all.

“If we are relying on these data to know where people are, what happens when there is a problem with the way the data are collected?” asks Manley.

Ethnography: A Scientist Discovers the Value of the Social Sciences (The Scholarly Kitchen)

 

Picture from an early ethnographic study

I have always liked to think of myself as a good listener. Whether you are in therapy (or should be), conversing with colleagues, working with customers, embarking on strategic planning, or collaborating on a task, a dose of emotional intelligence – that is, embracing patience and the willingness to listen — is essential.

At the American Mathematical Society, we recently embarked on an ambitious strategic planning effort across the organization. On the publishing side we have a number of electronic products, pushing us to consider how we position these products for the next generation of mathematicians. We quickly realized that it is easy to be complacent. In our case we have a rich history online, and yet – have we really moved with the times? Does a young mathematician need our products?

We came to a sobering and rather exciting realization: In fact, we do not have a clear idea how mathematicians use online resources to do their research, teaching, hiring, and job hunting. We of course have opinions, but these are not informed by anything other than anecdotal evidence from conversations here and there.

To gain a sense of how mathematicians are using online resources, we embarked on an effort to gather more systematic intelligence, embracing a qualitative approach to the research – ethnography. The concept of ethnographic qualitative research was a new one to me – and it felt right. I quickly felt like I was back in school as a graduate student in ethnography, reading the literature, and thinking through with colleagues how we might apply qualitative research methods to understanding mathematicians’ behavior. It is worth taking a look at two excellent books: Just Enough Research by Erika Hall, and Practical Ethnography: A Guide to Doing Ethnography in the Private Sector by Sam Ladner.

What do we mean by ethnographic research? In essence we are talking about a rich, multi-factorial descriptive approach. While quantitative research uses pre-existing categories in its analysis, qualitative research is open to new ways of categorizing data – in this case, mathematicians’ behavior in using information. The idea is that one observes the subject (“key informant” in technical jargon) in their natural habitat. Imagine you are David Attenborough, exploring an “absolutely marvelous” new species – the mathematician – as they operate in the field. The concept is really quite simple. You just want to understand what your key informants are doing, and preferably why they are doing it. One has to do it in a setting that allows for them to behave naturally – this really requires an interview with one person not a group (because group members may influence each other’s actions).

Perhaps the hardest part is the interview itself. If you are anything like me, you will go charging in saying something along the lines of “look at these great things we are doing. What do you think? Great right?” Well, of course this is plain wrong. While you have a goal going in, perhaps to see how an individual is behaving with respect to a specific product, your questions need to be agnostic in flavor. The idea is to have the key informant do what they normally do, not just say what they think they do – the two things may be quite different. The questions need to be carefully crafted so as not to lead, but to enable gentle probing and discussion as the interview progresses. It is a good idea to record the interview – both in audio form, and ideally with screen capture technology such as Camtasia. When I was involved with this I went out and bought a good, but inexpensive audio recorder.

We decided that rather than approach mathematicians directly, we should work with the library at an academic institution. Libraries are our customers, and the remarkable thing about academic libraries is that at many institutions ethnography is becoming part of the service they provide to their stakeholders. We began with a remarkable librarian based at Rice University – Debra Kolah. She is the head of the user experience office at Rice’s Fondren Library in Texas, and also happens to be the university’s physics, math and statistics librarian. She has become an expert in the ethnographic study of academic user experience, with multiple projects underway at Rice, working with a range of stakeholders and aiming to foster the activity of the library in the academic community she directly serves. She is a picture of enthusiasm when it comes to serving her community and to gaining insights into the cultural patterns of academic user behavior. Debra was our key to understanding how important it is to work with the library to reach the mathematical community at an institution. The relationship is trusted and symbiotic. This triangle of an institution’s library, its academics, and an outside entity such as a society or publisher may represent the future of the library.

So the interviews are done – then what? Analysis. You have to try to make sense of all of this material you’ve gathered. First, transcribing audio interviews is no easy task. You have a range of voices and much technical jargon. The best bet is to get one of the many services out there to take the files and do a first-pass transcription. They will get most of it right. Perhaps they will write “archive” instead of “arXiv”, but that can be dealt with later. Once you have all this interview text, you need to group it into meaningful categories – what’s called “coding”. The idea is that you try to look at the material with a fresh, unbiased eye, to see what themes emerge from the data. Once these themes are coded, you can then start to think about patterns in the data. Interestingly, qualitative researchers have developed a host of software programs to aid the researcher in doing this. We settled on a relatively simple, web-based solution – Dedoose.
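
For readers unfamiliar with the term, "coding" in this sense is simply tagging excerpts with themes and then grouping them. A toy illustration follows (in Python, with invented excerpts and codes; this is not the AMS workflow or Dedoose itself):

```python
from collections import defaultdict

# Invented interview excerpts, each tagged with one or more thematic codes
coded_excerpts = [
    ("I usually start from arXiv and only then check the society's site.", ["discovery", "arXiv"]),
    ("I ask our librarian when I can't track down an older volume.",       ["library"]),
    ("For hiring we mostly look at the candidate's recent postings.",      ["hiring"]),
]

# Group excerpts by theme so patterns can be examined
by_theme = defaultdict(list)
for text, codes in coded_excerpts:
    for code in codes:
        by_theme[code].append(text)

for theme, quotes in sorted(by_theme.items()):
    print(f"{theme}: {len(quotes)} excerpt(s)")
```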

With some 62 interviews under our belt, we are beginning to see patterns emerge in the ways that mathematicians behave online. I am not going to reveal our preliminary findings here – I must save that up for when the full results are in – but I am confident that the results will show a number of consistent threads that will help us think through how to better serve our community.

In summary, this experience has been a fascinating one – a new world for me. I have been trained as a scientist. As a scientist, I have ideas about what scientific method is, and what evidence is. I now understand the value of the qualitative approach – hard for a scientist to say. Qualitative research opens a window to descriptive data and analysis. As our markets change, understanding who constitutes our market, and how users behave is more important than ever.

Carry on listening!

Is the universe a hologram? (Science Daily)

Date:
April 27, 2015
Source:
Vienna University of Technology
Summary:
The ‘holographic principle,’ the idea that a universe with gravity can be described by a quantum field theory in fewer dimensions, has been used for years as a mathematical tool in strange curved spaces. New results suggest that the holographic principle also holds in flat spaces. Our own universe could in fact be two dimensional and only appear three dimensional — just like a hologram.

Is our universe a hologram? Credit: TU Wien 

At first glance, there is not the slightest doubt: to us, the universe looks three dimensional. But one of the most fruitful theories of theoretical physics in the last two decades is challenging this assumption. The “holographic principle” asserts that a mathematical description of the universe actually requires one fewer dimension than it seems. What we perceive as three dimensional may just be the image of two dimensional processes on a huge cosmic horizon.

Up until now, this principle has only been studied in exotic spaces with negative curvature. This is interesting from a theoretical point of view, but such spaces are quite different from the space in our own universe. Results obtained by scientists at TU Wien (Vienna) now suggest that the holographic principle even holds in a flat spacetime.

The Holographic Principle

Everybody knows holograms from credit cards or banknotes. They are two dimensional, but to us they appear three dimensional. Our universe could behave quite similarly: “In 1997, the physicist Juan Maldacena proposed the idea that there is a correspondence between gravitational theories in curved anti-de Sitter spaces on the one hand and quantum field theories in spaces with one fewer dimension on the other,” says Daniel Grumiller (TU Wien).

Gravitational phenomena are described in a theory with three spatial dimensions, the behaviour of quantum particles is calculated in a theory with just two spatial dimensions — and the results of both calculations can be mapped onto each other. Such a correspondence is quite surprising. It is like finding out that equations from an astronomy textbook can also be used to repair a CD-player. But this method has proven to be very successful. More than ten thousand scientific papers about Maldacena’s “AdS/CFT correspondence” have been published to date.

Correspondence Even in Flat Spaces

For theoretical physics, this is extremely important, but it does not seem to have much to do with our own universe. Apparently, we do not live in such an anti-de Sitter space. These spaces have quite peculiar properties. They are negatively curved: any object thrown away on a straight line will eventually return. “Our universe, in contrast, is quite flat — and on astronomic distances, it has positive curvature,” says Daniel Grumiller.

However, Grumiller has suspected for quite some time that a correspondence principle could also hold true for our real universe. To test this hypothesis, gravitational theories have to be constructed which do not require exotic anti-de Sitter spaces but live in a flat space. For three years, he and his team at TU Wien (Vienna) have been working on that, in cooperation with the University of Edinburgh, Harvard, IISER Pune, MIT and the University of Kyoto. Now Grumiller and colleagues from India and Japan have published an article in the journal Physical Review Letters, confirming the validity of the correspondence principle in a flat universe.

Calculated Twice, Same Result

“If quantum gravity in a flat space allows for a holographic description by a standard quantum theory, then there must be physical quantities which can be calculated in both theories — and the results must agree,” says Grumiller. One key feature of quantum mechanics in particular — quantum entanglement — has to appear in the gravitational theory.

When quantum particles are entangled, they cannot be described individually. They form a single quantum object, even if they are located far apart. There is a measure for the amount of entanglement in a quantum system, called “entropy of entanglement.” Together with Arjun Bagchi, Rudranil Basu and Max Riegler, Daniel Grumiller managed to show that this entropy of entanglement takes the same value in flat quantum gravity and in a low dimension quantum field theory.
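
For reference, the entanglement entropy mentioned here has a standard definition (this is the textbook formula, not one taken from the paper itself): for a subsystem $A$ of a system in the pure state $|\psi\rangle$, with $B$ its complement,

\[
S_A = -\operatorname{Tr}\bigl(\rho_A \ln \rho_A\bigr), \qquad \rho_A = \operatorname{Tr}_B \, |\psi\rangle\langle\psi| .
\]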

“This calculation affirms our assumption that the holographic principle can also be realized in flat spaces. It is evidence for the validity of this correspondence in our universe,” says Max Riegler (TU Wien). “The fact that we can even talk about quantum information and entropy of entanglement in a theory of gravity is astounding in itself, and would hardly have been imaginable only a few years back. That we are now able to use this as a tool to test the validity of the holographic principle, and that this test works out, is quite remarkable,” says Daniel Grumiller.

This, however, does not yet prove that we are indeed living in a hologram — but apparently there is growing evidence for the validity of the correspondence principle in our own universe.


Journal Reference:

  1. Arjun Bagchi, Rudranil Basu, Daniel Grumiller, Max Riegler. Entanglement Entropy in Galilean Conformal Field Theories and Flat Holography. Physical Review Letters, 2015; 114 (11). DOI: 10.1103/PhysRevLett.114.111602

Time and Events (Knowledge Ecology)

March 24, 2015 / Adam Robbert

[Image: Mohammad Reza Domiri Ganji]

I just came across Massimo Pigliucci’s interesting review of Mangabeira Unger and Lee Smolin’s book The Singular Universe and the Reality of Time. There are more than a few Whiteheadian themes explored throughout the review, including Unger and Smolin’s (U&S) view that time should be read as an abstraction from events and that the “laws” of the universe are better conceptualized as habits or contingent causal connections secured by the ongoingness of those events rather than as eternal, abstract formalisms. (This entangling of laws with phenomena, of events with time, is one of the ways we can think towards an ecological metaphysics.)

But what I am particularly interested in is the short discussion on Platonism and mathematical realism. I sometimes think of mathematical realism as the view that numbers, and thus the abstract formalisms they create, are real, mind-independent entities, and that, given this view, mathematical equations are discovered (i.e., they actually exist in the world) rather than created (i.e., humans made them up to fill this or that pragmatic need). The review makes it clear, though, that this definition doesn’t push things far enough for the mathematical realist. Instead, the mathematical realist argues for not just the mind-independent existence of numbers but also their nature-independence—math as independent not just of all knowers but of all natural phenomena, past, present, or future.

U&S present an alternative to mathematical realisms of this variety that I find compelling and more consistent with the view that laws are habits and that time is an abstraction from events. Here’s the reviewer’s take on U&S’s argument (the review starts with a quote from U&S and then unpacks it a bit):

“The third idea is the selective realism of mathematics. (We use realism here in the sense of relation to the one real natural world, in opposition to what is often described as mathematical Platonism: a belief in the real existence, apart from nature, of mathematical entities.) Now dominant conceptions of what the most basic natural science is and can become have been formed in the context of beliefs about mathematics and of its relation to both science and nature. The laws of nature, the discerning of which has been the supreme object of science, are supposed to be written in the language of mathematics.” (p. xii)

But they are not, because there are no “laws” and because mathematics is a human (very useful) invention, not a mysterious sixth sense capable of probing a deeper reality beyond the empirical. This needs some unpacking, of course. Let me start with mathematics, then move to the issue of natural laws.

I was myself, until recently, intrigued by mathematical Platonism [8]. It is a compelling idea, which makes sense of the “unreasonable effectiveness of mathematics” as Eugene Wigner famously put it [9]. It is a position shared by a good number of mathematicians and philosophers of mathematics. It is based on the strong gut feeling that mathematicians have that they don’t invent mathematical formalisms, they “discover” them, in a way analogous to what empirical scientists do with features of the outside world. It is also supported by an argument analogous to the defense of realism about scientific theories and advanced by Hilary Putnam: it would be nothing short of miraculous, it is suggested, if mathematics were the arbitrary creation of the human mind, and yet time and again it turns out to be spectacularly helpful to scientists [10].

But there are, of course, equally (more?) powerful counterarguments, which are in part discussed by Unger in the first part of the book. To begin with, the whole thing smells a bit too uncomfortably of mysticism: where, exactly, is this realm of mathematical objects? What is its ontological status? Moreover, and relatedly, how is it that human beings have somehow developed the uncanny ability to access such realm? We know how we can access, however imperfectly and indirectly, the physical world: we evolved a battery of sensorial capabilities to navigate that world in order to survive and reproduce, and science has been a continuous quest for expanding the power of our senses by way of more and more sophisticated instrumentation, to gain access to more and more (and increasingly less relevant to our biological fitness!) aspects of the world.

Indeed, it is precisely this analogy with science that powerfully hints to an alternative, naturalistic interpretation of the (un)reasonable effectiveness of mathematics. Math too started out as a way to do useful things in the world, mostly to count (arithmetics) and to measure up the world and divide it into manageable chunks (geometry). Mathematicians then developed their own (conceptual, as opposed to empirical) tools to understand more and more sophisticated and less immediate aspects of the world, in the process eventually abstracting entirely from such a world in pursuit of internally generated questions (what we today call “pure” mathematics).

U&S do not by any means deny the power and effectiveness of mathematics. But they also remind us that precisely what makes it so useful and general — its abstraction from the particularities of the world, and specifically its inability to deal with temporal asymmetries (mathematical equations in fundamental physics are time-symmetric, and asymmetries have to be imported as externally imposed background conditions) — also makes it subordinate to empirical science when it comes to understanding the one real world.

This empiricist reading of mathematics offers a refreshing respite to the resurgence of a certain Idealism in some continental circles (perhaps most interestingly spearheaded by Quentin Meillassoux). I’ve heard mention a few times now that the various factions squaring off within continental philosophy’s avant garde can be roughly approximated as a renewed encounter between Kantian finitude and Hegelian absolutism. It’s probably a bit too stark of a binary, but there’s a sense in which the stakes of these arguments really do center on the ontological status of mathematics in the natural world. It’s not a direct focus of my own research interests, really, but it’s a fascinating set of questions nonetheless.

On pi day, how scientists use this number (Science Daily)

Date: March 12, 2015

Source: NASA/Jet Propulsion Laboratory

Summary: If you like numbers, you will love March 14, 2015. When written as a numerical date, it’s 3/14/15, corresponding to the first five digits of pi (3.1415) — a once-in-a-century coincidence! Pi Day, which would have been the 136th birthday of Albert Einstein, is a great excuse to eat pie, and to appreciate how important the number pi is to math and science.

Take JPL Education’s Pi Day challenge featuring real-world questions about NASA spacecraft — then tweet your answers to @NASAJPL_Edu using the hashtag #PiDay. Answers will be revealed on March 16. Credit: NASA/JPL-Caltech

If you like numbers, you will love March 14, 2015. When written as a numerical date, it’s 3/14/15, corresponding to the first five digits of pi (3.1415) — a once-in-a-century coincidence! Pi Day, which would have been the 136th birthday of Albert Einstein, is a great excuse to eat pie, and to appreciate how important the number pi is to math and science.

Pi is the ratio of circumference to diameter of a circle. Any time you want to find out the distance around a circle when you have the distance across it, you will need this formula.

Despite its frequent appearance in math and science, you can’t write pi as a simple fraction or calculate it by dividing two integers (…3, -2, -1, 0, 1, 2, 3…). For this reason, pi is said to be “irrational.” Pi’s digits extend infinitely and without any pattern, adding to its intrigue and mystery.

Pi is useful for all kinds of calculations involving the volume and surface area of spheres, as well as for determining the rotations of circular objects such as wheels. That’s why pi is important for scientists who work with planetary bodies and the spacecraft that visit them.

At NASA’s Jet Propulsion Laboratory, Pasadena, California, pi makes a frequent appearance. It’s a staple for Marc Rayman, chief engineer and mission director for NASA’s Dawn spacecraft. Dawn went into orbit around dwarf planet Ceres on March 6. Rayman uses a formula involving pi to calculate the length of time it takes the spacecraft to orbit Ceres at any given altitude. You can also use pi to think about Earth’s rotation.

“On Pi Day, I will think about the nature of a day, as Earth’s rotation on its axis carries me on a circle 21,000 miles (34,000 kilometers) in circumference, which I calculated using pi and my latitude,” Rayman said.
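
Rayman's figure is easy to check. Assuming JPL's latitude of roughly 34°N and a mean Earth radius of about 3,959 miles (both values are my assumptions, not given in the article), the circle traced by a point at that latitude works out as follows:

```python
import math

earth_radius_miles = 3959          # mean Earth radius (assumed)
latitude_deg = 34.2                # approximate latitude of JPL, Pasadena (assumed)

# Circumference of the circle of latitude: 2 * pi * R * cos(latitude)
circumference = 2 * math.pi * earth_radius_miles * math.cos(math.radians(latitude_deg))
print(f"{circumference:,.0f} miles")   # roughly 20,600 miles, consistent with "21,000 miles"
```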

Steve Vance, a planetary chemist and astrobiologist at JPL, also frequently uses pi. Lately, he has been using pi in his calculations of how much hydrogen might be available for chemical processes, and possibly biology, in the ocean beneath the surface of Jupiter’s moon Europa.

“To calculate the hydrogen produced in a given unit area, we divide by Europa’s surface area, which is the area of a sphere with a radius of 970 miles (1,561 kilometers),” Vance said.
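
Vance's divisor is just the surface area of a sphere, 4πr², with Europa's radius as given above:

```python
import math

europa_radius_km = 1561            # Europa's radius, as quoted in the article
surface_area_km2 = 4 * math.pi * europa_radius_km ** 2
print(f"{surface_area_km2:.2e} km^2")   # about 3.1e7 square kilometres
```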

Luisa Rebull, a research scientist at NASA’s Spitzer Science Center at the California Institute of Technology, Pasadena, also considers pi to be important in astronomy. When calculating the distance between stars in a projection of the sky, scientists use a special kind of geometry called spherical trigonometry. That’s an extension of the geometry you probably learned in middle school, but it takes place on a sphere rather than a flat plane.

“In order to do these calculations, we need to use formulae, the derivation of which uses pi,” she said. “So, this is pi in the sky!”
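
One standard spherical-trigonometry relation of the kind she has in mind (the article does not give the specific formulas) is the angular separation θ between two stars with right ascensions α₁, α₂ and declinations δ₁, δ₂:

\[
\cos\theta = \sin\delta_1 \sin\delta_2 + \cos\delta_1 \cos\delta_2 \cos(\alpha_1 - \alpha_2).
\]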

Make sure to note when the date and time spell out the first 10 digits of pi: 3.141592653. On 3/14/15 at 9:26:53 a.m., it is literally the most perfectly “pi” time of the century — so grab a slice of your favorite pie, and celebrate math!

For more fun with pi, check out JPL Education’s second annual Pi Day challenge, featuring real-world NASA math problems. NASA/JPL education specialists, with input from scientists and engineers, have crafted questions involving pi aimed at students in grades 4 through 11, but open to everyone. Take a crack at them at:

http://www.jpl.nasa.gov/infographics/infographic.view.php?id=11257

Share your answers on Twitter by tweeting to @NASAJPL_Edu with the hashtag #PiDay. Answers will be revealed on March 16 (aka Pi + 2 Day!).

Resources for educators, including printable Pi Day challenge classroom handouts, are available at: www.jpl.nasa.gov/edu/piday2015

Caltech manages JPL for NASA.

Physics’s pangolin (AEON)

Trying to resolve the stubborn paradoxes of their field, physicists craft ever more mind-boggling visions of reality

by Margaret Wertheim

Illustration by Claire Scully

Margaret Wertheim is an Australian-born science writer and director of the Institute For Figuring in Los Angeles. Her latest book is Physics on the Fringe (2011).

Theoretical physics is beset by a paradox that remains as mysterious today as it was a century ago: at the subatomic level things are simultaneously particles and waves. Like the duck-rabbit illusion first described in 1899 by the Polish-born American psychologist Joseph Jastrow, subatomic reality appears to us as two different categories of being.

But there is another paradox in play. Physics itself is riven by the competing frameworks of quantum theory and general relativity, whose differing descriptions of our world eerily mirror the wave-particle tension. When it comes to the very big and the extremely small, physical reality appears to be not one thing, but two. Where quantum theory describes the subatomic realm as a domain of individual quanta, all jitterbug and jumps, general relativity depicts happenings on the cosmological scale as a stately waltz of smooth flowing space-time. General relativity is like Strauss — deep, dignified and graceful. Quantum theory, like jazz, is disconnected, syncopated, and dazzlingly modern.

Physicists are deeply aware of the schizophrenic nature of their science and long to find a synthesis, or unification. Such is the goal of a so-called ‘theory of everything’. However, to non-physicists, these competing lines of thought, and the paradoxes they entrain, can seem not just bewildering but absurd. In my experience as a science writer, no other scientific discipline elicits such contradictory responses.

In string cosmology, the totality of existing universes exceeds the number of particles in our universe by more than 400 orders of magnitude

This schism was brought home to me starkly some months ago when, in the course of a fortnight, I happened to participate in two public discussion panels, one with a cosmologist at Caltech, Pasadena, the other with a leading literary studies scholar from the University of Southern California. On the panel with the cosmologist, a researcher whose work I admire, the discussion turned to time, about which he had written a recent, and splendid, book. Like philosophers, physicists have struggled with the concept of time for centuries, but now, he told us, they had locked it down mathematically and were on the verge of a final state of understanding. In my Caltech friend’s view, physics is a progression towards an ever more accurate and encompassing Truth. My literary theory panellist was having none of this. A Lewis Carroll scholar, he had joined me for a discussion about mathematics in relation to literature, art and science. For him, maths was a delightful form of play, a ludic formalism to be admired and enjoyed; but any claims physicists might make about truth in their work were, in his view, ‘nonsense’. This mathematically based science, he said, was just ‘another kind of storytelling’.

On the one hand, then, physics is taken to be a march toward an ultimate understanding of reality; on the other, it is seen as no different in status to the understandings handed down to us by myth, religion and, no less, literary studies. Because I spend my time about equally in the realms of the sciences and arts, I encounter a lot of this dualism. Depending on whom I am with, I find myself engaging in two entirely different kinds of conversation. Can we all be talking about the same subject?

Many physicists are Platonists, at least when they talk to outsiders about their field. They believe that the mathematical relationships they discover in the world about us represent some kind of transcendent truth existing independently from, and perhaps a priori to, the physical world. In this way of seeing, the universe came into being according to a mathematical plan, what the British physicist Paul Davies has called ‘a cosmic blueprint’. Discovering this ‘plan’ is a goal for many theoretical physicists and the schism in the foundation of their framework is thus intensely frustrating. It’s as if the cosmic architect has designed a fiendish puzzle in which two apparently incompatible parts must be fitted together. Both are necessary, for both theories make predictions that have been verified to a dozen or so decimal places, and it is on the basis of these theories that we have built such marvels as microchips, lasers, and GPS satellites.

Quite apart from the physical tensions that exist between them, relativity and quantum theory each pose philosophical problems. Are space and time fundamental qualities of the universe, as general relativity suggests, or are they byproducts of something even more basic, something that might arise from a quantum process? Looking at quantum mechanics, huge debates swirl around the simplest situations. Does the universe split into multiple copies of itself every time an electron changes orbit in an atom, or every time a photon of light passes through a slit? Some say yes, others say absolutely not.

Theoretical physicists can’t even agree on what the celebrated waves of quantum theory mean. What is doing the ‘waving’? Are the waves physically real, or are they just mathematical representations of probability distributions? Are the ‘particles’ guided by the ‘waves’? And, if so, how? The dilemma posed by wave-particle duality is the tip of an epistemological iceberg on which many ships have been broken and wrecked.

Undeterred, some theoretical physicists are resorting to increasingly bold measures in their attempts to resolve these dilemmas. Take the ‘many-worlds’ interpretation of quantum theory, which proposes that every time a subatomic action takes place the universe splits into multiple, slightly different, copies of itself, with each new ‘world’ representing one of the possible outcomes.

When this idea was first proposed in 1957 by the American physicist Hugh Everett, it was considered an almost lunatic-fringe position. Even 20 years later, when I was a physics student, many of my professors thought it was a kind of madness to go down this path. Yet in recent years the many-worlds position has become mainstream. The idea of a quasi-infinite, ever-proliferating array of universes has been given further credence as a result of being taken up by string theorists, who argue that every mathematically possible version of the string theory equations corresponds to an actually existing universe, and estimate that there are 10 to the power of 500 different possibilities. To put this in perspective: physicists believe that in our universe there are approximately 10 to the power of 80 subatomic particles. In string cosmology, the totality of existing universes exceeds the number of particles in our universe by more than 400 orders of magnitude.

Nothing in our experience compares to this unimaginably vast number. Every universe that can be mathematically imagined within the string parameters — including ones in which you exist with a prehensile tail, to use an example given by the American string theorist Brian Greene — is said to be manifest somewhere in a vast supra-spatial array ‘beyond’ the space-time bubble of our own universe.

What is so epistemologically daring here is that the equations are taken to be the fundamental reality. The fact that the mathematics allows for gazillions of variations is seen to be evidence for gazillions of actual worlds.

Perhaps what we are encountering here is not so much the edge of reality, but the limits of the physicists’ category system

This kind of reification of equations is precisely what strikes some humanities scholars as childishly naive. At the very least, it raises serious questions about the relationship between our mathematical models of reality, and reality itself. While it is true that in the history of physics many important discoveries have emerged from revelations within equations — Paul Dirac’s formulation for antimatter being perhaps the most famous example — one does not need to be a cultural relativist to feel sceptical about the idea that the only way forward now is to accept an infinite cosmic ‘landscape’ of universes that embrace every conceivable version of world history, including those in which the Middle Ages never ended or Hitler won.

In the 30 years since I was a student, physicists’ interpretations of their field have increasingly tended toward literalism, while the humanities have tilted towards postmodernism. Thus a kind of stalemate has ensued. Neither side seems inclined to contemplate more nuanced views. It is hard to see ways out of this tunnel, but in the work of the late British anthropologist Mary Douglas I believe we can find a tool for thinking about some of these questions.

On the surface, Douglas’s great book Purity and Danger (1966) would seem to have nothing to do with physics; it is an inquiry into the nature of dirt and cleanliness in cultures across the globe. Douglas studied taboo rituals that deal with the unclean, but her book ends with a far-reaching thesis about human language and the limits of all language systems. Given that physics is couched in the language-system of mathematics, her argument is worth considering here.

In a nutshell, Douglas notes that all languages parse the world into categories; in English, for instance, we call some things ‘mammals’ and other things ‘lizards’ and have no trouble recognising the two separate groups. Yet there are some things that do not fit neatly into either category: the pangolin, or scaly anteater, for example. Though pangolins are warm-blooded like mammals and birth their young, they have armoured bodies like some kind of bizarre lizard. Such definitional monstrosities are not just a feature of English. Douglas notes that all category systems contain liminal confusions, and she proposes that such ambiguity is the essence of what is seen to be impure or unclean.

Whatever doesn’t parse neatly in a given linguistic system can become a source of anxiety to the culture that speaks this language, calling forth special ritual acts whose function, Douglas argues, is actually to acknowledge the limits of language itself. In the Lele culture of the Congo, for example, this epistemological confrontation takes place around a special cult of the pangolin, whose initiates ritualistically eat the abominable animal, thereby sacralising it and processing its ‘dirt’ for the entire society.

‘Powers are attributed to any structure of ideas,’ Douglas writes. We all tend to think that our categories of understanding are necessarily real. ‘The yearning for rigidity is in us all,’ she continues. ‘It is part of our human condition to long for hard lines and clear concepts’. Yet when we have them, she says, ‘we have to either face the fact that some realities elude them, or else blind ourselves to the inadequacy of the concepts’. It is not just the Lele who cannot parse the pangolin: biologists are still arguing about where it belongs on the genetic tree of life.

As Douglas sees it, cultures themselves can be categorised in terms of how well they deal with linguistic ambiguity. Some cultures accept the limits of their own language, and of language itself, by understanding that there will always be things that cannot be cleanly parsed. Others become obsessed with ever-finer levels of categorisation as they try to rid their system of every pangolin-like ‘duck-rabbit’ anomaly. For such societies, Douglas argues, a kind of neurosis ensues, as the project of categorisation takes ever more energy and mental effort. If we take this analysis seriously, then, in Douglas’ terms, might it be that particle-waves are our pangolins? Perhaps what we are encountering here is not so much the edge of reality, but the limits of the physicists’ category system.

In its modern incarnation, physics is grounded in the language of mathematics. It is a so-called ‘hard’ science, a term meant to imply that physics is unfuzzy — unlike, say, biology whose classification systems have always been disputed. Based in mathematics, the classifications of physicists are supposed to have a rigour that other sciences lack, and a good deal of the near-mystical discourse that surrounds the subject hinges on ideas about where the mathematics ‘comes from’.

According to Galileo Galilei and other instigators of what came to be known as the Scientific Revolution, nature was ‘a book’ that had been written by God, who had used the language of mathematics because it was seen to be Platonically transcendent and timeless. While modern physics is no longer formally tied to Christian faith, its long association with religion lingers in the many references that physicists continue to make about ‘the mind of God’, and many contemporary proponents of a ‘theory of everything’ remain Platonists at heart.

It’s a startling thought, in an age when we can read the speed of our cars from our digitised dashboards, that somebody had to discover ‘velocity’

In order to articulate a more nuanced conception of what physics is, we need to offer an alternative to Platonism. We need to explain how the mathematics ‘arises’ in the world, in ways other than assuming that it was put there by some kind of transcendent being or process. To approach this question dispassionately, it is necessary to abandon the beautiful but loaded metaphor of the cosmic book — and all its authorial resonances — and focus not on the creation of the world, but on the creation of physics as a science.

When we say that ‘mathematics is the language of physics’, we mean that physicists consciously comb the world for patterns that are mathematically describable; these patterns are our ‘laws of nature’. Since mathematical patterns proceed from numbers, much of the physicist’s task involves finding ways to extract numbers from physical phenomena. In the 16th and 17th centuries, philosophical discussion referred to this as the process of ‘quantification’; today we call it measurement. One way of thinking about modern physics is as an ever more sophisticated process of quantification that multiplies and diversifies the ways we extract numbers from the world, thus giving us the raw material for our quest for patterns or ‘laws’. This is no trivial task. Indeed, the history of physics has turned on the question of what can be measured and how.

Stop for a moment and take a look around you. What do you think can be quantified? What colours and forms present themselves to your eye? Is the room bright or dark? Does the air feel hot or cold? Are birds singing? What other sounds do you hear? What textures do you feel? What odours do you smell? Which, if any, of these qualities of experience might be measured?

In the early 14th century, a group of scholarly monks known as the calculatores at the University of Oxford began to think about this problem. One of their interests was motion, and they were the first to recognise the qualities we now refer to as ‘velocity’ and ‘acceleration’ — the former being the rate at which a body changes position, the latter, the rate at which the velocity itself changes. It’s a startling thought, in an age when we can read the speed of our cars from our digitised dashboards, that somebody had to discover ‘velocity’.
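
In modern notation, the two quantities the calculatores identified are simply the first and second time derivatives of position:

\[
v = \frac{dx}{dt}, \qquad a = \frac{dv}{dt} = \frac{d^{2}x}{dt^{2}}.
\]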

Yet despite the calculatores’ advances, the science of kinematics made barely any progress until Galileo and his contemporaries took up the baton in the late-16th century. In the intervening time, the process of quantification had to be extracted from a burden of dreams in which it became, frankly, bogged down. For along with motion, the calculatores were also interested in qualities such as sin and grace, and they tried to find ways to quantify these as well. Between the calculatores and Galileo, students of quantification had to work out what they were going to exclude from the project. To put it bluntly, in order for the science of physics to get underway, the vision had to be narrowed.

How, exactly, this narrowing was to be achieved was articulated by the 17th-century French mathematician and philosopher René Descartes. What could a mathematically based science describe? Descartes’s answer was that the new natural philosophers must restrict themselves to studying matter in motion through space and time. Maths, he said, could describe the extended realm — or res extensa. Thoughts, feelings, emotions and moral consequences, he located in the ‘realm of thought’, or res cogitans, declaring them inaccessible to quantification, and thus beyond the purview of science. In making this distinction, Descartes did not divide mind from body (that had been done by the Greeks), he merely clarified the subject matter for a new physical science.

So what else apart from motion could be quantified? To a large degree, progress in physics has been made by slowly extending the range of answers. Take colour. At first blush, redness would seem to be an ineffable and irreducible quale. In the late 19th century, however, physicists discovered that each colour in the rainbow, when diffracted through a prism, corresponds to a different wavelength of light. Red light has a wavelength of around 700 nanometres, violet light around 400 nanometres. Colour can be correlated with numbers — both the wavelength and frequency of an electromagnetic wave. Here we have one half of our duality: the wave.
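
The correlation is a one-line calculation: frequency is the speed of light divided by wavelength. A small check for the two ends of the visible spectrum quoted above:

```python
c = 2.998e8                                    # speed of light in m/s
for name, wavelength_nm in [("red", 700), ("violet", 400)]:
    frequency_hz = c / (wavelength_nm * 1e-9)  # f = c / wavelength
    print(f"{name}: {wavelength_nm} nm -> {frequency_hz / 1e12:.0f} THz")
# red comes out near 430 THz, violet near 750 THz
```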

The discovery of electromagnetic waves was in fact one of the great triumphs of the quantification project. In the 1820s, Michael Faraday noticed that, if he sprinkled iron filings around a magnet, the fragments would spontaneously assemble into a pattern of lines that, he conjectured, were caused by a ‘magnetic field’. Physicists today accept fields as a primary aspect of nature but at the start of the Industrial Revolution, when philosophical mechanism was at its peak, Faraday’s peers scoffed. Invisible fields smacked of magic. Yet, later in the 19th century, James Clerk Maxwell showed that magnetic and electric fields were linked by a precise set of equations — today known as Maxwell’s Laws — that enabled him to predict the existence of radio waves. The quantification of these hitherto unsuspected aspects of our world — these hidden invisible ‘fields’ — has led to the whole gamut of modern telecommunications on which so much of modern life is now staged.

Turning to the other side of our duality – the particle – with a burgeoning array of electrical and magnetic equipment, physicists in the late 19th and early 20th centuries began to probe matter. They discovered that atoms were composed of parts holding positive and negative charge. The negative electrons were found to revolve around a positive nucleus in pairs, with each member of the pair in a slightly different state, or ‘spin’. Spin turns out to be a fundamental quality of the subatomic realm. Matter particles, such as electrons, have a spin value of one half. Particles of light, or photons, have a spin value of one. In short, one of the qualities that distinguishes ‘matter’ from ‘energy’ is the spin value of its particles.

We have seen how light acts like a wave, yet experiments over the past century have shown that under many conditions it behaves instead like a stream of particles. In the photoelectric effect (the explanation of which won Albert Einstein his Nobel Prize in 1921), individual photons knock electrons out of their atomic orbits. In Thomas Young’s famous double-slit experiment of 1805, light behaves simultaneously like waves and particles. Here, a stream of detectably separate photons is mysteriously guided by a wave whose effect becomes manifest over a long period of time. What is the source of this wave and how does it influence billions of isolated photons separated by great stretches of time and space? The late Nobel laureate Richard Feynman — a pioneer of quantum field theory — stated in 1965 that the double-slit experiment lay at ‘the heart of quantum mechanics’. Indeed, physicists have been debating how to interpret its proof of light’s duality for the past 200 years.

Just as waves of light sometimes behave like particles of matter, particles of matter can sometimes behave like waves. In many situations, electrons are clearly particles: we fire them from electron guns inside the cathode-ray tubes of old-fashioned TV sets and each electron that hits the screen causes a tiny phosphor to glow. Yet, in orbiting around atoms, electrons behave like three-dimensional waves. Electron microscopes put the wave-quality of these particles to work; here, in effect, they act like short wavelengths of light.

Physics is not just another story about the world: it is a qualitatively different kind of story to those told in the humanities, in myths and religions

Wave-particle duality is a core feature of our world. Or rather, we should say, it is a core feature of our mathematical descriptions of our world. The duck-rabbits are everywhere, colonising the imagery of physicists like, well, rabbits. But what is critical to note here is that however ambiguous our images, the universe itself remains whole and is manifestly not fracturing into schizophrenic shards. It is this tantalising wholeness in the thing itself that drives physicists onward, like an eternally beckoning light that seems so teasingly near yet is always out of reach.

Instrumentally speaking, the project of quantification has led physicists to powerful insights and practical gain: the computer on which you are reading this article would not exist if physicists hadn’t discovered the equations that describe the band-gaps in semiconducting materials. Microchips, plasma screens and cellphones are all byproducts of quantification and, every decade, physicists identify new qualities of our world that are amenable to measurement, leading to new technological possibilities. In this sense, physics is not just another story about the world: it is a qualitatively different kind of story to those told in the humanities, in myths and religions. No language other than maths is capable of expressing interactions between particle spin and electromagnetic field strength. The physicists, with their equations, have shown us new dimensions of our world.

That said, we should be wary of claims about ultimate truth. While quantification, as a project, is far from complete, it is an open question as to what it might ultimately embrace. Let us look again at the colour red. Red is not just an electromagnetic phenomenon; it is also a perceptual and contextual phenomenon. Stare for a minute at a green square, then look away: you will see an afterimage of a red square. No red light has been presented to your eyes, yet your brain will perceive a vivid red shape. As Goethe argued in the late 18th century, and Edwin Land (who invented Polaroid film in 1932) echoed, colour cannot be reduced to purely prismatic effects. It exists as much in our minds as in the external world. To put this into a personal context, no understanding of the electromagnetic spectrum will help me to understand why certain shades of yellow make me nauseous, while electric orange fills me with joy.

Descartes was no fool; by parsing reality into the res extensa and res cogitans he captured something critical about human experience. You do not need to be a hard-core dualist to imagine that subjective experience might not be amenable to mathematical law. For Douglas, ‘the attempt to force experience into logical categories of non-contradiction’ is the ‘final paradox’ of an obsessive search for purity. ‘But experience is not amenable [to this narrowing],’ she insists, and ‘those who make the attempt find themselves led into contradictions.’

Quintessentially, the qualities that are amenable to quantification are those that are shared. All electrons are essentially the same: given a set of physical circumstances, every electron will behave like any other. But humans are not like this. It is our individuality that makes us so infuriatingly human, and when science attempts to reduce us to the status of electrons it is no wonder that professors of literature scoff.

Douglas’s point about attempting to corral experience into logical categories of non-contradiction has obvious application to physics, particularly to recent work on the interface between quantum theory and relativity. One of the most mysterious findings of quantum science is that two or more subatomic particles can be ‘entangled’. Once particles are entangled, what we do to one immediately affects the other, even if the particles are hundreds of kilometres apart. Yet this contradicts a basic premise of special relativity, which states that no signal can travel faster than the speed of light. Entanglement suggests that either quantum theory or special relativity, or both, will have to be rethought.
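
To make ‘entangled’ a little more concrete, here is the simplest textbook case (an illustration, not drawn from the essay itself). Two photons can be prepared in the joint polarisation state

|Ψ⟩ = ( |HH⟩ + |VV⟩ ) / √2

Neither photon has a definite polarisation of its own, yet the moment one is measured and found to be horizontal (H), the other is certain to be found horizontal too – and likewise for vertical (V) – however far apart the pair have travelled. The correlation appears instantly in the formalism, no matter how great the separation.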

More challenging still, consider what might happen if we tried to send two entangled photons to two separate satellites orbiting in space, as a team of Chinese physicists, working with the entanglement theorist Anton Zeilinger, is currently hoping to do. Here the situation is compounded by the fact that what happens in near-Earth orbit is affected by both special and general relativity. The details are complex, but suffice it to say that special relativity suggests that the motion of the satellites will cause time to appear to slow down, while the effect of the weaker gravitational field in space should cause time to speed up. Given this, it is impossible to say which of the photons would be received first at which satellite. To an observer on the ground, both photons should appear to arrive at the same time. Yet to an observer on satellite one, the photon at satellite two should appear to arrive first, while to an observer on satellite two the photon at satellite one should appear to arrive first. We are in a mire of contradiction and no one knows what would in fact happen here. If the Chinese experiment goes ahead, we might find that some radical new physics is required.
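
To give a sense of the sizes of the two competing effects, here is a back-of-the-envelope sketch in Python. It assumes a circular orbit at an illustrative altitude of 500 km, ignores the Earth’s rotation and all higher-order corrections, and is emphatically not a model of the proposed experiment – just the standard first-order special- and general-relativistic clock rates.

# Rough comparison of the two relativistic effects on a satellite clock,
# relative to a clock on the ground. Illustrative only: circular orbit,
# non-rotating Earth, first-order approximations.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
c = 2.998e8     # speed of light, m/s
R = 6.371e6     # radius of the Earth, m
h = 500e3       # assumed orbital altitude, m (low-Earth orbit)
r = R + h       # orbital radius, m

# Special relativity: orbital speed makes the satellite clock run slow.
v_squared = G * M / r                # v^2 for a circular orbit
sr_rate = -v_squared / (2 * c**2)    # fractional rate change per second

# General relativity: weaker gravity aloft makes the satellite clock run fast.
gr_rate = (G * M / c**2) * (1 / R - 1 / r)

day = 86400  # seconds
print(f"Special relativity: {sr_rate * day * 1e6:+.1f} microseconds per day")
print(f"General relativity: {gr_rate * day * 1e6:+.1f} microseconds per day")
print(f"Net effect:         {(sr_rate + gr_rate) * day * 1e6:+.1f} microseconds per day")

At this low altitude the speed effect wins and the orbiting clock falls behind by a couple of dozen microseconds a day; for higher orbits, such as those of GPS satellites, the gravitational effect dominates and the clock runs ahead. Either way, both theories are in play at once, which is part of what makes the satellite experiment so delicate.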

To say that every possible version of their equations must be materially manifest strikes me as a kind of berserk literalism

You will notice that the ambiguity in these examples focuses on the issue of time – as do many paradoxes relating to relativity and quantum theory. Time is indeed a huge conundrum throughout physics, and paradoxes surround it at many levels of being. In Time Reborn: From the Crisis in Physics to the Future of the Universe (2013), the American physicist Lee Smolin argues that for 400 years physicists have been thinking about time in ways that are fundamentally at odds with human experience and therefore wrong. In order to extricate ourselves from some of the deepest paradoxes in physics, he says, its very foundations must be reconceived. In an op-ed in New Scientist in April this year, Smolin wrote:
The idea that nature consists fundamentally of atoms with immutable properties moving through unchanging space, guided by timeless laws, underlies a metaphysical view in which time is absent or diminished. This view has been the basis for centuries of progress in science, but its usefulness for fundamental physics and cosmology has come to an end.

In order to resolve the contradictions between how physicists describe time and how we experience time, Smolin says physicists must abandon the notion of time as an unchanging ideal and embrace an evolutionary concept of natural laws.

This is radical stuff, and Smolin is well-known for his contrarian views — he has been an outspoken critic of string theory, for example. But at the heart of his book is a worthy idea: Smolin is against the reflexive reification of equations. As our mathematical descriptions of time are so starkly in conflict with our lived experience of time, it is our descriptions that will have to change, he says.

To put this into Douglas’s terms, the powers that have been attributed to physicists’ structure of ideas have been overreaching. ‘Attempts to force experience into logical categories of non-contradiction’ have, she would say, inevitably failed. From the contemplation of wave-particle pangolins we have been led to the limits of the linguistic system of physicists. Like Smolin, I have long believed that the ‘block’ conception of time that physics proposes is inadequate, and I applaud this thrilling, if at times highly speculative, book. Yet, if we can fix the current system by reinventing its axioms, then (assuming that Douglas is correct) even the new system will contain its own pangolins.

In the early days of quantum mechanics, Niels Bohr liked to say that we might never know what ‘reality’ is – the universe was, in John Wheeler’s later coinage, ‘a great smoky dragon’ – and claimed that all we could do with our science was to create ever more predictive models. Bohr’s positivism has gone out of fashion among theoretical physicists, replaced by an increasingly hard-core Platonism. To say, as some string theorists do, that every possible version of their equations must be materially manifest strikes me as a kind of berserk literalism, reminiscent of the Ptolemaic astronomers who held that every mathematical epicycle in their descriptive apparatus must correspond to a physically manifest cosmic gear.

We are veering here towards Douglas’s view of neurosis. Will we accept, at some point, that there are limits to the quantification project, just as there are to all taxonomic schemes? Or will we be drawn into ever more complex and expensive quests – CERN mark two, Hubble, the sequel – as we try to root out every lingering paradox? In Douglas’s view, ambiguity is an inherent feature of language that we must face up to, at some point, or drive ourselves to distraction.

3 June 2013