Tag archive: Mathematics

Full-scale architecture for a quantum computer in silicon (Science Daily)

Scalable 3-D silicon chip architecture based on single atom quantum bits provides a blueprint to build operational quantum computers

Date:
October 30, 2015
Source:
University of New South Wales
Summary:
Researchers have designed a full-scale architecture for a quantum computer in silicon. The new concept provides a pathway for building an operational quantum computer with error correction.

This picture shows, from left to right, Dr Matthew House, Sam Hile (seated), Scientia Professor Sven Rogge and Scientia Professor Michelle Simmons of the ARC Centre of Excellence for Quantum Computation and Communication Technology at UNSW. Credit: Deb Smith, UNSW Australia

Australian scientists have designed a 3D silicon chip architecture based on single atom quantum bits, which is compatible with atomic-scale fabrication techniques — providing a blueprint to build a large-scale quantum computer.

Scientists and engineers from the Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology (CQC2T), headquartered at the University of New South Wales (UNSW), are leading the world in the race to develop a scalable quantum computer in silicon — a material well-understood and favoured by the trillion-dollar computing and microelectronics industry.

Teams led by UNSW researchers have already demonstrated a unique fabrication strategy for realising atomic-scale devices and have developed the world’s most efficient quantum bits in silicon using either the electron or nuclear spins of single phosphorus atoms. Quantum bits — or qubits — are the fundamental data components of quantum computers.

One of the final hurdles to scaling up to an operational quantum computer is the architecture. Here it is necessary to figure out how to precisely control multiple qubits in parallel, across an array of many thousands of qubits, and constantly correct for ‘quantum’ errors in calculations.

Now, the CQC2T collaboration, involving theoretical and experimental researchers from the University of Melbourne and UNSW, has designed such a device. In a study published today in Science Advances, the CQC2T team describes a new silicon architecture, which uses atomic-scale qubits aligned to control lines — which are essentially very narrow wires — inside a 3D design.

“We have demonstrated we can build devices in silicon at the atomic-scale and have been working towards a full-scale architecture where we can perform error correction protocols — providing a practical system that can be scaled up to larger numbers of qubits,” says UNSW Scientia Professor Michelle Simmons, study co-author and Director of the CQC2T.

“The great thing about this work, and architecture, is that it gives us an endpoint. We now know exactly what we need to do in the international race to get there.”

In the team’s conceptual design, they have moved from a one-dimensional array of qubits, positioned along a single line, to a two-dimensional array, positioned on a plane that is far more tolerant to errors. This qubit layer is “sandwiched” in a three-dimensional architecture, between two layers of wires arranged in a grid.

By applying voltages to a sub-set of these wires, multiple qubits can be controlled in parallel, performing a series of operations using far fewer controls. Importantly, with their design, they can perform the 2D surface code error correction protocols in which any computational errors that creep into the calculation can be corrected faster than they occur.
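The press release does not spell out the addressing scheme in detail, but the general idea of crossbar-style control can be sketched as a toy example (my own illustration, not the CQC2T architecture itself): qubits sit at the crossings of a grid of wires, so an N x N array needs only 2N control lines, and energising a subset of rows and columns addresses many qubits in parallel.

```python
# Toy sketch of crossbar addressing (an assumption for illustration, not the
# published design): a qubit is addressed when both its row line and its
# column line are active.
N = 8
active_rows = {2, 3}
active_cols = {1, 4, 5}

addressed = [(r, c) for r in range(N) for c in range(N)
             if r in active_rows and c in active_cols]
print(len(addressed), "qubits addressed using",
      len(active_rows) + len(active_cols), "control lines out of", 2 * N)
# -> 6 qubits addressed using 5 control lines out of 16
```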

“Our Australian team has developed the world’s best qubits in silicon,” says University of Melbourne Professor Lloyd Hollenberg, Deputy Director of the CQC2T who led the work with colleague Dr Charles Hill. “However, to scale up to a full operational quantum computer we need more than just many of these qubits — we need to be able to control and arrange them in such a way that we can correct errors quantum mechanically.”

“In our work, we’ve developed a blueprint that is unique to our system of qubits in silicon, for building a full-scale quantum computer.”

In their paper, the team proposes a strategy to build the device, which leverages the CQC2T’s internationally unique capability of atomic-scale device fabrication. They have also modelled the voltages that must be applied to the grid wires to address individual qubits and make the processor work.

“This architecture gives us the dense packing and parallel operation essential for scaling up the size of the quantum processor,” says Scientia Professor Sven Rogge, Head of the UNSW School of Physics. “Ultimately, the structure is scalable to millions of qubits, required for a full-scale quantum processor.”

Background

In classical computers, data is rendered as binary bits, which are always in one of two states: 0 or 1. However, a qubit can exist in both of these states at once, a condition known as a superposition. A qubit operation exploits this quantum weirdness by allowing many computations to be performed in parallel (a two-qubit system performs the operation on 4 values, a three-qubit system on 8, and so on).
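The doubling described here is just the exponential growth of the quantum state space; a minimal sketch in Python (an illustration only, not anything from the study) makes the 2, 4, 8 pattern explicit.

```python
import numpy as np

# Minimal sketch: an n-qubit register is described by a vector of 2**n complex
# amplitudes, so a single operation on the register acts on all 2**n values at once.
def uniform_superposition(n_qubits):
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # equal weight on every basis state

for n in (1, 2, 3):
    print(n, "qubit(s):", uniform_superposition(n).size, "amplitudes")
# 1 qubit(s): 2 amplitudes, 2 qubit(s): 4, 3 qubit(s): 8, matching the text
```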

As a result, quantum computers will far exceed today’s most powerful supercomputers, and offer enormous advantages for a range of complex problems, such as rapidly scouring vast databases, modelling financial markets, optimising huge metropolitan transport networks, and modelling complex biological molecules.

How to build a quantum computer in silicon https://youtu.be/zo1q06F2sbY

Warming could triple drought in the Amazon (Observatório do Clima)

October 15, 2015

Drought in Silves, Amazonas state, in 2005. Photo: Ana Cintia Gazzelli/WWF

Computer models suggest the eastern Amazon, which holds most of the forest, would see more dry spells, fires and tree death, while the west would become rainier.

Climate change could increase the frequency of both droughts and extreme rainfall in the Amazon before mid-century, combining with deforestation to cause massive tree die-offs, fires and carbon emissions. That is the conclusion of an assessment of 35 climate models applied to the region, carried out by researchers from the US and Brazil.

According to the study, led by Philip Duffy of the WHRC (Woods Hole Research Center, in the US) and Stanford University, the area affected by extreme droughts in the eastern Amazon, the region that encompasses most of the forest, could triple by 2100. Paradoxically, the frequency of extremely rainy periods, and the area subject to extreme rainfall, tends to grow across the whole region after 2040 – even in places where mean annual precipitation decreases.

The western Amazon, in particular Peru and Colombia, is expected to see an increase in mean annual precipitation.

A shift in rainfall patterns is a long-theorized effect of global warming. With more energy in the atmosphere and more water vapor, resulting from greater evaporation of the oceans, the tendency is for climate extremes to be amplified. Rainy seasons – in the Amazon, the southern-hemisphere summer, which locals call “winter” – become shorter, but the rain falls more intensely.

However, the forest’s response to these changes has been a matter of controversy among scientists. Studies from the 1990s proposed that the Amazon’s reaction would be widespread “savannization”: the die-off of large trees and the transformation of vast stretches of the forest into an impoverished savanna.

Other studies, however, indicated that the extra heat and CO2 would have the opposite effect, making trees grow more and fix more carbon, offsetting any losses from drought. On average, therefore, the impact of global warming on the Amazon would be relatively small.

As it happens, the Amazon itself took charge of giving scientists hints of how it would react. In 2005, 2007 and 2010, the forest went through historic droughts. The result was widespread tree mortality and fires in primary forest across more than 85,000 square kilometers. Duffy’s group, which also includes Paulo Brando of Ipam (the Amazon Environmental Research Institute), estimates that 1% to 2% of the Amazon’s carbon was released into the atmosphere as a result of the droughts of the 2000s. Brando and colleagues at Ipam had already shown that the Amazon is becoming more flammable, probably owing to the combined effects of climate and deforestation.

The researchers simulated the region’s future climate using models from the so-called CMIP5 project, used by the IPCC (Intergovernmental Panel on Climate Change) in its latest global climate assessment report. One member of the group, Chris Field of Stanford, was one of the report’s coordinators – he was also a candidate for the IPCC presidency in the election held last week, losing to South Korea’s Hoesung Lee.

The computer models were run under the worst-case emissions scenario, known as RCP 8.5, which assumes that little will be done to curb greenhouse gas emissions.

The models not only captured well the influence of Atlantic and Pacific ocean temperatures on Amazon rainfall patterns – differences between the two oceans explain why the eastern Amazon will become drier and the west wetter – but also reproduced, in their simulations of future drought, a feature of the record droughts of 2005 and 2010: the far north of the Amazon saw a large increase in rainfall while the center and south were parched.

According to the researchers, the study may even be conservative, since it only took variations in precipitation into account. “For example, rainfall in the eastern Amazon depends strongly on evapotranspiration, so a reduction in tree cover could reduce precipitation,” Duffy and Brando wrote. “This suggests that if processes related to land-use change were better represented in the CMIP5 models, drought intensity could be greater than projected here.”

The study was published in PNAS, the journal of the US National Academy of Sciences. (Observatório do Clima/ #Envolverde)

* Originally published on the Observatório do Clima website.

‘Targeted punishments’ against countries could tackle climate change (Science Daily)

Date:
August 25, 2015
Source:
University of Warwick
Summary:
Targeted punishments could provide a path to international climate change cooperation, new research in game theory has found.

This is a diagram of two possible strategies of targeted punishment studied in the paper. Credit: Royal Society Open Science

Targeted punishments could provide a path to international climate change cooperation, new research in game theory has found.

Conducted at the University of Warwick, the research suggests that in situations such as climate change, where everyone would be better off if everyone cooperated but it may not be individually advantageous to do so, the use of a strategy called ‘targeted punishment’ could help shift society towards global cooperation.

Despite the name, the ‘targeted punishment’ mechanism can apply to positive or negative incentives. The research argues that the key factor is that these incentives are not necessarily applied to everyone who may seem to deserve them. Rather, rules should be devised according to which only a small number of players are considered responsible at any one time.

The study’s author Dr Samuel Johnson, from the University of Warwick’s Mathematics Institute, explains: “It is well known that some form of punishment, or positive incentives, can help maintain cooperation in situations where almost everyone is already cooperating, such as in a country with very little crime. But when there are only a few people cooperating and many more not doing so, punishment can be too dilute to have any effect. In this regard, the international community is a bit like a failed state.”

The paper, published in Royal Society Open Science, shows that in situations of entrenched defection (non-cooperation), there exist strategies of ‘targeted punishment’ available to would-be punishers which can allow them to move a community towards global cooperation.

“The idea,” said Dr Johnson, “is not to punish everyone who is defecting, but rather to devise a rule whereby only a small number of defectors are considered at fault at any one time. For example, if you want to get a group of people to cooperate on something, you might arrange them on an imaginary line and declare that a person is liable to be punished if and only if the person to their left is cooperating while they are not. This way, those people considered at fault will find themselves under a lot more pressure than if responsibility were distributed, and cooperation can build up gradually as each person decides to fall in line when the spotlight reaches them.”
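A minimal simulation of this line rule (my own toy sketch, not the paper's model) shows how cooperation can spread player by player once the spotlight mechanism is in place.

```python
import random

# Toy sketch of the line rule described above: a player is "at fault" only if
# the neighbour to their left cooperates while they defect, and a player under
# that spotlight switches to cooperation with some probability each round.
def simulate(n_players=20, rounds=100, p_comply=0.5, seed=1):
    random.seed(seed)
    coop = [False] * n_players
    coop[0] = True                      # assume one committed cooperator starts the chain
    for _ in range(rounds):
        for i in range(1, n_players):
            at_fault = coop[i - 1] and not coop[i]
            if at_fault and random.random() < p_comply:
                coop[i] = True          # concentrated pressure brings this player in line
    return sum(coop)

print(simulate(), "of 20 players cooperating")   # cooperation spreads down the line
```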

For the case of climate change, the paper suggests that countries should be divided into groups, and these groups placed in some order — ideally, according roughly to their natural tendencies to cooperate. Governments would make commitments (to reduce emissions or leave fossil fuels in the ground, for instance) conditional on the performance of the group before them. This way, any combination of sanctions and positive incentives that other countries might be willing to impose would have a much greater effect.

“In the mathematical model,” said Dr Johnson, “the mechanism works best if the players are somewhat irrational. It seems a reasonable assumption that this might apply to the international community.”


Journal Reference:

  1. Samuel Johnson. Escaping the Tragedy of the Commons through Targeted Punishment. Royal Society Open Science, 2015. [link]

Stop burning fossil fuels now: there is no CO2 ‘technofix’, scientists warn (The Guardian)

Researchers have demonstrated that even if a geoengineering solution to CO2 emissions could be found, it wouldn’t be enough to save the oceans

“The chemical echo of this century’s CO2 pollution will reverberate for thousands of years,” said the report’s co-author, Hans Joachim Schellnhuber. Photograph: Doug Perrine/Design Pics/Corbis

German researchers have demonstrated once again that the best way to limit climate change is to stop burning fossil fuels now.

In a “thought experiment” they tried another option: the future dramatic removal of huge volumes of carbon dioxide from the atmosphere. This would, they concluded, return the atmosphere to the greenhouse gas concentrations that existed for most of human history – but it wouldn’t save the oceans.

That is, the oceans would stay warmer, and more acidic, for thousands of years, and the consequences for marine life could be catastrophic.

The research, published in Nature Climate Change today, delivers yet another demonstration that there is so far no feasible “technofix” that would allow humans to go on mining and drilling for coal, oil and gas (known as the “business as usual” scenario), and then geoengineer a solution when climate change becomes calamitous.

Sabine Mathesius (of the Helmholtz Centre for Ocean Research in Kiel and the Potsdam Institute for Climate Impact Research) and colleagues decided to model what could be done with an as-yet-unproven technology called carbon dioxide removal. One example would be to grow huge numbers of trees, burn them, trap the carbon dioxide, compress it and bury it somewhere. Nobody knows if this can be done, but Dr Mathesius and her fellow scientists didn’t worry about that.

They calculated that it might plausibly be possible to remove carbon dioxide from the atmosphere at the rate of 90 billion tons a year. This is twice what is spilled into the air from factory chimneys and motor exhausts right now.

The scientists hypothesised a world that went on burning fossil fuels at an accelerating rate – and then adopted an as-yet-unproven high technology carbon dioxide removal technique.

“Interestingly, it turns out that after ‘business as usual’ until 2150, even taking such enormous amounts of CO2 from the atmosphere wouldn’t help the deep ocean that much – after the acidified water has been transported by large-scale ocean circulation to great depths, it is out of reach for many centuries, no matter how much CO2 is removed from the atmosphere,” said a co-author, Ken Caldeira, who is normally based at the Carnegie Institution in the US.

The oceans cover 70% of the globe. By 2500, ocean surface temperatures would have increased by 5C (9F) and the chemistry of the ocean waters would have shifted towards levels of acidity that would make it difficult for fish and shellfish to flourish. Warmer waters hold less dissolved oxygen. Ocean currents, too, would probably change.

But while change happens in the atmosphere over tens of years, change in the ocean surface takes centuries, and in the deep oceans, millennia. So even if atmospheric temperatures were restored to pre-Industrial Revolution levels, the oceans would continue to experience climatic catastrophe.

“In the deep ocean, the chemical echo of this century’s CO2 pollution will reverberate for thousands of years,” said co-author Hans Joachim Schellnhuber, who directs the Potsdam Institute. “If we do not implement emissions reductions measures in line with the 2C (3.6F) target in time, we will not be able to preserve ocean life as we know it.”

Climate models are even more accurate than you thought (The Guardian)

The difference between modeled and observed global surface temperature changes is 38% smaller than previously thought

Looking across the frozen sea of Ullsfjord in Norway. Melting Arctic sea ice is one complicating factor in comparing modeled and observed surface temperatures. Photograph: Neale Clark/Robert Harding World Imagery/Corbis

Global climate models aren’t given nearly enough credit for their accurate global temperature change projections. As the 2014 IPCC report showed, observed global surface temperature changes have been within the range of climate model simulations.

Now a new study shows that the models were even more accurate than previously thought. In previous evaluations like the one done by the IPCC, climate model simulations of global surface air temperature were compared to global surface temperature observational records like HadCRUT4. However, over the oceans, HadCRUT4 uses sea surface temperatures rather than air temperatures.

A depiction of how global temperatures calculated from models use air temperatures above the ocean surface (right frame), while observations are based on the water temperature in the top few metres (left frame). Created by Kevin Cowtan.

Thus looking at modeled air temperatures and HadCRUT4 observations isn’t quite an apples-to-apples comparison for the oceans. As it turns out, sea surface temperatures haven’t been warming as fast as marine air temperatures, so this comparison introduces a bias that makes the observations look cooler than the model simulations. In reality, the comparisons weren’t quite correct. As lead author Kevin Cowtan told me,

We have highlighted the fact that the planet does not warm uniformly. Air temperatures warm faster than the oceans, air temperatures over land warm faster than global air temperatures. When you put a number on global warming, that number always depends on what you are measuring. And when you do a comparison, you need to ensure you are comparing the same things.

The model projections have generally reported global air temperatures. That’s quite helpful, because we generally live in the air rather than the water. The observations, by mixing air and water temperatures, are expected to slightly underestimate the warming of the atmosphere.

The new study addresses this problem by instead blending the modeled air temperatures over land with the modeled sea surface temperatures to allow for an apples-to-apples comparison. The authors also identified another challenging issue for these model-data comparisons in the Arctic. Over sea ice, surface air temperature measurements are used, but for open ocean, sea surface temperatures are used. As co-author Michael Mann notes, as Arctic sea ice continues to melt away, this is another factor that accurate model-data comparisons must account for.

One key complication that arises is that the observations typically extrapolate land temperatures over sea ice covered regions since the sea surface temperature is not accessible in that case. But the distribution of sea ice changes seasonally, and there is a long-term trend toward decreasing sea ice in many regions. So the observations actually represent a moving target.

A depiction of how as sea ice retreats, some grid cells change from taking air temperatures to taking water temperatures. If the two are not on the same scale, this introduces a bias. Created by Kevin Cowtan.

When accounting for these factors, the study finds that the difference between observed and modeled temperatures since 1975 is smaller than previously believed. The models had projected a 0.226°C per decade global surface air warming trend for 1975–2014 (and 0.212°C per decade over the geographic area covered by the HadCRUT4 record). However, when matching the HadCRUT4 methods for measuring sea surface temperatures, the modeled trend is reduced to 0.196°C per decade. The observed HadCRUT4 trend is 0.170°C per decade.
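A back-of-envelope check (my own reconstruction, not a calculation from the paper) suggests where the 38% figure quoted below comes from: compare the model-minus-observation gap before and after matching the HadCRUT4 blending method.

```python
# Rough arithmetic check, assuming the 38% compares the gap between models and
# observations before vs. after applying the HadCRUT4 blending method.
modeled_air     = 0.212   # °C/decade, models masked to HadCRUT4 coverage
modeled_blended = 0.196   # °C/decade, models using the HadCRUT4 blending method
observed        = 0.170   # °C/decade, HadCRUT4

gap_before = modeled_air - observed      # 0.042 °C/decade
gap_after  = modeled_blended - observed  # 0.026 °C/decade
print(f"reduction in gap: {1 - gap_after / gap_before:.0%}")  # ~38%
```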

So when doing an apples-to-apples comparison, the difference between modeled global temperature simulations and observations is 38% smaller than previous estimates. Additionally, as noted in a 2014 paper led by NASA GISS director Gavin Schmidt, less energy from the sun has reached the Earth’s surface than anticipated in these model simulations, both because solar activity declined more than expected, and volcanic activity was higher than expected. Ed Hawkins, another co-author of this study, wrote about this effect.

Combined, the apparent discrepancy between observations and simulations of global temperature over the past 15 years can be partly explained by the way the comparison is done (about a third), by the incorrect radiative forcings (about a third) and the rest is either due to climate variability or because the models are slightly over sensitive on average. But, the room for the latter effect is now much smaller.

Comparison of 84 climate model simulations (using RCP8.5) against HadCRUT4 observations (black), using either air temperatures (red line and shading) or blended temperatures using the HadCRUT4 method (blue line and shading). The upper panel shows anomalies derived from the unmodified climate model results, the lower shows the results adjusted to include the effect of updated forcings from Schmidt et al. (2014).

As Hawkins notes, the remaining discrepancy between modeled and observed temperatures may come down to climate variability; namely the fact that there has been a preponderance of La Niña events over the past decade, which have a short-term cooling influence on global surface temperatures. When there are more La Niñas, we expect temperatures to fall below the average model projection, and when there are more El Niños, we expect temperatures to be above the projection, as may be the case when 2015 breaks the temperature record.

We can’t predict changes in solar activity, volcanic eruptions, or natural ocean cycles ahead of time. If we want to evaluate the accuracy of long-term global warming model projections, we have to account for the difference between the simulated and observed changes in these factors. When the authors of this study did so, they found that climate models have very accurately projected the observed global surface warming trend.

In other words, as I discussed in my book and Denial101x lecture, climate models have proven themselves reliable in predicting long-term global surface temperature changes. In fact, even more reliable than I realized.

Denial101x climate science success stories lecture by Dana Nuccitelli.

There’s a common myth that models are unreliable, often based on apples-to-oranges comparisons, like looking at satellite estimates of temperatures higher in the atmosphere versus modeled surface air temperatures. Or, some contrarians like John Christy will only consider the temperature high in the atmosphere, where satellite estimates are less reliable, and where people don’t live.

This new study has shown that when we do an apples-to-apples comparison, climate models have done a good job projecting the observed temperatures where humans live. And those models predict that unless we take serious and immediate action to reduce human carbon pollution, global warming will continue to accelerate into dangerous territory.

New technique estimates crowd sizes by analyzing mobile phone activity (BBC Brasil)

June 3, 2015

Crowd at an airport | Photo: Getty

Researchers are looking for more efficient ways of measuring crowd sizes without relying on images

A study at a British university has developed a new way of estimating crowds at protests and other mass events: by analyzing geographic data from mobile phones and Twitter.

Researchers at the University of Warwick, in England, analyzed the geolocation of mobile phones and of Twitter messages over a two-month period in Milan, Italy.

At two locations with known visitor numbers – a football stadium and an airport – social media and mobile phone activity rose and fell in step with the flow of people.

The team said that, using this technique, it could take measurements at events such as protests.

Other researchers stressed that this kind of data has limitations – for example, only part of the population uses smartphones and Twitter, and not every part of a given space is well covered by phone towers.

But the study’s authors say the results are “an excellent starting point” for more such estimates – with greater precision – in the future.

“These numbers are calibration examples we can build on,” said study co-author Tobias Preis.

“Obviously it would be better to have examples from other countries, other settings, other moments. Human behavior is not uniform across the world, but this is a very good basis for obtaining initial estimates.”

The study, published in the scientific journal Royal Society Open Science, is part of an expanding field of research exploring what online activity can reveal about human behavior and other real-world phenomena.

Photo: F. Botta et al

Scientists compared official visitor figures for the airport and the stadium with Twitter and mobile phone activity

Federico Botta, the PhD student who led the analysis, said the phone-based methodology has important advantages over other methods of estimating crowd sizes, which usually rely on on-site observation or on images.

“This method is very fast and does not depend on human judgment. It relies only on the data coming from mobile phones or from Twitter activity,” he told the BBC.

Margin of error

With two months of mobile phone data supplied by Telecom Italia, Botta and his colleagues focused on Linate airport and the San Siro football stadium in Milan.

They compared the number of people known to be at those locations at any given time – based on flight schedules and on ticket sales for the football matches – with three types of mobile phone activity: the number of calls made and text messages sent, the amount of internet data used, and the volume of tweets posted.

“What we saw is that these activities really did behave very similarly to the number of people at the location,” says Botta.

That may not seem so surprising, but, especially at the football stadium, the patterns the team observed were so reliable that they could even make predictions.

There were ten football matches during the period of the experiment. Based on data from nine of them, it was possible to estimate how many people would be at the tenth match using mobile phone data alone.

“Our mean absolute percentage error is about 13%. That means our estimates and the real number of people differ, in absolute terms, by about 13%,” says Botta.
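To make the 13% figure concrete, here is a minimal sketch in Python, with invented attendance numbers rather than the study's data, of the mean absolute percentage error the researchers report.

```python
# Minimal sketch with invented numbers (not the study's data): MAPE is the
# average of |estimate - actual| / actual across the events being tested.
estimates = [61000, 48000, 70500, 52000, 66000]   # crowd sizes inferred from phone activity
actuals   = [55000, 56000, 62000, 47000, 75000]   # known attendance

errors = [abs(e - a) / a for e, a in zip(estimates, actuals)]
mape = 100 * sum(errors) / len(errors)
print(f"MAPE: {mape:.1f}%")   # roughly 12% for these made-up figures
```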

According to the researchers, this margin of error compares well with traditional techniques based on images and human judgment.

They cited the example of the 1995 demonstration in Washington, DC, known as the “Million Man March”, in which even the most careful analyses could only produce estimates with a 20% error – after initial counts had ranged from 400,000 to two million people.

Crowd in an Italian stadium | Photo: Getty

The accuracy of the data collected at the football stadium surprised even the research team

According to Ed Manley, of the Centre for Advanced Spatial Analysis at University College London, the technique has potential, and people should feel “optimistic but cautious” about the use of mobile phone data in these estimates.

“We have these huge datasets and there is a lot that can be done with them… But we need to be careful about how much we ask of the data,” he said.

He also points out that such information does not reflect a population evenly.

“There are important biases here. Who exactly are we measuring with these datasets?” Twitter, for example, says Manley, has a relatively young and affluent user base.

Beyond these difficulties, the activities to be measured have to be chosen carefully, because people use their phones differently in different places – more calls at the airport and more tweets at the football, for example.

Another important caveat is that the whole analytical approach Botta advocates depends on phone and internet signal – which varies greatly from place to place, where it is available at all.

“If we are relying on this data to know where people are, what happens when there is a problem with the way the data are collected?” asks Manley.

Ethnography: A Scientist Discovers the Value of the Social Sciences (The Scholarly Kitchen)

 

Picture from an early ethnographic study

I have always liked to think of myself as a good listener. Whether you are in therapy (or should be), conversing with colleagues, working with customers, embarking on strategic planning, or collaborating on a task, a dose of emotional intelligence – that is, embracing patience and the willingness to listen — is essential.

At the American Mathematical Society, we recently embarked on an ambitious strategic planning effort across the organization. On the publishing side we have a number of electronic products, pushing us to consider how we position these products for the next generation of mathematicians. We quickly realized that it is easy to be complacent. In our case we have a rich history online, and yet – have we really moved with the times? Does a young mathematician need our products?

We came to a sobering and rather exciting realization: In fact, we do not have a clear idea how mathematicians use online resources to do their research, teaching, hiring, and job hunting. We of course have opinions, but these are not informed by anything other than anecdotal evidence from conversations here and there.

To gain a sense of how mathematicians are using online resources, we embarked on an effort to gather more systematic intelligence, embracing a qualitative approach to the research – ethnography. The concept of ethnographic qualitative research was a new one to me – and it felt right. I quickly felt like I was back in school as a graduate student in ethnography, reading the literature, and thinking through with colleagues how we might apply qualitative research methods to understanding mathematicians’ behavior. It is worth taking a look at two excellent books: Just Enough Research by Erika Hall, and Practical Ethnography: A Guide to Doing Ethnography in the Private Sector by Sam Ladner.

What do we mean by ethnographic research? In essence we are talking about a rich, multi-factorial descriptive approach. While quantitative research uses pre-existing categories in its analysis, qualitative research is open to new ways of categorizing data – in this case, mathematicians’ behavior in using information. The idea is that one observes the subject (“key informant” in technical jargon) in their natural habitat. Imagine you are David Attenborough, exploring an “absolutely marvelous” new species – the mathematician – as they operate in the field. The concept is really quite simple. You just want to understand what your key informants are doing, and preferably why they are doing it. One has to do it in a setting that allows for them to behave naturally – this really requires an interview with one person not a group (because group members may influence each other’s actions).

Perhaps the hardest part is the interview itself. If you are anything like me, you will go charging in saying something along the lines of “look at these great things we are doing. What do you think? Great right?” Well, of course this is plain wrong. While you have a goal going in, perhaps to see how an individual is behaving with respect to a specific product, your questions need to be agnostic in flavor. The idea is to have the key informant do what they normally do, not just say what they think they do – the two things may be quite different. The questions need to be carefully crafted so as not to lead, but to enable gentle probing and discussion as the interview progresses. It is a good idea to record the interview – both in audio form, and ideally with screen capture technology such as Camtasia. When I was involved with this I went out and bought a good, but inexpensive audio recorder.

We decided that rather than approach mathematicians directly, we should work with the library at an academic institution. Libraries are our customers, and at many institutions ethnography is becoming part of the service academic libraries provide to their stakeholders. We began with a remarkable librarian at Rice University in Texas – Debra Kolah, head of the user experience office at Rice’s Fondren Library, who also happens to be the university’s physics, math and statistics librarian. She has become an expert in the ethnographic study of academic user experience, with multiple projects underway at Rice, working with a range of stakeholders to foster the library’s activity in the academic community she directly serves. She is a picture of enthusiasm when it comes to serving her community and gaining insights into the cultural patterns of academic user behavior. Debra was our key to understanding how important it is to work with the library to reach the mathematical community at an institution. The relationship is trusted and symbiotic. This triangle of an institution’s library, its academics, and an outside entity, such as a society or publisher, may represent the future of the library.

So the interviews are done – then what? Analysis. You have to try to make sense of all of this material you’ve gathered. First, transcribing audio interviews is no easy task. You have a range of voices and much technical jargon. The best bet is to get one of the many services out there to take the files and do a first-pass transcription. They will get most of it right. Perhaps they will write “archive” instead of arXiv, but that can be dealt with later. Once you have all this interview text, you need to group it into meaningful categories – what’s called “coding”. The idea is that you try to look at the material with a fresh, unbiased eye, to see what themes emerge from the data. Once these themes are coded, you can then start to think about patterns in the data. Interestingly, qualitative researchers have developed a host of software programs to aid the researcher in doing this. We settled on a relatively simple, web-based solution – Dedoose.

With some 62 interviews under our belt, we are beginning to see patterns emerge in the ways that mathematicians behave online. I am not going to reveal our preliminary findings here – I must save that up for when the full results are in – but I am confident that the results will show a number of consistent threads that will help us think through how to better serve our community.

In summary, this experience has been a fascinating one – a new world for me. I have been trained as a scientist. As a scientist, I have ideas about what scientific method is, and what evidence is. I now understand the value of the qualitative approach – hard for a scientist to say. Qualitative research opens a window to descriptive data and analysis. As our markets change, understanding who constitutes our market, and how users behave is more important than ever.

Carry on listening!

Is the universe a hologram? (Science Daily)

Date:
April 27, 2015
Source:
Vienna University of Technology
Summary:
The ‘holographic principle,’ the idea that a universe with gravity can be described by a quantum field theory in fewer dimensions, has been used for years as a mathematical tool in strange curved spaces. New results suggest that the holographic principle also holds in flat spaces. Our own universe could in fact be two dimensional and only appear three dimensional — just like a hologram.

Is our universe a hologram? Credit: TU Wien 

At first glance, there is not the slightest doubt: to us, the universe looks three dimensional. But one of the most fruitful theories of theoretical physics in the last two decades is challenging this assumption. The “holographic principle” asserts that a mathematical description of the universe actually requires one fewer dimension than it seems. What we perceive as three dimensional may just be the image of two dimensional processes on a huge cosmic horizon.

Up until now, this principle has only been studied in exotic spaces with negative curvature. This is interesting from a theoretical point of view, but such spaces are quite different from the space in our own universe. Results obtained by scientists at TU Wien (Vienna) now suggest that the holographic principle even holds in a flat spacetime.

The Holographic Principle

Everybody knows holograms from credit cards or banknotes. They are two dimensional, but to us they appear three dimensional. Our universe could behave quite similarly: “In 1997, the physicist Juan Maldacena proposed the idea that there is a correspondence between gravitational theories in curved anti-de Sitter spaces on the one hand and quantum field theories in spaces with one fewer dimension on the other,” says Daniel Grumiller (TU Wien).

Gravitational phenomena are described in a theory with three spatial dimensions, the behaviour of quantum particles is calculated in a theory with just two spatial dimensions — and the results of both calculations can be mapped onto each other. Such a correspondence is quite surprising. It is like finding out that equations from an astronomy textbook can also be used to repair a CD-player. But this method has proven to be very successful. More than ten thousand scientific papers about Maldacena’s “AdS-CFT-correspondence” have been published to date.

Correspondence Even in Flat Spaces

For theoretical physics, this is extremely important, but it does not seem to have much to do with our own universe. Apparently, we do not live in such an anti-de Sitter space. These spaces have quite peculiar properties: they are negatively curved, and any object thrown away on a straight line will eventually return. “Our universe, in contrast, is quite flat — and on astronomic distances, it has positive curvature,” says Daniel Grumiller.

However, Grumiller has suspected for quite some time that a correspondence principle could also hold true for our real universe. To test this hypothesis, gravitational theories have to be constructed that do not require exotic anti-de Sitter spaces but live in a flat space. For three years, he and his team at TU Wien (Vienna) have been working on that, in cooperation with the University of Edinburgh, Harvard, IISER Pune, MIT and the University of Kyoto. Now Grumiller and colleagues from India and Japan have published an article in the journal Physical Review Letters, confirming the validity of the correspondence principle in a flat universe.

Calculated Twice, Same Result

“If quantum gravity in a flat space allows for a holographic description by a standard quantum theory, then there must be physical quantities, which can be calculated in both theories — and the results must agree,” says Grumiller. In particular, one key feature of quantum mechanics — quantum entanglement — has to appear in the gravitational theory.

When quantum particles are entangled, they cannot be described individually. They form a single quantum object, even if they are located far apart. There is a measure for the amount of entanglement in a quantum system, called “entropy of entanglement.” Together with Arjun Bagchi, Rudranil Basu and Max Riegler, Daniel Grumiller managed to show that this entropy of entanglement takes the same value in flat quantum gravity and in a lower-dimensional quantum field theory.
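As a concrete, if generic, illustration of entanglement entropy (textbook two-qubit quantum mechanics, not the flat-space holography calculation reported in the paper), one can compute it for a maximally entangled Bell pair.

```python
import numpy as np

# Entropy of entanglement of a two-qubit Bell state: trace out one qubit and
# take the von Neumann entropy of what remains.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)             # (|00> + |11>) / sqrt(2)
rho = np.outer(bell, bell.conj()).reshape(2, 2, 2, 2)  # density matrix indexed (a, b, a', b')
rho_A = np.trace(rho, axis1=1, axis2=3)                # partial trace over the second qubit
eigvals = np.linalg.eigvalsh(rho_A)
entropy = -sum(p * np.log2(p) for p in eigvals if p > 0)
print(round(entropy, 6))   # 1.0 bit: maximal entanglement for a pair of qubits
```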

“This calculation affirms our assumption that the holographic principle can also be realized in flat spaces. It is evidence for the validity of this correspondence in our universe,” says Max Riegler (TU Wien). “The fact that we can even talk about quantum information and entropy of entanglement in a theory of gravity is astounding in itself, and would hardly have been imaginable only a few years back. That we are now able to use this as a tool to test the validity of the holographic principle, and that this test works out, is quite remarkable,” says Daniel Grumiller.

This, however, does not yet prove that we are indeed living in a hologram — but apparently there is growing evidence for the validity of the correspondence principle in our own universe.


Journal Reference:

  1. Arjun Bagchi, Rudranil Basu, Daniel Grumiller, Max Riegler. Entanglement Entropy in Galilean Conformal Field Theories and Flat Holography. Physical Review Letters, 2015; 114 (11). DOI: 10.1103/PhysRevLett.114.111602

Time and Events (Knowledge Ecology)

March 24, 2015 / Adam Robbert


[Image: Mohammad Reza Domiri Ganji]

I just came across Massimo Pigliucci’s interesting review of Mangabeira Unger and Lee Smolin’s book The Singular Universe and the Reality of Time. There are more than a few Whiteheadian themes explored throughout the review, including Unger and Smolin’s (U&S) view that time should be read as an abstraction from events and that the “laws” of the universe are better conceptualized as habits or contingent causal connections secured by the ongoingness of those events rather than as eternal, abstract formalisms. (This entangling of laws with phenomena, of events with time, is one of the ways we can think towards an ecological metaphysics.)

But what I am particularly interested in is the short discussion on Platonism and mathematical realism. I sometimes think of mathematical realism as the view that numbers, and thus the abstract formalisms they create, are real, mind-independent entities, and that, given this view, mathematical equations are discovered (i.e., they actually exist in the world) rather than created (i.e., humans made them up to fill this or that pragmatic need). The review makes it clear, though, that this definition doesn’t push things far enough for the mathematical realist. Instead, the mathematical realist argues for not just the mind-independent existence of numbers but also their nature-independence—math as independent not just of all knowers but of all natural phenomena, past, present, or future.

U&S present an alternative to mathematical realisms of this variety that I find compelling and more consistent with the view that laws are habits and that time is an abstraction from events. Here’s the reviewer’s take on U&S’s argument (the review starts with a quote from U&S and then unpacks it a bit):

“The third idea is the selective realism of mathematics. (We use realism here in the sense of relation to the one real natural world, in opposition to what is often described as mathematical Platonism: a belief in the real existence, apart from nature, of mathematical entities.) Now dominant conceptions of what the most basic natural science is and can become have been formed in the context of beliefs about mathematics and of its relation to both science and nature. The laws of nature, the discerning of which has been the supreme object of science, are supposed to be written in the language of mathematics.” (p. xii)

But they are not, because there are no “laws” and because mathematics is a human (very useful) invention, not a mysterious sixth sense capable of probing a deeper reality beyond the empirical. This needs some unpacking, of course. Let me start with mathematics, then move to the issue of natural laws.

I was myself, until recently, intrigued by mathematical Platonism [8]. It is a compelling idea, which makes sense of the “unreasonable effectiveness of mathematics” as Eugene Wigner famously put it [9]. It is a position shared by a good number of mathematicians and philosophers of mathematics. It is based on the strong gut feeling that mathematicians have that they don’t invent mathematical formalisms, they “discover” them, in a way analogous to what empirical scientists do with features of the outside world. It is also supported by an argument analogous to the defense of realism about scientific theories and advanced by Hilary Putnam: it would be nothing short of miraculous, it is suggested, if mathematics were the arbitrary creation of the human mind, and yet time and again it turns out to be spectacularly helpful to scientists [10].

But there are, of course, equally (more?) powerful counterarguments, which are in part discussed by Unger in the first part of the book. To begin with, the whole thing smells a bit too uncomfortably of mysticism: where, exactly, is this realm of mathematical objects? What is its ontological status? Moreover, and relatedly, how is it that human beings have somehow developed the uncanny ability to access such realm? We know how we can access, however imperfectly and indirectly, the physical world: we evolved a battery of sensorial capabilities to navigate that world in order to survive and reproduce, and science has been a continuous quest for expanding the power of our senses by way of more and more sophisticated instrumentation, to gain access to more and more (and increasingly less relevant to our biological fitness!) aspects of the world.

Indeed, it is precisely this analogy with science that powerfully hints to an alternative, naturalistic interpretation of the (un)reasonable effectiveness of mathematics. Math too started out as a way to do useful things in the world, mostly to count (arithmetics) and to measure up the world and divide it into manageable chunks (geometry). Mathematicians then developed their own (conceptual, as opposed to empirical) tools to understand more and more sophisticated and less immediate aspects of the world, in the process eventually abstracting entirely from such a world in pursuit of internally generated questions (what we today call “pure” mathematics).

U&S do not by any means deny the power and effectiveness of mathematics. But they also remind us that precisely what makes it so useful and general — its abstraction from the particularities of the world, and specifically its inability to deal with temporal asymmetries (mathematical equations in fundamental physics are time-symmetric, and asymmetries have to be imported as externally imposed background conditions) — also makes it subordinate to empirical science when it comes to understanding the one real world.

This empiricist reading of mathematics offers a refreshing respite to the resurgence of a certain Idealism in some continental circles (perhaps most interestingly spearheaded by Quentin Meillassoux). I’ve heard mention a few times now that the various factions squaring off within continental philosophy’s avant garde can be roughly approximated as a renewed encounter between Kantian finitude and Hegelian absolutism. It’s probably a bit too stark of a binary, but there’s a sense in which the stakes of these arguments really do center on the ontological status of mathematics in the natural world. It’s not a direct focus of my own research interests, really, but it’s a fascinating set of questions nonetheless.

On pi day, how scientists use this number (Science Daily)

Date: March 12, 2015

Source: NASA/Jet Propulsion Laboratory

Summary: If you like numbers, you will love March 14, 2015. When written as a numerical date, it’s 3/14/15, corresponding to the first five digits of pi (3.1415) — a once-in-a-century coincidence! Pi Day, which would have been the 136th birthday of Albert Einstein, is a great excuse to eat pie, and to appreciate how important the number pi is to math and science.

Take JPL Education’s Pi Day challenge featuring real-world questions about NASA spacecraft — then tweet your answers to @NASAJPL_Edu using the hashtag #PiDay. Answers will be revealed on March 16. Credit: NASA/JPL-Caltech

If you like numbers, you will love March 14, 2015. When written as a numerical date, it’s 3/14/15, corresponding to the first five digits of pi (3.1415) — a once-in-a-century coincidence! Pi Day, which would have been the 136th birthday of Albert Einstein, is a great excuse to eat pie, and to appreciate how important the number pi is to math and science.

Pi is the ratio of circumference to diameter of a circle. Any time you want to find out the distance around a circle when you have the distance across it, you will need this formula.

Despite its frequent appearance in math and science, you can’t write pi as a simple fraction or calculate it by dividing two integers (…3, -2, -1, 0, 1, 2, 3…). For this reason, pi is said to be “irrational.” Pi’s digits extend infinitely and without any pattern, adding to its intrigue and mystery.

Pi is useful for all kinds of calculations involving the volume and surface area of spheres, as well as for determining the rotations of circular objects such as wheels. That’s why pi is important for scientists who work with planetary bodies and the spacecraft that visit them.

At NASA’s Jet Propulsion Laboratory, Pasadena, California, pi makes a frequent appearance. It’s a staple for Marc Rayman, chief engineer and mission director for NASA’s Dawn spacecraft. Dawn went into orbit around dwarf planet Ceres on March 6. Rayman uses a formula involving pi to calculate the length of time it takes the spacecraft to orbit Ceres at any given altitude. You can also use pi to think about Earth’s rotation.

“On Pi Day, I will think about the nature of a day, as Earth’s rotation on its axis carries me on a circle 21,000 miles (34,000 kilometers) in circumference, which I calculated using pi and my latitude,” Rayman said.

Steve Vance, a planetary chemist and astrobiologist at JPL, also frequently uses pi. Lately, he has been using pi in his calculations of how much hydrogen might be available for chemical processes, and possibly biology, in the ocean beneath the surface of Jupiter’s moon Europa.

“To calculate the hydrogen produced in a given unit area, we divide by Europa’s surface area, which is the area of a sphere with a radius of 970 miles (1,561 kilometers),” Vance said.
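Both calculations are direct applications of pi. Here is a rough sketch in Python (the latitude is an assumed value for illustration, since Rayman's exact figure isn't given):

```python
import math

# Back-of-envelope versions of the two calculations described above.
earth_radius_miles = 3959
latitude_deg = 34                       # assumed, roughly Pasadena's latitude
rotation_circle = 2 * math.pi * earth_radius_miles * math.cos(math.radians(latitude_deg))
print(round(rotation_circle))           # ~20,600 miles, i.e. about 21,000 miles as quoted

europa_radius_km = 1561
europa_surface_area = 4 * math.pi * europa_radius_km ** 2
print(round(europa_surface_area))       # ~3.06e7 square kilometres
```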

Luisa Rebull, a research scientist at NASA’s Spitzer Science Center at the California Institute of Technology, Pasadena, also considers pi to be important in astronomy. When calculating the distance between stars in a projection of the sky, scientists use a special kind of geometry called spherical trigonometry. That’s an extension of the geometry you probably learned in middle school, but it takes place on a sphere rather than a flat plane.

“In order to do these calculations, we need to use formulae, the derivation of which uses pi,” she said. “So, this is pi in the sky!”

Make sure to note when the date and time spell out the first 10 digits of pi: 3.141592653. On 3/14/15 at 9:26:53 a.m., it is literally the most perfectly “pi” time of the century — so grab a slice of your favorite pie, and celebrate math!

For more fun with pi, check out JPL Education’s second annual Pi Day challenge, featuring real-world NASA math problems. NASA/JPL education specialists, with input from scientists and engineers, have crafted questions involving pi aimed at students in grades 4 through 11, but open to everyone. Take a crack at them at:

http://www.jpl.nasa.gov/infographics/infographic.view.php?id=11257

Share your answers on Twitter by tweeting to @NASAJPL_Edu with the hashtag #PiDay. Answers will be revealed on March 16 (aka Pi + 2 Day!).

Resources for educators, including printable Pi Day challenge classroom handouts, are available at: www.jpl.nasa.gov/edu/piday2015

Caltech manages JPL for NASA.

Physics’s pangolin (AEON)

Trying to resolve the stubborn paradoxes of their field, physicists craft ever more mind-boggling visions of reality

by Margaret Wertheim

Illustration by Claire Scully

Margaret Wertheim is an Australian-born science writer and director of the Institute For Figuring in Los Angeles. Her latest book is Physics on the Fringe (2011).

Theoretical physics is beset by a paradox that remains as mysterious today as it was a century ago: at the subatomic level things are simultaneously particles and waves. Like the duck-rabbit illusion first described in 1899 by the Polish-born American psychologist Joseph Jastrow, subatomic reality appears to us as two different categories of being.

But there is another paradox in play. Physics itself is riven by the competing frameworks of quantum theory and general relativity, whose differing descriptions of our world eerily mirror the wave-particle tension. When it comes to the very big and the extremely small, physical reality appears to be not one thing, but two. Where quantum theory describes the subatomic realm as a domain of individual quanta, all jitterbug and jumps, general relativity depicts happenings on the cosmological scale as a stately waltz of smooth flowing space-time. General relativity is like Strauss — deep, dignified and graceful. Quantum theory, like jazz, is disconnected, syncopated, and dazzlingly modern.

Physicists are deeply aware of the schizophrenic nature of their science and long to find a synthesis, or unification. Such is the goal of a so-called ‘theory of everything’. However, to non-physicists, these competing lines of thought, and the paradoxes they entrain, can seem not just bewildering but absurd. In my experience as a science writer, no other scientific discipline elicits such contradictory responses.

In string cosmology, the totality of existing universes exceeds the number of particles in our universe by more than 400 orders of magnitude

This schism was brought home to me starkly some months ago when, in the course of a fortnight, I happened to participate in two public discussion panels, one with a cosmologist at Caltech, Pasadena, the other with a leading literary studies scholar from the University of Southern California. On the panel with the cosmologist, a researcher whose work I admire, the discussion turned to time, about which he had written a recent, and splendid, book. Like philosophers, physicists have struggled with the concept of time for centuries, but now, he told us, they had locked it down mathematically and were on the verge of a final state of understanding. In my Caltech friend’s view, physics is a progression towards an ever more accurate and encompassing Truth. My literary theory panellist was having none of this. A Lewis Carroll scholar, he had joined me for a discussion about mathematics in relation to literature, art and science. For him, maths was a delightful form of play, a ludic formalism to be admired and enjoyed; but any claims physicists might make about truth in their work were, in his view, ‘nonsense’. This mathematically based science, he said, was just ‘another kind of storytelling’.

On the one hand, then, physics is taken to be a march toward an ultimate understanding of reality; on the other, it is seen as no different in status to the understandings handed down to us by myth, religion and, no less, literary studies. Because I spend my time about equally in the realms of the sciences and arts, I encounter a lot of this dualism. Depending on whom I am with, I find myself engaging in two entirely different kinds of conversation. Can we all be talking about the same subject?

Many physicists are Platonists, at least when they talk to outsiders about their field. They believe that the mathematical relationships they discover in the world about us represent some kind of transcendent truth existing independently from, and perhaps a priori to, the physical world. In this way of seeing, the universe came into being according to a mathematical plan, what the British physicist Paul Davies has called ‘a cosmic blueprint’. Discovering this ‘plan’ is a goal for many theoretical physicists and the schism in the foundation of their framework is thus intensely frustrating. It’s as if the cosmic architect has designed a fiendish puzzle in which two apparently incompatible parts must be fitted together. Both are necessary, for both theories make predictions that have been verified to a dozen or so decimal places, and it is on the basis of these theories that we have built such marvels as microchips, lasers, and GPS satellites.

Quite apart from the physical tensions that exist between them, relativity and quantum theory each pose philosophical problems. Are space and time fundamental qualities of the universe, as general relativity suggests, or are they byproducts of something even more basic, something that might arise from a quantum process? Looking at quantum mechanics, huge debates swirl around the simplest situations. Does the universe split into multiple copies of itself every time an electron changes orbit in an atom, or every time a photon of light passes through a slit? Some say yes, others say absolutely not.

Theoretical physicists can’t even agree on what the celebrated waves of quantum theory mean. What is doing the ‘waving’? Are the waves physically real, or are they just mathematical representations of probability distributions? Are the ‘particles’ guided by the ‘waves’? And, if so, how? The dilemma posed by wave-particle duality is the tip of an epistemological iceberg on which many ships have been broken and wrecked.

Undeterred, some theoretical physicists are resorting to increasingly bold measures in their attempts to resolve these dilemmas. Take the ‘many-worlds’ interpretation of quantum theory, which proposes that every time a subatomic action takes place the universe splits into multiple, slightly different, copies of itself, with each new ‘world’ representing one of the possible outcomes.

When this idea was first proposed in 1957 by the American physicist Hugh Everett, it was considered an almost lunatic-fringe position. Even 20 years later, when I was a physics student, many of my professors thought it was a kind of madness to go down this path. Yet in recent years the many-worlds position has become mainstream. The idea of a quasi-infinite, ever-proliferating array of universes has been given further credence as a result of being taken up by string theorists, who argue that every mathematically possible version of the string theory equations corresponds to an actually existing universe, and estimate that there are 10 to the power of 500 different possibilities. To put this in perspective: physicists believe that in our universe there are approximately 10 to the power of 80 subatomic particles. In string cosmology, the totality of existing universes exceeds the number of particles in our universe by more than 400 orders of magnitude.
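The arithmetic behind that comparison is worth making explicit:

\[ \frac{10^{500}}{10^{80}} = 10^{420}, \]

so the hypothesised landscape of string universes outnumbers the subatomic particles in our own universe by 420 orders of magnitude, comfortably ‘more than 400’.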

Nothing in our experience compares to this unimaginably vast number. Every universe that can be mathematically imagined within the string parameters — including ones in which you exist with a prehensile tail, to use an example given by the American string theorist Brian Greene — is said to be manifest somewhere in a vast supra-spatial array ‘beyond’ the space-time bubble of our own universe.

What is so epistemologically daring here is that the equations are taken to be the fundamental reality. The fact that the mathematics allows for gazillions of variations is seen to be evidence for gazillions of actual worlds.


This kind of reification of equations is precisely what strikes some humanities scholars as childishly naive. At the very least, it raises serious questions about the relationship between our mathematical models of reality, and reality itself. While it is true that in the history of physics many important discoveries have emerged from revelations within equations — Paul Dirac’s formulation for antimatter being perhaps the most famous example — one does not need to be a cultural relativist to feel sceptical about the idea that the only way forward now is to accept an infinite cosmic ‘landscape’ of universes that embrace every conceivable version of world history, including those in which the Middle Ages never ended or Hitler won.

In the 30 years since I was a student, physicists’ interpretations of their field have increasingly tended toward literalism, while the humanities have tilted towards postmodernism. Thus a kind of stalemate has ensued. Neither side seems inclined to contemplate more nuanced views. It is hard to see ways out of this tunnel, but in the work of the late British anthropologist Mary Douglas I believe we can find a tool for thinking about some of these questions.

On the surface, Douglas’s great book Purity and Danger (1966) would seem to have nothing to do with physics; it is an inquiry into the nature of dirt and cleanliness in cultures across the globe. Douglas studied taboo rituals that deal with the unclean, but her book ends with a far-reaching thesis about human language and the limits of all language systems. Given that physics is couched in the language-system of mathematics, her argument is worth considering here.

In a nutshell, Douglas notes that all languages parse the world into categories; in English, for instance, we call some things ‘mammals’ and other things ‘lizards’ and have no trouble recognising the two separate groups. Yet there are some things that do not fit neatly into either category: the pangolin, or scaly anteater, for example. Though pangolins are warm-blooded and bear live young like mammals, they have armoured bodies like some kind of bizarre lizard. Such definitional monstrosities are not just a feature of English. Douglas notes that all category systems contain liminal confusions, and she proposes that such ambiguity is the essence of what is seen to be impure or unclean.

Whatever doesn’t parse neatly in a given linguistic system can become a source of anxiety to the culture that speaks this language, calling forth special ritual acts whose function, Douglas argues, is actually to acknowledge the limits of language itself. In the Lele culture of the Congo, for example, this epistemological confrontation takes place around a special cult of the pangolin, whose initiates ritualistically eat the abominable animal, thereby sacralising it and processing its ‘dirt’ for the entire society.

‘Powers are attributed to any structure of ideas,’ Douglas writes. We all tend to think that our categories of understanding are necessarily real. ‘The yearning for rigidity is in us all,’ she continues. ‘It is part of our human condition to long for hard lines and clear concepts’. Yet when we have them, she says, ‘we have to either face the fact that some realities elude them, or else blind ourselves to the inadequacy of the concepts’. It is not just the Lele who cannot parse the pangolin: biologists are still arguing about where it belongs on the genetic tree of life.

As Douglas sees it, cultures themselves can be categorised in terms of how well they deal with linguistic ambiguity. Some cultures accept the limits of their own language, and of language itself, by understanding that there will always be things that cannot be cleanly parsed. Others become obsessed with ever-finer levels of categorisation as they try to rid their system of every pangolin-like ‘duck-rabbit’ anomaly. For such societies, Douglas argues, a kind of neurosis ensues, as the project of categorisation takes ever more energy and mental effort. If we take this analysis seriously, then, in Douglas’ terms, might it be that particle-waves are our pangolins? Perhaps what we are encountering here is not so much the edge of reality, but the limits of the physicists’ category system.

In its modern incarnation, physics is grounded in the language of mathematics. It is a so-called ‘hard’ science, a term meant to imply that physics is unfuzzy — unlike, say, biology whose classification systems have always been disputed. Based in mathematics, the classifications of physicists are supposed to have a rigour that other sciences lack, and a good deal of the near-mystical discourse that surrounds the subject hinges on ideas about where the mathematics ‘comes from’.

According to Galileo Galilei and other instigators of what came to be known as the Scientific Revolution, nature was ‘a book’ that had been written by God, who had used the language of mathematics because it was seen to be Platonically transcendent and timeless. While modern physics is no longer formally tied to Christian faith, its long association with religion lingers in the many references that physicists continue to make about ‘the mind of God’, and many contemporary proponents of a ‘theory of everything’ remain Platonists at heart.


In order to articulate a more nuanced conception of what physics is, we need to offer an alternative to Platonism. We need to explain how the mathematics ‘arises’ in the world, in ways other than assuming that it was put there by some kind of transcendent being or process. To approach this question dispassionately, it is necessary to abandon the beautiful but loaded metaphor of the cosmic book — and all its authorial resonances — and focus not on the creation of the world, but on the creation of physics as a science.

When we say that ‘mathematics is the language of physics’, we mean that physicists consciously comb the world for patterns that are mathematically describable; these patterns are our ‘laws of nature’. Since mathematical patterns proceed from numbers, much of the physicist’s task involves finding ways to extract numbers from physical phenomena. In the 16th and 17th centuries, philosophical discussion referred to this as the process of ‘quantification’; today we call it measurement. One way of thinking about modern physics is as an ever more sophisticated process of quantification that multiplies and diversifies the ways we extract numbers from the world, thus giving us the raw material for our quest for patterns or ‘laws’. This is no trivial task. Indeed, the history of physics has turned on the question of what can be measured and how.

Stop for a moment and take a look around you. What do you think can be quantified? What colours and forms present themselves to your eye? Is the room bright or dark? Does the air feel hot or cold? Are birds singing? What other sounds do you hear? What textures do you feel? What odours do you smell? Which, if any, of these qualities of experience might be measured?

In the early 14th century, a group of scholarly monks known as the calculatores at the University of Oxford began to think about this problem. One of their interests was motion, and they were the first to recognise the qualities we now refer to as ‘velocity’ and ‘acceleration’ — the former being the rate at which a body changes position, the latter, the rate at which the velocity itself changes. It’s a startling thought, in an age when we can read the speed of our cars from our digitised dashboards, that somebody had to discover ‘velocity’.

Yet despite the calculatores’ advances, the science of kinematics made barely any progress until Galileo and his contemporaries took up the baton in the late-16th century. In the intervening time, the process of quantification had to be extracted from a burden of dreams in which it became, frankly, bogged down. For along with motion, the calculatores were also interested in qualities such as sin and grace and they tried to find ways to quantify these as well. Between the calculatores and Galileo, students of quantification had to work out what they were going to exclude from the project. To put it bluntly, in order for the science of physics to get underway, the vision had to be narrowed.

How, exactly, this narrowing was to be achieved was articulated by the 17th-century French mathematician and philosopher René Descartes. What could a mathematically based science describe? Descartes’s answer was that the new natural philosophers must restrict themselves to studying matter in motion through space and time. Maths, he said, could describe the extended realm — or res extensa. Thoughts, feelings, emotions and moral consequences he located in the ‘realm of thought’, or res cogitans, declaring them inaccessible to quantification, and thus beyond the purview of science. In making this distinction, Descartes did not divide mind from body (that had been done by the Greeks); he merely clarified the subject matter for a new physical science.

So what else apart from motion could be quantified? To a large degree, progress in physics has been made by slowly extending the range of answers. Take colour. At first blush, redness would seem to be an ineffable and irreducible quale. In the late 19th century, however, physicists discovered that each colour in the rainbow, when refracted through a prism, corresponds to a different wavelength of light. Red light has a wavelength of around 700 nanometres, violet light around 400 nanometres. Colour can be correlated with numbers — both the wavelength and frequency of an electromagnetic wave. Here we have one half of our duality: the wave.
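The correlation described here is the ordinary wave relation between wavelength and frequency; as a quick worked example for red light (using standard values, not figures given in the essay):

\[ c = \lambda \nu \;\;\Rightarrow\;\; \nu = \frac{3\times10^{8}\ \mathrm{m/s}}{700\times10^{-9}\ \mathrm{m}} \approx 4.3\times10^{14}\ \mathrm{Hz}. \]

Either number, wavelength or frequency, pins the colour down; that is the sense in which red becomes quantifiable.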

The discovery of electromagnetic waves was in fact one of the great triumphs of the quantification project. In the 1820s, Michael Faraday noticed that, if he sprinkled iron filings around a magnet, the fragments would spontaneously assemble into a pattern of lines that, he conjectured, were caused by a ‘magnetic field’. Physicists today accept fields as a primary aspect of nature but at the start of the Industrial Revolution, when philosophical mechanism was at its peak, Faraday’s peers scoffed. Invisible fields smacked of magic. Yet, later in the 19th century, James Clerk Maxwell showed that magnetic and electric fields were linked by a precise set of equations — today known as Maxwell’s Laws — that enabled him to predict the existence of radio waves. The quantification of these hitherto unsuspected aspects of our world — these hidden invisible ‘fields’ — has led to the whole gamut of modern telecommunications on which so much of modern life is now staged.
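For reference, the ‘precise set of equations’ in question, in their modern (Heaviside) form, with E and B the electric and magnetic fields and ρ and J the charge and current densities, reads:

\[
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
\]

Their wave solutions travel at \(1/\sqrt{\mu_0 \varepsilon_0}\), which turned out to equal the measured speed of light; that identification is what let Maxwell tie electromagnetism to light and predict radio waves.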

Turning to the other side of our duality – the particle – with a burgeoning array of electrical and magnetic equipment, physicists in the late 19th and early 20th centuries began to probe matter. They discovered that atoms were composed of parts holding positive and negative charge. The negative electrons were found to revolve around a positive nucleus in pairs, with each member of the pair in a slightly different state, or ‘spin’. Spin turns out to be a fundamental quality of the subatomic realm. Matter particles, such as electrons, have a spin value of one half. Particles of light, or photons, have a spin value of one. In short, one of the qualities that distinguishes ‘matter’ from ‘energy’ is the spin value of its particles.

We have seen how light acts like a wave, yet experiments over the past century have shown that under many conditions it behaves instead like a stream of particles. In the photoelectric effect (the explanation of which won Albert Einstein his Nobel Prize in 1921), individual photons knock electrons out of their atomic orbits. In Thomas Young’s famous double-slit experiment of the early 1800s, light behaves simultaneously like waves and particles. Here, a stream of detectably separate photons is mysteriously guided by a wave whose effect becomes manifest over a long period of time. What is the source of this wave and how does it influence billions of isolated photons separated by great stretches of time and space? The late Nobel laureate Richard Feynman — a pioneer of quantum field theory — stated in 1965 that the double-slit experiment lay at ‘the heart of quantum mechanics’. Indeed, physicists have been debating how to interpret its proof of light’s duality for the past 200 years.

Just as waves of light sometimes behave like particles of matter, particles of matter can sometimes behave like waves. In many situations, electrons are clearly particles: we fire them from electron guns inside the cathode-ray tubes of old-fashioned TV sets and each electron that hits the screen causes a tiny phosphor to glow. Yet, in orbiting around atoms, electrons behave like three-dimensional waves. Electron microscopes put the wave-quality of these particles to work; here, in effect, the electrons act like light of very short wavelength.


Wave-particle duality is a core feature of our world. Or rather, we should say, it is a core feature of our mathematical descriptions of our world. The duck-rabbits are everywhere, colonising the imagery of physicists like, well, rabbits. But what is critical to note here is that however ambiguous our images, the universe itself remains whole and is manifestly not fracturing into schizophrenic shards. It is this tantalising wholeness in the thing itself that drives physicists onward, like an eternally beckoning light that seems so teasingly near yet is always out of reach.

Instrumentally speaking, the project of quantification has led physicists to powerful insights and practical gain: the computer on which you are reading this article would not exist if physicists hadn’t discovered the equations that describe the band-gaps in semiconducting materials. Microchips, plasma screens and cellphones are all byproducts of quantification and, every decade, physicists identify new qualities of our world that are amenable to measurement, leading to new technological possibilities. In this sense, physics is not just another story about the world: it is a qualitatively different kind of story to those told in the humanities, in myths and religions. No language other than maths is capable of expressing interactions between particle spin and electromagnetic field strength. The physicists, with their equations, have shown us new dimensions of our world.

That said, we should be wary of claims about ultimate truth. While quantification, as a project, is far from complete, it is an open question as to what it might ultimately embrace. Let us look again at the colour red. Red is not just an electromagnetic phenomenon, it is also a perceptual and contextual phenomenon. Stare for a minute at a green square then look away: you will see an afterimage of a red square. No red light has been presented to your eyes, yet your brain will perceive a vivid red shape. As Goethe argued in the late-18th century, and Edwin Land (who invented Polaroid film in 1932) echoed, colour cannot be reduced to purely prismatic effects. It exists as much in our minds as in the external world. To put this into a personal context, no understanding of the electromagnetic spectrum will help me to understand why certain shades of yellow make me nauseous, while electric orange fills me with joy.

Descartes was no fool; by parsing reality into the res extensa and res cogitans he captured something critical about human experience. You do not need to be a hard-core dualist to imagine that subjective experience might not be amenable to mathematical law. For Douglas, ‘the attempt to force experience into logical categories of non-contradiction’ is the ‘final paradox’ of an obsessive search for purity. ‘But experience is not amenable [to this narrowing],’ she insists, and ‘those who make the attempt find themselves led into contradictions.’

Quintessentially, the qualities that are amenable to quantification are those that are shared. All electrons are essentially the same: given a set of physical circumstances, every electron will behave like any other. But humans are not like this. It is our individuality that makes us so infuriatingly human, and when science attempts to reduce us to the status of electrons it is no wonder that professors of literature scoff.

Douglas’s point about attempting to corral experience into logical categories of non-contradiction has obvious application to physics, particularly to recent work on the interface between quantum theory and relativity. One of the most mysterious findings of quantum science is that two or more subatomic particles can be ‘entangled’. Once particles are entangled, what we do to one immediately affects the other, even if the particles are hundreds of kilometres apart. Yet this contradicts a basic premise of special relativity, which states that no signal can travel faster than the speed of light. Entanglement suggests that either quantum theory or special relativity, or both, will have to be rethought.

More challenging still, consider what might happen if we tried to send two entangled photons to two separate satellites orbiting in space, as a team of Chinese physicists, working with the entanglement theorist Anton Zeilinger, is currently hoping to do. Here the situation is compounded by the fact that what happens in near-Earth orbit is affected by both special and general relativity. The details are complex, but suffice it to say that special relativity suggests that the motion of the satellites will cause time to appear to slow down, while the effect of the weaker gravitational field in space should cause time to speed up. Given this, it is impossible to say which of the photons would be received first at which satellite. To an observer on the ground, both photons should appear to arrive at the same time. Yet to an observer on satellite one, the photon at satellite two should appear to arrive first, while to an observer on satellite two the photon at satellite one should appear to arrive first. We are in a mire of contradiction and no one knows what would in fact happen here. If the Chinese experiment goes ahead, we might find that some radical new physics is required.


You will notice that the ambiguity in these examples focuses on the issue of time — as do many paradoxes relating to relativity and quantum theory. Time indeed is a huge conundrum throughout physics, and paradoxes surround it at many levels of being. In Time Reborn: From the Crisis in Physics to the Future of the Universe (2013) the American physicist Lee Smolin argues that for 400 years physicists have been thinking about time in ways that are fundamentally at odds with human experience and therefore wrong. In order to extricate ourselves from some of the deepest paradoxes in physics, he says, its very foundations must be reconceived. In an op-ed in New Scientist in April this year, Smolin wrote:
The idea that nature consists fundamentally of atoms with immutable properties moving through unchanging space, guided by timeless laws, underlies a metaphysical view in which time is absent or diminished. This view has been the basis for centuries of progress in science, but its usefulness for fundamental physics and cosmology has come to an end.

In order to resolve contradictions between how physicists describe time and how we experience time, Smolin says physicists must abandon the notion of time as an unchanging ideal and embrace an evolutionary concept of natural laws.

This is radical stuff, and Smolin is well-known for his contrarian views — he has been an outspoken critic of string theory, for example. But at the heart of his book is a worthy idea: Smolin is against the reflexive reification of equations. As our mathematical descriptions of time are so starkly in conflict with our lived experience of time, it is our descriptions that will have to change, he says.

To put this into Douglas’s terms, the powers that have been attributed to physicists’ structure of ideas have been overreaching. ‘Attempts to force experience into logical categories of non-contradiction’ have, she would say, inevitably failed. From the contemplation of wave-particle pangolins we have been led to the limits of the linguistic system of physicists. Like Smolin, I have long believed that the ‘block’ conception of time that physics proposes is inadequate, and I applaud this thrilling, if also at times highly speculative, book. Yet, if we can fix the current system by reinventing its axioms, then (assuming that Douglas is correct) even the new system will contain its own pangolins.

In the early days of quantum mechanics, Niels Bohr liked to say that we might never know what ‘reality’ is. Bohr used John Wheeler’s coinage, calling the universe ‘a great smoky dragon’, and claiming that all we could do with our science was to create ever more predictive models. Bohr’s positivism has gone out of fashion among theoretical physicists, replaced by an increasingly hard-core Platonism. To say, as some string theorists do, that every possible version of their equations must be materially manifest strikes me as a kind of berserk literalism, reminiscent of the old Ptolemaics who used to think that every mathematical epicycle in their descriptive apparatus must represent a physically manifest cosmic gear.

We are veering here towards Douglas’s view of neurosis. Will we accept, at some point, that there are limits to the quantification project, just as there are to all taxonomic schemes? Or will we be drawn into ever more complex and expensive quests — CERN mark two, Hubble, the sequel — as we try to root out every lingering paradox? In Douglas’s view, ambiguity is an inherent feature of language that we must face up to, at some point, or drive ourselves to distraction.

3 June 2013

Physics and Hollywood (Folha de S.Paulo)

HENRIQUE GOMES

22/02/2015  03h15

SUMMARY: "Interstellar" is part of a wave of films guided by science. In it, the search for survival brings humans close to a black hole, a premise for speculations tied to the research of Stephen Hawking, the subject of "The Theory of Everything", which, like Christopher Nolan's science-fiction film, is competing in Oscar categories today.

*

In 2014 there was a boom of Hollywood films taking science seriously. "The Theory of Everything" and "The Imitation Game" deal with the lives of important 20th-century scientists: Stephen Hawking and Alan Turing, respectively. A third feature, the science-fiction film "Interstellar", innovates not only by sticking faithfully to what is known about space-time but by putting that knowledge at the service of the narrative.

I am not talking here about whether or not to include sound effects in space. That has been done before and does not significantly change a script. The people behind "Interstellar" did not settle for ticking the technical boxes just to fend off the resident pedants. They put Homeric effort into countless meetings with the renowned physicist Kip Thorne (who also appears in "The Theory of Everything") and into black hole simulations, and they effectively rewrote the script to fit the physical guidelines.

The end result loses nothing, at least when it comes to firing the imagination and producing fantastic effects, to scientific catastrophes such as "Star Trek Into Darkness" (2013) and "Prometheus" (2012). In "Star Trek", for example, Isaac Newton and even Galileo would be horrified to see a ship go into free fall towards Earth while its crew, simultaneously, falls freely relative to the ship. As anyone who has ever been aboard a free-falling spacecraft knows well, the crew floats; it does not fall. (Stones of different weights fall at the same rate from the Tower of Pisa and from other towers.)

"Interstellar" goes beyond the rules of Hollywood science fiction. Carl Sagan said that "science is not only compatible with spirituality; it is a profound source of spirituality". "Interstellar" proves what we scientists have known for a long time: the sentence applies equally to the human enchantment with the unknown.

Publicity still: Matthew McConaughey as Cooper in "Interstellar"

"Interstellar" and "The Theory of Everything" have a few themes in common.

The first is degeneration: of planet Earth in one, of a neuromuscular system in the other. From the deterioration of planet Earth, escape is sought through interstellar exploration, led by the character Cooper (Matthew McConaughey). From that of the human body, through the tireless mind of Stephen Hawking, played by the excellent Eddie Redmayne.

The second theme in common is precisely an important part of Hawking's work.

STARS

Stephen Hawking was born in Oxford in 1942. At 21, already in the first year of his doctorate, he was diagnosed with ALS (amyotrophic lateral sclerosis), a degenerative disease that attacks the nerves' communication with the muscles but leaves other brain functions intact. Determined to continue his studies, one of the first problems Hawking devoted himself to was the question of what happens when a star is so heavy that it cannot bear its own weight.

The collapse of the star concentrates all of its mass at a single point, where the theory stops making sense. Anticipating applications in science-fiction films, physicists called this point a singularity. At a certain distance from that singular point, the pull of all the concentrated mass is strong enough that not even light can escape. A flashlight switched on at (or inside) that radius cannot be seen by anyone farther out; nothing escapes that sphere, which is called a black hole (for obvious reasons).
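The critical radius described here is the Schwarzschild radius; as a standard textbook formula (not derived in the article), for a non-rotating mass M it is

\[ r_s = \frac{2GM}{c^2}, \]

which comes to about 3 km for one solar mass, and to something of the order of the Earth-Sun distance for a black hole of around 10⁸ solar masses, roughly the scale usually attributed to Gargantua.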

When Hawking was a student, there was a single known solution of Einstein's equations of general relativity (which describe how concentrations of matter and energy distort the geometry of space-time) that represented a black hole, discovered by the German physicist Karl Schwarzschild. A group of Russian physicists argued that this solution was artificial, born of an arrangement of particles collapsing in perfect synchrony so that they all reach the centre together, thereby forming a point of infinite density: the singularity.

Hawking and Roger Penrose, an Oxford mathematician, showed that this was in fact a generic feature of Einstein's equations, and more: that the universe itself would have begun in what came to be called the "cosmological singularity", at which the notion of time ceases to have meaning. As Hawking says in the film, "it would be the beginning of time itself".

There is no consensus in modern theoretical physics about what actually happens to someone who approaches a singularity inside a black hole. The biggest obstacle to our understanding is that, at small distances from the singularity, we need to take quantum effects into account, and, as the well-schooled Jane Hawking remarks in "The Theory of Everything", quantum theory and general relativity are written in completely different languages. Not that one needs to get that close to the singularity to know that the effects would be drastic.

The criticism of "Interstellar" I have heard most often from amateur (and not so amateur) physicists is that, spoiler alert, Cooper would be torn apart on entering Gargantua, a giant black hole. "Torn apart" is perhaps the wrong phrase: "spaghettified" is the technical term.

TIDES

What would kill you as you fell into a black hole is not the absolute strength of gravity. Just like stones thrown from towers by Italian heretics, different parts of your body fall with the same acceleration, even if that acceleration is itself enormous. This conclusion holds as long as the force of gravity is roughly constant, nearly the same at your feet and at your head. Although that condition is satisfied at the Earth's surface, the force of gravity is obviously not constant. It decays with distance, and the effects of this variation, small even at the scale of the Tower of Pisa, can be observed in much larger bodies. The most familiar example for us earthlings is the effect of tides on our planet. The Moon pulls hardest on the Earth's nearest face, and the oceans swell and subside in step with that attraction. Although the Sun's absolute gravitational pull on the Earth is larger, the Moon is much closer to us than the Sun, so the larger gradient of the force is the lunar one, and that is why we feel the tidal effects of the Moon more than those of the Sun.
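The "gradient of the force" the article appeals to can be put in one formula. A standard estimate (not given in the article) for the difference in gravitational acceleration across a body of size d at distance r from a mass M is

\[ \Delta a \;\approx\; \frac{2GM}{r^{3}}\, d , \]

so tidal stretching falls off as 1/r³ rather than 1/r². Plugging in the masses and distances of the Moon and the Sun gives a lunar tidal effect roughly twice the solar one, which is the article's point; near a black hole singularity the same 1/r³ growth is what does the spaghettifying.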

For the same reason, on entering a black hole we would be subject to an immense gravitational force, but, while still far from the central singularity, we would not necessarily feel a tidal force. This absence of dramatic effects at that stage of the fall is aptly called "no drama" in the community, and up to that point Cooper's entry into Gargantua would be just that: no drama. But not afterwards. Approaching the singularity, even before quantum effects need to be included, the gravitational force can differ so much from feet to head that Cooper would be stretched out, hence the "spaghettification".

It was not possible, of course, to include this (macaronic?) explanation in "Interstellar". Even so, one of the most astonishing scenes in the film involves precisely the tides on Miller's planet, which orbits Gargantua. In the film, enormous tides strike the protagonists every hour, most inconveniently. So that the tidal effect would be approximately right, the physicist Kip Thorne calculated the size of the black hole, its rotation and the planet's orbit. The film's staggering images are the fruit of calculations.

Even if it is not realistic to expect the same care in future productions, perhaps the fact that some of these black hole simulations were new even to the scientific community will encourage a few of the producers-cum-amateur-physicists out there (a rather thin demographic) to follow the example.

But back to Cooper's fate. We have already seen that he would survive entering the black hole unscathed. No drama up to there. But what about the famous "spaghettification"? Many commentators, such as the popular Neil deGrasse Tyson, argued that we simply do not know what happens inside a black hole. Past that frontier, the script would acquire diplomatic immunity from the laws of physics, becoming fertile ground for bolder speculation, not to say a no man's land.

Well, with apologies to Tyson, that is not exactly true. We believe that general relativity would work very well right up until quantum effects become important (for a black hole of Gargantua's size). On top of that, in the Schwarzschild solution the approach to the singularity is inevitable. Just as we cannot stop time, we could not keep the same distance from the centre; we would inexorably be drawn closer and closer to the singularity, which would loom ever larger before us, as inevitable as the future. In that case, Cooper would be turned into spaghetti before the trumpets of quantum mechanics could sound his (possible) salvation. Fortunately for the whole human race in the film, that is not the case.

SPINNING TOPS

In 1963, almost half a century after the black hole had been discovered in the trenches of the Great War, the New Zealand mathematical physicist Roy Kerr, in considerably more comfortable circumstances, generalised the Schwarzschild solution, finding a solution of Einstein's theory that corresponds to a rotating black hole, spinning like a top.

Later, Hawking and collaborators showed that any black hole settles down into the Kerr form and, fittingly, Gargantua is one of these, spinning extremely fast. But when tops like these spin they drag space-time itself along with them, and there is an unavoidable kind of centrifugal force, the force we feel in a car when we take a sharp turn, which grows as the centre of the black hole approaches. At a certain distance from the centre, the tug of war between the attractive force and the centrifugal one balances out, and the singularity stops being inevitable.

From that moment on we really do not know what happens, and Cooper is free to do whatever the screenwriters invent. Not that entering a fourth dimension, seeing time as just another direction of space, and all the rest have no grounding whatsoever, but from there on we enter the realm of scientific speculation. At least we did so with a clear conscience.

Carl Sagan, in the excellent "Cosmos", guides us once more: "We will not be afraid to speculate. But we will be careful to distinguish speculation from fact. The cosmos is full beyond measure of elegant truths, of exquisite interrelationships, of the awesome machinery of nature." The universe is stranger (and more fascinating) than fiction. It is high time we explored a fiction that is scientific in more than name.

HENRIQUE GOMES, 34, holds a doctorate in physics from the University of Nottingham (United Kingdom) and is a researcher at the Perimeter Institute for Theoretical Physics (Canada).

Problem: Your brain (Medium)

I will be talking mainly about development for the web.

Ilya Dorman, Feb 15, 2015

Our puny brain can handle a very limited amount of logic at a time. While programmers proclaim logic as their domain, they are only sometimes, and only slightly, better at managing complexity than the rest of us mortals. The more logic our app has, the harder it is to change it or to introduce new people to it.

The most common mistake programmers make is assuming they write code for a machine to read. While technically that is true, this mindset leads to the hell that is other people’s code.

I have worked in several start-up companies, some of them even considered “lean.” In each, it took me between a few weeks and a few months to fully understand their code base, and I have about 6 years of experience with JavaScript. This does not seem reasonable to me at all.

If the code is not easy to read, its structure is already a monument: you can change small things, but major changes (the kind every start-up undergoes on an almost monthly basis) are as fun as a root canal. Once the code reaches a state where, for a proficient programmer, it is harder to read than this article, doom and suffering are upon you.

Why does code become unreadable? Let’s compare code to plain text: the longer a sentence is, the easier it is for our mind to forget its beginning, and by the time we reach the end we have lost the meaning of the whole sentence. Did you have to read the previous sentence twice because it was too long to take in at one go? Exactly! The same goes for code. Worse, actually: the logic of code can be far more complex than any sentence from a book or a blog post, and each programmer has their own logic, which can be total gibberish to another. Not to mention that we also need to remember the logic. Sometimes we come back to it the same day and sometimes after two months. Nobody remembers anything about their code after not looking at it for two months.

To make code readable to other humans we rely on four things:

1. Conventions

Conventions are good, but they are very limited: enforce them too little and the programmer becomes coupled to the code—no one will ever understand what they meant once they are gone. Enforce too much and you will have hour-long debates about every space and colon (true story.) The “habitable zone” is very narrow and easy to miss.

2. Comments

They are probably the most helpful, if done right. Unfortunately many programmers write their comments in the same spirit they write their code—very idiosyncratic. I do not belong to the school claiming good code needs no comments, but even beautifully commented code can still be extremely complicated.

3. “Other people know this programming language as much as I do, so they must understand my writings.”

Well… This is JavaScript:

This is JAVASCRIPT!
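The original post makes this point with a screenshot of real code; as a stand-in, here is a hedged, invented example of perfectly valid JavaScript that a fluent reader still has to decode line by line, next to a version written for humans:

    // Valid JavaScript, hostile to humans: one-letter names, implicit coercion
    // and nested ternaries hide what is really just "pick a label for a count".
    var f = function (n) {
      return n == null ? 'none' : n === 1 ? 'one' : n < 5 ? 'few' : 'many';
    };

    // The same logic, written for the next person who has to read it.
    function labelForCount(count) {
      if (count === null || count === undefined) { return 'none'; }
      if (count === 1) { return 'one'; }
      if (count < 5) { return 'few'; }
      return 'many';
    }

    console.log(f(3), labelForCount(3)); // prints: few few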

4. Tests

Tests are a devil in disguise. ”How do we make sure our code is good and readable? We write more code!” I know many of you might quit this post right here, but bear with me for a few more lines: regardless of their benefit, tests are another layer of logic. They are more code to be read and understood. Tests try to solve this exact problem: your code is too complicated to work out its result in your brain, so you say, “well, this is what should happen in the end.” And when it doesn’t, you go digging for the problem. Your code should be simple enough that you can read a function or a line and understand what the result of running it should be.

Your life as a programmer could be so much easier!

Solution: Radical Minimalism

I will break down this approach into practical points, but the main idea is: use LESS logic.

  • Cut 80% of your product’s features

Yes! Just like that. Simplicity, first of all, comes from the product. Make it easy for people to understand and use. Make it do one thing well, and only then add more (if there is still a need).

  • Use nothing but what you absolutely must

Do not include a single line of code (especially from libraries) unless you are 100% sure you will use it and that it is the simplest, most straightforward solution available. Need a simple chat app, and using Angular.js because its two-way binding is nice? You deserve those hours and days of debugging and debating about services vs. providers.

Side note: The browser’s JavaScript API is event-driven: it is made to respond when stuff (usually user input) happens. This means that events change data. Many newer frameworks (Angular, Meteor) reverse this direction and make data changes trigger events. If your app is simple, you might live happily with the new mysterious layer, but if not, you get a whole extra layer of complexity that you need to understand, and your life will get exponentially more miserable. Unless your app constantly manages big amounts of data, avoid those frameworks.
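A minimal sketch of the “events change data” direction described above, using nothing but the plain browser API (the element ids are invented for the example):

    // Plain event-driven flow: a user event changes the data, and we update
    // the page explicitly, in one visible place.
    var unreadCount = 5;

    var button = document.getElementById('mark-read');      // hypothetical element
    var counter = document.getElementById('unread-count');  // hypothetical element

    button.addEventListener('click', function () {
      unreadCount = Math.max(0, unreadCount - 1); // the event changes the data...
      counter.textContent = String(unreadCount);  // ...and we render that change ourselves.
    });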

  • Use simplest logic possible

Say you need to show different HTML on different occasions. You can use client-side routing with controllers and data passed to each controller, which renders the HTML from a template. Or you can just use static HTML pages with normal browser navigation and update the HTML manually. Use the second.
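As a sketch of the second option, under the assumption of a static page with a single dynamic element (the 'status' id and file names are made up):

    // No router, no controllers, no templates: pages are static HTML reached by
    // ordinary links, and the one dynamic bit is updated by hand.
    function showStatus(isLoggedIn) {
      var status = document.getElementById('status');
      status.innerHTML = isLoggedIn
        ? '<a href="/account.html">My account</a>'
        : '<a href="/login.html">Log in</a>';
    }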

  • Make short JavaScript files

Limit the length of your JS files to a single editor page, and make each file do one thing. Can’t cram all your glorious logic into small modules? Good, that means you should have less of it, so that other humans will understand your code in a reasonable time.
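A hedged illustration of the “one small file, one job” idea; the file name and function are invented:

    // format-price.js: one file, one job, short enough to fit on a single editor page.
    // Turns an amount in cents into a display string, e.g. 1999 -> "$19.99".
    function formatPrice(cents, currencySymbol) {
      var symbol = currencySymbol || '$';
      var dollars = Math.floor(cents / 100);
      var remainder = ('0' + (cents % 100)).slice(-2);
      return symbol + dollars + '.' + remainder;
    }

    module.exports = formatPrice;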

  • Avoid pre-compilers and task-runners like the plague

The more layers there are between what you write and what you see, the more logic your mind needs to remember. You might think grunt or gulp help you simplify stuff, but then you have 30 tasks, and you need to remember what each one does to your code, how to use it, how to update it, and how to teach it to every new coder. Not to mention compiling.

Side note #1: CSS pre-compilers are OK because they have very little logic but they help a lot in terms of readable structure, compared to plain CSS. I barely used HTML pre-compilers so you’ll have to decide for yourself.

Side note #2: Task-runners could save you time, so if you do use them, do it wisely keeping the minimalistic mindset.

  • Use JavaScript everywhere

This one is quite specific, and I am not absolutely sure about it, but having the same language in client and server can simplify the data management between them.

  • Write more human code

Give your non-trivial variables (and functions) descriptive names. Make shorter lines, but only if it does not compromise readability.
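A small before-and-after sketch of what “more human code” can mean in practice (all names invented for illustration):

    // Before: every name makes the reader do the work.
    function calc(d, r) {
      return d.filter(function (x) { return x.a > r; }).length;
    }

    // After: the same logic, readable without explanation.
    function countOrdersAbovePriceThreshold(orders, priceThreshold) {
      return orders.filter(function (order) {
        return order.price > priceThreshold;
      }).length;
    }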

Treat your code like poetry and take it to the edge of the bare minimum.

The Paradox of the Proof (Project Wordsworth)

By Caroline Chen

MAY 9, 2013


On August 31, 2012, Japanese mathematician Shinichi Mochizuki posted four papers on the Internet.

The titles were inscrutable. The volume was daunting: 512 pages in total. The claim was audacious: he said he had proved the ABC Conjecture, a famed, beguilingly simple number theory problem that had stumped mathematicians for decades.

Then Mochizuki walked away. He did not send his work to the Annals of Mathematics. Nor did he leave a message on any of the online forums frequented by mathematicians around the world. He just posted the papers, and waited.

Two days later, Jordan Ellenberg, a math professor at the University of Wisconsin-Madison, received an email alert from Google Scholar, a service which scans the Internet looking for articles on topics he has specified. On September 2, Google Scholar sent him Mochizuki’s papers: You might be interested in this.

“I was like, ‘Yes, Google, I am kind of interested in that!’” Ellenberg recalls. “I posted it on Facebook and on my blog, saying, ‘By the way, it seems like Mochizuki solved the ABC Conjecture.’”

The Internet exploded. Within days, even the mainstream media had picked up on the story. “World’s Most Complex Mathematical Theory Cracked,” announced the Telegraph. “Possible Breakthrough in ABC Conjecture,” reported the New York Times, more demurely.

On MathOverflow, an online math forum, mathematicians around the world began to debate and discuss Mochizuki’s claim. The question which quickly bubbled to the top of the forum, encouraged by the community’s “upvotes,” was simple: “Can someone briefly explain the philosophy behind his work and comment on why it might be expected to shed light on questions like the ABC conjecture?” asked Andy Putman, assistant professor at Rice University. Or, in plainer words: I don’t get it. Does anyone?

The problem, as many mathematicians were discovering when they flocked to Mochizuki’s website, was that the proof was impossible to read. The first paper, entitled “Inter-universal Teichmuller Theory I: Construction of Hodge Theaters,” starts out by stating that the goal is “to establish an arithmetic version of Teichmuller theory for number fields equipped with an elliptic curve…by applying the theory of semi-graphs of anabelioids, Frobenioids, the etale theta function, and log-shells.”

This is not just gibberish to the average layman. It was gibberish to the math community as well.

“Looking at it, you feel a bit like you might be reading a paper from the future, or from outer space,” wrote Ellenberg on his blog.

“It’s very, very weird,” says Columbia University professor Johan de Jong, who works in a related field of mathematics.

Mochizuki had created so many new mathematical tools and brought together so many disparate strands of mathematics that his paper was populated with vocabulary that nobody could understand. It was totally novel, and totally mystifying.

As Tufts professor Moon Duchin put it: “He’s really created his own world.”

It was going to take a while before anyone would be able to understand Mochizuki’s work, let alone judge whether or not his proof was right. In the ensuing months, the papers weighed like a rock in the math community. A handful of people approached it and began examining it. Others tried, then gave up. Some ignored it entirely, preferring to observe from a distance. As for the man himself, the man who had claimed to solve one of mathematics’ biggest problems, there was not a sound.

For centuries, mathematicians have strived towards a single goal: to understand how the universe works, and describe it. To this objective, math itself is only a tool — it is the language that mathematicians have invented to help them describe the known and query the unknown.

This history of mathematical inquiry is marked by milestones that come in the form of theorems and conjectures. Simply put, a theorem is an observation known to be true. The Pythagorean theorem, for example, makes the observation that for all right-angled triangles, the relationship between the lengths of the three sides, a, b and c, is expressed in the equation a² + b² = c². Conjectures are predecessors to a theorem — they are proposals for theorems, observations that mathematicians believe to be true, but are yet to be confirmed. When a conjecture is proved, it becomes a theorem and when that happens, mathematicians rejoice, and add the new theorem to their tally of the understood universe.

“The point is not to prove the theorem,” explains Ellenberg. “The point is to understand how the universe works and what the hell is going on.”

Ellenberg is doing the dishes while talking to me over the phone, and I can hear the sound of a small infant somewhere in the background. Ellenberg is passionate about explaining mathematics to the world. He writes a math column for Slate magazine and is working on a book called How Not To Be Wrong, which is supposed to help laypeople apply math to their lives.

The sounds of the dishes pause as Ellenberg explains what motivates him and his fellow mathematicians. I imagine him gesturing in the air with soapy hands: “There’s a feeling that there’s a vast dark area of ignorance, but all of us are pushing together, taking steps together to pick at the boundaries.”

The ABC Conjecture probes deep into the darkness, reaching at the foundations of math itself. First proposed by mathematicians David Masser and Joseph Oesterle in the 1980s, it makes an observation about a fundamental relationship between addition and multiplication. Yet despite its deep implications, the ABC Conjecture is famous because, on the surface, it seems rather simple.

It starts with an easy equation: a + b = c.

The variables a, b, and c, which give the conjecture its name, have some restrictions. They need to be whole numbers, and a and b cannot share any common factors, that is, they cannot be divisible by the same prime number. So, for example, if a was 64, which equals 2⁶, then b could not be any number that is a multiple of two. In this case, b could be 81, which is 3⁴. Now a and b do not share any factors, and we get the equation 64 + 81 = 145.

It isn’t hard to come up with combinations of a and b that satisfy the conditions. You could come up with huge numbers, such as 3,072 + 390,625 = 393,697 (3,072 = 2¹⁰ × 3 and 390,625 = 5⁸, no overlapping factors there), or very small numbers, such as 3 + 125 = 128 (125 = 5 × 5 × 5).

What the ABC conjecture then says is that the properties of a and b affect the properties of c. To understand the observation, it first helps to rewrite these a + b = c equations into versions made up of their prime factors:

Our first equation, 64 + 81 = 145, is equivalent to 2⁶ + 3⁴ = 5 × 29.

Our second example, 3,072 + 390,625 = 393,697, is equivalent to 2¹⁰ × 3 + 5⁸ = 393,697 (which happens to be prime!).

Our last example, 3 + 125 = 128, is equivalent to 3 + 5³ = 2⁷.

The first two equations are not like the third, because in the first two equations, you have lots of prime factors on the left hand side of the equation, and very few on the right hand side. The third example is the opposite — there are more primes on the right hand side (seven) of the equation than on the left (only four). As it turns out, in all the possible combinations of a, b, and c, situation three is pretty rare. The ABC Conjecture essentially says that when there are lots of prime factors on the left hand of the equation then, usually, there will be not very many on the right side of the equation.

Of course, “lots of,” “not very many,” and “usually” are very vague words, and in a formal version of the ABC Conjecture, all these terms are spelled out in more precise math-speak. But even in this watered-down version, one can begin to appreciate the conjecture’s implications. The equation is based on addition, but the conjecture’s observation is more about multiplication.
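For readers who want the “more precise math-speak”, the standard formal statement (paraphrased here, not quoted from the article) uses the radical rad(abc), the product of the distinct primes dividing abc:

\[
\text{for every } \varepsilon > 0 \text{ there are only finitely many coprime triples } (a, b, c) \text{ with } a + b = c \text{ and } c > \operatorname{rad}(abc)^{1+\varepsilon}.
\]

The article’s third example is one of the rare “high quality” triples: rad(3 × 5³ × 2⁷) = 2 × 3 × 5 = 30, which is far smaller than c = 128; the conjecture says such triples must eventually run out as the exponent is pushed above 1.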

“It really is about something very, very basic, about a tight constraint that relates multiplicative and additive properties of numbers,” says Minhyong Kim, professor at Oxford University. “If there’s something new to discover about that, you might expect it to be very influential.”

This is not intuitive. While mathematicians came up with addition and multiplication in the first place, based on their current knowledge of mathematics, there is no reason for them to presume that the additive properties of numbers would somehow influence or affect their multiplicative properties.

“There’s very little evidence for it,” says Peter Sarnak, professor at Princeton University, who is a self-described skeptic of the ABC conjecture. “I’ll only believe it when it’s proved.”

But if it were true? Mathematicians say that it would reveal a deep relationship between addition and multiplication that they never knew of before.

Even Sarnak, the skeptic, acknowledges this.

“If it’s true, then it will be the most powerful thing we have,” he says.

It would be so powerful, in fact, that it would automatically unlock many legendary math puzzles. One of these would be Fermat’s last theorem, an infamous math problem that was proposed in 1637, and solved only recently by Andrew Wiles in 1993. Wiles’ proof earned him more than 100,000 Deutsche marks in prize money (equivalent to about $50,000 in 1997), a reward that was offered almost a century before, in 1908. Wiles did not solve Fermat’s Last Theorem via the ABC conjecture — he took a different route — but if the ABC conjecture were to be true, then the proof for Fermat’s Last Theorem would be an easy consequence.

Because of its simplicity, the ABC Conjecture is well-known by all mathematicians. CUNY professor Lucien Szpiro says that “every professional has tried at least one night” to theorize about a proof. Yet few people have seriously attempted to crack it. Szpiro, whose eponymous conjecture is a precursor of the ABC Conjecture, presented a proof in 2007, but it was soon found to be problematic. Since then, nobody has dared to touch it, not until Mochizuki.

When Mochizuki posted his papers, the math community had much reason to be enthusiastic. They were excited not just because someone had claimed to prove an important conjecture, but because of who that someone was.

Mochizuki was known to be brilliant. Born in Tokyo, he moved to New York with his parents, Kiichi and Anne Mochizuki, when he was 5 years old. He left home for high school, attending Phillips Exeter Academy, a selective prep school in New Hampshire. There, he whipped through his academics with lightning speed, graduating after two years, at age 16, with advanced placements in mathematics, physics, American and European history, and Latin.

Then Mochizuki enrolled at Princeton University where, again, he finished ahead of his peers, earning his bachelor’s degree in mathematics in three years and moving quickly on to his Ph.D., which he received at age 23. After lecturing at Harvard University for two years, he returned to Japan, joining the Research Institute for Mathematical Sciences at Kyoto University. In 2002, he became a full professor at the unusually young age of 33. His early papers were widely acknowledged to be very good work.

Academic prowess is not the only characteristic that set Mochizuki apart from his peers. His friend, Oxford professor Minhyong Kim, says that Mochizuki’s most outstanding characteristic is his intense focus on work.

“Even among many mathematicians I’ve known, he seems to have an extremely high tolerance for just sitting and doing mathematics for long, long hours,” says Kim.

Mochizuki and Kim met in the early 1990s, when Mochizuki was still an undergraduate student at Princeton. Kim, on exchange from Yale University, recalls Mochizuki making his way through the works of the French mathematician Alexander Grothendieck, whose books on algebraic and arithmetic geometry are a must-read for any mathematician in the field.

“Most of us gradually come to understand [Grothendieck’s works] over many years, after dipping into it here and there,” said Kim. “It adds up to thousands and thousands of pages.”

But not Mochizuki.

“Mochizuki…just read them from beginning to end sitting at his desk,” recalls Kim. “He started this process when he was still an undergraduate, and within a few years, he was just completely done.”

A few years after returning to Japan, Mochizuki turned his focus to the ABC Conjecture. Over the years, word got around that he believed he had cracked the puzzle, and Mochizuki himself said that he expected results by 2012. So when the papers appeared, the math community was waiting, and eager. But then the enthusiasm stalled.

“His other papers – they’re readable, I can understand them and they’re fantastic,” says de Jong, who works in a similar field. Pacing in his office at Columbia University, de Jong shook his head as he recalled his first impression of the new papers. They were different. They were unreadable. After working in isolation for more than a decade, Mochizuki had built up a structure of mathematical language that only he could understand. To even begin to parse the four papers posted in August 2012, one would have to read through hundreds, maybe even thousands, of pages of previous work, none of which had been vetted or peer-reviewed. It would take at least a year to read and understand everything. De Jong, who was about to go on sabbatical, briefly considered spending his year on Mochizuki’s papers, but when he saw the height of the mountain, he quailed.

“I decided, I can’t possibly work on this. It would drive me nuts,” he said.

Soon, frustration turned into anger. Few professors were willing to directly critique a fellow mathematician, but almost every person I interviewed was quick to point out that Mochizuki was not following community standards. Usually, they said, mathematicians discuss their findings with their colleagues. Normally, they publish pre-prints to widely respected online forums. Then they submit their papers to the Annals of Mathematics, where papers are refereed by eminent mathematicians before publication. Mochizuki was bucking the trend. He was, according to his peers, “unorthodox.”

But what roused their ire most was Mochizuki’s refusal to lecture. Usually, after publication, a mathematician lectures on his papers, travelling to various universities to explain his work and answer questions from his colleagues. Mochizuki has turned down multiple invitations.

“A very prominent research university has asked him, ‘Come explain your result,’ and he said, ‘I couldn’t possibly do that in one talk,’” says Cathy O’Neil, de Jong’s wife, a former math professor better known as the blogger “Mathbabe.”

“And so they said, ‘Well then, stay for a week,’ and he’s like, ‘I couldn’t do it in a week.’

“So they said, ‘Stay for a month. Stay as long as you want,’ and he still said no.

“The guy does not want to do it.”

Kim sympathizes with his frustrated colleagues, but suggests a different reason for the rancor. “It really is painful to read other people’s work,” he says. “That’s all it is… All of us are just too lazy to read them.”

Kim is also quick to defend his friend. He says Mochizuki’s reticence is due to being a “slightly shy character” as well as his assiduous work ethic. “He’s a very hard working guy and he just doesn’t want to spend time on airplanes and hotels and so on.”

O’Neil, however, holds Mochizuki accountable, saying that his refusal to cooperate places an unfair burden on his colleagues.

“You don’t get to say you’ve proved something if you haven’t explained it,” she says. “A proof is a social construct. If the community doesn’t understand it, you haven’t done your job.”

Today, the math community faces a conundrum: the proof of a very important conjecture hangs in the air, yet nobody will touch it. For a brief moment in October, heads turned when Yale graduate student Vesselin Dimitrov pointed out a potential contradiction in the proof, but Mochizuki quickly responded, saying he had accounted for the problem. Dimitrov retreated, and the flicker of activity subsided.

As the months pass, the silence has also begun to call into question a basic premise of mathematical academia. Duchin explains the mainstream view this way: “Proofs are right or wrong. The community passes verdict.”

This foundational stone is one that mathematicians are proud of. The community works together; they are not cut-throat or competitive. Colleagues check each other’s work, spending hours upon hours verifying that a peer got it right. This behavior is not just altruistic, but also necessary: unlike in medical science, where you know you’re right if the patient is cured, or in engineering, where the rocket either launches or it doesn’t, theoretical math, better known as “pure” math, has no physical, visible standard. It is entirely based on logic. To know you’re right means you need someone else, preferably many other people, to walk in your footsteps and confirm that every step was made on solid ground. A proof in a vacuum is no proof at all.

Even an incorrect proof is better than no proof, because if the ideas are novel, they may still be useful for other problems, or inspire another mathematician to figure out the right answer. So the most pressing question isn’t whether or not Mochizuki is right — the more important question is, will the math community fulfill their promise, step up to the plate and read the papers?

The prospects seem thin. Szpiro is among the few who have made attempts to understand short segments of the paper. He holds a weekly workshop with his post-doctoral students at CUNY to discuss the paper, but he says they are limited to “local” analysis and do not understand the big picture yet. The only other known candidate is Go Yamashita, a colleague of Mochizuki at Kyoto University. According to Kim, Mochizuki is holding a private seminar with Yamashita, and Kim hopes that Yamashita will then go on to share and explain the work. If Yamashita does not pull through, it is unclear who else might be up to the task.

For now, all the math community can do is wait. While they wait, they tell stories, and recall great moments in math — the year Wiles cracked Fermat’s Last Theorem; how Perelman proved the Poincaré Conjecture. Columbia professor Dorian Goldfeld tells the story of Kurt Heegner, a high school teacher in Berlin, who solved a classic problem proposed by Gauss. “Nobody believed it. All the famous mathematicians pooh-poohed it and said it was wrong.” Heegner’s paper gathered dust for more than a decade until finally, four years after his death, mathematicians realized that Heegner had been right all along. Kim recalls Yoichi Miyaoka’s proposed proof of Fermat’s Last Theorem in 1988, which garnered a lot of media attention before serious flaws were discovered. “He became very embarrassed,” says Kim.

As they tell these stories, Mochizuki and his proofs hang in the air. All these stories are possible outcomes. The only question is – which?

Kim is one of the few people who remains optimistic about the future of this proof. He is planning a conference at Oxford University this November, and hopes to invite Yamashita to come and share what he has learned from Mochizuki. Perhaps more will be made clear, then.

As for Mochizuki, who has refused all media requests, who seems so reluctant to promote even his own work, one has to wonder if he is even aware of the storm he has created.

On his website, one of the only photos of Mochizuki available on the Internet shows a middle-aged man with old-fashioned ’90s-style glasses, staring up and out, somewhere over our heads. A self-given title runs over his head. It is not “mathematician” but, rather, “Inter-universal Geometer.”

What does it mean? His website offers no clues. There are his papers, thousands of pages long, reams upon reams of dense mathematics. His resume is spare and formal. He reports his marital status as “Single (never married).” And then there is a page called Thoughts of Shinichi Mochizuki, which has only 17 entries. “I would like to report on my recent progress,” he writes, February 2009. “Let me report on my progress,” October 2009. “Let me report on my progress,” April 2010, June 2011, January 2012. Then follows math-speak. It is hard to tell if he is excited, daunted, frustrated, or enthralled.

Mochizuki has reported all this progress for years, but where is he going? This “inter-universal geometer,” this possible genius, may have found the key that would redefine number theory as we know it. He has, perhaps, charted a new path into the dark unknown of mathematics. But for now, his footsteps are untraceable. Wherever he is going, he seems to be travelling alone.

Quantum computers could revolutionize information theory (Fapesp)

January 30, 2015

By Diego Freire

Agência FAPESP – The prospect of quantum computers, with processing power far beyond that of today’s machines, has been driving progress in one of the most versatile areas of science, with applications in the most diverse fields of knowledge: information theory. To discuss this and other prospects, the Institute of Mathematics, Statistics and Scientific Computing (Imecc) of the University of Campinas (Unicamp) held the SPCoding School from January 19 to 30.

The event was held under the São Paulo School of Advanced Science (ESPCA) program of FAPESP, which provides funding for short courses on advanced topics in science and technology in the State of São Paulo.

The information processed by the computers in widespread use today is based on the bit, the smallest unit of data that can be stored or transmitted. Quantum computers, in contrast, work with qubits, which follow the rules of quantum mechanics, the branch of physics that deals with phenomena at or below the atomic scale. Because of this, such machines can carry out a far larger number of calculations simultaneously.
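As a brief aside for concreteness (an addition here, not part of the Agência FAPESP report), the standard notation makes the contrast explicit: a classical bit is either 0 or 1, while a single qubit is a superposition of both, and a register of n qubits carries 2^n complex amplitudes at once:

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle,\qquad |\alpha|^2 + |\beta|^2 = 1,\qquad |\Psi\rangle = \sum_{x\in\{0,1\}^n} c_x\,|x\rangle.$$

It is this exponential growth in the number of amplitudes c_x that underlies the claim about carrying out a far larger number of calculations simultaneously.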

“This quantum understanding of information brings a whole new level of complexity to its encoding. But at the same time that complex analyses, which would take decades, centuries or even thousands of years on ordinary computers, could be carried out in minutes by quantum computers, this technology would also threaten the confidentiality of information that has not been properly protected against this kind of development,” Sueli Irene Rodrigues Costa, a professor at Imecc, told Agência FAPESP.

The greatest threat quantum computers pose to current cryptography lies in their capacity to break the codes used to protect important information, such as credit card data. To avoid this kind of risk, cryptographic systems must also be developed with security in mind, taking into account the power of quantum computation.

“Information and coding theory need to stay one step ahead of the commercial use of quantum computing,” said Rodrigues Costa, who coordinates the Thematic Project “Security and reliability of information: theory and practice,” supported by FAPESP.

“This is post-quantum cryptography. As was shown in the late 1990s, current cryptographic procedures will not survive quantum computers, because they are not secure enough. And this urgency to develop solutions ready for the capacity of quantum computing is also pushing information theory to advance further and further in several directions,” she said.

Some of these solutions were addressed throughout the SPCoding School program, many of them aimed at more efficient systems for classical computing, such as the use of error-correcting codes and of lattices in cryptography. For Rodrigues Costa, the rise of information theory in parallel with the development of quantum computing will bring about revolutions in several fields of knowledge.

“Just as information theory has many applications today, quantum coding would also take several areas of science to new levels, by making possible even more accurate computational simulations of the physical world, handling an exponentially larger number of variables than classical computers,” said Rodrigues Costa.

Information theory involves the quantification of information and spans areas such as mathematics, electrical engineering and computer science. Its pioneer was the American Claude Shannon (1916-2001), who was the first to treat communication as a mathematical problem.
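For concreteness (again an aside, not part of the original report), Shannon’s central quantity is the entropy of a source, which measures information in bits:

$$H(X) = -\sum_{x} p(x)\,\log_2 p(x).$$

A fair coin toss carries exactly one bit of information; a heavily biased coin carries less, which is why its outcomes can be compressed.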

Revolutions under way

While it prepares for quantum computers, information theory is already driving major changes in the encoding and transmission of information. Amin Shokrollahi, of the École Polytechnique Fédérale de Lausanne, in Switzerland, presented at the SPCoding School new coding techniques for tackling problems such as noise in information and the high energy consumption of data processing, including chip-to-chip communication inside devices.

Shokrollahi is known in the field for having invented the Raptor codes and co-invented the Tornado codes, which are used in mobile data transmission standards, with implementations in wireless systems, satellites and IPTV, the method of delivering television signals that uses the Internet Protocol (IP) to transmit content.

“The growth in the volume of digital data and the need for ever faster communication increase the susceptibility to various kinds of noise as well as energy consumption. New solutions have to be sought in this scenario,” he said.

Shokrollahi also presented innovations developed at the Swiss company Kandou Bus, where he is director of research. “We use special algorithms to encode the signals, which are all transferred simultaneously until a decoder recovers the original signals. All of this is done while preventing neighboring wires from interfering with one another, which produces a significantly lower level of noise. The systems also reduce chip size, increase transmission speed and cut energy consumption,” he explained.

According to Rodrigues Costa, similar solutions are also being developed for many technologies in widespread use in society.

“Mobile phones, for example, have gained enormously in processing power and versatility, but one of the most frequent complaints among users is that the battery does not last. One of the strategies is to find more efficient ways of encoding in order to save energy,” she said.

Biological applications

It is not only problems of a technological nature that can be addressed or solved by means of information theory. Vinay Vaishampayan, a professor at the City University of New York, in the United States, chaired the SPCoding School panel “Information Theory, Coding Theory and the Real World,” which dealt with a range of applications of codes in society, among them biological ones.

“There is not just one information theory, and its approaches, from computational to probabilistic, can be applied to practically every field of knowledge. In the panel we discussed the many research possibilities open to anyone interested in studying these interfaces between codes and the real world,” he told Agência FAPESP.

Vaishampayan singled out biology as an area of great potential in this scenario. “Neuroscience raises important questions that can be answered with the help of information theory. We still do not know in depth how neurons communicate with one another or how the brain works as a whole, and neural networks are a very rich field of study from the mathematical point of view as well, as is molecular biology,” he said.

That is because, according to Max Costa, a professor at Unicamp’s School of Electrical and Computer Engineering and one of the speakers, living beings are also made of information.

“We are encoded by means of the DNA in our cells. Uncovering the secret of that code, the mechanism behind the mappings that are made and recorded in that context, is a problem of enormous interest for a deeper understanding of the process of life,” he said.

For Marcelo Firer, a professor at Imecc and coordinator of the SPCoding School, the event opened up new research possibilities for students and researchers from many fields.

“Participants shared opportunities for engagement around these and many other applications of information and coding theory. The program ranged from introductory courses, aimed at students with a solid mathematical background but not necessarily familiar with coding, to more advanced courses, as well as lectures and discussion panels,” said Firer, a member of the coordination of FAPESP’s Computer Science and Engineering area.

About 120 students from 70 universities and 25 countries took part in the event. The foreign speakers included researchers from the California Institute of Technology (Caltech), Maryland University and Princeton University, in the United States; the Chinese University of Hong Kong, in China; Nanyang Technological University, in Singapore; the Technische Universiteit Eindhoven, in the Netherlands; the Universidade do Porto, in Portugal; and Tel Aviv University, in Israel.

More information at www.ime.unicamp.br/spcodingschool.

Evolutionary mathematics (Folha de S.Paulo)

Hélio Schwartsman

January 26, 2015

SÃO PAULO – For anyone who enjoys mathematics, a good read is “Mathematics and the Real World,” by Zvi Artstein, a professor at the Weizmann Institute in Israel.

The author begins by dividing mathematics in two: a more natural kind, which evolution prepared us (and other animals as well) to grasp, and a wholly abstract kind, whose comprehension requires us to rein in all our intuitions. In the first group are arithmetic and part of geometry. In the second stand out formal logic, statistics, set theory and the bulk of the material that mathematicians work on today.

The Egyptians, Babylonians, Indians and other peoples of antiquity developed natural mathematics reasonably well. They did so for practical reasons, such as facilitating trade and astrological calculation. It was the Greeks, however, who, trying to escape what they regarded as the optical illusions of the sensory world, decided to rely on mathematics to discover the “real.” This is where mathematics gained the autonomy to flourish beyond intuition.

Artstein then traces a fascinating history of science, highlighting which transformations were needed in mathematics before theories and models such as heliocentrism, universal gravitation, relativity, quantum mechanics and string theory could take hold. He does not shy away from the philosophical implications, even if he does not always develop them at length.

The author also discusses more classically mathematical subjects, such as uncertainty, chaos, infinity and Gödel’s incompleteness theorems. In a concession to the practical world, he deals somewhat hurriedly with a few questions from sociology and computing. He closes by advocating reforms in the teaching of mathematics.

The great thing about the book is that Artstein manages to turn a potentially arid subject into a text that reads with the fluidity of a novel. That is not something just anyone can do.

Monitoring and data analysis – The crisis in São Paulo’s water sources (Probabit)

Situation as of January 25, 2015

4.2 millimeters of rain on January 24, 2015 over São Paulo’s reservoirs (weighted average).

305 billion liters (13.60%) of water in storage. In 24 hours, the volume rose by 4.4 billion liters (0.19%).

134 days until all stored water runs out, with rainfall of 996 mm/year and the system’s current efficiency maintained.

66% is the reduction in consumption needed to balance the system under current conditions, assuming 33% losses in distribution.


Understanding the crisis

How to read this chart?

The points on the chart show 4,040 one-year intervals of accumulated rainfall and the corresponding change in total water stock (from January 1 of 2003/2004 up to today). The pattern shows that more rain pushes the stock up and less rain pushes it down, as one would expect.

This and the other charts on this page always refer to São Paulo’s total water storage capacity (2.24 trillion liters), that is, the sum of the reservoirs of the Cantareira, Alto Tietê, Guarapiranga, Cotia, Rio Grande and Rio Claro systems. Want to explore the data?

The band of accumulated rainfall from 1,400 mm to 1,600 mm per year concentrates most of the points observed since 2003. That is the usual rainfall pattern the system was designed for. In this band the system operates without major deviations from equilibrium: at most 15% up or down over a year. By taking the variation over one year as its reference, this way of looking at the data removes the seasonal rainfall cycle and highlights climatic variations of larger amplitude. See the year-by-year patterns.

A second layer of information in the same chart is the risk zones. The red zone is bounded by the current water stock in %. All points inside that area (with their frequency indicated on the right) therefore represent situations that, if repeated, would lead the system to collapse in less than a year. The yellow zone shows the incidence of cases that, if repeated, would shrink the stock. The system will only truly recover if new points appear above the yellow band.

To put the current moment in context and give a sense of the trend, points connected in blue highlight the reading added today (accumulated rainfall and the change between today and the same day last year) and the readings from 30, 60 and 90 days ago (in progressively lighter shades).


Discussion based on a simple model

Fitting a linear model to the observed cases shows a reasonable correlation between accumulated rainfall and the change in the water stock, as expected.

At the same time, the wide spread in the system’s behavior is clear, especially in the rainfall band between 1,400 mm and 1,500 mm. Above 1,600 mm there are two well-separated paths; the lower one corresponds to the period between 2009 and 2010, when the reservoirs were full and the excess rain could not be stored.

Besides a more or less efficient management of the available water, combined variations in consumption, in losses and in the effectiveness of water capture may all contribute to the observed fluctuations. However, there are no data that would allow us to examine the effect of each of these variables separately.
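A minimal sketch of the kind of fit described above, assuming a consolidated daily series in a CSV with hypothetical columns date, rain_mm (weighted daily rainfall) and stock_pct (total stock in %); the file name and column names are illustrative, not the site’s actual ones:

```python
# Sketch: relate rainfall accumulated over 1 year to the change in total stock
# over the same year, then fit a straight line through the resulting points.
import numpy as np
import pandas as pd

df = pd.read_csv("consolidated_series.csv", parse_dates=["date"], index_col="date")

window = 365
rain_1y = df["rain_mm"].rolling(window).sum()    # rainfall accumulated over 1 year (mm)
stock_change_1y = df["stock_pct"].diff(window)   # change in stock over that year (percentage points)

pairs = pd.concat([rain_1y, stock_change_1y], axis=1).dropna()
slope, intercept = np.polyfit(pairs["rain_mm"], pairs["stock_pct"], 1)

# Rainfall level at which the fitted stock change is zero: the system's equilibrium point.
equilibrium_mm = -intercept / slope
print(f"slope = {slope:.4f} p.p./mm, equilibrium ≈ {equilibrium_mm:.0f} mm/year")
```

The rainfall level at which the fitted change crosses zero is the equilibrium point discussed in the simulations below.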

Simulation 1: Effect of increasing the water stock

In this simulation, the additional reserve of the Billings reservoir was hypothetically added to the supply system, with a volume of 998 billion liters (already discounting the “potable” arm of the Rio Grande reservoir).

Increasing the available stock does not change the equilibrium point, but it changes the slope of the line that relates rainfall to the change in stock. The difference in slope between the blue line (simulated) and the red one (actual) shows the effect of the larger stock.

If the Billings reservoir were not a giant sewage dump today, we might be out of the critical situation. It is worth stressing, however, that simply increasing the stock cannot stave off scarcity indefinitely if rainfall stays below the equilibrium point.

Simulation 2: Effect of improving efficiency

The only way to keep the stock stable when rain becomes scarcer is to change the system’s ‘efficiency curve’. In other words, it is necessary to consume less and adapt to less water entering the system.

The blue line in the chart alongside indicates the axis around which the points would need to fluctuate for the system to balance with an annual supply of 1,200 mm of rain.

Efficiency can be improved by reducing consumption, reducing losses and improving water-capture technology (for example, by restoring the riparian forests and springs around the water sources).

If the situation seen from 2013 to 2015 persists, with rainfall around 1,000 mm, it will be necessary to reach an efficiency curve far beyond anything already achieved in practice, above even the best cases ever observed.

With the “design” equilibrium at around 1,500 mm, the arithmetic goes roughly like this: Sabesp loses 500 mm (33% of the water distributed) and the population consumes 1,000 mm. To reach equilibrium quickly at 1,000 mm, consumption would have to drop to 500 mm, since the losses cannot be eliminated quickly and occur before consumption.

If one third of the distributed water were not systematically lost, there would be no crisis. The 500 mm of rain wasted every year by the precariousness of the distribution system is not missed when 1,500 mm falls, but at 1,000 mm every liter thrown away on one side is a liter that has to be saved on the other.

Simulation 3: Current efficiency and the savings required

To estimate the current efficiency, the last 120 observations of the system’s behavior are used.

The current efficiency curve makes it possible to estimate the system’s current equilibrium point (the highlighted red point).

The blue point indicates the latest observation of accumulated annual rainfall. The gap between the two measures the size of the imbalance.

Just to stop the system from losing water, the withdrawal flow needs to be cut by 49%. Since that flow includes all losses, if everything has to come from reduced consumption, the savings must be 66% if losses are 33%, or 56% if losses are 17%.

It seems incredible that the system’s efficiency should be so low in the middle of such a severe crisis. Is the attempt to curb consumption actually increasing consumption? Do smaller, shallower volumes evaporate more? Have people still not grasped the scale of the disaster?


Prognosis

Assuming that no new water stocks will be added in the short term, the prognosis of whether and when the water will run out depends on the amount of rain and on the system’s efficiency.

The chart shows how many days of water remain as a function of accumulated rainfall, considering two efficiency curves: the average one and the current one (estimated from the last 120 days).

The highlighted point takes the most recent observation of accumulated rainfall for the year and shows how many days of water remain if the current rainfall and efficiency conditions persist.
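One simple way to read such a projection (an assumed sketch; the site does not publish its exact model here) is to extrapolate the current net daily change in stored volume:

```python
# Assumed sketch: days until the stock is exhausted if the current net daily
# change in stored volume (inflow under the given rainfall minus withdrawals)
# were to persist unchanged.
def days_remaining(stock_hm3: float, net_change_hm3_per_day: float) -> float:
    if net_change_hm3_per_day >= 0:
        return float("inf")  # stock stable or rising: no projected exhaustion
    return stock_hm3 / -net_change_hm3_per_day

# A net loss of about 2.3 hm³ per day on a stock of 305 hm³ (305 billion liters)
# gives roughly the 134-day horizon quoted at the top of the page.
print(days_remaining(305, -2.3))  # ≈ 133 days
```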

The prognosis is a reference that shifts with each new observation and has no defined probability. It is a projection meant to make clearer the conditions needed to escape collapse.

Bearing in mind, however, that the historical average rainfall in São Paulo is 1,441 mm per year, a curve that crosses that threshold means a system with more than a 50% chance of collapsing in less than a year. Are we capable of avoiding the disaster?


The data

The starting point is the data released daily by Sabesp. The original data series, kept up to date, is available here.

There are, however, two important limitations in these data that can distort one’s reading of reality: 1) Sabesp uses only percentages to refer to reservoirs with very different total volumes; 2) the addition of new volumes does not change the base over which those percentages are calculated.

For this reason, it was necessary to correct the percentages of the original data series relative to the current total volume, since volumes that used to be inaccessible have become accessible and, let’s face it, were always there in the reservoirs. The corrected series can be obtained here. It contains an additional column with the actual volumes (in billions of liters: hm³).

In addition, we decided to treat the data in consolidated form, as if all the water were in a single large reservoir. The data series used to generate the charts on this page contains only the weighted sum of the daily stock (%) and rainfall (mm), and it is also available.
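A minimal sketch of that consolidation, with capacity figures that are placeholders rather than the official values:

```python
# Sketch: collapse the six supply systems into a single stock percentage and a
# single rainfall figure, weighting each system by its storage capacity.
capacity_hm3 = {  # placeholder capacities, for illustration only
    "Cantareira": 1270, "Alto Tietê": 570, "Guarapiranga": 190,
    "Cotia": 10, "Rio Grande": 110, "Rio Claro": 14,
}
total_capacity = sum(capacity_hm3.values())

def consolidate(stock_pct: dict, rain_mm: dict) -> tuple:
    """Capacity-weighted stock (%) and rainfall (mm) for a single day."""
    stock = sum(stock_pct[k] * capacity_hm3[k] for k in capacity_hm3) / total_capacity
    rain = sum(rain_mm[k] * capacity_hm3[k] for k in capacity_hm3) / total_capacity
    return stock, rain
```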

The corrections applied remove the spikes caused by the addition of the dead-storage volumes and make the pattern of decline in the stock during 2014 much easier to see.


Year-by-year patterns


Mean and quartiles of the stock over the year


About this study

Worried about water scarcity, I began studying the problem at the end of 2014. I looked for a concise and consistent way of presenting the data, highlighting the three variables that really matter: rainfall, total stock and the system’s efficiency. The site went live on January 16, 2015. Every day, the models and charts are rebuilt with the new information.

I hope this page helps convey the real scale of São Paulo’s water crisis and encourages more action to confront it.

Mauro Zackiewicz

maurozacgmail.com

scientia probabit – essential data laboratory

The Cathedral of Computation (The Atlantic)

We’re not living in an algorithmic culture so much as a computational theocracy.

Algorithms are everywhere, supposedly. We are living in an “algorithmic culture,” to use the author and communication scholar Ted Striphas’s name for it. Google’s search algorithms determine how we access information. Facebook’s News Feed algorithms determine how we socialize. Netflix’s and Amazon’s collaborative filtering algorithms choose products and media for us. You hear it everywhere. “Google announced a change to its algorithm,” a journalist reports. “We live in a world run by algorithms,” a TED talk exhorts. “Algorithms rule the world,” a news report threatens. Another upgrades rule to dominion: “The 10 Algorithms that Dominate Our World.”

Here’s an exercise: The next time you hear someone talking about algorithms, replace the term with “God” and ask yourself if the meaning changes. Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers people have allowed to replace gods in their minds, even as they simultaneously claim that science has made us impervious to religion.

It’s part of a larger trend. The scientific revolution was meant to challenge tradition and faith, particularly a faith in religious superstition. But today, Enlightenment ideas like reason and science are beginning to flip into their opposites. Science and technology have become so pervasive and distorted, they have turned into a new type of theology.

The worship of the algorithm is hardly the only example of the theological reversal of the Enlightenment—for another sign, just look at the surfeit of nonfiction books promising insights into “The Science of…” anything, from laughter to marijuana. But algorithms hold a special station in the new technological temple because computers have become our favorite idols.

In fact, our purported efforts to enlighten ourselves about algorithms’ role in our culture sometimes offer an unexpected view into our zealous devotion to them. The media scholar Lev Manovich had this to say about “The Algorithms of Our Lives”:

Software has become a universal language, the interface to our imagination and the world. What electricity and the combustion engine were to the early 20th century, software is to the early 21st century. I think of it as a layer that permeates contemporary societies.

This is a common account of algorithmic culture, that software is a fundamental, primary structure of contemporary society. And like any well-delivered sermon, it seems convincing at first. Until we think a little harder about the historical references Manovich invokes, such as electricity and the engine, and how selectively those specimens characterize a prior era. Yes, they were important, but is it fair to call them paramount and exceptional?

It turns out that we have a long history of explaining the present via the output of industry. These rationalizations are always grounded in familiarity, and thus they feel convincing. But mostly they are metaphors. Here’s Nicholas Carr’s take on metaphorizing progress in terms of contemporary technology, from the 2008 Atlantic cover story that he expanded into his bestselling book The Shallows:

The process of adapting to new intellectual technologies is reflected in the changing metaphors we use to explain ourselves to ourselves. When the mechanical clock arrived, people began thinking of their brains as operating “like clockwork.” Today, in the age of software, we have come to think of them as operating “like computers.”

Carr’s point is that there’s a gap between the world and the metaphors people use to describe that world. We can see how erroneous or incomplete or just plain metaphorical these metaphors are when we look at them in retrospect.

Take the machine. In his book Images of Organization, Gareth Morgan describes the way businesses are seen in terms of different metaphors, among them the organization as machine, an idea that forms the basis for Taylorism.

Gareth Morgan’s metaphors of organization (Venkatesh Rao/Ribbonfarm)

We can find similar examples in computing. For Larry Lessig, the accidental homophony between “code” as the text of a computer program and “code” as the text of statutory law becomes the fulcrum on which his argument that code is an instrument of social control balances.

Each generation, we reset a belief that we’ve reached the end of this chain of metaphors, even though history always proves us wrong precisely because there’s always another technology or trend offering a fresh metaphor. Indeed, an exceptionalism that favors the present is one of the ways that science has become theology.

In fact, Carr fails to heed his own lesson about the temporariness of these metaphors. Just after having warned us that we tend to render current trends into contingent metaphorical explanations, he offers a similar sort of definitive conclusion:

Today, in the age of software, we have come to think of them as operating “like computers.” But the changes, neuroscience tells us, go much deeper than metaphor. Thanks to our brain’s plasticity, the adaptation occurs also at a biological level.

As with the machinic and computational metaphors that he critiques, Carr settles on another seemingly transparent, truth-yielding one. The real firmament is neurological, and computers are futzing with our minds, a fact provable by brain science. And actually, software and neuroscience enjoy a metaphorical collaboration thanks to artificial intelligence’s idea that computing describes or mimics the brain. Computation-as-thought reaches the rank of religious fervor when we choose to believe, as some do, that we can simulate cognition through computation and achieve the singularity.

* * *

The metaphor of mechanical automation has always been misleading anyway, with or without the computation. Take manufacturing. The goods people buy from Walmart appear safely ensconced in their blister packs, as if magically stamped out by unfeeling, silent machines (robots—those original automata—themselves run by the tinier, immaterial robots we call algorithms).

But the automation metaphor breaks down once you bother to look at how even the simplest products are really produced. The photographer Michael Wolf’s images of Chinese factory workers and the toys they fabricate show that finishing consumer goods to completion requires intricate, repetitive human effort.

Michael Wolf Photography

Eyelashes must be glued onto dolls’ eyelids. Mickey Mouse heads must be shellacked. Rubber ducky eyes must be painted white. The same sort of manual work is required to create more complex goods too. Like your iPhone—you know, the one that’s designed in California but “assembled in China.” Even though injection-molding machines and other automated devices help produce all the crap we buy, the metaphor of the factory-as-automated machine obscures the fact that manufacturing isn’t as machinic or as automated as we think it is.

The algorithmic metaphor is just a special version of the machine metaphor, one specifying a particular kind of machine (the computer) and a particular way of operating it (via a step-by-step procedure for calculation). And when left unseen, we are able to invent a transcendental ideal for the algorithm. The canonical algorithm is not just a model sequence but a concise and efficient one. In its ideological, mythic incarnation, the ideal algorithm is thought to be some flawless little trifle of lithe computer code, processing data into tapestry like a robotic silkworm. A perfect flower, elegant and pristine, simple and singular. A thing you can hold in your palm and caress. A beautiful thing. A divine one.

But just as the machine metaphor gives us a distorted view of automated manufacture as prime mover, so the algorithmic metaphor gives us a distorted, theological view of computational action.

“The Google search algorithm” names something with an initial coherence that quickly scurries away once you really look for it. Googling isn’t a matter of invoking a programmatic subroutine—not on its own, anyway. Google is a monstrosity. It’s a confluence of physical, virtual, computational, and non-computational stuffs—electricity, data centers, servers, air conditioners, security guards, financial markets—just like the rubber ducky is a confluence of vinyl plastic, injection molding, the hands and labor of Chinese workers, the diesel fuel of ships and trains and trucks, the steel of shipping containers.

Once you start looking at them closely, every algorithm betrays the myth of unitary simplicity and computational purity. You may remember the Netflix Prize, a million dollar competition to build a better collaborative filtering algorithm for film recommendations. In 2009, the company closed the book on the prize, adding a faux-machined “completed” stamp to its website.

But as it turns out, that method didn’t really improve Netflix’s performance very much. The company ended up downplaying the ratings and instead using something different to manage viewer preferences: very specific genres like “Emotional Hindi-Language Movies for Hopeless Romantics.” Netflix calls them “altgenres.”

An example of a Netflix altgenre in action (tumblr/Genres of Netflix)

While researching an in-depth analysis of altgenres published a year ago at The Atlantic, Alexis Madrigal scraped the Netflix site, downloading all 76,000+ micro-genres using not an algorithm but a hackneyed, long-running screen-scraping apparatus. After acquiring the data, Madrigal and I organized and analyzed it (by hand), and I built a generator that allowed our readers to fashion their own altgenres based on different grammars (like “Deep Sea Forbidden Love Mockumentaries” or “Coming-of-Age Violent Westerns Set in Europe About Cats”).

Netflix VP Todd Yellin explained to Madrigal why the process of generating altgenres is no less manual than our own process of reverse engineering them. Netflix trains people to watch films, and those viewers laboriously tag the films with lots of metadata, including ratings of factors like sexually suggestive content or plot closure. These tailored altgenres are then presented to Netflix customers based on their prior viewing habits.

One of the hypothetical, “gonzo” altgenres created by The Atlantic‘s Netflix Genre Generator (The Atlantic)

Despite the initial promise of the Netflix Prize and the lurid appeal of a “million dollar algorithm,” Netflix operates by methods that look more like the Chinese manufacturing processes Michael Wolf’s photographs document. Yes, there’s a computer program matching viewing habits to a database of film properties. But the overall work of the Netflix recommendation system is distributed amongst so many different systems, actors, and processes that only a zealot would call the end result an algorithm.

The same could be said for data, the material algorithms operate upon. Data has become just as theologized as algorithms, especially “big data,” whose name is meant to elevate information to the level of celestial infinity. Today, conventional wisdom would suggest that mystical, ubiquitous sensors are collecting data by the terabyteful without our knowledge or intervention. Even if this is true to an extent, examples like Netflix’s altgenres show that data is created, not simply aggregated, and often by means of laborious, manual processes rather than anonymous vacuum-devices.

Once you adopt skepticism toward the algorithmic- and the data-divine, you can no longer construe any computational system as merely algorithmic. Think about Google Maps, for example. It’s not just mapping software running via computer—it also involves geographical information systems, geolocation satellites and transponders, human-driven automobiles, roof-mounted panoramic optical recording systems, international recording and privacy law, physical- and data-network routing systems, and web/mobile presentational apparatuses. That’s not algorithmic culture—it’s just, well, culture.

* * *

If algorithms aren’t gods, what are they instead? Like metaphors, algorithms are simplifications, or distortions. They are caricatures. They take a complex system from the world and abstract it into processes that capture some of that system’s logic and discard others. And they couple to other processes, machines, and materials that carry out the extra-computational part of their work.

Unfortunately, most computing systems don’t want to admit that they are burlesques. They want to be innovators, disruptors, world-changers, and such zeal requires sectarian blindness. The exception is games, which willingly admit that they are caricatures—and which suffer the consequences of this admission in the court of public opinion. Games know that they are faking it, which makes them less susceptible to theologization. SimCity isn’t an urban planning tool, it’s a cartoon of urban planning. Imagine the folly of thinking otherwise! Yet, that’s precisely the belief people hold of Google and Facebook and the like.

A Google Maps Street View vehicle roams the streets of Washington D.C. Google Maps entails algorithms, but also other things, like internal combustion engine automobiles. (justgrimes/Flickr)

Just as it’s not really accurate to call the manufacture of plastic toys “automated,” it’s not quite right to call Netflix recommendations or Google Maps “algorithmic.” Yes, true, there are algorithms involved, insofar as computers are involved, and computers run software that processes information. But that’s just a part of the story, a theologized version of the diverse, varied array of people, processes, materials, and machines that really carry out the work we shorthand as “technology.” The truth is as simple as it is uninteresting: The world has a lot of stuff in it, all bumping and grinding against one another.

I don’t want to downplay the role of computation in contemporary culture. Striphas and Manovich are right—there are computers in and around everything these days. But the algorithm has taken on a particularly mythical role in our technology-obsessed era, one that has allowed it to wear the garb of divinity. Concepts like “algorithm” have become sloppy shorthands, slang terms for the act of mistaking multipart complex systems for simple, singular ones. Of treating computation theologically rather than scientifically or culturally.

This attitude blinds us in two ways. First, it allows us to chalk up any kind of computational social change as pre-determined and inevitable. It gives us an excuse not to intervene in the social shifts wrought by big corporations like Google or Facebook or their kindred, to see their outcomes as beyond our influence. Second, it makes us forget that particular computational systems are abstractions, caricatures of the world, one perspective among many. The first error turns computers into gods, the second treats their outputs as scripture.

Computers are powerful devices that have allowed us to mimic countless other machines all at once. But in so doing, when pushed to their limits, that capacity to simulate anything reverses into the inability or unwillingness to distinguish one thing from anything else. In its Enlightenment incarnation, the rise of reason represented not only the ascendency of science but also the rise of skepticism, of incredulity at simplistic, totalizing answers, especially answers that made appeals to unseen movers. But today even as many scientists and technologists scorn traditional religious practice, they unwittingly invoke a new theology in so doing.

Algorithms aren’t gods. We need not believe that they rule the world in order to admit that they influence it, sometimes profoundly. Let’s bring algorithms down to earth again. Let’s keep the computer around without fetishizing it, without bowing down to it or shrugging away its inevitable power over us, without melting everything down into it as a new name for fate. I don’t want an algorithmic culture, especially if that phrase just euphemizes a corporate, computational theocracy.

But a culture with computers in it? That might be all right.

How Mathematicians Used A Pump-Action Shotgun to Estimate Pi (The Physics arXiv Blog)

The Physics arXiv Blog

If you’ve ever wondered how to estimate pi using a Mossberg 500 pump-action shotgun, a sheet of aluminium foil and some clever mathematics, look no further

Imagine the following scenario. The end of civilisation has occurred, zombies have taken over the Earth and all access to modern technology has ended. The few survivors suddenly need to know the value of π and, being a mathematician, they turn to you. What do you do?

If ever you find yourself in this situation, you’ll be glad of the work of Vincent Dumoulin and Félix Thouin at the Université de Montréal in Canada. These guys have worked out how to calculate an approximate value of π using the distribution of pellets from a Mossberg 500 pump-action shotgun, which they assume would be widely available in the event of a zombie apocalypse.

The principle is straightforward. Imagine a square with sides of length 1 and which contains an arc drawn between two opposite corners to form a quarter circle. The area of the square is 1 while the area of the quarter circle is π/4.

Next, sprinkle sand or rice over the square so that it is covered with a random distribution of grains. Then count the number of grains inside the quarter circle and the total number that cover the entire square.

The ratio of these two numbers is an estimate of the ratio between the area of the quarter circle and the square, in other words π/4.

So multiplying this ratio by 4 gives you π, or at least an estimate of it. And that’s it.
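For readers who would rather simulate than shoot, here is a minimal sketch of the same quarter-circle estimate using pseudo-random points in place of grains or pellet holes:

```python
# Sketch: uniform random points in the unit square; the fraction landing inside
# the quarter circle estimates pi/4, so multiplying by 4 estimates pi.
import random

def estimate_pi(n_points: int = 30_000) -> float:
    inside = sum(
        1 for _ in range(n_points)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_points

print(estimate_pi())  # typically lands within a fraction of a percent of 3.14159...
```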

This technique is known as a Monte Carlo approximation (after the casino where the uncle of the physicist who developed it used to gamble). And it is hugely useful in all kinds of simulations.

Of course, the accuracy of the technique depends on the distribution of the grains on the square. If they are truly random, then a mere 30,000 grains can give you an estimate of π which is within 0.07 per cent of the actual value.

Dumoulin and Thouin’s idea is to use the distribution of shotgun pellets rather than sand or rice (which would presumably be in short supply in the post-apocalyptic world). So these guys set up an experiment consisting of a 28-inch barrel Mossberg 500 pump-action shotgun aimed at a sheet of aluminium foil some 20 metres away.

They loaded the gun with cartridges composed of 3 dram equivalent of powder and 32 grams of #8 lead pellets. When fired from the gun, these pellets have an average muzzle velocity of around 366 metres per second.

Dumoulin and Thouin then fired 200 shots at the aluminium foil, peppering it with 30,857 holes. Finally, they used the position of these holes in the same way as the grains of sand or rice in the earlier example, to calculate the value of π.

They immediately have a problem, however. The distribution of pellets is influenced by all kinds of factors, such as the height of the gun, the distance to the target, wind direction and so on. So this distribution is not random.

To get around this, they are able to fall back on a technique known as importance sampling. This is a trick that allows mathematicians to estimate the properties of one type of distribution while using samples generated by a different distribution.

Of their 30,000 pellet holes, they chose 10,000 at random to perform this estimation trick. They then use the remaining 20,000 pellet holes to get an estimate of π, safe in the knowledge that importance sampling allows the calculation to proceed as if the distribution of pellets had been random.
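The paper’s exact weighting scheme is not spelled out here, but the general importance-sampling trick looks something like this sketch: draw points from a deliberately non-uniform distribution, then weight each point by the ratio of the uniform density to the proposal density, so that the weighted hit ratio still estimates π/4:

```python
# Sketch of importance sampling (an illustration, not the authors' procedure):
# x and y are drawn as u**2 with u uniform, so points cluster near the origin;
# the proposal density of x is 1/(2*sqrt(x)), and likewise for y, so the weight
# 1/density = 4*sqrt(x*y) undoes the sampling bias on average.
import math
import random

def estimate_pi_importance(n: int = 20_000) -> float:
    hit_weight = 0.0    # weighted count of points inside the quarter circle
    total_weight = 0.0  # weighted count of all points
    for _ in range(n):
        x = random.random() ** 2
        y = random.random() ** 2
        w = 4.0 * math.sqrt(x * y)  # = 1 / (proposal density at (x, y))
        total_weight += w
        if x * x + y * y <= 1.0:
            hit_weight += w
    return 4.0 * hit_weight / total_weight

print(estimate_pi_importance())
```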

The result? Their value of π is 3.131, which is just 0.33 per cent off the true value. “We feel confident that ballistic Monte Carlo methods constitute reliable ways of computing mathematical constants should a tremendous civilization collapse occur,” they conclude.

Quite! Other methods are also available.

Ref: arxiv.org/abs/1404.1499 : A Ballistic Monte Carlo Approximation of π

Modeling the past to understand the future of a stronger El Niño (Science Daily)

Date:

November 26, 2014

Source:

University of Wisconsin-Madison

Summary:

El Nino is not a contemporary phenomenon; it’s long been the Earth’s dominant source of year-to-year climate fluctuation. But as the climate warms and the feedbacks that drive the cycle change, researchers want to know how El Nino will respond.


Using state-of-the-art computer models maintained at the National Center for Atmospheric Research, researchers determined that El Niño has intensified over the last 6,000 years. This pier and cafe are in Ocean Beach, California. Credit: Jon Sullivan

It was fishermen off the coast of Peru who first recognized the anomaly, hundreds of years ago. Every so often, their usually cold, nutrient-rich water would turn warm and the fish they depended on would disappear. Then there was the ceaseless rain.

They called it “El Nino,” The Boy — or Christmas Boy — because of its timing near the holiday each time it returned, every three to seven years.

El Nino is not a contemporary phenomenon; it’s long been Earth’s dominant source of year-to-year climate fluctuation. But as the climate warms and the feedbacks that drive the cycle change, researchers want to know how El Nino will respond. A team of researchers led by the University of Wisconsin’s Zhengyu Liu published the latest findings in this quest Nov. 27, 2014 in Nature.

“We can’t see the future; the only thing we can do is examine the past,” says Liu, a professor in the Department of Atmospheric and Oceanic Sciences. “The question people are interested in now is whether it’s going to be stronger or weaker, and this requires us to first check if our model can simulate its past history.”

The study examines what has influenced El Nino over the last 21,000 years in order to understand its future and to prepare for the consequences. It is valuable knowledge for scientists, land managers, policymakers and many others, as people across the globe focus on adapting to a changing climate.

Using state-of-the-art computer models maintained at the National Center for Atmospheric Research in Colorado, the researchers — also from Peking University in China, the University of Hawaii at Manoa, and Georgia Institute of Technology — determined that El Nino has intensified over the last 6,000 years.

The findings corroborate data from previous studies, which relied on observations like historical sediments off the Central American coast and changes in fossilized coral. During warm, rainy El Nino years, the coastal sediments consist of larger mixed deposits of lighter color, and the coral provides a unique signature, akin to rings on a tree.

“There have been some observations that El Nino has been changing,” says Liu, also a professor in the Nelson Institute for Environmental Studies Center for Climatic Research. “Previous studies seem to indicate El Nino has increased over the last 5,000 to 7,000 years.”

But unlike previous studies, the new model provides a continuous look at the long history of El Nino, rather than a snapshot in time.

It examines the large-scale influences that have impacted the strength of El Nino over the last 21,000 years, such as atmospheric carbon dioxide, ice sheet melting and changes to Earth’s orbit.

El Nino is driven by an intricate tango between the ocean and Earth’s atmosphere. In non-El Nino years, trade winds over the tropical Pacific Ocean drive the seas westward, from the coast of Central America toward Indonesia, adding a thick, warm layer to the surface of the western part of the ocean while cooler water rises to the surface in the east. This brings rain to the west and dry conditions to the east.

During El Nino, the trade winds relax and the sea surface temperature differences between the Western and Eastern Pacific Ocean are diminished. This alters the heat distribution in both the water and the air in each region, forcing a cascade of global climate-related changes.

“It has an impact on Madison winter temperatures — when Peru is warm, it’s warm here,” says Liu. “It has global impact. If there are changes in the future, will it change the pattern?”

Before the start of the Holocene — which began roughly 12,000 years ago — pulses of melting water during deglaciation most strongly influenced El Nino, the study found. But since that time, changes in Earth’s orbit have played the greatest role in intensifying it.

Like an uptick in tempo, the feedbacks between ocean and atmosphere — such as how wind and seas interact — have grown stronger.

However, even with the best data available, some features of the simulated El Nino — especially prior to 6,000 years ago — can’t be tested unambiguously, Liu says. The current observational data feeding the model is sparse and the resolution too low to pick up subtle shifts in El Nino over the millennia.

The study findings indicate better observational data is needed to refine the science, like more coral samples and sediment measurements from different locations in the Central Pacific. Like all science, better understanding what drives El Nino and how it might change is a process, and one that will continue to evolve over time.

“It’s really an open door; we need more data to get a more significant model,” he says. “With this study, we are providing the first benchmark for the next five, 10, 20 years into the future.”

Story Source:

The above story is based on materials provided by University of Wisconsin-Madison. The original article was written by Kelly April Tyrrell. Note: Materials may be edited for content and length.

Journal Reference:

  1. Zhengyu Liu, Zhengyao Lu, Xinyu Wen, B. L. Otto-Bliesner, A. Timmermann, K. M. Cobb. Evolution and forcing mechanisms of El Niño over the past 21,000 years. Nature, 2014; 515 (7528): 550 DOI: 10.1038/nature13963

Doing math with your body (Science Daily)

Date: October 2, 2014

Source: Radboud University

Summary: You do math in your head most of the time, but you can also teach your body how to do it. Researchers investigated how our brain processes and understands numbers and number size. They show that movements and sensory perception help us understand numbers.


In this example the physically largest number (2) is the smallest in terms of meaning. It was harder for test subjects to identify a 2 as the physically largest number than it was for them to identify a 9 as the largest number. Credit: Image courtesy of Radboud University

You do math in your head most of the time, but you can also teach your body how to do it. Florian Krause investigated how our brain processes and understands numbers and number size. He shows that movements and sensory perception help us understand numbers. Krause defends his thesis on October 10 at Radboud University.

When learning to do math, it helps to see that two marbles take up less space than twenty. Or to feel that a bag with ten apples weighs more than a bag with just one. During his PhD at Radboud University’s Donders Institute, Krause investigated which brain areas represent size and how these areas work together. He concludes that number size is associated with sizes experienced by our body.

Physically perceived size

Krause asked test subjects to find the physically largest number in an image with eighteen numbers. Sometimes this number was also the largest in terms of meaning, but sometimes it wasn’t. Subjects found the largest number faster when it was also the largest in terms of meaning. ‘This shows how sensory information about small and large is associated with our understanding of numbers’, Krause says. ‘Combining this knowledge about size makes our processing of numbers more effective.’

More fruit, more force

Even very young children have a sensory understanding of size. In a computer game, Krause asked them to lift up a platform carrying a few or many pieces of fruit by pressing a button. Although the amount of force applied to the button did not matter — simply pressing it was adequate — children pushed harder when there was a lot of fruit on the platform and less hard when there was little fruit on the platform.

Applications in education

Krause believes his results could find applications in math education. ‘If numerical size and other body-related size information are indeed represented together in the brain, strengthening this link during education might be beneficial, for instance by using a ‘rekenstok’ (a calculating stick) that lets you experience how long a meter or ten centimeters is when you hold it with both hands. This general idea can be extended to other experienceable magnitudes besides spatial length, by developing tools that let you see an amount of light or hear an amount of sound that correlates with the number size in a calculation.’

Adding uncertainty to improve mathematical models (Science Daily)

Date: September 29, 2014

Source: Brown University

Summary: Mathematicians have introduced a new element of uncertainty into an equation used to describe the behavior of fluid flows. While being as certain as possible is generally the stock-in-trade of mathematics, the researchers hope this new formulation might ultimately lead to mathematical models that better reflect the inherent uncertainties of the natural world.

Burgers’ equation. Named for Johannes Martinus Burgers (1895–1981), the equation describes fluid flows, as when two air masses meet and create a front. A new development accounts for many more complexities and uncertainties, making predictions more robust, less sterile. Credit: Image courtesy of Brown University

Ironically, allowing uncertainty into a mathematical equation that models fluid flows makes the equation much more capable of correctly reflecting the natural world — like the formation, strength, and position of air masses and fronts in the atmosphere.

Mathematicians from Brown University have introduced a new element of uncertainty into an equation used to describe the behavior of fluid flows. While being as certain as possible is generally the stock-in-trade of mathematics, the researchers hope this new formulation might ultimately lead to mathematical models that better reflect the inherent uncertainties of the natural world.

The research, published in Proceedings of the Royal Society A, deals with Burgers’ equation, which is used to describe turbulence and shocks in fluid flows. The equation can be used, for example, to model the formation of a front when airflows run into each other in the atmosphere.
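
For reference, the standard viscous form of the equation in one space dimension is

    \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = \nu\,\frac{\partial^2 u}{\partial x^2},

where u(x, t) is the velocity of the flow and \nu its viscosity. The nonlinear u\,\partial u/\partial x term is what steepens smooth profiles into the sharp fronts and shocks described next; the paper itself works with a stochastic variant of this textbook equation.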

“Say you have a wave that’s moving very fast in the atmosphere,” said George Karniadakis, the Charles Pitts Robinson and John Palmer Barstow Professor of Applied Mathematics at Brown and senior author of the new research. “If the rest of the air in the domain is at rest, then one flow goes over the other. That creates a very stiff front or a shock, and that’s what Burgers’ equation describes.”

It does so, however, in what Karniadakis describes as “a very sterilized” way, meaning the flows are modeled in the absence of external influences.

For example, when modeling turbulence in the atmosphere, the equations don’t take into consideration the fact that the airflows are interacting not just with each other, but also with whatever terrain may be below — be it a mountain, a valley or a plain. In a general model designed to capture any random point of the atmosphere, it’s impossible to know what landforms might lie underneath. But the effects of whatever those landforms might be can still be accounted for in the equation by adding a new term — one that treats those effects as a “random forcing.”

In this latest research, Karniadakis and his colleagues showed that Burgers’ equation can indeed be solved in the presence of this additional random term. The new term produces a range of solutions that accounts for uncertain external conditions that could be acting on the model system.
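
Schematically, the randomly forced equation has the generic form of a stochastic Burgers equation,

    \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = \nu\,\frac{\partial^2 u}{\partial x^2} + f(x, t; \omega),

where \omega labels the random realizations of the forcing f (the precise structure of the forcing used in the paper may differ). The solution u(x, t; \omega) is then itself random, and its statistics, the mean flow and the spread around it, are the “range of solutions” referred to above.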

The work is part of a larger effort and a burgeoning field in mathematics called uncertainty quantification (UQ). Karniadakis is leading a Multidisciplinary University Research Initiative centered at Brown to lay out the mathematical foundations of UQ.

“The general idea in UQ,” Karniadakis said, “is that when we model a system, we have to simplify it. When we simplify it, we throw out important degrees of freedom. So in UQ, we account for the fact that we committed a crime with our simplification and we try to reintroduce some of those degrees of freedom as a random forcing. It allows us to get more realism from our simulations and our predictions.”
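
A minimal way to see this idea in action is brute-force Monte Carlo: solve the forced equation many times, each time with a different random realization of the forcing, and look at the mean and spread of the resulting flows. The Python sketch below does exactly that with a crude finite-difference scheme on a periodic domain; the grid, the sinusoidal forcing model and every parameter are arbitrary illustrative choices, not the methods or settings used in the paper.

    import numpy as np

    def burgers_step(u, dx, dt, nu, f):
        """One explicit central-difference step of u_t + u u_x = nu u_xx + f (periodic)."""
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)          # first derivative
        uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2    # second derivative
        return u + dt * (-u * ux + nu * uxx + f)

    def solve_realization(rng, nx=128, nt=2000, nu=0.05, dt=1e-3, amp=0.5):
        """Integrate one realization with a randomly chosen sinusoidal forcing."""
        x = np.linspace(0.0, 2 * np.pi, nx, endpoint=False)
        dx = x[1] - x[0]
        u = np.sin(x)                            # same deterministic initial profile every time
        phase = rng.uniform(0.0, 2 * np.pi)      # the randomness: forcing phase ...
        strength = amp * rng.standard_normal()   # ... and amplitude differ per realization
        forcing = strength * np.sin(x + phase)
        for _ in range(nt):
            u = burgers_step(u, dx, dt, nu, forcing)
        return u

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        ensemble = np.array([solve_realization(rng) for _ in range(50)])
        mean, spread = ensemble.mean(axis=0), ensemble.std(axis=0)
        print("ensemble mean (max |u|):", np.abs(mean).max())
        print("ensemble spread (max)  :", spread.max())   # how far the random forcing can move the flow

The spread printed at the end is exactly the kind of quantity uncertainty quantification is after: a measure of how much the unresolved physics, reintroduced here as random forcing, could change the prediction.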

Solving these equations is computationally expensive, and only in recent years has computing power reached a level that makes such calculations possible.

“This is something people have thought about for years,” Karniadakis said. “During my career, computing power has increased by a factor of a billion, so now we can think about harnessing that power.”

The aim, ultimately, is to build mathematical models describing all kinds of phenomena — from atmospheric currents to the cardiovascular system to gene expression — that better reflect the uncertainties of the natural world.

Heyrim Cho and Daniele Venturi were co-authors on the paper.

Journal Reference:

  1. H. Cho, D. Venturi, G. E. Karniadakis. Statistical analysis and simulation of random shocks in stochastic Burgers equation. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2014; 470 (2171): 20140080 DOI: 10.1098/rspa.2014.0080

Global Warming’s Terrifying New Math (Rolling Stone)

Three simple numbers that add up to global catastrophe – and that make clear who the real enemy is

Illustration by Edel Rodriguez
By Bill McKibben | July 19, 2012

If the pictures of those towering wildfires in Colorado haven’t convinced you, or the size of your AC bill this summer, here are some hard numbers about climate change: June broke or tied 3,215 high-temperature records across the United States. That followed the warmest May on record for the Northern Hemisphere – the 327th consecutive month in which the temperature of the entire globe exceeded the 20th-century average, the odds of which occurring by simple chance were about 3.7 x 10^-99 (that is, one chance in 3.7 x 10^99, a number considerably larger than the number of stars in the universe).

Meteorologists reported that this spring was the warmest ever recorded for our nation – in fact, it crushed the old record by so much that it represented the “largest temperature departure from average of any season on record.” The same week, Saudi authorities reported that it had rained in Mecca despite a temperature of 109 degrees, the hottest downpour in the planet’s history.

Not that our leaders seemed to notice. Last month the world’s nations, meeting in Rio for the 20th-anniversary reprise of a massive 1992 environmental summit, accomplished nothing. Unlike George H.W. Bush, who flew in for the first conclave, Barack Obama didn’t even attend. It was “a ghost of the glad, confident meeting 20 years ago,” the British journalist George Monbiot wrote; no one paid it much attention, footsteps echoing through the halls “once thronged by multitudes.” Since I wrote one of the first books for a general audience about global warming way back in 1989, and since I’ve spent the intervening decades working ineffectively to slow that warming, I can say with some confidence that we’re losing the fight, badly and quickly – losing it because, most of all, we remain in denial about the peril that human civilization is in.

When we think about global warming at all, the arguments tend to be ideological, theological and economic. But to grasp the seriousness of our predicament, you just need to do a little math. For the past year, an easy and powerful bit of arithmetical analysis first published by financial analysts in the U.K. has been making the rounds of environmental conferences and journals, but it hasn’t yet broken through to the larger public. This analysis upends most of the conventional political thinking about climate change. And it allows us to understand our precarious – our almost-but-not-quite-finally hopeless – position with three simple numbers.

The First Number: 2° Celsius

If the movie had ended in Hollywood fashion, the Copenhagen climate conference in 2009 would have marked the culmination of the global fight to slow a changing climate. The world’s nations had gathered in the December gloom of the Danish capital for what a leading climate economist, Sir Nicholas Stern of Britain, called the “most important gathering since the Second World War, given what is at stake.” As Danish energy minister Connie Hedegaard, who presided over the conference, declared at the time: “This is our chance. If we miss it, it could take years before we get a new and better one. If ever.”

In the event, of course, we missed it. Copenhagen failed spectacularly. Neither China nor the United States, which between them are responsible for 40 percent of global carbon emissions, was prepared to offer dramatic concessions, and so the conference drifted aimlessly for two weeks until world leaders jetted in for the final day. Amid considerable chaos, President Obama took the lead in drafting a face-saving “Copenhagen Accord” that fooled very few. Its purely voluntary agreements committed no one to anything, and even if countries signaled their intentions to cut carbon emissions, there was no enforcement mechanism. “Copenhagen is a crime scene tonight,” an angry Greenpeace official declared, “with the guilty men and women fleeing to the airport.” Headline writers were equally brutal: COPENHAGEN: THE MUNICH OF OUR TIMES? asked one.

The accord did contain one important number, however. In Paragraph 1, it formally recognized “the scientific view that the increase in global temperature should be below two degrees Celsius.” And in the very next paragraph, it declared that “we agree that deep cuts in global emissions are required… so as to hold the increase in global temperature below two degrees Celsius.” By insisting on two degrees – about 3.6 degrees Fahrenheit – the accord ratified positions taken earlier in 2009 by the G8, and the so-called Major Economies Forum. It was as conventional as conventional wisdom gets. The number first gained prominence, in fact, at a 1995 climate conference chaired by Angela Merkel, then the German minister of the environment and now the center-right chancellor of the nation.

Some context: So far, we’ve raised the average temperature of the planet just under 0.8 degrees Celsius, and that has caused far more damage than most scientists expected. (A third of summer sea ice in the Arctic is gone, the oceans are 30 percent more acidic, and since warm air holds more water vapor than cold, the atmosphere over the oceans is a shocking five percent wetter, loading the dice for devastating floods.) Given those impacts, in fact, many scientists have come to think that two degrees is far too lenient a target. “Any number much above one degree involves a gamble,” writes Kerry Emanuel of MIT, a leading authority on hurricanes, “and the odds become less and less favorable as the temperature goes up.” Thomas Lovejoy, once the World Bank’s chief biodiversity adviser, puts it like this: “If we’re seeing what we’re seeing today at 0.8 degrees Celsius, two degrees is simply too much.” NASA scientist James Hansen, the planet’s most prominent climatologist, is even blunter: “The target that has been talked about in international negotiations for two degrees of warming is actually a prescription for long-term disaster.” At the Copenhagen summit, a spokesman for small island nations warned that many would not survive a two-degree rise: “Some countries will flat-out disappear.” When delegates from developing nations were warned that two degrees would represent a “suicide pact” for drought-stricken Africa, many of them started chanting, “One degree, one Africa.”

Despite such well-founded misgivings, political realism bested scientific data, and the world settled on the two-degree target – indeed, it’s fair to say that it’s the only thing about climate change the world has settled on. All told, 167 countries responsible for more than 87 percent of the world’s carbon emissions have signed on to the Copenhagen Accord, endorsing the two-degree target. Only a few dozen countries have rejected it, including Kuwait, Nicaragua and Venezuela. Even the United Arab Emirates, which makes most of its money exporting oil and gas, signed on. The official position of planet Earth at the moment is that we can’t raise the temperature more than two degrees Celsius – it’s become the bottomest of bottom lines. Two degrees.

The Second Number: 565 Gigatons

Scientists estimate that humans can pour roughly 565 more gigatons of carbon dioxide into the atmosphere by midcentury and still have some reasonable hope of staying below two degrees. (“Reasonable,” in this case, means four chances in five, or somewhat worse odds than playing Russian roulette with a six-shooter.)

This idea of a global “carbon budget” emerged about a decade ago, as scientists began to calculate how much oil, coal and gas could still safely be burned. Since we’ve increased the Earth’s temperature by 0.8 degrees so far, we’re currently less than halfway to the target. But, in fact, computer models calculate that even if we stopped increasing CO2 now, the temperature would likely still rise another 0.8 degrees, as previously released carbon continues to overheat the atmosphere. That means we’re already three-quarters of the way to the two-degree target.

How good are these numbers? No one is insisting that they’re exact, but few dispute that they’re generally right. The 565-gigaton figure was derived from one of the most sophisticated computer-simulation models that have been built by climate scientists around the world over the past few decades. And the number is being further confirmed by the latest climate-simulation models currently being finalized in advance of the next report by the Intergovernmental Panel on Climate Change. “Looking at them as they come in, they hardly differ at all,” says Tom Wigley, an Australian climatologist at the National Center for Atmospheric Research. “There’s maybe 40 models in the data set now, compared with 20 before. But so far the numbers are pretty much the same. We’re just fine-tuning things. I don’t think much has changed over the last decade.” William Collins, a senior climate scientist at the Lawrence Berkeley National Laboratory, agrees. “I think the results of this round of simulations will be quite similar,” he says. “We’re not getting any free lunch from additional understanding of the climate system.”

We’re not getting any free lunch from the world’s economies, either. With only a single year’s lull in 2009 at the height of the financial crisis, we’ve continued to pour record amounts of carbon into the atmosphere, year after year. In late May, the International Energy Agency published its latest figures – CO2 emissions last year rose to 31.6 gigatons, up 3.2 percent from the year before. America had a warm winter and converted more coal-fired power plants to natural gas, so its emissions fell slightly; China kept booming, so its carbon output (which recently surpassed the U.S.) rose 9.3 percent; the Japanese shut down their fleet of nukes post-Fukushima, so their emissions edged up 2.4 percent. “There have been efforts to use more renewable energy and improve energy efficiency,” said Corinne Le Quéré, who runs England’s Tyndall Centre for Climate Change Research. “But what this shows is that so far the effects have been marginal.” In fact, study after study predicts that carbon emissions will keep growing by roughly three percent a year – and at that rate, we’ll blow through our 565-gigaton allowance in 16 years, around the time today’s preschoolers will be graduating from high school. “The new data provide further evidence that the door to a two-degree trajectory is about to close,” said Fatih Birol, the IEA’s chief economist. In fact, he continued, “When I look at this data, the trend is perfectly in line with a temperature increase of about six degrees.” That’s almost 11 degrees Fahrenheit, which would create a planet straight out of science fiction.
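
The 16-year figure is straightforward compound arithmetic. Here is a quick sketch in Python, using only the numbers quoted in this paragraph; it lands in the mid-teens, consistent with the article’s rough estimate.

    # How long a 565-gigaton budget lasts if annual CO2 emissions start at the
    # IEA's 31.6 gigatons and keep growing about 3 percent per year.
    budget = 565.0       # gigatons of CO2 still "allowed" for a two-degree chance
    emissions = 31.6     # gigatons emitted in the most recent year
    growth = 0.03        # assumed annual growth rate

    burned, years = 0.0, 0
    while burned < budget:
        burned += emissions
        emissions *= 1 + growth
        years += 1
    print(years, "years to exhaust the budget")   # prints 15 with these inputs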

So, new data in hand, everyone at the Rio conference renewed their ritual calls for serious international action to move us back to a two-degree trajectory. The charade will continue in November, when the next Conference of the Parties (COP) of the U.N. Framework Convention on Climate Change convenes in Qatar. This will be COP 18 – COP 1 was held in Berlin in 1995, and since then the process has accomplished essentially nothing. Even scientists, who are notoriously reluctant to speak out, are slowly overcoming their natural preference to simply provide data. “The message has been consistent for close to 30 years now,” Collins says with a wry laugh, “and we have the instrumentation and the computer power required to present the evidence in detail. If we choose to continue on our present course of action, it should be done with a full evaluation of the evidence the scientific community has presented.” He pauses, suddenly conscious of being on the record. “I should say, a fuller evaluation of the evidence.”

So far, though, such calls have had little effect. We’re in the same position we’ve been in for a quarter-century: scientific warning followed by political inaction. Among scientists speaking off the record, disgusted candor is the rule. One senior scientist told me, “You know those new cigarette packs, where governments make them put a picture of someone with a hole in their throats? Gas pumps should have something like that.”

The Third Number: 2,795 Gigatons

This number is the scariest of all – one that, for the first time, meshes the political and scientific dimensions of our dilemma. It was highlighted last summer by the Carbon Tracker Initiative, a team of London financial analysts and environmentalists who published a report in an effort to educate investors about the possible risks that climate change poses to their stock portfolios. The number describes the amount of carbon already contained in the proven coal and oil and gas reserves of the fossil-fuel companies, and the countries (think Venezuela or Kuwait) that act like fossil-fuel companies. In short, it’s the fossil fuel we’re currently planning to burn. And the key point is that this new number – 2,795 – is higher than 565. Five times higher.

The Carbon Tracker Initiative – led by James Leaton, an environmentalist who served as an adviser at the accounting giant PricewaterhouseCoopers – combed through proprietary databases to figure out how much oil, gas and coal the world’s major energy companies hold in reserve. The numbers aren’t perfect – they don’t fully reflect the recent surge in unconventional energy sources like shale gas, and they don’t accurately reflect coal reserves, which are subject to less stringent reporting requirements than oil and gas. But for the biggest companies, the figures are quite exact: If you burned everything in the inventories of Russia’s Lukoil and America’s ExxonMobil, for instance, which lead the list of oil and gas companies, each would release more than 40 gigatons of carbon dioxide into the atmosphere.

Which is exactly why this new number, 2,795 gigatons, is such a big deal. Think of two degrees Celsius as the legal drinking limit – equivalent to the 0.08 blood-alcohol level below which you might get away with driving home. The 565 gigatons is how many drinks you could have and still stay below that limit – the six beers, say, you might consume in an evening. And the 2,795 gigatons? That’s the three 12-packs the fossil-fuel industry has on the table, already opened and ready to pour.

We have five times as much oil and coal and gas on the books as climate scientists think is safe to burn. We’d have to keep 80 percent of those reserves locked away underground to avoid that fate. Before we knew those numbers, our fate had been likely. Now, barring some massive intervention, it seems certain.

Yes, this coal and gas and oil is still technically in the soil. But it’s already economically aboveground – it’s figured into share prices, companies are borrowing money against it, nations are basing their budgets on the presumed returns from their patrimony. It explains why the big fossil-fuel companies have fought so hard to prevent the regulation of carbon dioxide – those reserves are their primary asset, the holding that gives their companies their value. It’s why they’ve worked so hard these past years to figure out how to unlock the oil in Canada’s tar sands, or how to drill miles beneath the sea, or how to frack the Appalachians.

If you told Exxon or Lukoil that, in order to avoid wrecking the climate, they couldn’t pump out their reserves, the value of their companies would plummet. John Fullerton, a former managing director at JP Morgan who now runs the Capital Institute, calculates that at today’s market value, those 2,795 gigatons of carbon emissions are worth about $27 trillion. Which is to say, if you paid attention to the scientists and kept 80 percent of it underground, you’d be writing off $20 trillion in assets. The numbers aren’t exact, of course, but that carbon bubble makes the housing bubble look small by comparison. It won’t necessarily burst – we might well burn all that carbon, in which case investors will do fine. But if we do, the planet will crater. You can have a healthy fossil-fuel balance sheet, or a relatively healthy planet – but now that we know the numbers, it looks like you can’t have both. Do the math: 2,795 is five times 565. That’s how the story ends.
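
Spelling out the arithmetic in that last paragraph, with the article’s own numbers:

    # The back-of-the-envelope behind "2,795 is five times 565".
    reserves = 2795.0     # gigatons of CO2 in proven fossil-fuel reserves
    budget = 565.0        # gigatons still burnable for a two-degree chance
    value = 27.0          # trillions of dollars, Fullerton's market-value estimate

    ratio = reserves / budget                  # about 4.9, i.e. roughly five times
    keep_underground = 1 - budget / reserves   # about 0.80, i.e. 80 percent
    write_off = keep_underground * value       # about 21.5, i.e. roughly $20 trillion

    print(f"reserves / budget        = {ratio:.1f}x")
    print(f"share to leave in ground = {keep_underground:.0%}")
    print(f"implied write-off        = ${write_off:.1f} trillion")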

So far, as I said at the start, environmental efforts to tackle global warming have failed. The planet’s emissions of carbon dioxide continue to soar, especially as developing countries emulate (and supplant) the industries of the West. Even in rich countries, small reductions in emissions offer no sign of the real break with the status quo we’d need to upend the iron logic of these three numbers. Germany is one of the only big countries that has actually tried hard to change its energy mix; on one sunny Saturday in late May, that northern-latitude nation generated nearly half its power from solar panels within its borders. That’s a small miracle – and it demonstrates that we have the technology to solve our problems. But we lack the will. So far, Germany’s the exception; the rule is ever more carbon.

This record of failure means we know a lot about what strategies don’t work. Green groups, for instance, have spent a lot of time trying to change individual lifestyles: the iconic twisty light bulb has been installed by the millions, but so have a new generation of energy-sucking flatscreen TVs. Most of us are fundamentally ambivalent about going green: We like cheap flights to warm places, and we’re certainly not going to give them up if everyone else is still taking them. Since all of us are in some way the beneficiaries of cheap fossil fuel, tackling climate change has been like trying to build a movement against yourself – it’s as if the gay-rights movement had to be constructed entirely from evangelical preachers, or the abolition movement from slaveholders.

People perceive – correctly – that their individual actions will not make a decisive difference in the atmospheric concentration of CO2; by 2010, a poll found that “while recycling is widespread in America and 73 percent of those polled are paying bills online in order to save paper,” only four percent had reduced their utility use and only three percent had purchased hybrid cars. Given a hundred years, you could conceivably change lifestyles enough to matter – but time is precisely what we lack.

A more efficient method, of course, would be to work through the political system, and environmentalists have tried that, too, with the same limited success. They’ve patiently lobbied leaders, trying to convince them of our peril and assuming that politicians would heed the warnings. Sometimes it has seemed to work. Barack Obama, for instance, campaigned more aggressively about climate change than any president before him – the night he won the nomination, he told supporters that his election would mark the moment “the rise of the oceans began to slow and the planet began to heal.” And he has achieved one significant change: a steady increase in the fuel efficiency mandated for automobiles. It’s the kind of measure, adopted a quarter-century ago, that would have helped enormously. But in light of the numbers I’ve just described, it’s obviously a very small start indeed.

At this point, effective action would require actually keeping most of the carbon the fossil-fuel industry wants to burn safely in the soil, not just changing slightly the speed at which it’s burned. And there the president, apparently haunted by the still-echoing cry of “Drill, baby, drill,” has gone out of his way to frack and mine. His secretary of interior, for instance, opened up a huge swath of the Powder River Basin in Wyoming for coal extraction: The total basin contains some 67.5 gigatons worth of carbon (or more than 10 percent of the available atmospheric space). He’s doing the same thing with Arctic and offshore drilling; in fact, as he explained on the stump in March, “You have my word that we will keep drilling everywhere we can… That’s a commitment that I make.” The next day, in a yard full of oil pipe in Cushing, Oklahoma, the president promised to work on wind and solar energy but, at the same time, to speed up fossil-fuel development: “Producing more oil and gas here at home has been, and will continue to be, a critical part of an all-of-the-above energy strategy.” That is, he’s committed to finding even more stock to add to the 2,795-gigaton inventory of unburned carbon.

Sometimes the irony is almost Borat-scale obvious: In early June, Secretary of State Hillary Clinton traveled on a Norwegian research trawler to see firsthand the growing damage from climate change. “Many of the predictions about warming in the Arctic are being surpassed by the actual data,” she said, describing the sight as “sobering.” But the discussions she traveled to Scandinavia to have with other foreign ministers were mostly about how to make sure Western nations get their share of the estimated $9 trillion in oil (that’s more than 90 billion barrels, or 37 gigatons of carbon) that will become accessible as the Arctic ice melts. Last month, the Obama administration indicated that it would give Shell permission to start drilling in sections of the Arctic.

Almost every government with deposits of hydrocarbons straddles the same divide. Canada, for instance, is a liberal democracy renowned for its internationalism – no wonder, then, that it signed on to the Kyoto treaty, promising to cut its carbon emissions substantially by 2012. But the rising price of oil suddenly made the tar sands of Alberta economically attractive – and since, as NASA climatologist James Hansen pointed out in May, they contain as much as 240 gigatons of carbon (or almost half of the available space if we take the 565 limit seriously), that meant Canada’s commitment to Kyoto was nonsense. In December, the Canadian government withdrew from the treaty before it faced fines for failing to meet its commitments.

The same kind of hypocrisy applies across the ideological board: In his speech to the Copenhagen conference, Venezuela’s Hugo Chavez quoted Rosa Luxemburg, Jean-Jacques Rousseau and “Christ the Redeemer,” insisting that “climate change is undoubtedly the most devastating environmental problem of this century.” But the next spring, in the Simon Bolivar Hall of the state-run oil company, he signed an agreement with a consortium of international players to develop the vast Orinoco tar sands as “the most significant engine for a comprehensive development of the entire territory and Venezuelan population.” The Orinoco deposits are larger than Alberta’s – taken together, they’d fill up the whole available atmospheric space.

So: the paths we have tried to tackle global warming have so far produced only gradual, halting shifts. A rapid, transformative change would require building a movement, and movements require enemies. As John F. Kennedy put it, “The civil rights movement should thank God for Bull Connor. He’s helped it as much as Abraham Lincoln.” And enemies are what climate change has lacked.

But what all these climate numbers make painfully, usefully clear is that the planet does indeed have an enemy – one far more committed to action than governments or individuals. Given this hard math, we need to view the fossil-fuel industry in a new light. It has become a rogue industry, reckless like no other force on Earth. It is Public Enemy Number One to the survival of our planetary civilization. “Lots of companies do rotten things in the course of their business – pay terrible wages, make people work in sweatshops – and we pressure them to change those practices,” says veteran anti-corporate leader Naomi Klein, who is at work on a book about the climate crisis. “But these numbers make clear that with the fossil-fuel industry, wrecking the planet is their business model. It’s what they do.”

According to the Carbon Tracker report, if Exxon burns its current reserves, it would use up more than seven percent of the available atmospheric space between us and the risk of two degrees. BP is just behind, followed by the Russian firm Gazprom, then Chevron, ConocoPhillips and Shell, each of which would fill between three and four percent. Taken together, just these six firms, of the 200 listed in the Carbon Tracker report, would use up more than a quarter of the remaining two-degree budget. Severstal, the Russian mining giant, leads the list of coal companies, followed by firms like BHP Billiton and Peabody. The numbers are simply staggering – this industry, and this industry alone, holds the power to change the physics and chemistry of our planet, and they’re planning to use it.
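
Those per-company percentages follow directly from the reserve figures quoted earlier. For Exxon, for instance,

    \frac{40\ \text{Gt}}{565\ \text{Gt}} \approx 0.071 \approx 7\%,

and a firm filling three to four percent of the budget corresponds to roughly 17 to 23 gigatons of reserves of its own.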

They’re clearly cognizant of global warming – they employ some of the world’s best scientists, after all, and they’re bidding on all those oil leases made possible by the staggering melt of Arctic ice. And yet they relentlessly search for more hydrocarbons – in early March, Exxon CEO Rex Tillerson told Wall Street analysts that the company plans to spend $37 billion a year through 2016 (about $100 million a day) searching for yet more oil and gas.

There’s not a more reckless man on the planet than Tillerson. Late last month, on the same day the Colorado fires reached their height, he told a New York audience that global warming is real, but dismissed it as an “engineering problem” that has “engineering solutions.” Such as? “Changes to weather patterns that move crop-production areas around – we’ll adapt to that.” This in a week when Kentucky farmers were reporting that corn kernels were “aborting” in record heat, threatening a spike in global food prices. “The fear factor that people want to throw out there to say, ‘We just have to stop this,’ I do not accept,” Tillerson said. Of course not – if he did accept it, he’d have to keep his reserves in the ground. Which would cost him money. It’s not an engineering problem, in other words – it’s a greed problem.

You could argue that this is simply in the nature of these companies – that having found a profitable vein, they’re compelled to keep mining it, more like efficient automatons than people with free will. But as the Supreme Court has made clear, they are people of a sort. In fact, thanks to the size of its bankroll, the fossil-fuel industry has far more free will than the rest of us. These companies don’t simply exist in a world whose hungers they fulfill – they help create the boundaries of that world.

Left to our own devices, citizens might decide to regulate carbon and stop short of the brink; according to a recent poll, nearly two-thirds of Americans would back an international agreement that cut carbon emissions 90 percent by 2050. But we aren’t left to our own devices. The Koch brothers, for instance, have a combined wealth of $50 billion, meaning they trail only Bill Gates on the list of richest Americans. They’ve made most of their money in hydrocarbons, they know any system to regulate carbon would cut those profits, and they reportedly plan to lavish as much as $200 million on this year’s elections. In 2009, for the first time, the U.S. Chamber of Commerce surpassed both the Republican and Democratic National Committees on political spending; the following year, more than 90 percent of the Chamber’s cash went to GOP candidates, many of whom deny the existence of global warming. Not long ago, the Chamber even filed a brief with the EPA urging the agency not to regulate carbon – should the world’s scientists turn out to be right and the planet heats up, the Chamber advised, “populations can acclimatize to warmer climates via a range of behavioral, physiological and technological adaptations.” As radical goes, demanding that we change our physiology seems right up there.

Environmentalists, understandably, have been loath to make the fossil-fuel industry their enemy, respecting its political power and hoping instead to convince these giants that they should turn away from coal, oil and gas and transform themselves more broadly into “energy companies.” Sometimes that strategy appeared to be working – emphasis on appeared. Around the turn of the century, for instance, BP made a brief attempt to restyle itself as “Beyond Petroleum,” adapting a logo that looked like the sun and sticking solar panels on some of its gas stations. But its investments in alternative energy were never more than a tiny fraction of its budget for hydrocarbon exploration, and after a few years, many of those were wound down as new CEOs insisted on returning to the company’s “core business.” In December, BP finally closed its solar division. Shell shut down its solar and wind efforts in 2009. The five biggest oil companies have made more than $1 trillion in profits since the millennium – there’s simply too much money to be made on oil and gas and coal to go chasing after zephyrs and sunbeams.

Much of that profit stems from a single historical accident: Alone among businesses, the fossil-fuel industry is allowed to dump its main waste, carbon dioxide, for free. Nobody else gets that break – if you own a restaurant, you have to pay someone to cart away your trash, since piling it in the street would breed rats. But the fossil-fuel industry is different, and for sound historical reasons: Until a quarter-century ago, almost no one knew that CO2 was dangerous. But now that we understand that carbon is heating the planet and acidifying the oceans, its price becomes the central issue.

If you put a price on carbon, through a direct tax or other methods, it would enlist markets in the fight against global warming. Once Exxon has to pay for the damage its carbon is doing to the atmosphere, the price of its products would rise. Consumers would get a strong signal to use less fossil fuel – every time they stopped at the pump, they’d be reminded that you don’t need a semimilitary vehicle to go to the grocery store. The economic playing field would now be a level one for nonpolluting energy sources. And you could do it all without bankrupting citizens – a so-called “fee-and-dividend” scheme would put a hefty tax on coal and gas and oil, then simply divide up the proceeds, sending everyone in the country a check each month for their share of the added costs of carbon. By switching to cleaner energy sources, most people would actually come out ahead.
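
As a purely illustrative sketch of the dividend side of such a scheme: the $25-per-ton fee below is a made-up example, and the emissions and population figures are rough 2012-era values, none of them taken from the article.

    # Hypothetical fee-and-dividend arithmetic (all inputs are illustrative).
    fee_per_ton = 25.0      # assumed carbon fee, dollars per ton of CO2
    us_emissions = 5.4e9    # approximate annual US CO2 emissions, tons
    population = 314e6      # approximate US population

    revenue = fee_per_ton * us_emissions        # total fee collected per year
    monthly_check = revenue / population / 12   # equal per-person dividend

    print(f"annual revenue: ${revenue / 1e9:.0f} billion")
    print(f"monthly check : ${monthly_check:.0f} per person")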

There’s only one problem: Putting a price on carbon would reduce the profitability of the fossil-fuel industry. After all, the answer to the question “How high should the price of carbon be?” is “High enough to keep those carbon reserves that would take us past two degrees safely in the ground.” The higher the price on carbon, the more of those reserves would be worthless. The fight, in the end, is about whether the industry will succeed in its fight to keep its special pollution break alive past the point of climate catastrophe, or whether, in the economists’ parlance, we’ll make them internalize those externalities.

It’s not clear, of course, that the power of the fossil-fuel industry can be broken. The U.K. analysts who wrote the Carbon Tracker report and drew attention to these numbers had a relatively modest goal – they simply wanted to remind investors that climate change poses a very real risk to the stock prices of energy companies. Say something so big finally happens (a giant hurricane swamps Manhattan, a megadrought wipes out Midwest agriculture) that even the political power of the industry is inadequate to restrain legislators, who manage to regulate carbon. Suddenly those Chevron reserves would be a lot less valuable, and the stock would tank. Given that risk, the Carbon Tracker report warned investors to lessen their exposure, hedge it with some big plays in alternative energy.

“The regular process of economic evolution is that businesses are left with stranded assets all the time,” says Nick Robins, who runs HSBC’s Climate Change Centre. “Think of film cameras, or typewriters. The question is not whether this will happen. It will. Pension systems have been hit by the dot-com and credit crunch. They’ll be hit by this.” Still, it hasn’t been easy to convince investors, who have shared in the oil industry’s record profits. “The reason you get bubbles,” sighs Leaton, “is that everyone thinks they’re the best analyst – that they’ll go to the edge of the cliff and then jump back when everyone else goes over.”

So pure self-interest probably won’t spark a transformative challenge to fossil fuel. But moral outrage just might – and that’s the real meaning of this new math. It could, plausibly, give rise to a real movement.

Once, in recent corporate history, anger forced an industry to make basic changes. That was the campaign in the 1980s demanding divestment from companies doing business in South Africa. It rose first on college campuses and then spread to municipal and state governments; 155 campuses eventually divested, and by the end of the decade, more than 80 cities, 25 states and 19 counties had taken some form of binding economic action against companies connected to the apartheid regime. “The end of apartheid stands as one of the crowning accomplishments of the past century,” as Archbishop Desmond Tutu put it, “but we would not have succeeded without the help of international pressure,” especially from “the divestment movement of the 1980s.”

The fossil-fuel industry is obviously a tougher opponent, and even if you could force the hand of particular companies, you’d still have to figure out a strategy for dealing with all the sovereign nations that, in effect, act as fossil-fuel companies. But the link for college students is even more obvious in this case. If their college’s endowment portfolio has fossil-fuel stock, then their educations are being subsidized by investments that guarantee they won’t have much of a planet on which to make use of their degree. (The same logic applies to the world’s largest investors, pension funds, which are also theoretically interested in the future – that’s when their members will “enjoy their retirement.”) “Given the severity of the climate crisis, a comparable demand that our institutions dump stock from companies that are destroying the planet would not only be appropriate but effective,” says Bob Massie, a former anti-apartheid activist who helped found the Investor Network on Climate Risk. “The message is simple: We have had enough. We must sever the ties with those who profit from climate change – now.”

Movements rarely have predictable outcomes. But any campaign that weakens the fossil-fuel industry’s political standing clearly increases the chances of retiring its special breaks. Consider President Obama’s signal achievement in the climate fight, the large increase he won in mileage requirements for cars. Scientists, environmentalists and engineers had advocated such policies for decades, but until Detroit came under severe financial pressure, it was politically powerful enough to fend them off. If people come to understand the cold, mathematical truth – that the fossil-fuel industry is systematically undermining the planet’s physical systems – it might weaken it enough to matter politically. Exxon and their ilk might drop their opposition to a fee-and-dividend solution; they might even decide to become true energy companies, this time for real.

Even if such a campaign is possible, however, we may have waited too long to start it. To make a real difference – to keep us under a temperature increase of two degrees – you’d need to change carbon pricing in Washington, and then use that victory to leverage similar shifts around the world. At this point, what happens in the U.S. is most important for how it will influence China and India, where emissions are growing fastest. (In early June, researchers concluded that China has probably under-reported its emissions by up to 20 percent.) The three numbers I’ve described are daunting – they may define an essentially impossible future. But at least they provide intellectual clarity about the greatest challenge humans have ever faced. We know how much we can burn, and we know who’s planning to burn more. Climate change operates on a geological scale and time frame, but it’s not an impersonal force of nature; the more carefully you do the math, the more thoroughly you realize that this is, at bottom, a moral issue; we have met the enemy and they is Shell.

Meanwhile the tide of numbers continues. The week after the Rio conference limped to its conclusion, Arctic sea ice hit the lowest level ever recorded for that date. Last month, on a single weekend, Tropical Storm Debby dumped more than 20 inches of rain on Florida – the earliest the season’s fourth-named cyclone has ever arrived. At the same time, the largest fire in New Mexico history burned on, and the most destructive fire in Colorado’s annals claimed 346 homes in Colorado Springs – breaking a record set the week before in Fort Collins. This month, scientists issued a new study concluding that global warming has dramatically increased the likelihood of severe heat and drought – days after a heat wave across the Plains and Midwest broke records that had stood since the Dust Bowl, threatening this year’s harvest. You want a big number? In the course of this month, a quadrillion kernels of corn need to pollinate across the grain belt, something they can’t do if temperatures remain off the charts. Just like us, our crops are adapted to the Holocene, the 11,000-year period of climatic stability we’re now leaving… in the dust.

This story is from the August 2nd, 2012 issue of Rolling Stone.

Read more: http://www.rollingstone.com/politics/news/global-warmings-terrifying-new-math-20120719