Tag archive: Física

Quantum physics enables revolutionary imaging method (Science Daily)

Date: August 28, 2014

Source: University of Vienna

Summary: Researchers have developed a fundamentally new quantum imaging technique with strikingly counter-intuitive features. For the first time, an image has been obtained without ever detecting the light that was used to illuminate the imaged object, while the light revealing the image never touches the imaged object.

A new quantum imaging technique generates images with photons that have never touched the object — in this case a sketch of a cat. This alludes to the famous Schrödinger cat paradox, in which a cat inside a closed box is said to be simultaneously dead and alive as long as there is no information outside the box to rule out one option over the other. Similarly, the new imaging technique relies on a lack of information regarding where the photons are created and which path they take. Credit: Copyright: Patricia Enigl, IQOQI

Researchers from the Institute for Quantum Optics and Quantum Information (IQOQI), the Vienna Center for Quantum Science and Technology (VCQ), and the University of Vienna have developed a fundamentally new quantum imaging technique with strikingly counterintuitive features. For the first time, an image has been obtained without ever detecting the light that was used to illuminate the imaged object, while the light revealing the image never touches the imaged object.

In general, to obtain an image of an object one has to illuminate it with a light beam and use a camera to sense the light that is either scattered or transmitted through that object. The type of light used to shine onto the object depends on the properties that one would like to image. Unfortunately, in many practical situations the ideal type of light for the illumination of the object is one for which cameras do not exist.

The experiment published in Nature this week for the first time breaks this seemingly self-evident limitation. The object (e.g. the contour of a cat) is illuminated with light that remains undetected. Moreover, the light that forms an image of the cat on the camera never interacts with it. In order to realise their experiment, the scientists use so-called “entangled” pairs of photons. These pairs of photons — which are like interlinked twins — are created when a laser interacts with a non-linear crystal. In the experiment, the laser illuminates two separate crystals, creating one pair of twin photons (consisting of one infrared photon and a “sister” red photon) in either crystal. The object is placed in between the two crystals. The arrangement is such that if a photon pair is created in the first crystal, only the infrared photon passes through the imaged object. Its path then goes through the second crystal where it fully combines with any infrared photons that would be created there.

With this crucial step, there is now, in principle, no possibility to find out which crystal actually created the photon pair. Moreover, there is now no information in the infrared photon about the object. However, due to the quantum correlations of the entangled pairs the information about the object is now contained in the red photons — although they never touched the object. Bringing together both paths of the red photons (from the first and the second crystal) creates bright and dark patterns, which form the exact image of the object.

Stunningly, all of the infrared photons (the only light that illuminated the object) are discarded; the picture is obtained by only detecting the red photons that never interacted with the object. The camera used in the experiment is even blind to the infrared photons that have interacted with the object. In fact, very low light infrared cameras are essentially unavailable on the commercial market. The researchers are confident that their new imaging concept is very versatile and could even enable imaging in the important mid-infrared region. It could find applications where low light imaging is crucial, in fields such as biological or medical imaging.
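The role of the two beams can be illustrated with a toy calculation: the rate of signal (red) photons at a given camera pixel varies as 1 + |T|·cos(φ + γ), where T and φ are the transmission and phase that the object imprints on the undetected idler (infrared) beam, and γ is the interferometer phase. The Python sketch below is only a cartoon of that relation, with a made-up object mask; it is not the group's reconstruction procedure.

```python
import numpy as np

# Toy model of imaging with undetected photons: the signal (red) photons are
# detected, the idler (infrared) photons that pass through the object are not.
# The object enters only through the amplitude T and phase phi it imprints
# on the idler mode.

N = 200
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]

# Hypothetical object: an opaque disk (T = 0 inside, 1 outside) plus a phase ramp.
T = np.where(x**2 + y**2 < 0.25, 0.0, 1.0)   # idler transmission |T(x, y)|
phi = 2.0 * np.pi * 0.3 * x                  # idler phase shift phi(x, y)

def signal_counts(interferometer_phase):
    """Relative count rate of signal photons at each camera pixel."""
    return 0.5 * (1.0 + T * np.cos(phi + interferometer_phase))

bright = signal_counts(0.0)      # constructive setting
dark = signal_counts(np.pi)      # destructive setting

# Where the object blocks the idler (T = 0) there is no interference, so the
# bright and dark frames agree there; everywhere else they differ.
contrast = bright - dark
print("max contrast outside the object:", contrast[T > 0].max())
print("max contrast inside the object: ", np.abs(contrast[T == 0]).max())
```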

 

Journal Reference:

  1. Gabriela Barreto Lemos, Victoria Borish, Garrett D. Cole, Sven Ramelow, Radek Lapkiewicz, Anton Zeilinger. Quantum imaging with undetected photons. Nature, 2014; 512 (7515): 409 DOI: 10.1038/nature13586

The Quantum Cheshire Cat: Can neutrons be located at a different place than their own spin? (Science Daily)

Date: July 29, 2014

Source: Vienna University of Technology, TU Vienna

Summary: Can neutrons be located at a different place than their own spin? A quantum experiment demonstrates a new kind of quantum paradox. The Cheshire Cat featured in Lewis Carroll’s novel “Alice in Wonderland” is a remarkable creature: it disappears, leaving its grin behind. Can an object be separated from its properties? It is possible in the quantum world. In an experiment, neutrons travel along a different path than one of their properties — their magnetic moment. This “Quantum Cheshire Cat” could be used to make high-precision measurements less sensitive to external perturbations.


The basic idea of the Quantum Cheshire Cat: In an interferometer, an object is separated from one of its properties – like a cat, moving on a different path than its own grin. Credit: Image courtesy of Vienna University of Technology, TU Vienna

Can neutrons be located at a different place than their own spin? A quantum experiment, carried out by a team of researchers from the Vienna University of Technology, demonstrates a new kind of quantum paradox.

The Cheshire Cat featured in Lewis Carroll’s novel “Alice in Wonderland” is a remarkable creature: it disappears, leaving its grin behind. Can an object be separated from its properties? It is possible in the quantum world. In an experiment, neutrons travel along a different path than one of their properties — their magnetic moment. This “Quantum Cheshire Cat” could be used to make high-precision measurements less sensitive to external perturbations.

At Different Places at Once

According to the laws of quantum physics, particles can be in different physical states at the same time. If, for example, a beam of neutrons is divided into two beams using a silicon crystal, it can be shown that the individual neutrons do not have to decide between the two possible paths. Instead, they can travel along both paths at the same time in a quantum superposition.

“This experimental technique is called neutron interferometry,” says Professor Yuji Hasegawa from the Vienna University of Technology. “It was invented here at our institute in the 1970s, and it has turned out to be the perfect tool to investigate fundamental quantum mechanics.”

To see if the same technique could separate the properties of a particle from the particle itself, Yuji Hasegawa brought together a team including Tobias Denkmayr, Hermann Geppert and Stephan Sponar, together with Alexandre Matzkin from CNRS in France, Professor Jeff Tollaksen from Chapman University in California and Hartmut Lemmel from the Institut Laue-Langevin to develop a brand new quantum experiment.

The experiment was done at the neutron source at the Institut Laue-Langevin (ILL) in Grenoble, where a unique kind of measuring station is operated by the Viennese team, supported by Hartmut Lemmel from ILL.

Where is the Cat …?

Neutrons are not electrically charged, but they carry a magnetic moment. They have a magnetic direction, the neutron spin, which can be influenced by external magnetic fields.

First, a neutron beam is split into two parts in a neutron interferometer. Then the spins of the two beams are rotated into different directions: the upper neutron beam has a spin parallel to the neutrons’ trajectory, while the spin of the lower beam points in the opposite direction. After the two beams have been recombined, only those neutrons with spin parallel to their direction of motion are selected. All the others are just ignored. “This is called postselection,” says Hermann Geppert. “The beam contains neutrons of both spin directions, but we only analyse part of the neutrons.”

These neutrons, which are found to have a spin parallel to their direction of motion, must clearly have travelled along the upper path — only there do the neutrons have this spin state. This can be shown in the experiment. If the lower beam is sent through a filter which absorbs some of the neutrons, then the number of neutrons with spin parallel to their trajectory stays the same. If the upper beam is sent through a filter, then the number of these neutrons is reduced.

… and Where is the Grin?

Things get tricky when the system is used to measure where the neutron spin is located: the spin can be slightly changed using a magnetic field. When the two beams are recombined appropriately, they can amplify or cancel each other. This is exactly what can be seen in the measurement if the magnetic field is applied to the lower beam — but that is the path which the neutrons considered in the experiment are actually never supposed to take. A magnetic field applied to the upper beam, on the other hand, does not have any effect.

“By preparing the neutrons in a special initial state and then postselecting another state, we can achieve a situation in which both the possible paths in the interferometer are important for the experiment, but in very different ways,” says Tobias Denkmayr. “Along one of the paths, the particles themselves couple to our measurement device, but only the other path is sensitive to magnetic spin coupling. The system behaves as if the particles were spatially separated from their properties.”
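The "particle here, property there" statement can be phrased in terms of weak values, the quantities this kind of pre- and postselected experiment estimates. The numpy sketch below evaluates them for a simplified version of the situation described above (upper path with spin parallel to the motion, lower path antiparallel, postselection on parallel spin in the recombined beam); it illustrates the arithmetic only and is not the published data analysis.

```python
import numpy as np

# Weak-value sketch of the quantum Cheshire Cat (illustrative states only).

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)   # spin component probed by the field
I2 = np.eye(2, dtype=complex)

plus_x = (up + down) / np.sqrt(2)    # spin parallel to the beam direction
minus_x = (up - down) / np.sqrt(2)   # spin antiparallel to the beam direction

path_I = np.array([1, 0], dtype=complex)   # upper path
path_II = np.array([0, 1], dtype=complex)  # lower path

# Pre-selected state: upper beam carries spin +x, lower beam carries spin -x.
psi_i = (np.kron(path_I, plus_x) + np.kron(path_II, minus_x)) / np.sqrt(2)
# Post-selected state: recombined beams, spin +x only.
psi_f = np.kron((path_I + path_II) / np.sqrt(2), plus_x)

def weak_value(op):
    return (psi_f.conj() @ op @ psi_i) / (psi_f.conj() @ psi_i)

P_I = np.kron(np.outer(path_I, path_I.conj()), I2)    # "is the neutron on path I?"
P_II = np.kron(np.outer(path_II, path_II.conj()), I2)
M_I = np.kron(np.outer(path_I, path_I.conj()), sz)    # "is its magnetic moment on path I?"
M_II = np.kron(np.outer(path_II, path_II.conj()), sz)

print("particle on path I :", weak_value(P_I).real)   # -> 1
print("particle on path II:", weak_value(P_II).real)  # -> 0
print("moment on path I   :", weak_value(M_I).real)   # -> 0
print("moment on path II  :", weak_value(M_II).real)  # -> 1
```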

High Hopes for High-Precision Measurements

This counterintuitive effect is very interesting for high-precision measurements, which are very often based on the principle of quantum interference. “When the quantum system has a property you want to measure and another property which makes the system prone to perturbations, the two can be separated using a Quantum Cheshire Cat, and possibly the perturbation can be minimized,” says Stephan Sponar.

The idea of the Quantum Cheshire Cat was first developed by Prof. Jeff Tollaksen and Prof. Yakir Aharonov from the Chapman University. An experimental proposal was published last year. The measurements which have now been presented are the first experimental proof of this phenomenon.

Journal Reference:

  1. Tobias Denkmayr, Hermann Geppert, Stephan Sponar, Hartmut Lemmel, Alexandre Matzkin, Jeff Tollaksen, Yuji Hasegawa. Observation of a quantum Cheshire Cat in a matter-wave interferometer experiment. Nature Communications, 2014; 5 DOI: 10.1038/ncomms5492

Is the universe a bubble? Let’s check: Making the multiverse hypothesis testable (Science Daily)

Date: July 17, 2014

Source: Perimeter Institute

Summary: Scientists are working to bring the multiverse hypothesis, which to some sounds like a fanciful tale, firmly into the realm of testable science. Never mind the Big Bang; in the beginning was the vacuum. The vacuum simmered with energy (variously called dark energy, vacuum energy, the inflaton field, or the Higgs field). Like water in a pot, this high energy began to evaporate — bubbles formed.

Screenshot from a video of Matthew Johnson explaining the related concepts of inflation, eternal inflation, and the multiverse (see http://youtu.be/w0uyR6JPkz4). Credit: Image courtesy of Perimeter Institute

Perimeter Associate Faculty member Matthew Johnson and his colleagues are working to bring the multiverse hypothesis, which to some sounds like a fanciful tale, firmly into the realm of testable science.

Never mind the big bang; in the beginning was the vacuum. The vacuum simmered with energy (variously called dark energy, vacuum energy, the inflaton field, or the Higgs field). Like water in a pot, this high energy began to evaporate — bubbles formed.

Each bubble contained another vacuum, whose energy was lower, but still not nothing. This energy drove the bubbles to expand. Inevitably, some bubbles bumped into each other. It’s possible some produced secondary bubbles. Maybe the bubbles were rare and far apart; maybe they were packed close as foam.

But here’s the thing: each of these bubbles was a universe. In this picture, our universe is one bubble in a frothy sea of bubble universes.

That’s the multiverse hypothesis in a bubbly nutshell.

It’s not a bad story. It is, as scientists say, physically motivated — not just made up, but rather arising from what we think we know about cosmic inflation.

Cosmic inflation isn’t universally accepted — most cyclical models of the universe reject the idea. Nevertheless, inflation is a leading theory of the universe’s very early development, and there is some observational evidence to support it.

Inflation holds that in the instant after the big bang, the universe expanded rapidly — so rapidly that an area of space once a nanometer square ended up more than a quarter-billion light years across in just a trillionth of a trillionth of a trillionth of a second. It’s an amazing idea, but it would explain some otherwise puzzling astrophysical observations.
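Taking the article's figures at face value, a quick back-of-the-envelope check of the implied stretch factor (the numbers above are the article's; only the arithmetic is added here):

```python
# Rough check of the expansion factor quoted above (illustrative arithmetic only).
ly_in_m = 9.4607e15            # one light year in metres
initial_size = 1e-9            # about a nanometre, in metres
final_size = 0.25e9 * ly_in_m  # a quarter-billion light years, in metres
duration = 1e-36               # a trillionth of a trillionth of a trillionth of a second

stretch = final_size / initial_size
print(f"linear stretch factor: {stretch:.1e}")                    # roughly 2e33
print(f"average rate of the stretch: {final_size / duration:.1e} m/s")
# The rate vastly exceeds the speed of light, which is allowed:
# it is space itself expanding, not anything travelling through space.
```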

Inflation is thought to have been driven by an inflaton field — which is vacuum energy by another name. Once you postulate that the inflaton field exists, it’s hard to avoid an “in the beginning was the vacuum” kind of story. This is where the theory of inflation becomes controversial — when it starts to postulate multiple universes.

Proponents of the multiverse theory argue that it’s the next logical step in the inflation story. Detractors argue that it is not physics, but metaphysics — that it is not science because it cannot be tested. After all, physics lives or dies by data that can be gathered and predictions that can be checked.

That’s where Perimeter Associate Faculty member Matthew Johnson comes in. Working with a small team that also includes Perimeter Faculty member Luis Lehner, Johnson is working to bring the multiverse hypothesis firmly into the realm of testable science.

“That’s what this research program is all about,” he says. “We’re trying to find out what the testable predictions of this picture would be, and then going out and looking for them.”

Specifically, Johnson has been considering the rare cases in which our bubble universe might collide with another bubble universe. He lays out the steps: “We simulate the whole universe. We start with a multiverse that has two bubbles in it, we collide the bubbles on a computer to figure out what happens, and then we stick a virtual observer in various places and ask what that observer would see from there.”

Simulating the whole universe — or more than one — seems like a tall order, but apparently that’s not so.

“Simulating the universe is easy,” says Johnson. Simulations, he explains, are not accounting for every atom, every star, or every galaxy — in fact, they account for none of them.

“We’re simulating things only on the largest scales,” he says. “All I need is gravity and the stuff that makes these bubbles up. We’re now at the point where if you have a favourite model of the multiverse, I can stick it on a computer and tell you what you should see.”

That’s a small step for a computer simulation program, but a giant leap for the field of multiverse cosmology. By producing testable predictions, the multiverse model has crossed the line between appealing story and real science.

In fact, Johnson says, the program has reached the point where it can rule out certain models of the multiverse: “We’re now able to say that some models predict something that we should be able to see, and since we don’t in fact see it, we can rule those models out.”

For instance, collisions of one bubble universe with another would leave what Johnson calls “a disk on the sky” — a circular bruise in the cosmic microwave background. That the search for such a disk has so far come up empty makes certain collision-filled models less likely.
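A cartoon of what such a search involves: add a faint disk-shaped temperature shift to a noisy patch of sky and look for it with a matched filter. The sketch below is purely illustrative, with a made-up disk position, size and amplitude; the actual analysis is the hierarchical Bayesian search described in reference 4 below.

```python
import numpy as np

# Cartoon of the "disk on the sky" search: recover a faint disk-shaped bruise
# hidden in a noisy flat-sky patch with a matched filter (illustration only).

rng = np.random.default_rng(0)
N = 256
y, x = np.mgrid[0:N, 0:N]

disk = ((x - 170)**2 + (y - 90)**2 < 20**2).astype(float)   # hypothetical bruise
sky = rng.normal(0.0, 1.0, (N, N)) + 0.2 * disk             # noise plus faint disk

template = ((x - N // 2)**2 + (y - N // 2)**2 < 20**2).astype(float)
template -= template.mean()                                 # zero-mean matched filter

# Cross-correlate sky and template via FFT; the peak marks the best-fitting centre.
score = np.fft.ifft2(np.fft.fft2(sky) * np.conj(np.fft.fft2(np.fft.ifftshift(template)))).real
peak = np.unravel_index(score.argmax(), score.shape)
print("recovered disk centre (row, col):", peak)            # close to (90, 170)
```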

Meanwhile, the team is at work figuring out what other kinds of evidence a bubble collision might leave behind. It’s the first time, the team writes in their paper, that anyone has produced a direct quantitative set of predictions for the observable signatures of bubble collisions. And though none of those signatures has so far been found, some of them are possible to look for.

The real significance of this work is as a proof of principle: it shows that the multiverse can be testable. In other words, if we are living in a bubble universe, we might actually be able to tell.

Video: https://www.youtube.com/watch?v=w0uyR6JPkz4

Journal References:

  1. Matthew C. Johnson, Hiranya V. Peiris, Luis Lehner. Determining the outcome of cosmic bubble collisions in full general relativity. Physical Review D, 2012; 85 (8) DOI: 10.1103/PhysRevD.85.083516
  2. Carroll L. Wainwright, Matthew C. Johnson, Hiranya V. Peiris, Anthony Aguirre, Luis Lehner, Steven L. Liebling. Simulating the universe(s): from cosmic bubble collisions to cosmological observables with numerical relativity. Journal of Cosmology and Astroparticle Physics, 2014; 2014 (03): 030 DOI: 10.1088/1475-7516/2014/03/030
  3. Carroll L. Wainwright, Matthew C. Johnson, Anthony Aguirre, Hiranya V. Peiris. Simulating the universe(s) II: phenomenology of cosmic bubble collisions in full General Relativity. Submitted to arXiv, 2014 [link]
  4. Stephen M. Feeney, Matthew C. Johnson, Jason D. McEwen, Daniel J. Mortlock, Hiranya V. Peiris. Hierarchical Bayesian detection algorithm for early-universe relics in the cosmic microwave background. Physical Review D, 2013; 88 (4) DOI: 10.1103/PhysRevD.88.043012

Experiment demonstrates Higgs boson decay into the components of matter (Fapesp)

The confirmation corroborates the hypothesis that the boson is what generates the masses of the particles that constitute matter. The discovery was announced in Nature Physics by a group with Brazilian participation (CMS)
July 2, 2014

By José Tadeu Arantes

Agência FAPESP – The direct decay of the Higgs boson into fermions – corroborating the hypothesis that it is what generates the masses of the particles that constitute matter – has been confirmed at the Large Hadron Collider (LHC), the gigantic experimental complex maintained by the European Organization for Nuclear Research (CERN) on the border between Switzerland and France.

The announcement of the discovery has just been published in the journal Nature Physics by the group of researchers working on the Compact Muon Solenoid (CMS) detector.

The international CMS team, with roughly 4,300 members (physicists, engineers, technicians, students and administrative staff), includes two groups of Brazilian scientists: one based at the Center for Scientific Computing (NCC) of the Universidade Estadual Paulista (Unesp), in São Paulo, and the other at the Centro Brasileiro de Pesquisas Físicas, of the Ministry of Science, Technology and Innovation (MCTI), and at the Universidade do Estado do Rio de Janeiro (Uerj), in Rio de Janeiro.

“The experiment measured, for the first time, the decays of the Higgs boson into bottom quarks and tau leptons, and showed that they are consistent with the hypothesis that the masses of these particles are also generated through the Higgs mechanism,” physicist Sérgio Novaes, a professor at Unesp, told Agência FAPESP.

Novaes leads the Unesp group in the CMS experiment and is the principal investigator of the Thematic Project “São Paulo Research and Analysis Center” (Sprace), which is part of CMS and supported by FAPESP.

The new result reinforces the conviction that the object whose discovery was officially announced on July 4, 2012 really is the Higgs boson, the particle that gives mass to the other particles according to the Standard Model, the theoretical framework that describes the supposedly fundamental components and interactions of the material world.

“Since the official announcement of the discovery of the Higgs boson, a great deal of evidence has been collected showing that the particle matches the predictions of the Standard Model. These were, fundamentally, studies involving its decay into other bosons (the particles responsible for the interactions of matter), such as photons (the bosons of the electromagnetic interaction) and the W and Z (the bosons of the weak interaction),” Novaes said.

“However, even granting that the Higgs boson is responsible for generating the masses of the W and Z, it was not obvious that it should also generate the masses of the fermions (the particles that constitute matter, such as the quarks, which make up protons and neutrons, and the leptons, such as the electron and others), because the mechanism is somewhat different, involving the so-called ‘Yukawa coupling’ between these particles and the Higgs field,” he continued.

The researchers were looking for direct evidence that the decay of the Higgs boson into these matter fields follows the Standard Model recipe. That was not an easy task because, precisely since it confers mass, the Higgs tends to decay into the most massive particles available, such as the W and Z bosons, whose masses are roughly 80 and 90 times that of the proton, respectively.

“On top of that, there were other complications. In the particular case of the bottom quark, for example, a bottom-antibottom pair can be produced in many other ways besides Higgs decay, so all of those other possibilities had to be filtered out. And in the case of the tau lepton, the probability of the Higgs decaying into it is very small,” Novaes said.

“To give an idea, for every trillion collisions at the LHC there is one event with a Higgs boson. Of these, fewer than 10% correspond to the decay of the Higgs into a pair of taus. Moreover, a tau pair can also be produced in other ways, for example from a photon, with much higher frequency,” he said.
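Reading those quoted rates literally gives a sense of the scale of the search (the figures are the ones from the interview; only the arithmetic is added here):

```python
# Illustrative arithmetic with the rates quoted above.
collisions_per_higgs = 1e12   # "one Higgs event per trillion collisions"
tau_pair_fraction = 0.10      # "fewer than 10% decay to a tau pair" (taken as an upper bound)

collisions_per_h_to_tautau = collisions_per_higgs / tau_pair_fraction
print(f"collisions per H -> tau tau event: at least {collisions_per_h_to_tautau:.0e}")
# That is at least ten trillion collisions for a single Higgs-to-tau-pair event,
# before any selection efficiency or background considerations.
```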

To establish the decay of the Higgs boson into the bottom quark and the tau lepton with confidence, the CMS team had to collect and process a staggering amount of data. “That is why our article in Nature took so long to come out. It was literally harder than looking for a needle in a haystack,” Novaes said.

The interesting thing, according to the researcher, is that even in these cases, where it was thought the Higgs might depart from the Standard Model recipe, it did not. The experiments agreed closely with the theoretical predictions.

“It is always surprising to see the agreement between experiment and theory. For years the Higgs boson was regarded as a mere mathematical artifice to give the Standard Model internal consistency. Many physicists bet it would never be discovered. The particle was sought for nearly half a century and ended up being accepted for lack of an alternative proposal capable of accounting for all the predictions with the same accuracy. So the results we are now obtaining at the LHC are truly spectacular. We tend to be astonished when science fails. But the real astonishment is when it works,” Novaes said.

“In 2015 the LHC should run at twice the energy. The expectation is to reach 14 teraelectronvolts (TeV), that is, 14 trillion electron volts. At that level, the proton beams will be accelerated to more than 99.99% of the speed of light. It is exciting to imagine what we may discover,” he said.

The article Evidence for the direct decay of the 125 GeV Higgs boson to fermions (doi:10.1038/nphys3005), by the CMS collaboration, can be read at http://nature.com/nphys/journal/vaop/ncurrent/full/nphys3005.html

 

GLOSSARY

Standard Model

A model developed over the second half of the 20th century through the collaboration of a large number of physicists from several countries, with great predictive power for events in the subatomic world. It encompasses three of the four known interactions (electromagnetic, weak and strong) but does not incorporate the gravitational interaction. The Standard Model is based on the concept of elementary particles, grouped into fermions (the particles that constitute matter), bosons (the particles that mediate the interactions) and the Higgs boson (the particle that gives mass to the other particles).

Fermions

Named after the Italian physicist Enrico Fermi (1901-1954), winner of the 1938 Nobel Prize in Physics. According to the Standard Model, they are the particles that constitute matter. They comprise six quarks (up, down, charm, strange, top, bottom), six leptons (electron, muon, tau, electron neutrino, muon neutrino, tau neutrino) and their respective antiparticles. Quarks group in triplets to form the baryons (protons and neutrons) and in quark-antiquark pairs to form the mesons. Together, baryons and mesons constitute the hadrons.

Bosons

Named after the Indian physicist Satyendra Nath Bose (1894-1974). According to the Standard Model, the vector bosons are the particles that mediate the interactions. They comprise the photon (mediator of the electromagnetic interaction); the W+, W− and Z (mediators of the weak interaction); and eight types of gluons (mediators of the strong interaction). The graviton (the supposed mediator of the gravitational interaction) has not been found and is not part of the Standard Model.

Higgs boson

Named after the British physicist Peter Higgs (born in 1929). According to the Standard Model, it is the only elementary scalar boson (the other elementary bosons are vector bosons). In simplified terms, it is said to be the particle that gives mass to the other particles. It was postulated to explain why all the elementary particles of the Standard Model have mass, except the photon and the gluons. Its mass, of 125 to 127 GeV/c2 (gigaelectronvolts divided by the speed of light squared), is equivalent to approximately 134.2 to 136.3 times the mass of the proton. Being one of the most massive particles proposed by the Standard Model, it can only be produced in settings of extremely high energy (such as those that would have existed shortly after the Big Bang, or those now reached at the LHC), and it decays almost immediately into particles of lower mass. After nearly half a century of searches since its theoretical postulation in 1964, its discovery was officially announced on July 4, 2012. The announcement was made independently by the two main LHC teams, associated with the CMS and Atlas detectors. In recognition of the discovery, the Royal Swedish Academy awarded the 2013 Nobel Prize in Physics to Peter Higgs and to the Belgian François Englert, two of the particle's proposers.

Decay

A spontaneous process by which a particle transforms into others of lower mass. If the resulting particles are not stable, the decay process can continue. In the case discussed in the article, the decay of the Higgs boson into fermions (specifically, into the bottom quark and the tau lepton) is taken as evidence that the Higgs generates the masses of these particles.

LHC

The Large Hadron Collider is the largest and most sophisticated experimental complex humanity has ever built. Constructed by CERN over 10 years, between 1998 and 2008, it consists basically of a circular tunnel 27 kilometers long, 175 meters below ground, on the border between France and Switzerland. In it, proton beams are accelerated in opposite directions and made to collide at extremely high energies, generating with each collision other kinds of particles that make it possible to investigate the structure of matter. The expectation for 2015 is to produce 14 TeV collisions (14 trillion electron volts), with the protons moving at more than 99.99% of the speed of light. The LHC has seven detectors, the two main ones being CMS and Atlas.

With a string around the neck (Folha de S.Paulo)

São Paulo, Sunday, November 5, 2006

In a new book, an American physicist reveals the row being waged behind the scenes of academia over string theory and argues that perhaps the Universe is not elegant after all

FLÁVIO DE CARVALHO SERPA
SPECIAL TO FOLHA

For some time now the physics community has been split by a silent war, muffled behind the walls of academia. Now, for the first time, two books bring the details of this dispute to the public, calling into question the way modern science is produced and revealing a disease that may be spreading through the entire academic edifice.

“The Trouble With Physics”, a book by theoretical physicist Lee Smolin released last month in the US and not yet translated in Brazil, opens a discussion that many would prefer to keep away from the general public: has modern physics been completely stagnant for three decades?

“The story I am going to tell,” Smolin writes, “could be read as a tragedy. To be clear and give away the ending: we have failed,” he says, taking on the role of spokesman for an entire generation of scientists. Worse: the reason for the stagnation, he argues, is the formation of gangs of scientists, including the most brilliant minds in the world, to keep dissident theorists out of academic posts.

The main accused are the physicists committed to so-called string theory, which since the early 1970s has promised to unify all the forces and particles of the known Universe. “String theory has such a dominant position in academia,” Smolin writes, “that it is practically career suicide for a young theorist not to join the wave.”

Smolin, a polemical and respected theoretical physicist with a PhD from Harvard and a professorship at Yale, is not alone. The mathematical physicist Peter Woit has also fired a heavy accusation at string physicists, one that shows through in the very title of his book: “Not Even Wrong”. That was the worst insult the legendary physicist Wolfgang Pauli reserved for shoddy papers and theses. After all, if a thesis is demonstrably wrong, it at least has the virtue of closing off dead ends in the search for the right path.

But Smolin's warning is not restricted to the theoretical development of physics. To preserve academic privileges, the community of string theorists has taken over the main universities and research centers, blocking the careers of researchers with alternative approaches. Smolin, who once courted string theory himself, producing 18 papers on the subject, steps into the scientific arena as a kind of mafia defector, firing his machine gun in every direction.

Standard Model
The most surprising thing is that the trouble began right after decades of continuous advances in the century that opened with Einstein and the consolidation of quantum mechanics.

The final chapter of that epic, and the root of the mess, was the spectacular success of the so-called Standard Model of elementary forces and particles. That formulation, the work of geniuses such as Richard Feynman, Freeman Dyson, Murray Gell-Mann and others, had as its swan song the theoretical and experimental confirmation of the unification of the weak force and electromagnetism by Nobel laureates Abdus Salam and Steven Weinberg. The unification of forces has been the holy grail of physics since Johannes Kepler (unification of the celestial orbits), through Isaac Newton (unification of gravity and orbital motion), James Maxwell (unification of light, electricity and magnetism) and Einstein (unification of energy and matter).

But the imposing edifice of the Standard Model had (and still has) serious cracks. Although it describes all the detected and predicted particles and forces with incredible precision, it does not incorporate the force of gravity, nor does it say anything about the historic divide between the mutually exclusive worlds of general relativity and quantum mechanics.

Even so, all physicists in particle and high-energy physics, theorists and experimentalists alike, plunged into the furious number-crunching of the Standard Model. Absorbed in what is called the normal mode of scientific production (as opposed to periods of revolutionary eruption, such as that of relativity), the most brilliant minds in the world reached a dead end: nearly all the experimental predictions of the Standard Model were successfully tested. What to do next?

Good vibrations
That is when strings come in. Instead of nearly dimensionless point particles as the basic constituents of matter, the revolutionary idea arises that the elementary entities are in fact literally one-dimensional strings. Identical to violin strings (in the mathematical sense), only of minuscule dimensions (on the order of a trillion times smaller than a proton) and, more astonishingly, vibrating in a Universe with more than the usual three dimensions. In the most recent formulations, no fewer than 11, including time.

At first, progress was astonishing: the force of gravity, an outcast from quantum mechanics and the Standard Model, emerged naturally from the harmonies of the strings, as if resurrecting the Pythagorean intuitions. All forces and particles were described mathematically as particular modes of oscillation of a few basic types of string.

But soon complications also began to sprout uncontrollably from the equations. If the Standard Model required 19 constants, adjusted by hand by theorists to match reality, the ramifications of string theory came to require hundreds of them.

In the beginning, the beauty of string theory lay in there being only one parameter, the string tension. Each particle or force would be just a variation of the basic strings, changing only their tension and mode of vibration. Gravity, for example, would be a closed string, like a rubber band used to bundle banknotes. Electrons would be strings oscillating with only one end attached.

Each adjustment of the geometry made to render the theory compatible with the observable Universe made the model ever more complicated, much like the cosmic model of the Egyptian astronomer Ptolemy, with its additions of cycles and epicycles to explain the motions of the planets.

Macumba
Then came the final explosion. Soon five alternative string theories appeared. Then the conjecture that there is a certain M-theory that would group them all as special cases. Finally, string theory, which had promised a simplicity and beauty as clear as the famous E = mc2, turned out to be capable of producing no fewer than 10^500 (the number 1 followed by 500 zeros) possible solutions, each one representing an alternative Universe with different forces and particles. In other words, there are more solutions to the string physicists' equations than there are particles and atoms in the entire Universe.

Worse, a wilder fraction of the string theory community finds this quite natural and now insinuates that the demand for experimental proof is an archaic hangover of science.

“Is it worth trying to teach quantum mechanics to a dog?” they ask. Would it be equally useless for our brains to try to understand and experimentally prove the great mess that has settled over science in the last 30 years?

Of course, most of the most brilliant string theorists do not endorse this epistemological impasse. Brian Greene himself, the American physicist and leading popularizer of the string picture, author of the best-seller (more talked about than read, it is true) “The Elegant Universe”, wrote an article for The New York Times stressing that experimental proof is essential and that the question raised by Smolin is pertinent. “Mathematical rigor and elegance are not enough to demonstrate a theory's relevance. To be considered a correct description of the Universe, a theory must make predictions that are confirmed by experiment.


And, as a small but vocal group of string theory critics rightly points out, string theory has yet to do this. That is a key question, and it deserves serious scrutiny.”

While the dialogue between Greene and Smolin has been diplomatic, in the blogs of the scientific communities the war is several notches below that. On the online diary of the physicist Lubos Motl, of Harvard (motls.blogspot.com), for example, posts by the Brazilian cosmologist Christine Dantas (christinedantas.blogspot.com) have even been deleted. “There really is no war within the walls of academia,” says Victor Rivelles, of the Physics Institute of USP, playing it down. “What is new is that the internet, and blogs in particular, amplify this discussion, giving the impression that it is much bigger than it really is.”

Exit stage left
To get around the problem, what is called the anthropic principle appeared: among the countless possible Universes, the observable ones would be only those tailor-made for humans. It is an interpretation that slides into mysticism and puts man back at the center of the Universe, as in the Middle Ages.

Regrettably, experimental physics, the ultimate judge of truth since the days of Galileo and Kepler, can do very little here. The dimensions of the elementary strings, and the energies required to probe them, are out of reach. A particle accelerator able to produce them artificially, as was done in the confirmation of the Standard Model, would have to be larger than the Solar System.

All the hopes of all physicists now turn to the Large Hadron Collider (hadrons being protons or neutrons), to be switched on starting next year near Geneva, on the border between Switzerland and France, at the headquarters of CERN (the European Organization for Nuclear Research). For the first time, this accelerator, an ultracold tunnel 27 km in circumference, will reach energies high enough to produce indirect evidence of the existence of a fourth spatial dimension. Unfortunately, that would neither prove nor refute string theory, since the postulate of extra dimensions is not exclusive to that model. The quarrel in the physics community may therefore persist.

Backdrop
The theoretical line developed by Smolin, on the other hand, is equally nebulous. He is one of the main architects of loop quantum gravity, which seeks to revive the Einsteinian approach to unification. The general theory of relativity, Smolin explains, does not depend on the geometry of spacetime. But in all of string theory, and even in the Standard Model, forces and particles are like actors on a stage, against the backdrop of a definite spacetime landscape.

This is what he calls background-dependent theories. Loop quantum gravity, by contrast, is background-independent. It is a bold conjecture: instead of elementary particles and forces, Smolin suggests that the fundamental entities are knots or loops in the fabric of spacetime.

Just as string theory derives all particles and forces from the different ways the elementary strings vibrate, Smolin believes these entities arise from tangles in the fabric of spacetime. Thus the spatial dimensions and the passage of time emerge not as the stage for the theater of particles, but as its genesis. Another consequence of the theory is that spacetime is not continuous: it too is quantized, with minimum sizes, like atoms of spacetime.

Regrettably, these tangles are also undetectable, even in the most powerful accelerators. In the end, somewhat pathetically, Smolin admits that he has fared no better than the string theorists and that his book “is a form of procrastination”.

But the sociological questions raised in the final chapters of Smolin's book can no longer remain in limbo. The accusation that gangs have formed in research centers is now a public matter, one that involves the spending of tax money and the stagnation of the sciences and, indirectly, of the technology they are supposed to generate.


BOOK – “The Trouble With Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next”
Lee Smolin; Houghton Mifflin, 392 pages, US$26.

‘Dressed’ laser aimed at clouds may be key to inducing rain, lightning (Science Daily)

Date: April 18, 2014

Source: University of Central Florida

Summary: The adage “Everyone complains about the weather but nobody does anything about it” may one day be obsolete if researchers further develop a new technique to aim a high-energy laser beam into clouds to make it rain or trigger lightning. The technique could also be used in long-distance sensors and spectrometers to identify chemical makeup.

The adage “Everyone complains about the weather but nobody does anything about it,” may one day be obsolete if researchers at the University of Central Florida’s College of Optics & Photonics and the University of Arizona further develop a new technique to aim a high-energy laser beam into clouds to make it rain or trigger lightning. Credit: © Maksim Shebeko / Fotolia

The adage “Everyone complains about the weather but nobody does anything about it” may one day be obsolete if researchers at the University of Central Florida’s College of Optics & Photonics and the University of Arizona further develop a new technique to aim a high-energy laser beam into clouds to make it rain or trigger lightning.

The solution? Surround the beam with a second beam to act as an energy reservoir, sustaining the central beam to greater distances than previously possible. The secondary “dress” beam refuels and helps prevent the dissipation of the high-intensity primary beam, which on its own would break down quickly. A report on the project, “Externally refueled optical filaments,” was recently published in Nature Photonics.

Water condensation and lightning activity in clouds are linked to large amounts of static charged particles. Stimulating those particles with the right kind of laser holds the key to possibly one day summoning a shower when and where it is needed.

Lasers can already travel great distances but “when a laser beam becomes intense enough, it behaves differently than usual — it collapses inward on itself,” said Matthew Mills, a graduate student in the Center for Research and Education in Optics and Lasers (CREOL). “The collapse becomes so intense that electrons in the air’s oxygen and nitrogen are ripped off, creating plasma — basically a soup of electrons.”

At that point, the plasma immediately tries to spread the beam back out, causing a struggle between the spreading and collapsing of an ultra-short laser pulse. This struggle is called filamentation, and creates a filament or “light string” that only propagates for a while until the properties of air make the beam disperse.
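This tug-of-war sets in only above a critical power for self-focusing. A standard textbook estimate is P_cr ≈ 3.77 λ² / (8π n0 n2); the parameter values below are typical numbers for air at 800 nm, quoted here as assumptions rather than taken from the UCF study.

```python
import math

# Rough estimate of the critical power for self-focusing (filamentation threshold).
# Formula and parameters are textbook/typical values, not from the UCF paper.
wavelength = 800e-9   # m, typical Ti:sapphire laser wavelength
n0 = 1.0003           # linear refractive index of air
n2 = 3.2e-23          # m^2/W, nonlinear (Kerr) index of air (order of magnitude)

P_cr = 3.77 * wavelength**2 / (8 * math.pi * n0 * n2)
print(f"critical power for self-focusing in air: about {P_cr / 1e9:.1f} GW")
# A few gigawatts: easily exceeded by femtosecond pulses of modest energy
# (a 1 mJ, 100 fs pulse has a peak power of roughly 10 GW).
```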

“Because a filament creates excited electrons in its wake as it moves, it artificially seeds the conditions necessary for rain and lightning to occur,” Mills said. Other researchers have caused “electrical events” in clouds, but not lightning strikes.

But how do you get close enough to direct the beam into the cloud without being blasted to smithereens by lightning?

“What would be nice is to have a sneaky way which allows us to produce an arbitrary long ‘filament extension cable.’ It turns out that if you wrap a large, low intensity, doughnut-like ‘dress’ beam around the filament and slowly move it inward, you can provide this arbitrary extension,” Mills said. “Since we have control over the length of a filament with our method, one could seed the conditions needed for a rainstorm from afar. Ultimately, you could artificially control the rain and lightning over a large expanse with such ideas.”

So far, Mills and fellow graduate student Ali Miri have been able to extend the pulse from 10 inches to about 7 feet. And they’re working to extend the filament even farther.

“This work could ultimately lead to ultra-long optically induced filaments or plasma channels that are otherwise impossible to establish under normal conditions,” said professor Demetrios Christodoulides, who is working with the graduate students on the project.

“In principle such dressed filaments could propagate for more than 50 meters or so, thus enabling a number of applications. This family of optical filaments may one day be used to selectively guide microwave signals along very long plasma channels, perhaps for hundreds of meters.”

This technique could also be used in long-distance sensors and spectrometers to identify chemical makeup. Development of the technology was supported by a $7.5 million grant from the Department of Defense.

Journal Reference:

  1. Maik Scheller, Matthew S. Mills, Mohammad-Ali Miri, Weibo Cheng, Jerome V. Moloney, Miroslav Kolesik, Pavel Polynkin, Demetrios N. Christodoulides. Externally refuelled optical filaments. Nature Photonics, 2014; 8 (4): 297 DOI: 10.1038/nphoton.2014.47

In the eye of a chicken, a new state of matter comes into view (Science Daily)

Date: February 24, 2014

Source: Princeton University

Summary: Along with eggs, soup and rubber toys, the list of the chicken’s most lasting legacies may eventually include advanced materials, according to scientists. The researchers report that the unusual arrangement of cells in a chicken’s eye constitutes the first known biological occurrence of a potentially new state of matter known as ‘disordered hyperuniformity,’ which has been shown to have unique physical properties.

Researchers from Princeton University and Washington University in St. Louis report that the unusual arrangement of cells in a chicken’s eye … Credit: Courtesy of Joseph Corbo and Timothy Lau, Washington University in St. Louis

Along with eggs, soup and rubber toys, the list of the chicken’s most lasting legacies may eventually include advanced materials such as self-organizing colloids, or optics that can transmit light with the efficiency of a crystal and the flexibility of a liquid.

The unusual arrangement of cells in a chicken’s eye constitutes the first known biological occurrence of a potentially new state of matter known as “disordered hyperuniformity,” according to researchers from Princeton University and Washington University in St. Louis. Research in the past decade has shown that disordered hyperuniform materials have unique properties when it comes to transmitting and controlling light waves, the researchers report in the journal Physical Review E.

States of disordered hyperuniformity behave like crystal and liquid states of matter, exhibiting order over large distances and disorder over small distances. Like crystals, these states greatly suppress variations in the density of particles — as in the individual granules of a substance — across large spatial distances so that the arrangement is highly uniform. At the same time, disordered hyperuniform systems are similar to liquids in that they have the same physical properties in all directions. Combined, these characteristics mean that hyperuniform optical circuits, light detectors and other materials could be controlled to be sensitive or impervious to certain light wavelengths, the researchers report.
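A simple operational test of hyperuniformity is how the variance of the number of points inside a window of radius R grows with R: like the window area (R² in two dimensions) for ordinary disordered points, but much more slowly for a hyperuniform pattern. The numpy sketch below compares a Poisson point pattern with a jittered square lattice, a standard stand-in for a hyperuniform arrangement; it uses made-up points, not the cone data.

```python
import numpy as np

# Number-variance test for hyperuniformity in 2D (illustrative points only).
# Poisson points: count variance in a window grows like the window area (~R^2).
# Jittered lattice (a simple hyperuniform example): variance grows much more slowly.

rng = np.random.default_rng(1)
L, n_side = 60.0, 60                              # periodic box with 60*60 points

poisson = rng.uniform(0, L, size=(n_side**2, 2))  # uncorrelated random points

grid = np.stack(np.meshgrid(np.arange(n_side), np.arange(n_side)), -1).reshape(-1, 2) + 0.5
jittered = (grid + rng.uniform(-0.3, 0.3, grid.shape)) % L   # perturbed square lattice

def number_variance(points, R, trials=500):
    """Variance of the number of points inside randomly placed disks of radius R."""
    centers = rng.uniform(0, L, size=(trials, 2))
    d = np.abs(points[None, :, :] - centers[:, None, :])
    d = np.minimum(d, L - d)                      # periodic boundary conditions
    counts = (np.hypot(d[..., 0], d[..., 1]) < R).sum(axis=1)
    return counts.var()

for R in (2.0, 4.0, 8.0):
    print(f"R={R:4.1f}  Poisson variance={number_variance(poisson, R):7.1f}  "
          f"jittered-lattice variance={number_variance(jittered, R):6.1f}")
```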

“Disordered hyperuniform materials possess a hidden order,” explained co-corresponding author Salvatore Torquato, a Princeton professor of chemistry. It was Torquato who, with Frank Stillinger, a senior scientist in Princeton’s chemistry department, first identified hyperuniformity in a 2003 paper in Physical Review E.

“We’ve since discovered that such physical systems are endowed with exotic physical properties and therefore have novel capabilities,” Torquato said. “The more we learn about these special disordered systems, the more we find that they really should be considered a new distinguishable state of matter.”

The researchers studied the light-sensitive cells known as cones that are in the eyes of chickens and most other birds active in daytime. These birds have four types of cones for color — violet, blue, green and red — and one type for detecting light levels, and each cone type is a different size. The cones are packed into a single epithelial, or tissue, layer called the retina. Yet, they are not arranged in the usual way, the researchers report.

In many creatures’ eyes, visual cells are evenly distributed in an obvious pattern such as the familiar hexagonal compound eyes of insects. In many creatures, the different types of cones are laid out so that they are not near cones of the same type. At first glance, however, the chicken eye appears to have a scattershot of cones distributed in no particular order.

The lab of co-corresponding author Joseph Corbo, an associate professor of pathology and immunology, and genetics at Washington University in St. Louis, studies how the chicken’s unusual visual layout evolved. Thinking that perhaps it had something to do with how the cones are packed into such a small space, Corbo approached Torquato, whose group studies the geometry and dynamics of densely packed objects such as particles.

Torquato then worked with the paper’s first author Yang Jiao, who received his Ph.D. in mechanical and aerospace engineering from Princeton in 2010 and is now an assistant professor of materials science and engineering at Arizona State University. Torquato and Jiao developed a computer-simulation model that went beyond standard packing algorithms to mimic the final arrangement of chicken cones and allowed them to see the underlying method to the madness.

It turned out that each type of cone has an area around it called an “exclusion region” that other cones cannot enter. Cones of the same type shut out each other more than they do unlike cones, and this variant exclusion causes distinctive cone patterns. Each type of cone’s pattern overlays the pattern of another cone so that the formations are intertwined in an organized but disordered way — a kind of uniform disarray. So, while it appeared that the cones were irregularly placed, their distribution was actually uniform over large distances. That’s disordered hyperuniformity, Torquato said.

“Because the cones are of different sizes it’s not easy for the system to go into a crystal or ordered state,” Torquato said. “The system is frustrated from finding what might be the optimal solution, which would be the typical ordered arrangement. While the pattern must be disordered, it must also be as uniform as possible. Thus, disordered hyperuniformity is an excellent solution.”

The researchers’ findings add a new dimension called multi-hyperuniformity. This means that the elements that make up the arrangement are themselves hyperuniform. While individual cones of the same type appear to be unconnected, they are actually subtly linked by exclusion regions, which they use to self-organize into patterns. Multi-hyperuniformity is crucial for the avian system to evenly sample incoming light, Torquato said. He and his co-authors speculate that this behavior could provide a basis for developing materials that can self-assemble into a disordered hyperuniform state.

“You also can think of each one of these five different visual cones as hyperuniform,” Torquato said. “If I gave you the avian system with these cones and removed the red, it’s still hyperuniform. Now, let’s remove the blue — what remains is still hyperuniform. That’s never been seen in any system, physical or biological. If you had asked me to recreate this arrangement before I saw this data I might have initially said that it would be very difficult to do.”

The discovery of hyperuniformity in a biological system could mean that the state is more common than previously thought, said Remi Dreyfus, a researcher at the Pennsylvania-based Complex Assemblies of Soft Matter lab (COMPASS) co-run by the University of Pennsylvania, the French National Centre for Scientific Research and the French chemical company Solvay. Previously, disordered hyperuniformity had only been observed in specialized physical systems such as liquid helium, simple plasmas and densely packed granules.

“It really looks like this idea of hyperuniformity, which started from a theoretical basis, is extremely general and that we can find them in many places,” said Dreyfus, who is familiar with the research but had no role in it. “I think more and more people will look back at their data and figure out whether there is hyperuniformity or not. They will find this kind of hyperuniformity is more common in many physical and biological systems.”

The findings also provide researchers with a detailed natural model that could be useful in efforts to construct hyperuniform systems and technologies, Dreyfus said. “Nature has found a way to make multi-hyperuniformity,” he said. “Now you can take the cue from what nature has found to create a multi-hyperuniform pattern if you intend to.”

Evolutionarily speaking, the researchers’ results show that nature found a unique workaround to the problem of cramming all those cones into the compact avian eye, Corbo said. The ordered pattern of cells in most other animals’ eyes is thought to be the “optimal” arrangement, and anything less would result in impaired vision. Yet, birds with the arrangement studied here — including chickens — have impeccable vision, Corbo said.

“These findings are significant because they suggest that the arrangement of photoreceptors in the bird, although not perfectly regular, are, in fact, as regular as they can be given the packing constraints in the epithelium,” Corbo said.

“This result indicates that evolution has driven the system to the ‘optimal’ arrangement possible, given these constraints,” he said. “We still know nothing about the cellular and molecular mechanisms that underlie this beautiful and highly organized arrangement in birds. So, future research directions will include efforts to decipher how these patterns develop in the embryo.”

The paper, “Avian photoreceptor patterns represent a disordered hyperuniform solution to a multiscale packing problem,” was published Feb. 24 in Physical Review E. The work was supported by grants from the National Science Foundation (grant no. DMS-1211087), National Cancer Institute (grant no. U54CA143803); the National Institutes of Health (grant nos. EY018826, HG006346 and HG006790); the Human Frontier Science Program; the German Research Foundation (DFG); and the Simons Foundation (grant no. 231015).

Journal Reference:

  1. J. T. Miller, A. Lazarus, B. Audoly, P. M. Reis. Shapes of a Suspended Curly Hair. Physical Review Letters, 2014; 112 (6) DOI: 10.1103/PhysRevLett.112.068103

A challenge to the genetic interpretation of biology (University of Eastern Finland)

19-Feb-2014

Keith Baverstock

A proposal for reformulating the foundations of biology, based on the 2nd law of thermodynamics and in sharp contrast to the prevailing genetic view, is published today in the Journal of the Royal Society Interface under the title "Genes without prominence: a reappraisal of the foundations of biology". The authors, Arto Annila, Professor of physics at Helsinki University, and Keith Baverstock, Docent and former professor at the University of Eastern Finland, assert that the prominent emphasis currently given to the gene in biology is based on a flawed interpretation of experimental genetics and should be replaced by more fundamental considerations of how the cell utilises energy. There are far-reaching implications, both for research and for the current strategy in many countries to develop personalised medicine based on genome-wide sequencing.

This shows how the inactive linear peptide molecule, with a sequence of amino acids derived from the gene coding sequence, folds into a protein.

Is it in your genes?

By “it” we mean intelligence, sexual orientation, increased risk of cancer, stroke or heart attack, criminal behaviour, political preference, religious beliefs, and so on. Researchers have implicated genes in influencing, wholly or partly, all of these aspects of our lives. Yet genes cannot cause any of these features, even though geneticists have found associations between specific genes and all of them; many of these associations are entirely spurious and a few are fortuitous.

How can we be so sure?

When a gene, a string of bases on the DNA molecule, is deployed, it is first transcribed and then translated into a peptide – a string of amino acids. To give rise to biological properties it needs to “fold” into a protein.

This process consumes energy and is therefore governed by the 2nd law, but also by the environment in which the folding takes place. These two factors mean that there is no causal relationship between the original gene coding sequence and the biological activity of the protein.

Is there any empirical evidence to support this?

Yes, a Nordic study of twins conducted in 2000 showed there was no evidence that cancer was a “genetic” disease; that is, genes play no role in the causation of cancer. A wider international study involving 50,000 identical twin pairs, published in 2012, showed that this conclusion applied to other common diseases as well. Since the sequencing of the human genome was completed in 2001, it has not proved possible to relate abnormal gene sequences to common diseases, giving rise to the problem of the “missing heritability”.

What is the essence of the reformulation?

At the most fundamental level, organisms are energy-consuming systems, and the appropriate foundation in physics is that of complex dissipative systems. As energy flows into, out of and within the complex molecular system called the cell, fundamental physical considerations, dictated by the 2nd law of thermodynamics, demand that these flows, called actions, are maximally efficient (follow the path of least resistance) in space and time. Energy exchanges can give rise to new emergent properties that modify the actions and give rise to further emergent properties, and so on. The result is evolution from simpler to more complex and diverse organisms in both form and function, without the need to invoke genes. This model is supported by earlier computer simulations of a virtual ecosystem by Mauno Rönkkö of the University of Eastern Finland.

What implications does this have in practice?

There are many, but two are urgent.

1) to assume that genes are unavoidable influences on our health and behaviour will distract attention from the real causes of disease, many of which arise from our environment;

2) the current strategy towards basing healthcare on genome-wide sequencing, so called “personalised healthcare”, will prove costly and ineffective.

What is personalised health care?

This is the idea that it will be possible to predict future health outcomes at birth, by determining the total DNA sequence (genome-wide sequence), and to take preventive measures. Most European countries have research programmes in this area, and in the UK a pilot study with 100,000 participants is underway.

Physics of Complex Systems can predict impacts of environmental changes (Fapesp)

The assessment comes from Jan-Michael Rost, researcher at the Max Planck Institute (photo: Nina Wagner/DWIH-SP)

February 19, 2014

Elton Alisson

Agência FAPESP – Beyond its applications in areas such as Engineering and Information and Communication Technologies (ICTs), the physics of complex systems – in which each element contributes individually to the emergence of properties observed only in the whole – can be useful for assessing the impacts of environmental changes on the planet, such as deforestation.

The assessment was made by Jan-Michael Rost, researcher at the Max Planck Institute for the Physics of Complex Systems, during a round table on complex systems and sustainability held on February 14 at the Hotel Pergamon in São Paulo.

The meeting was organized by the German Centre for Science and Innovation São Paulo (DWIH-SP) and the Max Planck Society, in partnership with FAPESP and the German Academic Exchange Service (DAAD), and was part of a program of activities complementing the Max Planck Science Tunnel exhibition.

“Complex systems, such as life on Earth, sit at the threshold between order and disorder and take a certain amount of time to adapt to changes,” Rost said.

“If these systems undergo major alterations over a short period of time, such as the unchecked clearing of forests, and the threshold between order and disorder is crossed, those changes may become irreversible and endanger the preservation of complexity and the possibility of species evolving,” the researcher said.

According to Rost, complex systems began to attract scientists’ attention in the 1950s. To study them, however, it was not possible to use the two great theories that revolutionized physics in the 20th century: relativity, established by Albert Einstein (1879-1955), and quantum mechanics, developed by the German physicist Werner Heisenberg (1901-1976) and other scientists.

That is because those theories apply only to closed systems, such as engines, which suffer no interference from the external environment and in which the equilibrium reactions occurring inside are reversible, Rost said.

For that reason, he said, those theories are not sufficient to study open systems, such as machines endowed with artificial intelligence and the life forms on Earth, which interact with the environment, are adaptive and whose reactions can be irreversible. They have therefore given way to theories related to the physics of complex systems, such as chaos theory and nonlinear dynamics, which are better suited to this purpose.

“These latter theories have undergone spectacular development in recent decades, in parallel with classical mechanics,” Rost said.

“Today it is recognized that systems are not closed; they interact with their surroundings and can exhibit reactions out of proportion to the action they have undergone. That is what Engineering currently relies on to develop products and equipment,” he said.

Categories of complex systems

According to Rost, complex systems can be divided into four categories, distinguished by how long they take to react to a given action. The first is that of static complex systems, which react instantly to an action.

The second is that of adaptive systems, exemplified by dogs’ sense of smell. When set on a trail of tracks left by a person lost in the woods, for example, sniffer dogs move in a zigzag pattern.

That is because, according to Rost, these animals have an adaptive scenting system. That is, upon sensing a given smell in one spot, the animal’s olfactory sensitivity to that odor drops sharply and it loses the ability to identify it.

By moving off the track it was on, the animal quickly recovers its olfactory sensitivity to the odor and is able to identify it at the next footprint. “The olfactory perception threshold of these animals is constantly adapted,” Rost said.

The third category of complex systems is that of autonomous systems, which use evolution as an adaptation mechanism and for which it is impossible to predict how they will react to a given change.

The last category is that of evolutionary, or transgenerational, systems, which includes human beings and the other life forms on Earth, and in which the reaction to a given alteration in their living conditions takes a very long time to occur, Rost said.

“Transgenerational systems receive stimuli throughout their lives, and the reaction of a given generation is not comparable with that of the previous one,” the researcher said.

“Trying to predict how long a given transgenerational system, such as humanity, takes to react to an action, such as environmental changes, can be useful for ensuring the planet’s sustainability,” Rost said.

New application of physics tools used in biology (Science Daily)

Date: February 7, 2014

Source: DOE/Lawrence Livermore National Laboratory

Summary: A physicist and his colleagues have found a new application for the tools and mathematics typically used in physics to help solve problems in biology.

This DNA molecule is wrapped twice around a histone octamer, the major structural protein of chromosomes. New studies show that histones play a role in preserving biological memory when cells divide. Image courtesy of Memorial University of Newfoundland. Credit: Image courtesy of DOE/Lawrence Livermore National Laboratory

A Lawrence Livermore National Laboratory physicist and his colleagues have found a new application for the tools and mathematics typically used in physics to help solve problems in biology.

Specifically, the team used statistical mechanics and mathematical modeling to shed light on something known as epigenetic memory — how an organism can create a biological memory of some variable condition, such as quality of nutrition or temperature.

“The work highlights the interdisciplinary nature of modern molecular biology, in particular, how the tools and models from mathematics and physics can help clarify problems in biology,” said Ken Kim, a LLNL physicist and one of the authors of a paper appearing in the Feb. 7 issue of Physical Review Letters.

Not all characteristics of living organisms can be explained by their genes alone. Epigenetic processes react with great sensitivity to genes’ immediate biochemical surroundings — and further, they pass those reactions on to the next generation.

The team’s work focused on the dynamics of histone protein modification, which is central to epigenetics. Like genetic changes, epigenetic changes are preserved when a cell divides. Histone proteins were once thought to be static, structural components of chromosomes, but recent studies have shown that histones play an important dynamical role in the machinery responsible for epigenetic regulation.

When histones undergo chemical alterations (histone modification) as a result of some external stimulus, they trigger short-term biological memory of that stimulus within a cell, which can be passed down to its daughter cells. This memory also can be reversed after a few cell division cycles.

Epigenetic modifications are essential in the development and function of cells, but also play a key role in cancer, according to Jianhua Xing, a former LLNL postdoc and current professor at Virginia Tech. “For example, changes in the epigenome can lead to the activation or deactivation of signaling pathways that can lead to tumor formation,” Xing added.

The molecular mechanism underlying epigenetic memory involves complex interactions between histones, DNA and enzymes, which produce modification patterns that are recognized by the cell. To gain insight into such complex systems, the team constructed a mathematical model that captures the essential features of histone-induced epigenetic memory. The model highlights the “engineering” challenge a cell must constantly face during molecular recognition. It is analogous to restoring a picture with missing parts: molecular properties have been evolutionarily selected to allow cells to “reason” out what the missing parts are, based on the incomplete information pattern inherited from the mother cell.

Story Source:

The above story is based on materials provided by DOE/Lawrence Livermore National Laboratory. The original article was written by Anne M Stark. Note: Materials may be edited for content and length.

A New Physics Theory of Life (Quanta Magazine)


Jeremy England, a 31-year-old physicist at MIT, thinks he has found the underlying physics driving the origin and evolution of life. (Katherine Taylor for Quanta Magazine).

By: Natalie Wolchover

January 22, 2014

Why does life exist?

Popular hypotheses credit a primordial soup, a bolt of lightning and a colossal stroke of luck. But if a provocative new theory is correct, luck may have little to do with it. Instead, according to the physicist proposing the idea, the origin and subsequent evolution of life follow from the fundamental laws of nature and “should be as unsurprising as rocks rolling downhill.”

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.

Plagiomnium affine

“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said.

England’s theory is meant to underlie, rather than replace, Darwin’s theory of evolution by natural selection, which provides a powerful description of life at the level of genes and populations. “I am certainly not saying that Darwinian ideas are wrong,” he explained. “On the contrary, I am just saying that from the perspective of the physics, you might call Darwinian evolution a special case of a more general phenomenon.”

His idea, detailed in a recent paper and further elaborated in a talk he is delivering at universities around the world, has sparked controversy among his colleagues, who see it as either tenuous or a potential breakthrough, or both.

England has taken “a very brave and very important step,” said Alexander Grosberg, a professor of physics at New York University who has followed England’s work since its early stages. The “big hope” is that he has identified the underlying physical principle driving the origin and evolution of life, Grosberg said.

“Jeremy is just about the brightest young scientist I ever came across,” said Attila Szabo, a biophysicist in the Laboratory of Chemical Physics at the National Institutes of Health who corresponded with England about his theory after meeting him at a conference. “I was struck by the originality of the ideas.”

Others, such as Eugene Shakhnovich, a professor of chemistry, chemical biology and biophysics at Harvard University, are not convinced. “Jeremy’s ideas are interesting and potentially promising, but at this point are extremely speculative, especially as applied to life phenomena,” Shakhnovich said.

England’s theoretical results are generally considered valid. It is his interpretation — that his formula represents the driving force behind a class of phenomena in nature that includes life — that remains unproven. But already, there are ideas about how to test that interpretation in the lab.

“He’s trying something radically different,” said Mara Prentiss, a professor of physics at Harvard who is contemplating such an experiment after learning about England’s work. “As an organizing lens, I think he has a fabulous idea. Right or wrong, it’s going to be very much worth the investigation.”

A computer simulation by Jeremy England and colleagues shows a system of particles confined inside a viscous fluid in which the turquoise particles are driven by an oscillating force. Over time (from top to bottom), the force triggers the formation of more bonds among the particles.

At the heart of England’s idea is the second law of thermodynamics, also known as the law of increasing entropy or the “arrow of time.” Hot things cool down, gas diffuses through air, eggs scramble but never spontaneously unscramble; in short, energy tends to disperse or spread out as time progresses. Entropy is a measure of this tendency, quantifying how dispersed the energy is among the particles in a system, and how diffuse those particles are throughout space. It increases as a simple matter of probability: There are more ways for energy to be spread out than for it to be concentrated. Thus, as particles in a system move around and interact, they will, through sheer chance, tend to adopt configurations in which the energy is spread out. Eventually, the system arrives at a state of maximum entropy called “thermodynamic equilibrium,” in which energy is uniformly distributed. A cup of coffee and the room it sits in become the same temperature, for example. As long as the cup and the room are left alone, this process is irreversible. The coffee never spontaneously heats up again because the odds are overwhelmingly stacked against so much of the room’s energy randomly concentrating in its atoms.
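
As a compact illustration of the counting argument in the paragraph above (a standard textbook relation, not a formula taken from England’s papers), Boltzmann’s entropy formula ties entropy to the number of microscopic arrangements:

\[
S = k_B \ln \Omega
\]

Here \(\Omega\) is the number of microscopic configurations compatible with the macroscopic state and \(k_B\) is Boltzmann’s constant. Spread-out energy corresponds to an astronomically larger \(\Omega\), so the maximum-entropy equilibrium state is simply the overwhelmingly most probable outcome.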

Although entropy must increase over time in an isolated or “closed” system, an “open” system can keep its entropy low — that is, divide energy unevenly among its atoms — by greatly increasing the entropy of its surroundings. In his influential 1944 monograph “What Is Life?” the eminent quantum physicist Erwin Schrödinger argued that this is what living things must do. A plant, for example, absorbs extremely energetic sunlight, uses it to build sugars, and ejects infrared light, a much less concentrated form of energy. The overall entropy of the universe increases during photosynthesis as the sunlight dissipates, even as the plant prevents itself from decaying by maintaining an orderly internal structure.

Life does not violate the second law of thermodynamics, but until recently, physicists were unable to use thermodynamics to explain why it should arise in the first place. In Schrödinger’s day, they could solve the equations of thermodynamics only for closed systems in equilibrium. In the 1960s, the Belgian physicist Ilya Prigogine made progress on predicting the behavior of open systems weakly driven by external energy sources (for which he won the 1977 Nobel Prize in chemistry). But the behavior of systems that are far from equilibrium, which are connected to the outside environment and strongly driven by external sources of energy, could not be predicted.

This situation changed in the late 1990s, due primarily to the work of Chris Jarzynski, now at the University of Maryland, and Gavin Crooks, now at Lawrence Berkeley National Laboratory. Jarzynski and Crooks showed that the entropy produced by a thermodynamic process, such as the cooling of a cup of coffee, corresponds to a simple ratio: the probability that the atoms will undergo that process divided by their probability of undergoing the reverse process (that is, spontaneously interacting in such a way that the coffee warms up). As entropy production increases, so does this ratio: A system’s behavior becomes more and more “irreversible.” The simple yet rigorous formula could in principle be applied to any thermodynamic process, no matter how fast or far from equilibrium. “Our understanding of far-from-equilibrium statistical mechanics greatly improved,” Grosberg said. England, who is trained in both biochemistry and physics, started his own lab at MIT two years ago and decided to apply the new knowledge of statistical physics to biology.
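
The “simple ratio” described verbally here is usually written as a fluctuation theorem; the line below is a standard textbook statement of the Crooks-type relation, not a quotation from the original papers:

\[
\frac{P_{\text{forward}}(+\Delta S)}{P_{\text{reverse}}(-\Delta S)} = e^{\Delta S / k_B}
\]

where \(\Delta S\) is the entropy produced along the forward trajectory. The more entropy a process generates, the exponentially less likely its time-reverse becomes, which is precisely the sense in which the behaviour grows more “irreversible.”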

Using Jarzynski and Crooks’ formulation, he derived a generalization of the second law of thermodynamics that holds for systems of particles with certain characteristics: The systems are strongly driven by an external energy source such as an electromagnetic wave, and they can dump heat into a surrounding bath. This class of systems includes all living things. England then determined how such systems tend to evolve over time as they increase their irreversibility. “We can show very simply from the formula that the more likely evolutionary outcomes are going to be the ones that absorbed and dissipated more energy from the environment’s external drives on the way to getting there,” he said. The finding makes intuitive sense: Particles tend to dissipate more energy when they resonate with a driving force, or move in the direction it is pushing them, and they are more likely to move in that direction than any other at any given moment.
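
As a rough sketch (the notation here is schematic rather than quoted from England’s paper), the kind of bound he derives ties the average heat dumped into the bath during a transition between coarse-grained states I and II to how irreversible that transition is:

\[
\beta \langle \Delta Q \rangle_{I \to II} + \ln\!\left[\frac{\pi(II \to I)}{\pi(I \to II)}\right] + \Delta S_{\text{int}} \ge 0
\]

with \(\beta = 1/k_B T\), \(\pi\) the transition probabilities over a fixed time, and \(\Delta S_{\text{int}}\) the change in internal entropy. Read loosely: outcomes that are hard to reverse (small \(\pi(II \to I)\)) must, on average, have dissipated more heat on the way there, matching the verbal statement above.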

“This means clumps of atoms surrounded by a bath at some temperature, like the atmosphere or the ocean, should tend over time to arrange themselves to resonate better and better with the sources of mechanical, electromagnetic or chemical work in their environments,” England explained.

Self Replicating Microstructures

Self-replication (or reproduction, in biological terms), the process that drives the evolution of life on Earth, is one such mechanism by which a system might dissipate an increasing amount of energy over time. As England put it, “A great way of dissipating more is to make more copies of yourself.” In a September paper in the Journal of Chemical Physics, he reported the theoretical minimum amount of dissipation that can occur during the self-replication of RNA molecules and bacterial cells, and showed that it is very close to the actual amounts these systems dissipate when replicating. He also showed that RNA, the nucleic acid that many scientists believe served as the precursor to DNA-based life, is a particularly cheap building material. Once RNA arose, he argues, its “Darwinian takeover” was perhaps not surprising.

The chemistry of the primordial soup, random mutations, geography, catastrophic events and countless other factors have contributed to the fine details of Earth’s diverse flora and fauna. But according to England’s theory, the underlying principle driving the whole process is dissipation-driven adaptation of matter.

This principle would apply to inanimate matter as well. “It is very tempting to speculate about what phenomena in nature we can now fit under this big tent of dissipation-driven adaptive organization,” England said. “Many examples could just be right under our nose, but because we haven’t been looking for them we haven’t noticed them.”

Scientists have already observed self-replication in nonliving systems. According to new research led by Philip Marcus of the University of California, Berkeley, and reported in Physical Review Letters in August, vortices in turbulent fluids spontaneously replicate themselves by drawing energy from shear in the surrounding fluid. And in a paper appearing online this week in Proceedings of the National Academy of Sciences, Michael Brenner, a professor of applied mathematics and physics at Harvard, and his collaborators present theoretical models and simulations of microstructures that self-replicate. These clusters of specially coated microspheres dissipate energy by roping nearby spheres into forming identical clusters. “This connects very much to what Jeremy is saying,” Brenner said.

Besides self-replication, greater structural organization is another means by which strongly driven systems ramp up their ability to dissipate energy. A plant, for example, is much better at capturing and routing solar energy through itself than an unstructured heap of carbon atoms. Thus, England argues that under certain conditions, matter will spontaneously self-organize. This tendency could account for the internal order of living things and of many inanimate structures as well. “Snowflakes, sand dunes and turbulent vortices all have in common that they are strikingly patterned structures that emerge in many-particle systems driven by some dissipative process,” he said. Condensation, wind and viscous drag are the relevant processes in these particular cases.

“He is making me think that the distinction between living and nonliving matter is not sharp,” said Carl Franck, a biological physicist at Cornell University, in an email. “I’m particularly impressed by this notion when one considers systems as small as chemical circuits involving a few biomolecules.”

Snowflake

England’s bold idea will likely face close scrutiny in the coming years. He is currently running computer simulations to test his theory that systems of particles adapt their structures to become better at dissipating energy. The next step will be to run experiments on living systems.

Prentiss, who runs an experimental biophysics lab at Harvard, says England’s theory could be tested by comparing cells with different mutations and looking for a correlation between the amount of energy the cells dissipate and their replication rates. “One has to be careful because any mutation might do many things,” she said. “But if one kept doing many of these experiments on different systems and if [dissipation and replication success] are indeed correlated, that would suggest this is the correct organizing principle.”

Brenner said he hopes to connect England’s theory to his own microsphere constructions and determine whether the theory correctly predicts which self-replication and self-assembly processes can occur — “a fundamental question in science,” he said.

Having an overarching principle of life and evolution would give researchers a broader perspective on the emergence of structure and function in living things, many of the researchers said. “Natural selection doesn’t explain certain characteristics,” said Ard Louis, a biophysicist at Oxford University, in an email. These characteristics include a heritable change to gene expression called methylation, increases in complexity in the absence of natural selection, and certain molecular changes Louis has recently studied.

If England’s approach stands up to more testing, it could further liberate biologists from seeking a Darwinian explanation for every adaptation and allow them to think more generally in terms of dissipation-driven organization. They might find, for example, that “the reason that an organism shows characteristic X rather than Y may not be because X is more fit than Y, but because physical constraints make it easier for X to evolve than for Y to evolve,” Louis said.

“People often get stuck in thinking about individual problems,” Prentiss said.  Whether or not England’s ideas turn out to be exactly right, she said, “thinking more broadly is where many scientific breakthroughs are made.”

Emily Singer contributed reporting.

Correction: This article was revised on January 22, 2014, to reflect that Ilya Prigogine won the Nobel Prize in chemistry, not physics.

No Qualms About Quantum Theory (Science Daily)

Nov. 26, 2013 — A colloquium paper published in The European Physical Journal D looks into the alleged issues associated with quantum theory. Berthold-Georg Englert from the National University of Singapore reviews a selection of the potential problems of the theory. In particular, he discusses cases in which mathematical tools are confused with the actual observed sub-atomic scale phenomena they describe. Such tools are essential for providing an interpretation of the observations, but must not be confused with the actual objects of study.

The author sets out to demystify a selected set of objections targeted against quantum theory in the literature. He takes the example of Schrödinger’s infamous cat, whose vital state serves as the indicator of the occurrence of radioactive decay, whereby the decay triggers a hammer mechanism designed to release a lethal substance. The term ‘Schrödinger’s cat state’ is routinely applied to a superposition of so-called quantum states of a particle. However, this imagined superposition of a dead and a live cat has no reality. Indeed, it confuses a physical object with its description. Something as abstract as the wave function − which is a mathematical tool describing the quantum state − cannot be considered a material entity embodied by a cat, regardless of whether it is dead or alive.

The paper debunks other myths as well, providing proof that quantum theory is well defined, has a clear interpretation, is a local theory, is not reversible, and does not feature any instantaneous action at a distance. It also demonstrates that there is no measurement problem, despite the fact that measurement is commonly known to disturb the system being measured. Hence, since the establishment of quantum theory in the 1920s, its concepts have become clearer, but its foundations remain unchanged.

Journal Reference:

  1. Berthold-Georg Englert. On quantum theory. The European Physical Journal D, 2013; 67 (11) DOI: 10.1140/epjd/e2013-40486-5

Photons Run out of Loopholes: Quantum World Really Is in Conflict With Our Everyday Experience (Science Daily)

Apr. 15, 2013 — A team led by the Austrian physicist Anton Zeilinger has now carried out an experiment with photons in which they have closed an important loophole. The researchers have thus provided the most complete experimental proof that the quantum world is in conflict with our everyday experience.

Lab IQOQI, Vienna 2012. (Credit: Copyright: Jacqueline Godany)

The results of this study appear this week in the journal Nature (Advance Online Publication/AOP).

When we observe an object, we make a number of intuitive assumptions, among them that the unique properties of the object have been determined prior to the observation and that these properties are independent of the state of other, distant objects. In everyday life, these assumptions are fully justified, but things are different at the quantum level. In the past 30 years, a number of experiments have shown that the behaviour of quantum particles — such as atoms, electrons or photons — can be in conflict with our basic intuition. However, these experiments have never delivered definite answers. Each previous experiment has left open the possibility, at least in principle, that the observed particles ‘exploited’ a weakness of the experimental setup.

Quantum physics is an exquisitely precise tool for understanding the world around us at a very fundamental level. At the same time, it is a basis for modern technology: semiconductors (and therefore computers), lasers, MRI scanners, and numerous other devices are based on quantum-physical effects. However, even after more than a century of intensive research, fundamental aspects of quantum theory are not yet fully understood. On a regular basis, laboratories worldwide report results that seem at odds with our everyday intuition but that can be explained within the framework of quantum theory.

On the trail of the quantum entanglement mystery

The physicists in Vienna report not a new effect, but a deep investigation into one of the most fundamental phenomena of quantum physics, known as ‘entanglement.’ The effect of quantum entanglement is amazing: when measuring a quantum object that has an entangled partner, the state of the one particle depends on measurements performed on the partner. Quantum theory describes entanglement as independent of any physical separation between the particles. That is, entanglement should also be observed when the two particles are sufficiently far apart from each other that, even in principle, no information can be exchanged between them (the speed of communication is fundamentally limited by the speed of light). Testing such predictions regarding the correlations between entangled quantum particles is, however, a major experimental challenge.
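
The article does not state the quantitative criterion behind such tests, but the standard one is a Bell-type (CHSH) inequality; in its usual textbook form, any local-realistic model must satisfy

\[
|E(a,b) - E(a,b') + E(a',b) + E(a',b')| \le 2
\]

where \(E(\cdot,\cdot)\) is the correlation of measurement outcomes for detector settings \(a, a'\) on one side and \(b, b'\) on the other. Quantum mechanics predicts values up to \(2\sqrt{2}\) for suitably entangled photons, and it is this violation that loophole-free experiments seek to establish beyond dispute.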

Towards a definitive answer

The young academics in Anton Zeilinger’s group including Marissa Giustina, Alexandra Mech, Rupert Ursin, Sven Ramelow and Bernhard Wittmann, in an international collaboration with the National Institute of Standards and Technology/NIST (USA), the Physikalisch-Technische Bundesanstalt (Germany), and the Max-Planck-Institute of Quantum Optics (Germany), have now achieved an important step towards delivering definitive experimental evidence that quantum particles can indeed do things that classical physics does not allow them to do. For their experiment, the team built one of the best sources for entangled photon pairs worldwide and employed highly efficient photon detectors designed by experts at NIST. These technological advances together with a suitable measurement protocol enabled the researchers to detect entangled photons with unprecedented efficiency. In a nutshell: “Our photons can no longer duck out of being measured,” says Zeilinger.

This kind of tight monitoring is important as it closes an important loophole. In previous experiments on photons, there has always been the possibility that although the measured photons do violate the laws of classical physics, such non-classical behaviour would not have been observed if all photons involved in the experiment could have been measured. In the new experiment, this loophole is now closed. “Perhaps the greatest weakness of photons as a platform for quantum experiments is their vulnerability to loss — but we have just demonstrated that this weakness need not be prohibitive,” explains Marissa Giustina, lead author of the paper.

Now one last step

Although the new experiment makes photons the first quantum particles for which, in several separate experiments, every possible loophole has been closed, the grand finale is yet to come, namely, a single experiment in which the photons are deprived of all possibilities of displaying their counterintuitive behaviour through means of classical physics. Such an experiment would also be of fundamental significance for an important practical application: ‘quantum cryptography,’ which relies on quantum mechanical principles and is considered to be absolutely secure against eavesdropping. Eavesdropping is still theoretically possible, however, as long as there are loopholes. Only when all of these are closed is a completely secure exchange of messages possible.

An experiment without any loopholes, says Zeilinger, “is a big challenge, which attracts groups worldwide.” These experiments are not limited to photons, but also involve atoms, electrons, and other systems that display quantum mechanical behaviour. The experiment of the Austrian physicists highlights the photons’ potential. Thanks to these latest advances, the photon is running out of places to hide, and quantum physicists are closer than ever to conclusive experimental proof that quantum physics defies our intuition and everyday experience to the degree suggested by research of the past decades.

This work was completed in a collaboration including the following institutions: Institute for Quantum Optics and Quantum Information — Vienna / IQOQI Vienna (Austrian Academy of Sciences), Quantum Optics, Quantum Nanophysics and Quantum Information, Department of Physics (University of Vienna), Max-Planck-Institute of Quantum Optics, National Institute of Standards and Technology / NIST, Physikalisch-Technische Bundesanstalt, Berlin.

This work was supported by: ERC (Advanced Grant), Austrian Science Fund (FWF), grant Q-ESSENCE, Marie Curie Research Training Network EMALI, and John Templeton Foundation. This work was also supported by NIST Quantum Information Science Initiative (QISI).

Journal Reference:

  1. Marissa Giustina, Alexandra Mech, Sven Ramelow, Bernhard Wittmann, Johannes Kofler, Jörn Beyer, Adriana Lita, Brice Calkins, Thomas Gerrits, Sae Woo Nam, Rupert Ursin, Anton Zeilinger. Bell violation using entangled photons without the fair-sampling assumption. Nature, 2013; DOI: 10.1038/nature12012

Stephen Hawking Says Humans Won’t Survive Another 1,000 Years On Earth (PopSci)

http://www.popsci.com/science/article/2013-04/stephen-hawking-says-we-have-get-rock-within-1000-years

By Colin Lecher | Posted 04.12.2013 at 2:30 pm
Stephen Hawking in space. Credit: Wikimedia Commons

Physicist Stephen Hawking, speaking at the Cedars-Sinai Medical Center in Los Angeles, said if humans don’t migrate from the planet Earth to colonize other planets, they’ll face extinction in 1,000 years. So, phew, we’re good, guys. We’ve got like 900-plus years to just sit on this. That’s a relief. [RT]

The 500 Phases of Matter: New System Successfully Classifies Symmetry-Protected Phases (Science Daily)

Dec. 21, 2012 — Forget solid, liquid, and gas: there are in fact more than 500 phases of matter. In a major paper in a recent issue of Science, Perimeter Faculty member Xiao-Gang Wen reveals a modern reclassification of all of them.

Artist’s impression of a string-net of light and electrons. String-nets are a theoretical kind of topologically ordered matter. (Credit: Xiao-Gang Wen/ Perimeter Institute)

Condensed matter physics — the branch of physics responsible for discovering and describing most of these phases — has traditionally classified phases by the way their fundamental building blocks — usually atoms — are arranged. The key is something called symmetry.

To understand symmetry, imagine flying through liquid water in an impossibly tiny ship: the atoms would swirl randomly around you and every direction — whether up, down, or sideways — would be the same. The technical term for this is “symmetry” — and liquids are highly symmetric. Crystal ice, another phase of water, is less symmetric. If you flew through ice in the same way, you would see the straight rows of crystalline structures passing as regularly as the girders of an unfinished skyscraper. Certain angles would give you different views. Certain paths would be blocked, others wide open. Ice has many symmetries — every “floor” and every “room” would look the same, for instance — but physicists would say that the high symmetry of liquid water is broken.

Classifying the phases of matter by describing their symmetries and where and how those symmetries break is known as the Landau paradigm. More than simply a way of arranging the phases of matter into a chart, Landau’s theory is a powerful tool which both guides scientists in discovering new phases of matter and helps them grapple with the behaviours of the known phases. Physicists were so pleased with Landau’s theory that for a long time they believed that all phases of matter could be described by symmetries. That’s why it was such an eye-opening experience when they discovered a handful of phases that Landau couldn’t describe.

Beginning in the 1980s, condensed matter researchers, including Xiao-Gang Wen — now a faculty member at Perimeter Institute — investigated new quantum systems where numerous ground states existed with the same symmetry. Wen pointed out that those new states contain a new kind of order: topological order. Topological order is a quantum mechanical phenomenon: it is not related to the symmetry of the ground state, but instead to the global properties of the ground state’s wave function. Therefore, it transcends the Landau paradigm, which is based on classical physics concepts.

Topological order is a more general understanding of quantum phases and the transitions between them. In the new framework, the phases of matter were described not by the patterns of symmetry in the ground state, but by the patterns of a decidedly quantum property — entanglement. When two particles are entangled, certain measurements performed on one of them immediately affect the other, no matter how far apart the particles are. The patterns of such quantum effects, unlike the patterns of the atomic positions, could not be described by their symmetries. If you were to describe a city as a topologically ordered state from the cockpit of your impossibly tiny ship, you’d no longer be describing the girders and buildings of the crystals you passed, but rather invisible connections between them — rather like describing a city based on the information flow in its telephone system.

This more general description of matter developed by Wen and collaborators was powerful — but there were still a few phases that didn’t fit. Specifically, there was a set of short-range entangled phases that did not break the symmetry, the so-called symmetry-protected topological phases. Examples of symmetry-protected phases include some topological superconductors and topological insulators, which are of widespread immediate interest because they show promise for use in the coming first generation of quantum electronics.

In the paper featured in Science, Wen and collaborators reveal a new system which can, at last, successfully classify these symmetry-protected phases.

Using modern mathematics — specifically group cohomology theory and group super-cohomology theory — the researchers have constructed and classified the symmetry-protected phases in any number of dimensions and for any symmetries. Their new classification system will provide insight about these quantum phases of matter, which may in turn increase our ability to design states of matter for use in superconductors or quantum computers.

This paper is a revealing look at the intricate and fascinating world of quantum entanglement, and an important step toward a modern reclassification of all phases of matter.

Journal Reference:

  1. X. Chen, Z.-C. Gu, Z.-X. Liu, X.-G. Wen. Symmetry-Protected Topological Orders in Interacting Bosonic Systems. Science, 2012; 338 (6114): 1604 DOI: 10.1126/science.1227224

Do We Live in a Computer Simulation Run by Our Descendants? Researchers Say Idea Can Be Tested (Science Daily)

The conical (red) surface shows the relationship between energy and momentum in special relativity, a fundamental theory concerning space and time developed by Albert Einstein, and is the expected result if our universe is not a simulation. The flat (blue) surface illustrates the relationship between energy and momentum that would be expected if the universe is a simulation with an underlying cubic lattice. (Credit: Martin Savage)

Dec. 10, 2012 — A decade ago, a British philosopher put forth the notion that the universe we live in might in fact be a computer simulation run by our descendants. While that seems far-fetched, perhaps even incomprehensible, a team of physicists at the University of Washington has come up with a potential test to see if the idea holds water.

The concept that current humanity could possibly be living in a computer simulation comes from a 2003 paper published in Philosophical Quarterly by Nick Bostrom, a philosophy professor at the University of Oxford. In the paper, he argued that at least one of three possibilities is true:

  • The human species is likely to go extinct before reaching a “posthuman” stage.
  • Any posthuman civilization is very unlikely to run a significant number of simulations of its evolutionary history.
  • We are almost certainly living in a computer simulation.

He also held that “the belief that there is a significant chance that we will one day become posthumans who run ancestor simulations is false, unless we are currently living in a simulation.”

With current limitations and trends in computing, it will be decades before researchers will be able to run even primitive simulations of the universe. But the UW team has suggested tests that can be performed now, or in the near future, that are sensitive to constraints imposed on future simulations by limited resources.

Currently, supercomputers using a technique called lattice quantum chromodynamics and starting from the fundamental physical laws that govern the universe can simulate only a very small portion of the universe, on the scale of one 100-trillionth of a meter, a little larger than the nucleus of an atom, said Martin Savage, a UW physics professor.

Eventually, more powerful simulations will be able to model on the scale of a molecule, then a cell and even a human being. But it will take many generations of growth in computing power to be able to simulate a large enough chunk of the universe to understand the constraints on physical processes that would indicate we are living in a computer model.

However, Savage said, there are signatures of resource constraints in present-day simulations that are likely to exist as well in simulations in the distant future, including the imprint of an underlying lattice if one is used to model the space-time continuum.

The supercomputers performing lattice quantum chromodynamics calculations essentially divide space-time into a four-dimensional grid. That allows researchers to examine what is called the strong force, one of the four fundamental forces of nature and the one that binds subatomic particles called quarks and gluons together into neutrons and protons at the core of atoms.

“If you make the simulations big enough, something like our universe should emerge,” Savage said. Then it would be a matter of looking for a “signature” in our universe that has an analog in the current small-scale simulations.

Savage and colleagues Silas Beane of the University of New Hampshire, who collaborated while at the UW’s Institute for Nuclear Theory, and Zohreh Davoudi, a UW physics graduate student, suggest that the signature could show up as a limitation in the energy of cosmic rays.

In a paper they have posted on arXiv, an online archive for preprints of scientific papers in a number of fields, including physics, they say that the highest-energy cosmic rays would not travel along the edges of the lattice in the model but would travel diagonally, and they would not interact equally in all directions as they otherwise would be expected to do.
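
The figure caption above hints at the underlying mechanism. As a standard lattice-field-theory illustration (not a formula quoted from the Beane-Davoudi-Savage paper), a free particle on a cubic lattice of spacing a obeys a modified energy-momentum relation:

\[
E^2 = m^2 c^4 + \sum_{i=1}^{3} \left[\frac{2\hbar c}{a}\,\sin\!\left(\frac{p_i a}{2\hbar}\right)\right]^2
\]

This reduces to the familiar continuum relation \(E^2 = m^2 c^4 + p^2 c^2\) when \(p_i a \ll \hbar\), but it flattens and becomes direction-dependent as momenta approach the lattice cutoff of order \(\pi\hbar/a\), which is why the most energetic cosmic rays are the natural place to look for such a signature.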

“This is the first testable signature of such an idea,” Savage said.

If such a concept turned out to be reality, it would raise other possibilities as well. For example, Davoudi suggests that if our universe is a simulation, then those running it could be running other simulations as well, essentially creating other universes parallel to our own.

“Then the question is, ‘Can you communicate with those other universes if they are running on the same platform?'” she said.

Journal References:

  1. Silas R. Beane, Zohreh Davoudi, Martin J. Savage. Constraints on the Universe as a Numerical Simulation. Arxiv, 2012 [link]
  2. Nick Bostrom. Are You Living in a Computer Simulation? Philosophical Quarterly, 2003; Vol. 53, No. 211, pp. 243-255 [link]

Exotic traits of the ‘God particle’ surprise physicists (Folha de São Paulo)

JC e-mail 4559, August 10, 2012.

A study with Brazilian participation indicates that the Higgs boson may not fit today’s most widely accepted theory. A preliminary analysis hints at still-unknown particles; other scientists urge caution with the data.

The God particle appears to be behaving just the way the devil likes: badly. That is what is indicated by a preliminary analysis of data collected at the LHC, the world’s largest particle accelerator.

The work, by Oscar Éboli of the Physics Institute of the University of São Paulo (USP), suggests that the so-called Higgs boson, which would be responsible for giving mass to everything that exists, is not behaving as it should, judging by the theory that predicted its existence, the Standard Model. If confirmed, the particle’s anomalous behavior would be the cue for a new era of physics.

The discovery of the possible boson, announced with great fanfare last month, was celebrated as the completion of a glorious stage in the study of the fundamental particles of matter. Its existence, in short, would explain why the Sun can produce its energy and why creatures like us can exist.

Given its importance for the consistency of the Universe (and drawing an analogy with the biblical story of the tower of Babel), the Nobel laureate physicist Leon Lederman nicknamed the boson the “God particle.”

To analyze the Higgs boson, one must first produce a collision between protons at extremely high speed – the primary function of the LHC. From the high-energy impact, a host of new particles emerges, among them the Higgs, which quickly decays, as physicists say.

Because it is highly unstable, the boson “decomposes” as the collision energy decreases. Other particles appear in its place. It is this byproduct that can be detected and can indicate the existence of the Higgs boson. That, however, requires a great many impacts, until the statistics begin to suggest the presence of the sought-after boson.

The data collected so far are enough to point to the particle’s existence, but its specific characteristics could not yet be determined. “We are still at an early stage of exploring its properties,” says Éboli. “However, there is an indication that the Higgs decays into two photons [particles of light] more often than would be expected in the Standard Model.”

The results of this preliminary analysis were posted on Arxiv.org, the online repository of physics studies, and covered in the magazine “Pesquisa Fapesp.”

A welcome surprise – The news encourages scientists. “For most physicists, the Standard Model is a good representation of nature, but it is not the final theory,” says Éboli. “If it is indeed confirmed that the Higgs is decaying into two photons more than expected, that may mean that new particles could be within the LHC’s discovery reach.”

It could be the first glimpse of a new “zoo” of elementary building blocks of matter. Such exotic particles were expected to begin appearing at the LHC’s elevated energies.

All very interesting, but nothing is settled. “It is a very serious piece of work, but I think it is still too early to draw any conclusion about whether or not this is the standard Higgs,” says Sérgio Novaes, a researcher at Unesp who takes part in one of the experiments that detected the Higgs boson. “By the end of the year things will be a little clearer,” he reckons.

Scientific particles collide with social media to benefit of all (Irish Times)

The Irish Times – Thursday, July 12, 2012

Large Hadron Collider at Cern: the research body now has 590,000 followers on Twitter

MARIE BORAN

IN 2008 CERN switched on the Large Hadron Collider (LHC) in Geneva – around the same time it sent out its first tweet. Although the first outing of the LHC didn’t go according to plan, the Twitter account gained 10,000 followers within the first day, according to James Gillies, head of communications at Cern.

Speaking at the Euroscience Open Forum in Dublin this week, Gillies explained the role social media plays in engaging the public with the particle physics research its laboratory does. The Twitter account now has 590,000 followers and Cern broke important news via it in March 2010 by joyously declaring: “Experiment have seen collisions.”

“Why do we communicate at Cern? If you talk to the scientists who work there they will tell you it’s a good thing to do and they all want to do it,” Gillies said, adding that Cern is publicly funded so engaging with the people who pay the bills is important.

When the existence of the Higgs particle was announced last week, it wasn’t an exclusive press event. Live video was streamed across the web, questions were taken not only from journalists but also from Twitter followers, and Cern used this as a chance to announce jobs via Facebook.

While Cern appears to be the social media darling of the science world, other research institutes and scientists are still weighing up the pros and cons of platforms like Facebook, Twitter or YouTube.

There is a certain stigma attached to social networking sites, not just because much of the content is perceived as banal, but also because too much tweeting could be damaging to your image as a scientist.

Bora Zivkovic is blogs editor at Scientific American, organiser of the fast-growing science conference ScienceOnline and speaker at the social media panel this Saturday at the Euroscience Open Forum. He says the adoption of social media by scientists is slow but growing.

“Academics are quite risk-averse and are shy about trying new things that have a perceived potential to remove the edge they may have in the academic hierarchy, either through lost time or lost reputation.”

Zivkovic talks about fear of the “Sagan effect”, named after the late Carl Sagan. A talented astronomer and astrophysicist, he was loved by the public but snubbed by the science community.

“Many still see social media as self-promotion, which is still in some scientific circles viewed as a negative thing to do. The situation is reminiscent of the very slow adoption of email by researchers back in the early 1990s.

“Once the scientists figure out how to include social media in their daily workflow, realise it does not take away from their time but actually makes them more effective in reaching their academic goals, and realise that the ‘Sagan effect’ on reputation is a thing of the past, they will readily incorporate social media into their normal work.”

Many researchers still rely heavily on specialist mailing lists. The broadcast capability on social media is far greater and bespoke, claims Dr Matthew Rowe, research associate at the Knowledge Media Institute with the Open University.

“If I was to email people about some recent work I would presume that it would be marked as spam. However, if I was to announce the release of some work through social media, then a debate and conversation could evolve surrounding the topic; I have seen this happen many times on Facebook.”

Conversations on social media sites are often seen as trivial – for scientists, the end goal is “publish or perish”. Results must be published in a reputable academic journal and preferably cited by those in their area.

Twitter, it seems, can help. A 2011 paper from researcher Gunther Eysenbach found a correlation between Twitter activity and highly cited articles. The microblogging site may help citation rate or serve as a measure of how “citable” your paper may be.

In addition, a 2010 survey on Twitter found one-third of academics said they use it for sharing information with peers, communicating with students or as a real-time news source.

For some the argument for social media is the potential for connecting with volunteers and providing valuable data from the citizen scientist. Yolanda Melero Cavero’s MinkApp has connected locals with an effort to control the mink population in Scotland.

“The most interesting thing about MinkApp, for me, was the fact that the scientist was able to get 600 volunteers for her ecological study. Social media has the grassroots potential to engage with willing volunteers,” says Nancy Salmon, researcher at the department of occupational therapy at the University of Limerick.

Rowe offers some sage social media advice for academics: keep on topic and keep your language jargon-free.

But there’s always room for humour, as demonstrated by the Higgs boson jokes on Twitter and Facebook last week. As astronomer Phil Plait tweeted: “I’ve got 99.9999% problems, but a Higgs ain’t one.”

Disorderly Conduct: Probing the Role of Disorder in Quantum Coherence (Science Daily)

ScienceDaily (July 19, 2012) — A new experiment conducted at the Joint Quantum Institute (JQI)* examines the relationship between quantum coherence, an important aspect of certain materials kept at low temperature, and the imperfections in those materials. These findings should be useful in forging a better understanding of disorder, and in turn in developing better quantum-based devices, such as superconducting magnets.

Figure 1 (top): Two thin planes of cold atoms are held in an optical lattice by an array of laser beams. Still another laser beam, passed through a diffusing material, adds an element of disorder to the atoms in the form of a speckle pattern. Figure 2 (bottom): Interference patterns resulting when the two planes of atoms are allowed to collide. In (b) the amount of disorder is just right and the pattern is crisp. In (c) too much disorder has begun to wash out the pattern. In (a) the pattern is complicated by the presence of vortices among the atoms, which are hard to see in this image taken from the side. (Credit: Matthew Beeler)

Most things in nature are imperfect at some level. Fortunately, imperfections — a departure, say, from an orderly array of atoms in a crystalline solid — are often advantageous. For example, copper wire, which carries so much of the world’s electricity, conducts much better if at least some impurity atoms are present.

In other words, a pinch of disorder is good. But there can be too much of this good thing. The issue of disorder is so important in condensed matter physics, and so difficult to understand directly, that some scientists have been trying for some years to simulate with thin vapors of cold atoms the behavior of electrons flowing through solids trillions of times more dense. With their ability to control the local forces over these atoms, physicists hope to shed light on more complicated case of solids.

That’s where the JQI experiment comes in. Specifically, Steve Rolston and his colleagues have set up an optical lattice of rubidium atoms held at a temperature close to absolute zero. In such a lattice, atoms are held in orderly proximity not by natural inter-atomic forces but by the forces exerted by an array of laser beams. These atoms, moreover, constitute a Bose-Einstein condensate (BEC), a special condition in which they all belong to a single quantum state.

This is appropriate since the atoms are meant to be a proxy for the electrons flowing through a solid superconductor. In some so-called high-temperature superconductors (HTSC), the electrons move in planes of copper and oxygen atoms. These HTSC materials work, however, only if a fillip of impurity atoms, such as barium or yttrium, is present. Theorists have not adequately explained why this bit of disorder in the underlying material should be necessary for attaining superconductivity.

The JQI experiment has tried to supply palpable data that can illuminate the issue of disorder. In solids, atoms are a fraction of a nanometer (billionth of a meter) apart. At JQI the atoms are about a micron (a millionth of a meter) apart. Actually, the JQI atom swarm consists of a 2-dimensional disk. “Disorder” in this disk consists not of impurity atoms but of “speckle.” When a laser beam strikes a rough surface, such as a cinderblock wall, it is scattered in a haphazard pattern. This visible speckle effect is what is used to slightly disorganize the otherwise perfect arrangement of Rb atoms in the JQI sample.

In superconductors, the slight disorder in the form of impurities ensures a very orderly “coherence” of the supercurrent. That is, the electrons moving through the solid flow as a single coordinated train of waves and retain their cohesiveness even in the midst of impurity atoms.

In the rubidium vapor, analogously, the slight disorder supplied by the speckle laser ensures that the Rb atoms retain their coordinated participation in the unified (BEC) quantum wave structure. But only up to a point. If too much disorder is added — if the speckle is too large — then the quantum coherence can go away. Probing this transition numerically was the object of the JQI experiment. The setup is illustrated in figure 1.

And how do you know when you’ve gone too far with the disorder? How do you know that quantum coherence has been lost? By making coherence visible.

The JQI scientists cleverly pry their disk-shaped gas of atoms into two parallel sheets, like two thin crepes, one on top of the other. Thereafter, if all the laser beams are turned off, the two planes will collide like miniature galaxies. If the atoms are in a coherent state, their collision will produce a crisp interference pattern, showing up on a video screen as a series of high-contrast dark and light stripes.

If, however, the imposed disorder is too high, resulting in a loss of coherence among the atoms, the interference pattern is washed out. Figure 2 shows this effect at work. Frames (b) and (c) respectively show what happens when the degree of disorder is just right and when it is too much.
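
How washed out a pattern is can be put into a single number, the fringe visibility V = (Imax - Imin)/(Imax + Imin): V near 1 means crisp, high-contrast stripes, while V near 0 means the stripes are gone. The toy calculation below is not the analysis used in the paper; it simply averages many interference patterns whose relative phase jitters by an amount that stands in for the disorder.

    import numpy as np

    def visibility(intensity):
        """Fringe visibility V = (Imax - Imin) / (Imax + Imin)."""
        return (intensity.max() - intensity.min()) / (intensity.max() + intensity.min())

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 10.0, 2000)       # position across the camera (arbitrary units)
    period = 1.0                           # assumed fringe period

    for phase_noise in (0.0, 0.5, 2.0):    # stand-in for increasing disorder
        # Average many shots whose relative phase jitters from shot to shot.
        shots = [1.0 + np.cos(2.0 * np.pi * x / period + rng.normal(0.0, phase_noise))
                 for _ in range(200)]
        averaged = np.mean(shots, axis=0)
        print(f"phase noise {phase_noise:.1f} rad -> visibility {visibility(averaged):.2f}")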

“Disorder figures in about half of all condensed matter physics,” says Steve Rolston. “What we’re doing is mimicking the movement of electrons in 3-dimensional solids using cold atoms in a 2-dimensional gas. Since there don’t seem to be any theoretical predictions to help us understand what we’re seeing we’ve moved into new experimental territory.”

Where does the JQI work go next? Well, in figure 2a you can see that the interference pattern is still visible but somewhat garbled. That arises from the fact that for this amount of disorder several vortices — miniature whirlpools of atoms — have sprouted within the gas. Exactly such vortices among electrons emerge in superconductivity, limiting their ability to maintain a coherent state.

The new results are published in the New Journal of Physics: “Disorder-driven loss of phase coherence in a quasi-2D cold atom system,” by M C Beeler, M E W Reed, T Hong, and S L Rolston.

Another of the JQI scientists, Matthew Beeler, underscores the importance of understanding the transition from the coherent state to the incoherent state owing to the fluctuations introduced by disorder: “This paper is the first direct observation of disorder causing these phase fluctuations. To the extent that our system of cold atoms is like a HTSC superconductor, this is a direct connection between disorder and a mechanism which drives the system from superconductor to insulator.”

A new boson in sight (FAPESP)

Physicists at CERN have discovered a new particle that appears to be the Higgs boson

MARCOS PIVETTA | Online edition, 7:46 PM, July 4, 2012

Proton collisions in which four high-energy electrons are observed (green lines and red towers). The event shows the characteristics expected from the decay of a Higgs boson, but it is also consistent with Standard Model background processes

from Lindau (Germany)*

The largest laboratory in the world may have found the particle that gives mass to all other particles, the long-sought Higgs boson. It was the missing piece needed to complete a scientific jigsaw puzzle called the Standard Model, the theoretical framework formulated over recent decades to explain the particles and forces present in the visible matter of the Universe. After analyzing trillions of proton collisions produced in 2011 and in part of this year at the Large Hadron Collider (LHC), physicists from the two largest experiments run independently at the European Organization for Nuclear Research (CERN) announced on Wednesday (the 4th), on the outskirts of Geneva (Switzerland), the discovery of a new particle that has almost all the characteristics of the Higgs boson, although they cannot yet say for certain whether it is specifically this boson or some other kind.

“We observe in our data clear signs of a new particle in the mass region around 126 GeV (gigaelectronvolts),” said physicist Fabiola Gianotti, spokesperson for the ATLAS experiment. “But we need a little more time to prepare the results for publication.” The information coming from the other CERN experiment, CMS, is practically identical. “The results are preliminary, but the signals we see around the 125 GeV mass region are dramatic. It really is a new particle. We know it must be a boson, and it is the heaviest boson we have found,” said the CMS spokesperson, physicist Joe Incandela. If it really does have a mass of 125 or 126 GeV, the new particle is about as heavy as an atom of the chemical element iodine.

In both experiments, the statistical confidence of the analyses reached the level scientists call 5 sigma. At that level, the chance of error is about one in three million. In other words, with that degree of certainty it is possible to say that a discovery has been made; what is not yet known in detail is the nature of the particle that was found. “It is incredible that this discovery has happened in my lifetime,” commented Peter Higgs, the British theoretical physicist who, 50 years ago, alongside other scientists, predicted the existence of this type of boson. Later this month, a paper with the LHC data is expected to be submitted to a scientific journal. More data should be produced by the two experiments until the end of the year, when the accelerator will be shut down for maintenance for at least a year and a half.
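
As a quick check of the 5-sigma figure quoted above: the one-sided tail probability of a Gaussian fluctuation beyond five standard deviations is about 2.9 x 10^-7, i.e. roughly one chance in 3.5 million that background alone produces such an excess, the same order as the rounded figure in the article. A minimal calculation:

    import math

    def one_sided_p_value(n_sigma):
        """Probability that a Gaussian background fluctuation exceeds n_sigma."""
        return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

    p = one_sided_p_value(5.0)
    print(f"p-value for 5 sigma: {p:.2e}")   # about 2.87e-07
    print(f"about 1 in {1.0 / p:,.0f}")      # about 1 in 3.5 million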

“I’ve been laughing all day”
In Lindau, a small town in southern Germany on the shore of Lake Constance at the border with Austria and Switzerland, where the 62nd Lindau Nobel Laureate Meeting is taking place this week, researchers celebrated the news from the CERN experiments. Since the theme of this year’s meeting was physics, there was no shortage of laureates of science’s highest honor on hand to comment on the feat. “We don’t know whether it is the (Higgs) boson, but it is a boson,” said theoretical physicist David J. Gross, of the University of California, winner of the 2004 Nobel Prize for the discovery of asymptotic freedom. “I’ve been laughing all day.” Experimental physicist Carlo Rubbia, former director general of CERN and winner of the 1984 Nobel Prize for work that led to the identification of two types of bosons (the W and the Z), took the same line: “We are looking at a milestone,” he said.

Perhaps with slightly less enthusiasm, but still acknowledging the enormous importance of the finding at CERN, two other Nobel laureates gave their opinion on the news of the day. “It is something we have been waiting for for years,” said Dutch theoretical physicist Martinus Veltman, who received the prize in 1999. “The Standard Model has gained another degree of validity.” For American cosmologist George Smoot, winner of the 2006 Nobel Prize for his measurements of the cosmic background radiation (a relic of the Big Bang, the primordial explosion that created the Universe), it should still take two or three years for scientists to really know what kind of new particle has been discovered. If the new particle turns out not to be the Higgs boson, Smoot said, it would be “wonderful if it were something related to dark matter,” a mysterious component that, alongside visible matter and the even less understood dark energy, is thought to be one of the pillars of the Universe.

Particles with the properties of the Higgs boson cannot be measured directly, but their existence, however fleeting, leaves traces, and it is these traces that can be detected in a particle accelerator as powerful as the LHC. Unstable and short-lived, Higgs bosons survive for a tiny fraction of a second before decaying into lighter particles, which in turn decay and give rise to still lighter ones. The Standard Model predicts that, depending on its mass, the Higgs boson should decay through different channels, that is, into distinct combinations of lighter particles, such as two photons or four leptons. In the experiments carried out at CERN, which involved about 6,000 physicists, nearly unequivocal evidence was found of the decay modes that would be the typical signature of the Higgs boson.

*Journalist Marcos Pivetta traveled to Lindau at the invitation of the DAAD (German Academic Exchange Service)

What the World Is Made Of (Discovery Magazine)

by Sean Carroll

I know you’re all following the Minute Physics videos (that we talked about here), but just in case my knowledge is somehow fallible, you really should start following them. After taking care of why stones are round, and why there is no pink light, Henry Reich is now explaining the fundamental nature of our everyday world: quantum field theory and the Standard Model. It’s a multi-part series, since some things deserve more than a minute, dammit.

Two parts have been posted so far. The first is just an intro, pointing out something we’ve already heard: the Standard Model of particle physics describes all the world we experience in our everyday lives.

The second one, just up, tackles quantum field theory and the Pauli exclusion principle, of which we’ve been recently speaking. (Admittedly it’s two minutes long, but these are big topics!)

The world is made of fields, which appear to us as particles when we look at them. Something everyone should know.

The Importance Of Mistakes (NPR)

February 28, 2012
by ADAM FRANK

It takes a lot of cabling to make the Oscillation Project with Emulsion-tRacking Apparatus (OPERA) run at the Gran Sasso National Laboratory (LNGS) in Italy. (Credit: Alberto Pizzoli/AFP/Getty Images)

How do people handle the discovery of their own mistake? Some folks might shrug it off. Some folks might minimize its effect. Some folks might even step in with a lie. Most people, we hope, would admit the mistake. But how often do we expect them to announce it to the world from a hilltop? How often do we expect them to tell us — in the clearest language possible — that they screwed up, providing every detail possible about the nature of the mistake?

That’s exactly what’s required in science. As embarrassing as it might seem to most people, admitting a mistake is really the essence of scientific heroism.

Which brings us, first, to faster-than-light neutrinos and then to climate science.

Last week rumors began to circulate that the (potential) discovery of neutrinos traveling faster than the speed of light may get swept into the dustbin of scientific history. The news (rumors really) first circulated via Science Insider.

“According to sources familiar with the experiment, the 60 nanoseconds discrepancy appears to come from a bad connection between a fiber optic cable that connects to the GPS receiver used to correct the timing of the neutrinos’ flight and an electronic card in a computer.”

Oops.

The story goes on to say that once the cable was tightened the Einstein-busting result disappeared. While “sources familiar with the experiment” might not seem enough to start singing funeral dirges (who was the source, Deep Neutrino?), CERN released its own statement that points in a similar direction. No one can say for sure yet, but it appears that the faster-than-light hoopla is likely to go away.

So what are we to make of this? A loose cable seems pretty lame on the face of it. “Dude, Everybody with a cable box and a 32-inch flat screen knows you got to check the cable!”

There is no doubt that, as mistakes go, researchers running the neutrino experiments would rather have something a bit more sexy to offer if their result was disproven. (How about tiny corrections due to seismic effects?) Still, I’m betting the OPERA experiment had a heck of a lot more cables than your TV so, perhaps, we should be more understanding.

More importantly, no matter how it happens, making mistakes is exactly what scientists are supposed to do. “Our whole problem is to make the mistakes as fast as possible,” John Wheeler once said.

What makes science so powerful is not just the admission of mistakes but also the detailing of mistakes. While the OPERA group might now wish they had waited a bit longer to make their announcement, there is no shame in the mistake in and of itself. If they step into the spotlight and tell the world what happened, then they deserve to be counted as heroes just as much as if they’d broken Einstein’s theory.

And that is where we can see the connection to climate, evolution and all the other fronts in the ever-expanding war on science. Last week at the AAAS meeting in Vancouver, Nina Fedoroff, a distinguished agricultural scientist and president of that body, made a bold and frightening statement (especially for someone in such a position of authority). Fedoroff told her audience, as The Guardian reported:

“‘We are sliding back into a dark era,’ she said. ‘And there seems little we can do about it. I am profoundly depressed at just how difficult it has become merely to get a realistic conversation started on issues such as climate change or genetically modified organisms.'”

See video: http://bcove.me/ajmi39pd

The spectacle of watching politicians fall over each other to distance themselves from research validated by armies of scientists is more than depressing. Our current understanding of climate, for example, represents the work of thousands of human beings all working to make mistakes as fast as possible, all working to root out error as fast as possible. There is no difference between what happens in climate science or evolutionary biology and any other branch of science.

Honest people asking the best of themselves push forward in their own fields. They watch their own work and that of their colleagues closely, always looking for mistakes, cracks in reasoning, subtle flaws in logic. When flaws are found, the process is set in motion: critique, defend, critique, root out. When science deniers trot out the same tired talking points, talking points with no scientific validity, they ignore (or fail to understand) their argument’s lack of credibility.

Eventually, science always finds its mistakes. Eventually we find some kind of truth, unless, of course, mistakes are forced on us from outside of science. That, however, is an error of another kind entirely.

Measuring (quantum) discord (Fapesp)

Brazilian researchers directly measure, for the first time, a property that may prove very important for the development of quantum computing (image: Ag.FAPESP)

October 14, 2011

By Elton Alisson

Agência FAPESP – The fragility of quantum properties, which vanish through interaction with the environment, at finite temperature or in macroscopic bodies, is one of the biggest obstacles to the development of the long-sought quantum computers, ultrafast machines that would be capable of performing simultaneously, in a matter of seconds, operations that conventional computers would take billions of years to carry out.

A group of Brazilian physicists has, for the first time, directly measured a property that could be useful for the development of quantum computing.

Stemming from the project “Quantum information and decoherence,” supported by FAPESP through its Young Investigators in Emerging Centers program, the results of the experiments were published on September 30 in Physical Review Letters.

On August 9, the group had published in the same journal an article describing how they had managed to measure so-called quantum discord at room temperature.

Introduced in 2001, the concept of quantum discord denotes the non-classical correlation between two entities, such as nuclei, electrons, spins or photons, which carries features that cannot be observed in classical systems.
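
For reference, in the standard formulation introduced by Ollivier and Zurek in 2001 (textbook material, not drawn from the article itself), discord is the gap between two quantum versions of the mutual information: the total correlations I and the part J that can be extracted by local measurements {Π_k} on one of the subsystems,

    I(A\!:\!B) = S(\rho_A) + S(\rho_B) - S(\rho_{AB}), \qquad S(\rho) = -\operatorname{Tr}[\rho \log_2 \rho],
    J(A\!:\!B) = \max_{\{\Pi_k\}} \Big[ S(\rho_A) - \sum_k p_k\, S(\rho_{A|k}) \Big],
    D(A\!:\!B) = I(A\!:\!B) - J(A\!:\!B) \;\ge\; 0 .

The difference D can be nonzero even when the state is not entangled, which is what makes it interesting here.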

Until then, it was believed that this quantum quantity could only be measured in very well controlled systems, at extremely low temperatures and isolated from the environment, since any interference would be capable of destroying the link between the quantum objects, a link attributed solely to a physical phenomenon called entanglement. This would make designing a quantum computer difficult.

“However, we measured this quantum correlation (discord) experimentally and showed that it is present where it was not expected, and that the phenomenon can be exploited even at room temperature, in situations with a great deal of thermal noise,” Roberto Menezes Serra, a professor at the Universidade Federal do ABC (UFABC) and coordinator of the project, told Agência FAPESP.

To measure quantum discord, the researchers worked with a chloroform molecule, which has one carbon atom, one hydrogen atom and three chlorine atoms.

Using nuclear magnetic resonance techniques, they encoded one quantum bit in the spin of the hydrogen nucleus and another in that of the carbon, in a scenario in which the two were not entangled, and demonstrated that it is possible to measure the quantum correlations between the two nuclear spins.

Through the experiment, they developed a practical method for measuring quantum correlations (quantum discord) by means of a physical quantity known as a “witness,” which allows direct observation of the quantum character of a system’s correlations. “This demonstrated unequivocally the quantum nature of the proof-of-principle tests carried out with nuclear magnetic resonance at room temperature. These results may open the way to other applications of quantum information at room temperature,” said Serra.

In the work reported in the new article, the Brazilian researchers measured another phenomenon they had predicted, called the sudden change in the behavior of quantum discord.

The effect describes how the behavior of quantum discord changes when the physical system in which it is present comes into contact with the environment, causing a loss of the system’s coherence (a phenomenon known as decoherence). In this situation, quantum discord can remain constant and insensitive to thermal noise for a certain time and then begin to decay.

“Knowing the subtleties of the dynamical behavior of this system is important because, if we use quantum discord to gain an advantage in some process, such as metrology or information processing, we need to know how robust this quantum feature is against that loss of coherence, so as to know for how long the device can work well and which errors must be corrected,” Serra explained.

A world reference

Until a few years ago, scientists thought that entanglement was an essential property for obtaining gains from a quantum system, such as a greater capacity for exchanging information between quantum objects. Recently, it was discovered that this property is not necessarily fundamental to the quantum advantage in information processing, because there are protocols in which a quantum advantage is obtained in non-entangled systems. It is therefore conjectured that quantum discord is what could be associated with the advantages of a quantum system.

As a result, both discord and entanglement have come to be recognized as useful for performing tasks in a quantum computer. Non-entangled systems endowed with discord, however, would have the advantage of being more robust to the action of the external environment, since entanglement can disappear abruptly, in a phenomenon called “sudden death.”
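
To make the "quantum correlations without entanglement" point concrete, here is a minimal numerical sketch. It uses a two-qubit Werner state and the standard analytic discord formula for Bell-diagonal states from the discord literature, not the chloroform NMR system of the experiments: for mixing parameter p <= 1/3 the state is separable, i.e. it carries no entanglement at all, yet its discord is still positive.

    import numpy as np

    def entropy(eigenvalues):
        """Shannon entropy (base 2) of a list of eigenvalues/probabilities."""
        eigenvalues = np.array([x for x in eigenvalues if x > 1e-12])
        return float(-np.sum(eigenvalues * np.log2(eigenvalues)))

    # Two-qubit Werner state: rho = p |Psi-><Psi-| + (1 - p) I/4.
    # It is Bell-diagonal with correlation coefficients c1 = c2 = c3 = -p and
    # is separable (zero entanglement) whenever p <= 1/3.
    p = 0.3
    eigvals = [(1 + 3 * p) / 4] + 3 * [(1 - p) / 4]   # eigenvalues of the state

    # Quantum mutual information: both marginals are maximally mixed, so S(A) = S(B) = 1.
    mutual_info = 2.0 - entropy(eigvals)

    # Classical correlations for a Bell-diagonal state (Luo, Phys. Rev. A 77, 042303):
    # J = [(1-c)/2] log2(1-c) + [(1+c)/2] log2(1+c), with c = max |c_i|.
    c = p
    classical = 0.5 * (1 - c) * np.log2(1 - c) + 0.5 * (1 + c) * np.log2(1 + c)

    discord = mutual_info - classical
    print(f"p = {p}: separable (p <= 1/3), quantum discord = {discord:.4f} > 0")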

“Our main interest at the moment is to advance the understanding of the origin of the advantage of quantum computers. If we know that, we will be able to build more efficient devices that consume fewer resources to control their coherence,” said Serra.

According to the researcher, the group of Brazilian physicists was the first to use nuclear magnetic resonance techniques to measure quantum discord directly, and it has become a world reference in the field.

To carry out the measurements, the UFABC group initially joined forces with the group led by professor Tito José Bonagamba, of the Instituto de Física of the Universidade de São Paulo (USP), São Carlos campus, who coordinated the first experiments through the project “Manipulation of nuclear spins using magnetic resonance and nuclear quadrupole resonance techniques,” also carried out with FAPESP support.

The most recent experiments were carried out through a collaboration between the UFABC and USP São Carlos researchers and a research group at the Centro Brasileiro de Pesquisas Físicas (CBPF), in Rio de Janeiro, led by professor Ivan Oliveira. The researchers also had the support of the Instituto Nacional de Ciência e Tecnologia de Informação Quântica (INCT-IQ).

“Methods are now being developed at the CBPF for handling systems of three and four quantum bits which, combined with the techniques we developed to measure quantum discord and other properties, will allow us to test more complex protocols in quantum information science, such as metrology and quantum heat engines,” Serra said.

The articles “Experimentally Witnessing the Quantumness of Correlations” and “Environment-Induced Sudden Transition in Quantum Discord Dynamics,” by Serra and colleagues (doi: 10.1103/PhysRevLett.107.070501 and 10.1103/PhysRevLett.107.140403), published in Physical Review Letters, can be read at link.aps.org/doi/10.1103/PhysRevLett.107.070501 and link.aps.org/doi/10.1103/PhysRevLett.107.140403.

Beyond space-time: Welcome to phase space (New Scientist)

08 August 2011 by Amanda Gefter
Magazine issue 2824

A theory of reality beyond Einstein’s universe is taking shape – and a mysterious cosmic signal could soon fill in the blanks

Does some deeper level of reality lurk beneath? (Image: Luke Brookes)

IT WASN’T so long ago we thought space and time were the absolute and unchanging scaffolding of the universe. Then along came Albert Einstein, who showed that different observers can disagree about the length of objects and the timing of events. His theory of relativity unified space and time into a single entity – space-time. It meant the way we thought about the fabric of reality would never be the same again. “Henceforth space by itself, and time by itself, are doomed to fade into mere shadows,” declared mathematician Hermann Minkowski. “Only a kind of union of the two will preserve an independent reality.”

But did Einstein’s revolution go far enough? Physicist Lee Smolin at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada, doesn’t think so. He and a trio of colleagues are aiming to take relativity to a whole new level, and they have space-time in their sights. They say we need to forget about the home Einstein invented for us: we live instead in a place called phase space.

If this radical claim is true, it could solve a troubling paradox about black holes that has stumped physicists for decades. What’s more, it could set them on the path towards their heart’s desire: a “theory of everything” that will finally unite general relativity and quantum mechanics.

So what is phase space? It is a curious eight-dimensional world that merges our familiar four dimensions of space and time and a four-dimensional world called momentum space.

Momentum space isn’t as alien as it first sounds. When you look at the world around you, says Smolin, you don’t ever observe space or time – instead you see energy and momentum. When you look at your watch, for example, photons bounce off a surface and land on your retina. By detecting the energy and momentum of the photons, your brain reconstructs events in space and time.

The same is true of physics experiments. Inside particle smashers, physicists measure the energy and momentum of particles as they speed toward one another and collide, and the energy and momentum of the debris that comes flying out. Likewise, telescopes measure the energy and momentum of photons streaming in from the far reaches of the universe. “If you go by what we observe, we don’t live in space-time,” Smolin says. “We live in momentum space.”

And just as space-time can be pictured as a coordinate system with time on one axis and space – its three dimensions condensed to one – on the other axis, the same is true of momentum space. In this case energy is on one axis and momentum – which, like space, has three components – is on the other (see diagram).

Simple mathematical transformations exist to translate measurements in this momentum space into measurements in space-time, and the common wisdom is that momentum space is a mere mathematical tool. After all, Einstein showed that space-time is reality’s true arena, in which the dramas of the cosmos are played out.
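
In ordinary quantum mechanics, the most familiar version of that dictionary is the Planck-de Broglie relations together with the Fourier transform that carries a wavefunction between its position and momentum descriptions (standard textbook relations, quoted here only to illustrate what "translating between the two spaces" means):

    E = \hbar\omega, \qquad \mathbf{p} = \hbar\mathbf{k}, \qquad
    \tilde{\psi}(p) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \psi(x)\, e^{-ipx/\hbar}\, \mathrm{d}x .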

Smolin and his colleagues aren’t the first to wonder whether that is the full story. As far back as 1938, the German physicist Max Born noticed that several pivotal equations in quantum mechanics remain the same whether expressed in space-time coordinates or in momentum space coordinates. He wondered whether it might be possible to use this connection to unite the seemingly incompatible theories of general relativity, which deals with space-time, and quantum mechanics, whose particles have momentum and energy. Maybe it could provide the key to the long-sought theory of quantum gravity.

Born’s idea that space-time and momentum space should be interchangeable – a theory now known as “Born reciprocity” – had a remarkable consequence: if space-time can be curved by the masses of stars and galaxies, as Einstein’s theory showed, then it should be possible to curve momentum space too.

At the time it was not clear what kind of physical entity might curve momentum space, and the mathematics necessary to make such an idea work hadn’t even been invented. So Born never fulfilled his dream of putting space-time and momentum space on an equal footing.

That is where Smolin and his colleagues enter the story. Together with Laurent Freidel, also at the Perimeter Institute, Jerzy Kowalski-Glikman at the University of Wroclaw, Poland, and Giovanni Amelino-Camelia at Sapienza University of Rome in Italy, Smolin has been investigating the effects of a curvature of momentum space.

The quartet took the standard mathematical rules for translating between momentum space and space-time and applied them to a curved momentum space. What they discovered is shocking: observers living in a curved momentum space will no longer agree on measurements made in a unified space-time. That goes entirely against the grain of Einstein’s relativity. He had shown that while space and time were relative, space-time was the same for everyone. For observers in a curved momentum space, however, even space-time is relative (see diagram).

This mismatch between one observer’s space-time measurements and another’s grows with distance or over time, which means that while space-time in your immediate vicinity will always be sharply defined, objects and events in the far distance become fuzzier. “The further away you are and the more energy is involved, the larger the event seems to spread out in space-time,” says Smolin.

For instance, if you are 10 billion light years from a supernova and the energy of its light is about 10 gigaelectronvolts, then your measurement of its location in space-time would differ from a local observer’s by a light second. That may not sound like much, but it amounts to 300,000 kilometres. Neither of you would be wrong – it’s just that locations in space-time are relative, a phenomenon the researchers have dubbed “relative locality”.
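
A back-of-envelope check, assuming the simplest scaling consistent with the proposal, in which the fuzziness grows linearly with both the photon energy and the distance and is suppressed by the Planck energy E_P of about 1.2 x 10^19 GeV (the exact prefactor depends on the model):

    \Delta x \;\sim\; \frac{E}{E_P}\, L \;\approx\; \frac{10\ \mathrm{GeV}}{1.2\times 10^{19}\ \mathrm{GeV}} \times 10^{10}\ \mathrm{light\ years} \;\approx\; 10^{-8}\ \mathrm{light\ years} \;\approx\; 0.3\ \mathrm{light\ seconds},

which is indeed of the order of the light second quoted above.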

Relative locality would deal a huge blow to our picture of reality. If space-time is no longer an invariant backdrop of the universe on which all observers can agree, in what sense can it be considered the true fabric of reality?

That is a question still to be wrestled with, but relative locality has its benefits, too. For one thing, it could shed light on a stubborn puzzle known as the black hole information-loss paradox. In the 1970s, Stephen Hawking discovered that black holes radiate away their mass, eventually evaporating and disappearing altogether. That posed an intriguing question: what happens to all the stuff that fell into the black hole in the first place?

Relativity prevents anything that falls into a black hole from escaping, because it would have to travel faster than light to do so – a cosmic speed limit that is strictly enforced. But quantum mechanics enforces its own strict law: things, or more precisely the information that they contain, cannot simply vanish from reality. Black hole evaporation put physicists between a rock and a hard place.

According to Smolin, relative locality saves the day. Let’s say you were patient enough to wait around while a black hole evaporated, a process that could take billions of years. Once it had vanished, you could ask what happened to, say, an elephant that once succumbed to its gravitational grip. But as you look back to the time at which you thought the elephant had fallen in, you would find that locations in space-time had grown so fuzzy and uncertain that there would be no way to tell whether the elephant actually fell into the black hole or narrowly missed it. The information-loss paradox dissolves.

Big questions still remain. For instance, how can we know if momentum space is really curved? To find the answer, the team has proposed several experiments.

One idea is to look at light arriving at the Earth from distant gamma-ray bursts. If momentum space is curved in a particular way that mathematicians refer to as “non-metric”, then a high-energy photon in the gamma-ray burst should arrive at our telescope a little later than a lower-energy photon from the same burst, despite the two being emitted at the same time.

Just that phenomenon has already been seen, starting with some unusual observations made by a telescope in the Canary Islands in 2005 (New Scientist, 15 August 2009, p 29). The effect has since been confirmed by NASA’s Fermi gamma-ray space telescope, which has been collecting light from cosmic explosions since it launched in 2008. “The Fermi data show that it is an undeniable experimental fact that there is a correlation between arrival time and energy – high-energy photons arrive later than low-energy photons,” says Amelino-Camelia.

Still, he is not popping the champagne just yet. It is not clear whether the observed delays are true signatures of curved momentum space, or whether they are down to “unknown properties of the explosions themselves”, as Amelino-Camelia puts it. Calculations of gamma-ray bursts idealise the explosions as instantaneous, but in reality they last for several seconds. While there is no obvious reason to think so, it is possible that the bursts occur in such a way that they emit lower-energy photons a second or two before higher-energy photons, which would account for the observed delays.

In order to disentangle the properties of the explosions from properties of relative locality, we need a large sample of gamma-ray bursts taking place at various known distances (arxiv.org/abs/1103.5626). If the delay is a property of the explosion, its length will not depend on how far away the burst is from our telescope; if it is a sign of relative locality, it will. Amelino-Camelia and the rest of Smolin’s team are now anxiously awaiting more data from Fermi.
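
A toy version of that discriminating test, with made-up numbers: an in-flight delay that grows with both energy and distance (the behavior expected if the effect accumulates during propagation) against a fixed lag imprinted at the source. The dispersion scale E_QG, the photon energies and the burst distances below are placeholders, not fitted values.

    # Toy comparison: does the high-energy photon's lag grow with distance?
    LIGHT_YEAR_S = 3.156e7               # light-travel time of one light year, in seconds
    E_QG_GEV = 1.2e19                    # placeholder dispersion scale (Planck-ish), in GeV
    E_HIGH_GEV, E_LOW_GEV = 10.0, 0.1    # placeholder photon energies, in GeV

    def in_flight_delay(distance_ly):
        """Delay that accumulates during propagation, so it grows with distance."""
        return (E_HIGH_GEV - E_LOW_GEV) / E_QG_GEV * distance_ly * LIGHT_YEAR_S

    def source_intrinsic_delay(distance_ly):
        """Delay imprinted at the source, independent of distance."""
        return 0.2   # seconds (placeholder)

    for d in (1e9, 5e9, 1e10):           # burst distances in light years
        print(f"D = {d:.0e} ly:  in-flight {in_flight_delay(d):.3f} s,"
              f"  source-intrinsic {source_intrinsic_delay(d):.3f} s")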

The questions don’t end there, however. Even if Fermi’s observations confirm that momentum space is curved, they still won’t tell us what is doing the curving. In general relativity, it is momentum and energy in the form of mass that warp space-time. In a world in which momentum space is fundamental, could space and time somehow be responsible for curving momentum space?

Work by Shahn Majid, a mathematical physicist at Queen Mary University of London, might hold some clues. In the 1990s, he showed that curved momentum space is equivalent to what’s known as a noncommutative space-time. In familiar space-time, coordinates commute – that is, if we want to reach the point with coordinates (x,y), it doesn’t matter whether we take x steps to the right and then y steps forward, or if we travel y steps forward followed by x steps to the right. But mathematicians can construct space-times in which this order no longer holds, leaving space-time with an inherent fuzziness.

In a sense, such fuzziness is exactly what you might expect once quantum effects take hold. What makes quantum mechanics different from ordinary mechanics is Heisenberg’s uncertainty principle: when you fix a particle’s momentum – by measuring it, for example – then its position becomes completely uncertain, and vice versa. The order in which you measure position and momentum determines their values; in other words, these properties do not commute. This, Majid says, implies that curved momentum space is just quantum space-time in another guise.
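
The two senses in which "order matters" can be written side by side: ordinary spatial coordinates commute, Heisenberg's position and momentum do not, and in a noncommutative space-time the coordinates themselves acquire a nonzero commutator (written below with a generic antisymmetric parameter θ^{μν}, a common illustrative parametrization rather than Majid's specific construction):

    [\hat{x}, \hat{y}] = 0, \qquad [\hat{x}, \hat{p}_x] = i\hbar, \qquad [\hat{x}^{\mu}, \hat{x}^{\nu}] = i\,\theta^{\mu\nu} \neq 0 .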

What’s more, Majid suspects that this relationship between curvature and quantum uncertainty works two ways: the curvature of space-time – a manifestation of gravity in Einstein’s relativity – implies that momentum space is also quantum. Smolin and colleagues’ model does not yet include gravity, but once it does, Majid says, observers will not agree on measurements in momentum space either. So if both space-time and momentum space are relative, where does objective reality lie? What is the true fabric of reality?

Smolin’s hunch is that we will find ourselves in a place where space-time and momentum space meet: an eight-dimensional phase space that represents all possible values of position, time, energy and momentum. In relativity, what one observer views as space, another views as time and vice versa, because ultimately they are two sides of a single coin – a unified space-time. Likewise, in Smolin’s picture of quantum gravity, what one observer sees as space-time another sees as momentum space, and the two are unified in a higher-dimensional phase space that is absolute and invariant to all observers. With relativity bumped up another level, it will be goodbye to both space-time and momentum space, and hello phase space.

“It has been obvious for a long time that the separation between space-time and energy-momentum is misleading when dealing with quantum gravity,” says physicist João Magueijo of Imperial College London. In ordinary physics, it is easy enough to treat space-time and momentum space as separate things, he explains, “but quantum gravity may require their complete entanglement”. Once we figure out how the puzzle pieces of space-time and momentum space fit together, Born’s dream will finally be realised and the true scaffolding of reality will be revealed.

Bibliography

  1. The principle of relative locality by Giovanni Amelino-Camelia and others (arxiv.org/abs/1101.0931)

Amanda Gefter is a consultant for New Scientist based in Boston