Tag archive: Physics

Physics and Hollywood (Folha de S.Paulo)

HENRIQUE GOMES

Feb 22, 2015, 3:15 am

SUMMARY "Interstellar" is part of a wave of films guided by science. In it, the quest for survival takes humans to the vicinity of a black hole, the starting point for speculations tied to Stephen Hawking's research, the subject of "The Theory of Everything", which, like Christopher Nolan's science-fiction film, competes today in several Oscar categories.

*

In 2014, there was a boom of Hollywood films taking science seriously. "The Theory of Everything" and "The Imitation Game" deal with the lives of important scientists of the 20th century: Stephen Hawking and Alan Turing, respectively. A third feature, the science-fiction film "Interstellar", innovates not only by sticking faithfully to what is known about space-time but by putting that knowledge at the service of the narrative.

I am not talking here about whether or not to include sound effects in space. That has been done before and does not significantly change a script. The people behind "Interstellar" did not settle for filling out the technical paperwork just to fend off the usual pedants. They put a Homeric effort into countless meetings with the renowned physicist Kip Thorne (who also appears in "The Theory of Everything") and into simulations of black holes, and they effectively rewrote the script to fit the physical guidelines.

The final result loses nothing, at least when it comes to firing the imagination and producing fantastic effects, to scientific catastrophes such as "Star Trek Into Darkness" (2013) and "Prometheus" (2012). In "Star Trek", for example, Isaac Newton and even Galileo would be horrified to see a ship in free fall towards Earth while its crew, simultaneously, falls freely relative to the ship. As anyone who has ever been in a free-falling spacecraft knows well, crew members float, they do not fall. (Stones of different weights fall at the same rate from the tower of Pisa and from other towers.)

"Interstellar" goes beyond the rules of Hollywood science fiction. Carl Sagan said that "science is not only compatible with spirituality; it is a profound source of spirituality". "Interstellar" proves what we scientists have known for a long time: the sentence applies equally to the human fascination with the unknown.

[Image: Matthew McConaughey as Cooper in "Interstellar". Credit: publicity photo]

"Interstellar" and "The Theory of Everything" have a few themes in common.

The first is degeneration: of planet Earth in one, of a neuromuscular system in the other. The deterioration of planet Earth is to be escaped through interstellar exploration, led by the character Cooper (Matthew McConaughey); that of the human body, through the tireless mind of Stephen Hawking, played by the excellent Eddie Redmayne.

The second theme they have in common is precisely an important part of Hawking's work.

STARS

Stephen Hawking was born in Oxford in 1942. At 21, already in the first year of his doctorate, he was diagnosed with ALS (amyotrophic lateral sclerosis), a degenerative disease that attacks the nerves' communication with the muscles but leaves other brain functions intact. Determined to continue his studies, one of the first problems Hawking devoted himself to was the question of what happens when a star is so heavy that it cannot support its own weight.

The collapse of the star concentrates all of its mass at a single point, where the theory stops making sense. Anticipating applications in science-fiction films, physicists named this point a singularity. At a certain distance from this singular point, the pull of all that concentrated mass is strong enough that not even light can escape. A flashlight switched on at (or within) that radius cannot be seen by anyone farther out; nothing escapes from that sphere, which is called a black hole (for obvious reasons).
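For reference (a standard textbook formula, not spelled out in the article): for a non-rotating black hole, the radius of that sphere, the event horizon, is the Schwarzschild radius

$$ r_s = \frac{2GM}{c^2} \approx 3\,\mathrm{km}\times\frac{M}{M_\odot}, $$

so a black hole with the mass of the Sun would be only about six kilometres across.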

When Hawking was a student, there was only one known solution of Einstein's equations of general relativity (which describe how concentrations of matter and energy distort the geometry of space-time) that represented a black hole, discovered by the German physicist Schwarzschild. A group of Russian physicists argued that this solution was artificial, born of an arrangement of particles collapsing in perfect synchrony so that they all reached the center together, thereby forming a point of infinite density: the singularity.

Hawking and Roger Penrose, a mathematician at Oxford, showed that this was in fact a generic feature of Einstein's equations, and more: that the universe itself would have begun in what came to be called the "cosmological singularity", at which the notion of time ceases to have meaning. As Hawking says in the film, "it would be the beginning of time itself".

There is no consensus in modern theoretical physics about what actually happens to someone who approaches a singularity inside a black hole. The biggest obstacle to our understanding is that, at small distances from the singularity, quantum effects must be taken into account, and, as the well-schooled Jane Hawking remarks in "The Theory of Everything", quantum theory and general relativity are written in completely different languages. Not that one needs to get that close to the singularity to know that the effects would be drastic.

The criticism of "Interstellar" I have heard most often from amateur physicists (and not-so-amateur ones) is that, spoiler alert, Cooper would be torn to pieces upon entering Gargantua, a giant black hole. "Torn to pieces" may be the wrong expression: "spaghettified" is the technical term.

TIDES

What would kill you as you fell into a black hole is not the absolute strength of gravity. Like stones thrown from the top of towers by Italian heretics, different parts of your body fall with the same acceleration, even if that acceleration is itself enormous. This conclusion holds as long as the force of gravity is roughly constant, nearly the same at your feet as at your head. Although that condition is satisfied at the surface of the Earth, the force of gravity is obviously not constant: it decays with distance, and the effects of this variation, tiny even at the scale of the tower of Pisa, can be seen in much larger bodies.

The most familiar example for us Earthlings is the effect of the tides on our planet. The Moon pulls more strongly on the near side of the Earth, and the oceans swell and subside in step with that attraction. Although the Sun's overall gravitational pull on the Earth is larger, the Moon is much closer to us than the Sun, so the larger gradient of the force is the lunar one, and that is why we feel the tidal effects of the Moon more than those of the Sun.
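To make that Moon-versus-Sun comparison concrete (a standard estimate, added here for illustration): the tidal, or differential, acceleration across a body of size $d$ at distance $r$ from a mass $M$ scales as

$$ \Delta a \approx \frac{2GMd}{r^3}, $$

so what matters for tides is $M/r^3$, not $M/r^2$. Plugging in the masses of the Moon and the Sun and their distances from the Earth gives

$$ \frac{\Delta a_{\rm Moon}}{\Delta a_{\rm Sun}} = \frac{M_{\rm Moon}}{M_{\rm Sun}}\left(\frac{r_{\rm Sun}}{r_{\rm Moon}}\right)^{3} \approx \frac{7.3\times10^{22}}{2.0\times10^{30}}\times 389^{3} \approx 2, $$

which is why lunar tides dominate even though the Sun's overall pull on the Earth is far stronger.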

For the same reason, as soon as we entered a black hole we would be subject to an immense gravitational force but, while still far from the central singularity, we would not necessarily feel any tidal force. This absence of dramatic effects at that stage of the fall is aptly called "no drama" in the community, and up to that point Cooper's entry into Gargantua would be just that: no drama. But not afterwards. Approaching a singularity, even before quantum effects need to be included, the gravitational force can differ so much between feet and head that Cooper would be stretched out, hence the "spaghettification".

It was not possible, of course, to include this (macaronic?) explanation in "Interstellar". Even so, one of the most astonishing scenes in the film involves precisely the tides on the planet Miller, which orbits Gargantua. In the film, enormous tides strike the protagonists every hour, rather inconveniently. For the tidal effect to come out approximately right, the physicist Kip Thorne calculated the size of the black hole, its rotation and the planet's orbit. The film's staggering images are the fruit of calculations.

Even if it is not realistic to expect the same care in future productions, perhaps the fact that some of these black-hole simulations were new even to the scientific community will encourage a few of the producers who are also amateur physicists (a rather thin demographic) to follow the example.

But back to Cooper's fate. We have already seen that he would survive the entry into the black hole unharmed. No drama up to there. But what about the "spaghettification"? Many commentators, such as the popular Neil deGrasse Tyson, argued that we simply do not know what happens inside a black hole. Past that frontier, the script would acquire diplomatic immunity from the laws of physics, becoming fertile ground for bolder speculation, not to say a no-man's-land.

Well, with all due respect to Tyson, that is not exactly true. We believe that general relativity would keep working very well until quantum effects become important (for a black hole the size of Gargantua). On top of that, in Schwarzschild's solution the approach to the singularity is unavoidable. Just as we cannot stop time, we could not keep our distance from the center, being dragged inexorably closer and closer to the singularity, which would loom ever larger ahead of us, as inevitable as the future. In that case, Cooper would be turned into spaghetti before the trumpets of quantum mechanics could sound his (possible) salvation. Fortunately for the whole human race in the film, that is not what happens.

SPINNING TOPS

In 1963, 47 years after the black hole had been discovered in the trenches of the Great War, the New Zealand mathematical physicist Roy Kerr, in considerably more comfortable circumstances, generalized Schwarzschild's solution, finding a solution of Einstein's theory that corresponds to a rotating black hole, spinning like a top.

Later, Hawking and collaborators showed that any black hole settles into the Kerr form and, fittingly, Gargantua is one of these, spinning extremely fast. But when tops like these spin, they drag space-time itself along with them, and there is an unavoidable kind of centrifugal force (the force we feel in a car taking a tight turn) that grows as the center of the black hole gets closer. At a certain distance from the center, the tug of war between the attractive force and the centrifugal one balances out, and the singularity stops being inevitable.

From that moment on, we really do not know what happens, and Cooper is free to do whatever the screenwriters invent. Not that entering a fourth dimension, seeing time as just another direction of space and all the rest are without any grounding, but from there on we enter the realm of scientific speculation. At least we did so with a clear conscience.

Carl Sagan, in the excellent "Cosmos", guides us once more: "We will not be afraid to speculate, but we will be careful to distinguish speculation from fact. The cosmos is full beyond measure of elegant truths, of exquisite interrelationships, of the awesome machinery of nature." The universe is stranger (and more fascinating) than fiction. It is high time we explored a fiction that is scientific in more than name only.

HENRIQUE GOMES, 34, holds a PhD in physics from the University of Nottingham (UK) and is a researcher at the Perimeter Institute for Theoretical Physics (Canada).

How The Nature of Information Could Resolve One of The Great Paradoxes Of Cosmology (The Physics Arxiv Blog)

Feb 17, 2015

Stephen Hawking described it as the most spectacular failure of any physical theory in history. Can a new theory of information rescue cosmologists?

One of the biggest puzzles in science is the cosmological constant paradox. This arises when physicists attempt to calculate the energy density of the universe from first principles. Using quantum mechanics, the number they come up with is 10^94 g/cm^3.

And yet the observed energy density, calculated from the density of mass in the cosmos and the way the universe is expanding, is about 10^-27 g/cm^3. In other words, our best theory of the universe misses the mark by 120 orders of magnitude.

That’s left cosmologists somewhat red-faced. Indeed, Stephen Hawking has famously described this as the most spectacular failure of any physical theory in history. This huge discrepancy is all the more puzzling because quantum mechanics makes such accurate predictions in other circumstances. Just why it goes so badly wrong here is unknown.

Today, Chris Fields, an independent researcher formerly with New Mexico State University in Las Cruces, puts forward a simple explanation. His idea is that the discrepancy arises because large objects, such as planets and stars, behave classically rather than demonstrating quantum properties. And he’s provided some simple calculations to make his case.

One of the key properties of quantum objects is that they can exist in a superposition of states until they are observed. When that happens, these many possibilities “collapse” and become one specific outcome, a process known as quantum decoherence.

For example, a photon can be in a superposition of states that allow it to be in several places at the same time. However, as soon as the photon is observed the superposition decoheres and the photon appears in one place.

This process of decoherence must apply to everything that has a specific position, says Fields. Even to large objects such as stars, whose position is known with respect to the cosmic microwave background, the echo of the big bang which fills the universe.

In fact, Fields argues that it is the interaction between the cosmic microwave background and all large objects in the universe that causes them to decohere, giving them the specific positions that astronomers observe.

But there is an important consequence from having a specific position — there must be some information associated with this location in 3D space. If a location is unknown, then the amount of information must be small. But if it is known with precision, the information content is much higher.

And given that there are some 10^25 stars in the universe, that’s a lot of information. Fields calculates that encoding the location of each star to within 10 cubic kilometres requires some 10^93 bits.

That immediately leads to an entirely new way of determining the energy density of the cosmos. Back in the 1960s, the physicist Rolf Landauer suggested that every bit of information had an energy associated with it, an idea that has gained considerable traction since then.

So Fields uses Landauer's principle to calculate the energy associated with the locations of all the stars in the universe. This turns out to be about 10^-30 g/cm^3, very similar to the observed energy density of the universe.
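The post does not spell out the inputs of that estimate, but an order-of-magnitude version can be reproduced by evaluating Landauer's bound at the temperature of the cosmic microwave background and spreading the energy over the volume of the observable universe (both of those choices are my assumptions, not stated in the post):

```python
import math

# Back-of-the-envelope reproduction of the ~1e-30 g/cm^3 figure quoted above.
# Assumptions (mine, not stated in the post): Landauer's bound is evaluated
# at the CMB temperature, and the energy is spread over the observable universe.
k_B = 1.38e-23        # Boltzmann constant, J/K
T_cmb = 2.73          # CMB temperature, K
c = 3.0e8             # speed of light, m/s
bits = 1e93           # bits to locate ~1e25 stars to within ~10 km^3 (from the post)
radius_cm = 4.4e28    # radius of the observable universe, ~4.4e26 m, in cm

energy_per_bit = k_B * T_cmb * math.log(2)       # Landauer limit, joules per bit
total_energy = energy_per_bit * bits             # joules
mass_in_grams = total_energy / c**2 * 1000.0     # E = m c^2, kg -> g
volume_cm3 = 4.0 / 3.0 * math.pi * radius_cm**3  # cm^3

print(f"{mass_in_grams / volume_cm3:.1e} g/cm^3")
```

The print statement gives roughly 8e-31 g/cm^3, which matches the quoted 10^-30 g/cm^3 to within an order of magnitude.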

But here’s the thing. That calculation requires the position of each star to be encoded only to within 10 km^3. Fields also asks how much information is required to encode the position of stars to the much higher resolution associated with the Planck length. “Encoding 10^25 stellar positions at [the Planck length] would incur a free-energy cost ∼ 10^117 larger than that found here,” he says.

That difference is remarkably similar to the 120 orders of magnitude discrepancy between the observed energy density and that calculated using quantum mechanics. Indeed, Fields says that the discrepancy arises because the positions of the stars can be accounted for using quantum mechanics. “It seems reasonable to suggest that the discrepancy between these numbers may be due to the assumption that encoding classical information at [the Planck scale] can be considered physically meaningful.”

That’s a fascinating result that raises important questions about the nature of reality. First, there is the hint in Fields’ ideas that information provides the ghostly bedrock on which the laws of physics are based. That’s an idea that has gained traction among other physicists too.

Then there is the role of energy. One important question is where this energy might have come from in the first place. The process of decoherence seems to create it from nothing.

Cosmologists generally overlook violations of the principle of conservation of energy. After all, the big bang itself is the biggest offender. So don’t expect much hand wringing over this. But Fields’ approach also implies that a purely quantum universe would have an energy density of zero, since nothing would have localised position. That’s bizarre.

Beyond this is the even deeper question of how the universe came to be classical at all, given that cosmologists would have us believe that the big bang was a quantum process. Fields suggests that it is the interaction between the cosmic microwave background and the rest of the universe that causes the quantum nature of the universe to decohere and become classical.

Perhaps. What is all too clear is that there are fundamental and fascinating problems in cosmology — and the role that information plays in reality.

Ref: arxiv.org/abs/1502.03424 : Is Dark Energy An Artifact Of Decoherence?

No Big Bang? Quantum equation predicts universe has no beginning (Phys.org)

Feb 09, 2015 by Lisa Zyga

[Image: artist's concept of the metric expansion of space, with circular sections representing space at each time, the dramatic (not-to-scale) inflationary expansion on the left and the later acceleration of the expansion at the center, decorated with WMAP images and stars at the appropriate stage of development. Credit: NASA]


(Phys.org) —The universe may have existed forever, according to a new model that applies quantum correction terms to complement Einstein’s theory of general relativity. The model may also account for dark matter and dark energy, resolving multiple problems at once.

The widely accepted age of the universe, as estimated by general relativity, is 13.8 billion years. In the beginning, everything in existence is thought to have occupied a single infinitely dense point, or singularity. Only after this point began to expand in a "Big Bang" did the universe officially begin.

Although the Big Bang singularity arises directly and unavoidably from the mathematics of general relativity, some scientists see it as problematic because the math can explain only what happened immediately after—not at or before—the singularity.

“The Big Bang singularity is the most serious problem of general relativity because the laws of physics appear to break down there,” Ahmed Farag Ali at Benha University and the Zewail City of Science and Technology, both in Egypt, told Phys.org.

Ali and coauthor Saurya Das at the University of Lethbridge in Alberta, Canada, have shown in a paper published in Physics Letters B that the Big Bang singularity can be resolved by their new model, in which the universe has no beginning and no end.

Old ideas revisited

The physicists emphasize that their quantum correction terms are not applied ad hoc in an attempt to specifically eliminate the Big Bang singularity. Their work is based on ideas by the theoretical physicist David Bohm, who is also known for his contributions to the philosophy of physics. Starting in the 1950s, Bohm explored replacing classical geodesics (the shortest path between two points on a curved surface) with quantum trajectories.

In their paper, Ali and Das applied these Bohmian trajectories to an equation developed in the 1950s by physicist Amal Kumar Raychaudhuri at Presidency University in Kolkata, India. Raychaudhuri was also Das's teacher when he was an undergraduate student at that institution in the '90s.

Using the quantum-corrected Raychaudhuri equation, Ali and Das derived quantum-corrected Friedmann equations, which describe the expansion and evolution of the universe (including the Big Bang) within the context of general relativity. Although it's not a true theory of quantum gravity, the model does contain elements from both quantum theory and general relativity. Ali and Das also expect their results to hold even if and when a full theory of quantum gravity is formulated.
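For context, the standard (uncorrected) first Friedmann equation relates the expansion rate, written in terms of the scale factor $a(t)$, to the universe's energy content,

$$ \left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\rho - \frac{k c^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3}, $$

and it is to equations of this form that the Bohmian corrections add extra terms; this is the textbook equation, not Ali and Das's corrected version, which is given in their paper.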

No singularities nor dark stuff

In addition to not predicting a Big Bang singularity, the new model does not predict a “big crunch” singularity, either. In general relativity, one possible fate of the universe is that it starts to shrink until it collapses in on itself in a big crunch and becomes an infinitely dense point once again.

Ali and Das explain in their paper that their model avoids singularities because of a key difference between classical geodesics and Bohmian trajectories. Classical geodesics eventually cross each other, and the points at which they converge are singularities. In contrast, Bohmian trajectories never cross each other, so singularities do not appear in the equations.

In cosmological terms, the scientists explain that the quantum corrections can be thought of as a cosmological constant term (without the need for dark energy) and a radiation term. These terms keep the universe at a finite size, and therefore give it an infinite age. The terms also make predictions that agree closely with current observations of the cosmological constant and density of the universe.

New gravity particle

In physical terms, the model describes the universe as being filled with a quantum fluid. The scientists propose that this fluid might be composed of gravitons—hypothetical massless particles that mediate the force of gravity. If they exist, gravitons are thought to play a key role in a theory of quantum gravity.

In a related paper, Das and another collaborator, Rajat Bhaduri of McMaster University, Canada, have lent further credence to this model. They show that gravitons can form a Bose-Einstein condensate (named after Einstein and another Indian physicist, Satyendranath Bose) at temperatures that were present in the universe at all epochs.

Motivated by the model's potential to resolve the Big Bang singularity and account for dark matter and dark energy, the physicists plan to analyze their model more rigorously in the future. Their future work includes redoing their study while taking into account small inhomogeneous and anisotropic perturbations, but they do not expect small perturbations to significantly affect the results.

“It is satisfying to note that such straightforward corrections can potentially resolve so many issues at once,” Das said.

More information: Ahmed Farag Ali and Saurya Das. “Cosmology from quantum potential.” Physics Letters B. Volume 741, 4 February 2015, Pages 276–279. DOI: 10.1016/j.physletb.2014.12.057. Also at: arXiv:1404.3093[gr-qc].

Saurya Das and Rajat K. Bhaduri, “Dark matter and dark energy from Bose-Einstein condensate”, preprint: arXiv:1411.0753[gr-qc].

Chemists Confirm the Existence of New Type of Bond (Scientific American)

A “vibrational” chemical bond predicted in the 1980s is demonstrated experimentally

Jan 20, 2015 By Amy Nordrum


[Image credit: Allevinatis/Flickr]

Chemistry has many laws, one of which is that the rate of a reaction speeds up as temperature rises. So, in 1989, when chemists experimenting at a nuclear accelerator in Vancouver observed that a reaction between bromine and muonium (an exotic atom that behaves chemically like a very light isotope of hydrogen) slowed down when they increased the temperature, they were flummoxed.

Donald Fleming, a University of British Columbia chemist involved with the experiment, thought that perhaps as bromine and muonium co-mingled, they formed an intermediate structure held together by a “vibrational” bond—a bond that other chemists had posed as a theoretical possibility earlier that decade. In this scenario, the lightweight muonium atom would move rapidly between two heavy bromine atoms, “like a Ping Pong ball bouncing between two bowling balls,” Fleming says. The oscillating atom would briefly hold the two bromine atoms together and reduce the overall energy, and therefore speed, of the reaction. (With a Fleming working on a bond, you could say the atomic interaction is shaken, not stirred.)

At the time of the experiment, the necessary equipment was not available to examine the milliseconds-long reaction closely enough to determine whether such vibrational bonding existed. Over the past 25 years, however, chemists’ ability to track subtle changes in energy levels within reactions has greatly improved, so Fleming and his colleagues ran their reaction again three years ago in the nuclear accelerator at Rutherford Appleton Laboratory in England. Based on calculations from both experiments and the work of collaborating theoretical chemists at Free University of Berlin and Saitama University in Japan, they concluded that muonium and bromine were indeed forming a new type of temporary bond. Its vibrational nature lowered the total energy of the intermediate bromine-muonium structure—thereby explaining why the reaction slowed even though the temperature was rising.

The team reported its results last December in Angewandte Chemie International Edition, a publication of the German Chemical Society. The work confirms that vibrational bonds—fleeting though they may be—should be added to the list of known chemical bonds. And although the bromine-muonium reaction was an “ideal” system to verify vibrational bonding, Fleming predicts the phenomenon also occurs in other reactions between heavy and light atoms.

This article was originally published with the title “New Vibrations.”

Quantum computers could revolutionize information theory (Fapesp)

January 30, 2015

By Diego Freire

Agência FAPESP – The prospect of quantum computers, with processing power far beyond today's machines, has been driving progress in one of the most versatile areas of science, with applications across the most diverse fields of knowledge: information theory. To discuss this and other prospects, the Institute of Mathematics, Statistics and Scientific Computing (Imecc) of the University of Campinas (Unicamp) held the SPCoding School from January 19 to 30.

The event took place under FAPESP's São Paulo School of Advanced Science (ESPCA) program, which funds short courses on advanced topics in science and technology in the state of São Paulo.

The information processed by the computers in wide use today is based on the bit, the smallest unit of data that can be stored or transmitted. Quantum computers, by contrast, work with qubits, which follow the rules of quantum mechanics, the branch of physics that deals with dimensions at or below the atomic scale. Because of this, such machines can carry out a vastly larger number of calculations at the same time.
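As a rough illustration (my example, not from the article): a classical register of n bits holds one definite value at a time, while the state of n qubits is described by 2^n complex amplitudes at once, which is one way to see why simulating quantum machines on classical hardware gets expensive so quickly.

```python
import numpy as np

def random_qubit_register(n, seed=0):
    """Return a random normalized state vector for n qubits.

    A classical n-bit register stores a single integer in [0, 2**n);
    an n-qubit state needs 2**n complex amplitudes whose squared
    magnitudes sum to 1.
    """
    rng = np.random.default_rng(seed)
    amps = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
    return amps / np.linalg.norm(amps)

state = random_qubit_register(10)                 # 2**10 = 1024 amplitudes
print(state.shape)                                # (1024,)
print(np.isclose(np.sum(np.abs(state)**2), 1.0))  # True: probabilities sum to 1
```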

"This quantum understanding of information adds a whole new layer of complexity to coding. But at the same time that complex analyses, which would take decades, centuries or even thousands of years on ordinary computers, could be carried out in minutes by quantum computers, this technology would also threaten the secrecy of information that has not been properly protected against this kind of novelty," Sueli Irene Rodrigues Costa, a professor at IMECC, told Agência FAPESP.

The greatest threat quantum computers pose to current cryptography lies in their ability to break the codes used to protect important information, such as credit card data. To avoid this kind of risk, cryptographic systems must also be designed with the capabilities of quantum computing in mind.

"Information theory and coding need to stay one step ahead of the commercial use of quantum computing," said Rodrigues Costa, who coordinates the Thematic Project "Security and reliability of information: theory and practice", supported by FAPESP.

"This is post-quantum cryptography. As was shown at the end of the 1990s, today's cryptographic procedures will not survive quantum computers because they are not secure enough. And this urgency to develop solutions ready for the power of quantum computing is also pushing information theory to advance further and further in several directions," she said.

Some of these solutions were addressed throughout the SPCoding School program, many of them aimed at more efficient schemes for classical computing, such as the use of error-correcting codes and of lattices for cryptography. For Rodrigues Costa, the rise of information theory in parallel with the development of quantum computing will bring about revolutions in several fields of knowledge.

"Just as information theory has multiple applications today, quantum coding would also take several areas of science to new levels by making possible even more precise computational simulations of the physical world, handling an exponentially larger number of variables than classical computers," said Rodrigues Costa.

Information theory is concerned with the quantification of information and spans fields such as mathematics, electrical engineering and computer science. Its pioneer was the American Claude Shannon (1916-2001), who was the first to treat communication as a mathematical problem.

Revolutions under way

While it prepares for quantum computers, information theory is already driving major changes in how information is encoded and transmitted. Amin Shokrollahi, of the École Polytechnique Fédérale de Lausanne, in Switzerland, presented at the SPCoding School new coding techniques for tackling problems such as noise in information and high energy consumption in data processing, including in chip-to-chip communication inside devices.

Shokrollahi is known in the field for having invented the Raptor codes and co-invented the Tornado codes, which are used in mobile transmission standards, with implementations in wireless systems, satellites and IPTV, the method of delivering television signals over the internet protocol (IP).

"The growth in the volume of digital data and the need for ever faster communication increase both the susceptibility to various kinds of noise and the energy consumption. New solutions are needed in this scenario," he said.

Shokrollahi also presented innovations developed at the Swiss company Kandou Bus, where he is director of research. "We use special algorithms to encode the signals, which are all transferred simultaneously until a decoder recovers the original signals. All of this is done while keeping neighboring wires from interfering with one another, which produces a significantly lower noise level. The systems also reduce chip size, increase transmission speed and cut energy consumption," he explained.

According to Rodrigues Costa, similar solutions are also being developed for several technologies in wide use across society.

"Mobile phones, for example, have gained a great deal of processing power and versatility, but one of users' most frequent complaints is that the battery does not last. One strategy is to find ways of encoding more efficiently in order to save energy," she said.

Biological applications

It is not only problems of a technological nature that can be addressed or solved by means of information theory. Vinay Vaishampayan, a professor at the City University of New York, in the United States, chaired the SPCoding School panel "Information Theory, Coding Theory and the Real World", which dealt with several applications of codes in society, among them biological ones.

"There is not just one information theory, and its approaches, computational and probabilistic, can be applied to practically every field of knowledge. In the panel we covered the many research possibilities open to anyone interested in studying these interfaces between codes and the real world," he told Agência FAPESP.

Vaishampayan singled out biology as an area of great potential in this scenario. "Neuroscience poses important questions that can be answered with the help of information theory. We still do not know in depth how neurons communicate with one another or how the brain works as a whole, and neural networks are a very rich field of study from the mathematical point of view as well, as is molecular biology," he said.

That is because, according to Max Costa, a professor at Unicamp's School of Electrical and Computer Engineering and one of the speakers, living beings are also made of information.

"We are encoded in the DNA of our cells. Uncovering the secret of that code, the mechanism behind the mappings that are made and recorded in that context, is a problem of enormous interest for a deeper understanding of the process of life," he said.

For Marcelo Firer, a professor at Imecc and coordinator of the SPCoding School, the event opened up new research possibilities for students and researchers from many fields.

"Participants shared opportunities for engagement around these and many other applications of information and coding theory. The offerings ranged from introductory courses, aimed at students with a solid mathematical background but not necessarily familiar with coding, to more advanced courses, as well as lectures and discussion panels," said Firer, a member of the steering committee for FAPESP's Computer Science and Engineering area.

About 120 students from 70 universities and 25 countries took part in the event. The foreign speakers included researchers from the California Institute of Technology (Caltech), the University of Maryland and Princeton University, in the United States; the Chinese University of Hong Kong, in China; Nanyang Technological University, in Singapore; the Technische Universiteit Eindhoven, in the Netherlands; the Universidade do Porto, in Portugal; and Tel Aviv University, in Israel.

More information at www.ime.unicamp.br/spcodingschool.

The Question That Could Unite Quantum Theory With General Relativity: Is Spacetime Countable? (The Physics Arxiv Blog)

Current thinking about quantum gravity assumes that spacetime exists in countable lumps, like grains of sand. That can’t be right, can it?

The Physics arXiv Blog

One of the big problems with quantum gravity is that it generates infinities that have no physical meaning. These come about because quantum mechanics implies that accurate measurements of the universe on the tiniest scales require high energy. But when the scale becomes very small, the energy density associated with a measurement is so great that it should lead to the formation of a black hole, which would paradoxically ruin the measurement that created it.

These kinds of infinities are something of an annoyance. Their paradoxical nature makes them hard to deal with mathematically and difficult to reconcile with our knowledge of the universe, which, as far as we can tell, avoids this kind of paradoxical behaviour.

So physicists have invented a way to deal with infinities called renormalisation. In essence, theorists assume that space-time is not infinitely divisible. Instead, there is a minimum scale beyond which nothing can be smaller, the so-called Planck scale. This limit ensures that energy densities never become high enough to create black holes.
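For reference, the Planck length is built from the fundamental constants as

$$ \ell_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6\times10^{-35}\ \mathrm{m}, $$

a standard result quoted here for orientation rather than anything derived in the post; it is some twenty orders of magnitude smaller than a proton.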

This is also equivalent to saying that space-time is discrete, or as a mathematician might put it, countable. In other words, it is possible to allocate a number to each discrete volume of space-time, making it countable, like grains of sand on a beach or atoms in the universe. That makes space-time entirely unlike uncountable things, such as straight lines, which are infinitely divisible, or the degrees of freedom of the fields that constitute the basic building blocks of physics, which have been mathematically proven to be uncountable.

This discreteness is certainly useful but it also raises an important question: is it right? Can the universe really be fundamentally discrete, like a computer model? Today, Sean Gryb from Radboud University in the Netherlands argues that an alternative approach is emerging in the form of a new formulation of gravity called shape dynamics. This new approach implies that spacetime is smooth and uncountable, an idea that could have far-reaching consequences for the way we understand the universe.

At the heart of this new theory is the concept of scale invariance. This is the idea that an object or law has the same properties regardless of the scale at which it is viewed.

The current laws of physics generally do not have this property. Quantum mechanics, for example, operates only at the smallest scale, while gravity operates at the largest. So it is easy to see why scale invariance is a property that theorists drool over — a scale invariant description of the universe must encompass both quantum theory and gravity.

Shape dynamics does just this, says Gryb. It does this by ignoring many ordinary features of physical objects, such as their position within the universe. Instead, it focuses on objects’ relationships to each other, such as the angles between them and the shape that this makes (hence the term shape dynamics).

This approach immediately leads to a scale invariant picture of reality. Angles are scale invariant because they are the same regardless of the scale at which they are viewed. So the new thinking is to describe the universe as a series of instantaneous snapshots of the relationships between objects.
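A trivial numerical illustration of that point (my example, not Gryb's): the angle between two vectors does not change when both are rescaled by the same factor.

```python
import numpy as np

def angle(u, v):
    """Angle between two vectors, in radians."""
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

u = np.array([1.0, 2.0, 0.5])
v = np.array([-0.3, 1.0, 2.0])

for scale in (1.0, 10.0, 1e6):
    print(scale, angle(scale * u, scale * v))   # the same angle at every scale
```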

The result is a scale invariance that is purely spatial. But this, of course, is very different to the more significant notion of spacetime scale invariance.

So a key part of Gryb’s work is in using the mathematical ideas of symmetry to show that spatial scale invariance can be transformed into spacetime scale invariance.

Specifically, Gryb shows exactly how this works in a closed, expanding universe in which the laws of physics are the same for all inertial observers and for whom the speed of light is finite and constant.

If those last two conditions sound familiar, it’s because they are the postulates Einstein used to derive special relativity. And Gryb’s formulation is equivalent to this. “Observers in Einstein’s special theory of relativity can be reinterpreted as observers in a scale-invariant space,” he says.

That raises some interesting possibilities for a broader theory of gravity, just as special relativity led to a broader theory of gravity in the form of general relativity.

Gryb describes how it is possible to create models of curved space-time by gluing together local patches of flat space-times. “Could it be possible to do something similar in Shape Dynamics; i.e., glue together local patches of conformally flat spaces that could then be related to General Relativity?” he asks.

Nobody has succeeded in doing this for a model that includes the three dimensions of space and one of time, but these are early days for shape dynamics, and Gryb and others are working on the problem.

He is clearly excited by the future possibilities, saying that it suggests a new way to think about quantum gravity in scale invariant terms. "This would provide a new mechanism for being able to deal with the uncountably infinite number of degrees of freedom in the gravitational field without introducing discreteness at the Planck scale," he says.

That’s an exciting new approach. And it is one expounded by a fresh new voice who is able to explain his ideas in a highly readable fashion to a broad audience. There is no way of knowing how this line of thinking will evolve but we’ll look forward to more instalments from Gryb.

Ref: arxiv.org/abs/1501.02671 : Is Spacetime Countable?

The Paradoxes That Threaten To Tear Modern Cosmology Apart (The Physics Arxiv Blog)

Some simple observations about the universe seem to contradict basic physics. Solving these paradoxes could change the way we think about the cosmos

The Physics arXiv Blog on Jan 20

Revolutions in science often come from the study of seemingly unresolvable paradoxes. An intense focus on these paradoxes, and their eventual resolution, is a process that has led to many important breakthroughs.

So an interesting exercise is to list the paradoxes associated with current ideas in science. It’s just possible that these paradoxes will lead to the next generation of ideas about the universe.

Today, Yurij Baryshev at St Petersburg State University in Russia does just this with modern cosmology. The result is a list of paradoxes associated with well-established ideas and observations about the structure and origin of the universe.

Perhaps the most dramatic, and potentially most important, of these paradoxes comes from the idea that the universe is expanding, one of the great successes of modern cosmology. It is based on a number of different observations.

The first is that other galaxies are all moving away from us. The evidence for this is that light from these galaxies is red-shifted. And the greater the distance, the bigger this red-shift.

Astrophysicists interpret this as evidence that more distant galaxies are travelling away from us more quickly. Indeed, the most recent evidence is that the expansion is accelerating.

What’s curious about this expansion is that space, and the vacuum associated with it, must somehow be created in this process. And yet how this can occur is not at all clear. “The creation of space is a new cosmological phenomenon, which has not been tested yet in physical laboratory,” says Baryshev.

What’s more, there is an energy associated with any given volume of the universe. If that volume increases, the inescapable conclusion is that this energy must increase as well. And yet physicists generally think that energy creation is forbidden.

Baryshev quotes the British cosmologist, Ted Harrison, on this topic: “The conclusion, whether we like it or not, is obvious: energy in the universe is not conserved,” says Harrison.

This is a problem that cosmologists are well aware of. And yet ask them about it and they shuffle their feet and stare at the ground. Clearly, any theorist who can solve this paradox will have a bright future in cosmology.

The nature of the energy associated with the vacuum is another puzzle. This is variously called the zero point energy or the energy of the Planck vacuum and quantum physicists have spent some time attempting to calculate it.

These calculations suggest that the energy density of the vacuum is huge, of the order of 10^94 g/cm^3. This energy, being equivalent to mass, ought to have a gravitational effect on the universe.

Cosmologists have looked for this gravitational effect and calculated its value from their observations (they call it the cosmological constant). These calculations suggest that the energy density of the vacuum is about 10^-29 g/cm^3.

Those numbers are difficult to reconcile. Indeed, they differ by 120 orders of magnitude. How and why this discrepancy arises is not known and is the cause of much bemused embarrassment among cosmologists.

Then there is the cosmological red-shift itself, which is another mystery. Physicists often talk about the red-shift as a kind of Doppler effect, like the change in frequency of a police siren as it passes by.

The Doppler effect arises from the relative movement of different objects. But the cosmological red-shift is different because galaxies are stationary in space. Instead, it is space itself that cosmologists think is expanding.

The mathematics that describes these effects is correspondingly different as well, not least because any relative velocity must always be less than the speed of light in conventional physics. And yet the velocity of expanding space can take any value.

Interestingly, the nature of the cosmological red-shift leads to the possibility of observational tests in the next few years. One interesting idea is that the red-shift of a distant object must slowly change over time as it moves further away. For a distant quasar, this change may be as much as one centimetre per second per year, something that may be observable with the next generation of extremely large telescopes.

One final paradox is also worth mentioning. This comes from one of the fundamental assumptions behind Einstein’s theory of general relativity—that if you look at the universe on a large enough scale, it must be the same in all directions.

It seems clear that this assumption of homogeneity does not hold on the local scale. Our galaxy is part of a cluster known as the Local Group which is itself part of a bigger supercluster.

This suggests a kind of fractal structure to the universe. In other words, the universe is made up of clusters regardless of the scale at which you look at it.

The problem with this is that it contradicts one of the basic ideas of modern cosmology—the Hubble law. This is the observation that the cosmological red-shift of an object is linearly proportional to its distance from Earth.

It is so profoundly embedded in modern cosmology that most currently accepted theories of universal expansion depend on its linear nature. That’s all okay if the universe is homogeneous (and therefore linear) on the largest scales.

But the evidence is paradoxical. Astrophysicists have measured the linear nature of the Hubble law at distances of a few hundred megaparsecs. And yet the clusters visible on those scales indicate that the universe is not homogeneous on those scales.

And so the argument that the Hubble law’s linearity is a result of the homogeneity of the universe (or vice versa) does not stand up to scrutiny. Once again this is an embarrassing failure for modern cosmology.

It is sometimes tempting to think that astrophysicists have cosmology more or less sewn up, that the Big Bang model, and all that it implies, accounts for everything we see in the cosmos.

Not even close. Cosmologists may have successfully papered over the cracks in their theories in a way that keeps scientists happy for the time being. This sense of success is surely an illusion.

And that is how it should be. If scientists really think they are coming close to a final and complete description of reality, then a simple list of paradoxes can do a remarkable job of putting their feet firmly back on the ground.

Ref: arxiv.org/abs/1501.01919 : Paradoxes Of Cosmological Physics In The Beginning Of The 21-St Century

How Mathematicians Used A Pump-Action Shotgun to Estimate Pi (The Physics arXiv Blog)

The Physics arXiv Blog

If you’ve ever wondered how to estimate pi using a Mossberg 500 pump-action shotgun, a sheet of aluminium foil and some clever mathematics, look no further

Imagine the following scenario. The end of civilisation has occurred, zombies have taken over the Earth and all access to modern technology has ended. The few survivors suddenly need to know the value of π and, being a mathematician, they turn to you. What do you do?

If ever you find yourself in this situation, you’ll be glad of the work of Vincent Dumoulin and Félix Thouin at the Université de Montréal in Canada. These guys have worked out how to calculate an approximate value of π using the distribution of pellets from a Mossberg 500 pump-action shotgun, which they assume would be widely available in the event of a zombie apocalypse.

The principle is straightforward. Imagine a square with sides of length 1 and which contains an arc drawn between two opposite corners to form a quarter circle. The area of the square is 1 while the area of the quarter circle is π/4.

Next, sprinkle sand or rice over the square so that it is covered with a random distribution of grains. Then count the number of grains inside the quarter circle and the total number that cover the entire square.

The ratio of these two numbers is an estimate of the ratio between the area of the quarter circle and the square, in other words π/4.

So multiplying this ratio by 4 gives you π, or at least an estimate of it. And that’s it.
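As a plain, sand-grain version of the procedure just described (a minimal sketch, not the authors' ballistic setup), the whole thing fits in a few lines:

```python
import random

def estimate_pi(n_samples=30_000, seed=42):
    """Estimate pi by scattering random points over the unit square and
    counting the fraction that land inside the quarter circle x^2 + y^2 <= 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

print(estimate_pi())   # typically within a few tenths of a per cent of 3.14159
```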

This technique is known as a Monte Carlo approximation (after the casino where the uncle of the physicist who developed it used to gamble). And it is hugely useful in all kinds of simulations.

Of course, the accuracy of the technique depends on the distribution of the grains on the square. If they are truly random, then a mere 30,000 grains can give you an estimate of π which is within 0.07 per cent of the actual value.

Dumoulin and Thouin’s idea is to use the distribution of shotgun pellets rather than sand or rice (which would presumably be in short supply in the post-apocalyptic world). So these guys set up an experiment consisting of a 28-inch barrel Mossberg 500 pump-action shotgun aimed at a sheet of aluminium foil some 20 metres away.

They loaded the gun with cartridges composed of 3 dram equivalent of powder and 32 grams of #8 lead pellets. When fired from the gun, these pellets have an average muzzle velocity of around 366 metres per second.

Dumoulin and Thouin then fired 200 shots at the aluminium foil, peppering it with 30,857 holes. Finally, they used the position of these holes in the same way as the grains of sand or rice in the earlier example, to calculate the value of π.

They immediately have a problem, however. The distribution of pellets is influenced by all kinds of factors, such as the height of the gun, the distance to the target, wind direction and so on. So this distribution is not random.

To get around this, they are able to fall back on a technique known as importance sampling. This is a trick that allows mathematicians to estimate the properties of one type of distribution while using samples generated by a different distribution.

Of their 30,000 pellet holes, they chose 10,000 at random to perform this estimation trick. They then use the remaining 20,000 pellet holes to get an estimate of π, safe in the knowledge that importance sampling allows the calculation to proceed as if the distribution of pellets had been random.
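The authors' actual pipeline (fitting the pellet distribution on the 10,000 held-out holes and reweighting the remaining 20,000) is more involved than there is room for here, but the basic importance-sampling trick can be sketched with a proposal density that is assumed to be known exactly, namely points drawn deliberately non-uniformly over the unit square:

```python
import random

def proposal_sample(rng):
    """Draw a point from a deliberately non-uniform density on the unit square:
    a 50/50 mixture of the uniform density and a Beta(2,2) x Beta(2,2) density."""
    if rng.random() < 0.5:
        return rng.random(), rng.random()
    return rng.betavariate(2, 2), rng.betavariate(2, 2)

def proposal_pdf(x, y):
    """Density of the mixture above; Beta(2,2) has pdf 6 t (1 - t) on [0, 1]."""
    return 0.5 + 0.5 * (6.0 * x * (1.0 - x)) * (6.0 * y * (1.0 - y))

def estimate_pi_importance(n_samples=30_000, seed=7):
    """Importance sampling: the points are NOT uniform, but weighting each
    hit inside the quarter circle by 1 / proposal_pdf corrects for that."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x, y = proposal_sample(rng)
        if x * x + y * y <= 1.0:
            total += 1.0 / proposal_pdf(x, y)
    return 4.0 * total / n_samples

print(estimate_pi_importance())   # close to 3.14159 despite the biased sampling
```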

The result? Their value of π is 3.131, which is just 0.33 per cent off the true value. “We feel confident that ballistic Monte Carlo methods constitute reliable ways of computing mathematical constants should a tremendous civilization collapse occur,” they conclude.

Quite! Other methods are also available.

Ref: arxiv.org/abs/1404.1499 : A Ballistic Monte Carlo Approximation of π

Quantum Experiment Shows How Time ‘Emerges’ from Entanglement (The Physics arXiv Blog)

Time is an emergent phenomenon that is a side effect of quantum entanglement, say physicists. And they have the first experimental results to prove it

The Physics arXiv Blog

When the new ideas of quantum mechanics spread through science like wildfire in the first half of the 20th century, one of the first things physicists did was to apply them to gravity and general relativity. The results were not pretty.

It immediately became clear that these two foundations of modern physics were entirely incompatible. When physicists attempted to meld the approaches, the resulting equations were bedeviled with infinities making it impossible to make sense of the results.

Then in the mid-1960s, there was a breakthrough. The physicists John Wheeler and Bryce DeWitt successfully combined the previously incompatible ideas in a key result that has since become known as the Wheeler-DeWitt equation. This is important because it avoids the troublesome infinities, a huge advance.

But it didn’t take physicists long to realise that while the Wheeler-DeWitt equation solved one significant problem, it introduced another. The new problem was that time played no role in this equation. In effect, it says that nothing ever happens in the universe, a prediction that is clearly at odds with the observational evidence.

This conundrum, which physicists call 'the problem of time', has proved to be a thorn in the flesh of modern physicists, who have tried to ignore it, with little success.

Then in 1983, the theorists Don Page and William Wootters came up with a novel solution based on the quantum phenomenon of entanglement. This is the exotic property in which two quantum particles share the same existence, even though they are physically separated.

Entanglement is a deep and powerful link and Page and Wootters showed how it can be used to measure time. Their idea was that the way a pair of entangled particles evolve is a kind of clock that can be used to measure change.

But the results depend on how the observation is made. One way to do this is to compare the change in the entangled particles with an external clock that is entirely independent of the universe. This is equivalent to a god-like observer outside the universe measuring the evolution of the particles using an external clock.

In this case, Page and Wootters showed that the particles would appear entirely unchanging—that time would not exist in this scenario.

But there is another way to do it that gives a different result. This is for an observer inside the universe to compare the evolution of the particles with the rest of the universe. In this case, the internal observer would see a change, and this difference in the evolution of the entangled particles compared with everything else is an important measure of time.

This is an elegant and powerful idea. It suggests that time is an emergent phenomenon that comes about because of the nature of entanglement. And it exists only for observers inside the universe. Any god-like observer outside sees a static, unchanging universe, just as the Wheeler-DeWitt equations predict.

Of course, without experimental verification, Page and Wootters' ideas are little more than a philosophical curiosity. And since it is never possible to have an observer outside the universe, there seemed little chance of ever testing the idea.

Until now. Today, Ekaterina Moreva at the Istituto Nazionale di Ricerca Metrologica (INRIM) in Turin, Italy, and a few pals have performed the first experimental test of Page and Wootters’ ideas. And they confirm that time is indeed an emergent phenomenon for ‘internal’ observers but absent for external ones.

The experiment involves the creation of a toy universe consisting of a pair of entangled photons and an observer that can measure their state in one of two ways. In the first, the observer measures the evolution of the system by becoming entangled with it. In the second, a god-like observer measures the evolution against an external clock which is entirely independent of the toy universe.

The experimental details are straightforward. The entangled photons each have a polarisation which can be changed by passing the photon through a birefringent plate. In the first set up, the observer measures the polarisation of one photon, thereby becoming entangled with it. He or she then compares this with the polarisation of the second photon. The difference is a measure of time.

In the second set up, the photons again both pass through the birefringent plates which change their polarisations. However, in this case, the observer only measures the global properties of both photons by comparing them against an independent clock.

In this case, the observer cannot detect any difference between the photons without becoming entangled with one or the other. And if there is no difference, the system appears static. In other words, time does not emerge.
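A toy numerical version of those two viewpoints (my own sketch, using a singlet polarisation state and a polarisation-rotating generator as stand-ins for the experimental details) makes the contrast concrete: the global two-photon state does not evolve at all, yet conditioning on the 'clock' photon's polarisation yields a 'system' photon state that depends on the clock reading.

```python
import numpy as np

# Polarisation basis: |H> = [1, 0], |V> = [0, 1].
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

# Entangled toy "universe": the singlet state (|HV> - |VH>) / sqrt(2).
psi = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)

# The birefringent plates rotate each photon's polarisation; generator ~ sigma_y.
sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
I2 = np.eye(2)
H_total = np.kron(sigma_y, I2) + np.kron(I2, sigma_y)

# External, god-like view: the total generator annihilates the state, so
# exp(-i * H_total * t) leaves |psi> unchanged for every t -- nothing happens.
print(np.allclose(H_total @ psi, 0))   # True: the global state is frozen

# Internal view: use photon 1 as a clock. Project it onto the polarisation
# "clock hand" |theta> = cos(theta)|H> + sin(theta)|V> and look at photon 2.
def system_state_given_clock(theta):
    clock = np.cos(theta) * H + np.sin(theta) * V
    conditional = clock @ psi.reshape(2, 2)   # <theta|_1 psi, photon 2 remains
    return conditional / np.linalg.norm(conditional)

for theta in (0.0, np.pi / 6, np.pi / 3):
    print(np.round(system_state_given_clock(theta), 3))
# Photon 2's conditional state rotates as the clock reading theta advances:
# time "emerges" for the internal observer even though the global state is static.
```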

“Although extremely simple, our model captures the two, seemingly contradictory, properties of the Page-Wootters mechanism,” say Moreva and co.

That’s an impressive experiment. Emergence is a popular idea in science. In particular, physicists have recently become excited about the idea that gravity is an emergent phenomenon. So it’s a relatively small step to think that time may emerge in a similar way.

What emergent gravity has lacked, of course, is an experimental demonstration that shows how it works in practice. That’s why Moreva and co’s work is significant. It places an abstract and exotic idea on firm experimental footing for the first time.

Perhaps most significant of all is the implication that quantum mechanics and general relativity are not so incompatible after all. When viewed through the lens of entanglement, the famous ‘problem of time’ just melts away.

The next step will be to extend the idea further, particularly to the macroscopic scale. It’s one thing to show how time emerges for photons, it’s quite another to show how it emerges for larger things such as humans and train timetables.

And therein lies another challenge.

Ref: arxiv.org/abs/1310.4691 :Time From Quantum Entanglement: An Experimental Illustration

Why Physicists Are Saying Consciousness Is A State Of Matter, Like a Solid, A Liquid Or A Gas (The Physics arXiv Blog)


A new way of thinking about consciousness is sweeping through science like wildfire. Now physicists are using it to formulate the problem of consciousness in concrete mathematical terms for the first time

The Physics arXiv Blog

There’s a quiet revolution underway in theoretical physics. For as long as the discipline has existed, physicists have been reluctant to discuss consciousness, considering it a topic for quacks and charlatans. Indeed, the mere mention of the ‘c’ word could ruin careers.

That’s finally beginning to change thanks to a fundamentally new way of thinking about consciousness that is spreading like wildfire through the theoretical physics community. And while the problem of consciousness is far from being solved, it is finally being formulated mathematically as a set of problems that researchers can understand, explore and discuss.

Today, Max Tegmark, a theoretical physicist at the Massachusetts Institute of Technology in Cambridge, sets out the fundamental problems that this new way of thinking raises. He shows how these problems can be formulated in terms of quantum mechanics and information theory. And he explains how thinking about consciousness in this way leads to precise questions about the nature of reality that the scientific process of experiment might help to tease apart.

Tegmark’s approach is to think of consciousness as a state of matter, like a solid, a liquid or a gas. “I conjecture that consciousness can be understood as yet another state of matter. Just as there are many types of liquids, there are many types of consciousness,” he says.

He goes on to show how the particular properties of consciousness might arise from the physical laws that govern our universe. And he explains how these properties allow physicists to reason about the conditions under which consciousness arises and how we might exploit it to better understand why the world around us appears as it does.

Interestingly, the new approach to consciousness has come from outside the physics community, principally from neuroscientists such as Giulio Tononi at the University of Wisconsin in Madison.

In 2008, Tononi proposed that a system demonstrating consciousness must have two specific traits. First, the system must be able to store and process large amounts of information. In other words, consciousness is essentially a phenomenon of information.

And second, this information must be integrated in a unified whole so that it is impossible to divide into independent parts. That reflects the experience that each instance of consciousness is a unified whole that cannot be decomposed into separate components.

Both of these traits can be specified mathematically, allowing physicists like Tegmark to reason about them for the first time. He begins by outlining the basic properties that a conscious system must have.

Given that it is a phenomenon of information, a conscious system must be able to store information in a memory and retrieve it efficiently.

It must also be able to process this data, like a computer, but one that is much more flexible and powerful than the silicon-based devices we are familiar with.

Tegmark borrows the term computronium to describe matter that can do this and cites other work showing that today’s computers underperform the theoretical limits of computing by some 38 orders of magnitude.

Clearly, there is ample room for improvement, leaving plenty of scope for the performance of conscious systems.

Next, Tegmark discusses perceptronium, defined as the most general substance that feels subjectively self-aware. This substance should not only be able to store and process information but do so in a way that forms a unified, indivisible whole. That also requires a certain amount of independence, in which the information dynamics is determined from within rather than externally.

Finally, Tegmark uses this new way of thinking about consciousness as a lens through which to study one of the fundamental problems of quantum mechanics known as the quantum factorisation problem.

This arises because quantum mechanics describes the entire universe using three mathematical entities: an object known as a Hamiltonian that describes the total energy of the system; a density matrix that describes the relationship between all the quantum states in the system; and Schrödinger's equation, which describes how these things change with time.
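
Schematically, in standard textbook notation (generic quantum-mechanical bookkeeping rather than a formula lifted from Tegmark's paper), the three objects fit together as

% Hamiltonian H (total energy), state vector |psi> or density matrix rho, and their dynamics
i\hbar\,\frac{d}{dt}\,|\psi\rangle = \hat{H}\,|\psi\rangle ,
\qquad
i\hbar\,\frac{d\rho}{dt} = \bigl[\hat{H},\,\rho\bigr] .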

The problem is that when the entire universe is described in these terms, there are an infinite number of mathematical solutions that include all possible quantum mechanical outcomes and many other even more exotic possibilities.

So the problem is why we perceive the universe as the semi-classical, three-dimensional world that is so familiar. When we look at a glass of iced water, we perceive the liquid and the solid ice cubes as independent things even though they are intimately linked as part of the same system. How does this happen? Out of all possible outcomes, why do we perceive this solution?

Tegmark does not have an answer. But what’s fascinating about his approach is that it is formulated using the language of quantum mechanics in a way that allows detailed scientific reasoning. And as a result it throws up all kinds of new problems that physicists will want to dissect in more detail.

Take, for example, the idea that the information in a conscious system must be unified. That means the system must contain error-correcting codes that allow any subset of up to half the information to be reconstructed from the rest.

Tegmark points out that any information stored in a special network known as a Hopfield neural net automatically has this error-correcting facility. However, he calculates that a Hopfield net about the size of the human brain, with 10^11 neurons, can store only 37 bits of integrated information.
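
The error-correcting behaviour Tegmark appeals to is easy to see in a toy Hopfield network (a minimal sketch with illustrative sizes and the standard Hebbian construction; it is not Tegmark's 37-bit calculation):

import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5                            # neurons and stored patterns (well below the ~0.14*N capacity)
patterns = rng.choice([-1, 1], size=(P, N))

W = (patterns.T @ patterns) / N          # Hebbian weights
np.fill_diagonal(W, 0)                   # no self-connections

def recall(x, steps=10):
    # Synchronous updates until a fixed point (or the step limit)
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Corrupt 10% of one stored pattern and let the network 'error-correct' it
noisy = patterns[0].copy()
noisy[rng.choice(N, size=N // 10, replace=False)] *= -1
print(recall(noisy) @ patterns[0] / N)   # ~1.0: the original pattern is recovered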

“This leaves us with an integration paradox: why does the information content of our conscious experience appear to be vastly larger than 37 bits?” asks Tegmark.

That’s a question that many scientists might end up pondering in detail. For Tegmark, this paradox suggests that his mathematical formulation of consciousness is missing a vital ingredient. “This strongly implies that the integration principle must be supplemented by at least one additional principle,” he says. Suggestions please in the comments section!

And yet the power of this approach is in the assumption that consciousness does not lie beyond our ken; that there is no “secret sauce” without which it cannot be tamed.

At the beginning of the 20th century, a group of young physicists embarked on a quest to explain a few strange but seemingly small anomalies in our understanding of the universe. In deriving the new theories of relativity and quantum mechanics, they ended up changing the way we comprehend the cosmos. These physicists, at least some of them, are now household names.

Could it be that a similar revolution is currently underway at the beginning of the 21st century?

Ref: arxiv.org/abs/1401.1219: Consciousness as a State of Matter

Partículas telepáticas [Telepathic particles] (Folha de S.Paulo)

CASSIO LEITE VIEIRA

illustration JOSÉ PATRÍCIO

28/12/2014 03h08

SUMMARY Fifty years ago, the Northern Irish physicist John Bell (1928-90) arrived at a result that demonstrates the "spooky" nature of reality in the atomic and subatomic world. His theorem is now seen as the most effective weapon against espionage, one that will guarantee, in a perhaps not-so-distant future, the absolute privacy of information.

*

A South American country wants to keep its strategic information private, but finds itself forced to buy the equipment for that task from a far more technologically advanced country. Those devices, however, may be "bugged".

The almost obvious question then arises: will there ever be 100% guaranteed privacy? Yes. And that holds even for a country that buys its anti-espionage technology from the "enemy".

What makes that affirmative answer possible is a result that has been called the most profound in science: Bell's theorem, which addresses one of the sharpest and most penetrating philosophical questions ever asked, one that underpins knowledge itself: what is reality? The theorem, which this year marked its 50th anniversary, guarantees that reality, in its most intimate dimension, is unimaginably strange.

José Patricio

The history of the theorem, of its experimental confirmation and of its modern applications has several beginnings. Perhaps the most appropriate one here is a paper published in 1935 by the German-born physicist Albert Einstein (1879-1955) and two collaborators, the Russian Boris Podolsky (1896-1966) and the American Nathan Rosen (1909-95).

Known as the EPR paradox (after the authors' initials), the thought experiment described there summed up Einstein's long-standing dissatisfaction with the direction that quantum mechanics, the theory of phenomena at the atomic scale, had taken. At first, what left a bitter taste for the author of relativity was the fact that this theory, developed in the 1920s, provides only the probability that a phenomenon will occur. That contrasted with the "certainty" (determinism) of so-called classical physics, which governs macroscopic phenomena.

Einstein was, in fact, puzzled by his own creature, for he had been one of the fathers of quantum theory. After some initial reluctance, he ended up digesting the indeterminism of quantum mechanics. One thing, however, he could never swallow: non-locality, that is, the exceedingly strange fact that something here can instantaneously influence something over there, even if that "there" is very far away. Einstein believed that distant things had independent realities.

Einstein went so far as to compare non-locality, and it is worth stressing that this is only an analogy, to a kind of telepathy. But the most famous label Einstein gave this strangeness was "spooky action at a distance".

ENTANGLEMENT

The essence of the EPR argument is this: under special conditions, two particles that have interacted and then separated end up in a state called entangled, as if they were "telepathic twins". Less pictorially, the particles are said to be connected (or correlated, as physicists prefer) and to remain so even after the interaction.

The greater strangeness comes now: if one of the particles in the pair is disturbed, that is, if it undergoes any measurement, as physicists say, the other "feels" that disturbance instantaneously. And this does not depend on the distance between the two particles. They could be light-years apart.

The authors of the EPR paradox argued that it was impossible to imagine that nature would allow an instantaneous connection between the two objects. And, through a complex chain of logical argument, Einstein, Podolsky and Rosen concluded: quantum mechanics must be incomplete. And therefore provisional.

FASTER THAN LIGHT?

A hasty (though very common) reading of the EPR paradox is to say that instantaneous action (non-local, in physics vocabulary) is impossible because it would violate Einstein's relativity: nothing can travel faster than light in a vacuum, 300,000 km/s.

Non-locality, however, would act only at the microscopic level; it cannot be used, for example, to send or receive messages. In the macroscopic world, if we want to do that, we have to use signals that never travel faster than light in a vacuum. In other words, relativity is preserved.

Non-locality has to do with persistent (and mysterious) connections between two objects: interfering with (altering, changing etc.) one of them interferes with (alters, changes etc.) the other. Instantaneously. The simple act of observing one of them affects the state of the other.

Einstein did not like the final version of the 1935 paper, which he saw only in print; the writing had been left to Podolsky. He had imagined a less philosophical text. A few months later came the reply to EPR from the Danish physicist Niels Bohr (1885-1962); a few years earlier, Einstein and Bohr had staged what many regard as one of the most important philosophical debates in history, on what one philosopher of physics has called the "soul of nature".

In his reply to EPR, Bohr reaffirmed both the completeness of quantum mechanics and his anti-realist view of the atomic universe: one cannot say that a quantum entity (electron, proton, photon etc.) has a property before that property is measured. In other words, such a property would not be real; it would not be hidden away waiting for a measuring device or any other interference (even a glance) from the observer. Einstein would later quip about this: "Does the Moon exist only when we look at it?"

AUTHORITY

One way to understand what a deterministic theory is: it is one that assumes the property to be measured is already present (or "hidden") in the object and can be determined with certainty. Physicists give this kind of theory a very apt name: hidden-variable theory.

In a hidden-variable theory, the property in question (known or not) exists; it is real. Hence philosophers sometimes classify this scenario as realism; Einstein liked the term "objective reality": things exist without needing to be observed.

But in the 1930s a theorem had proved that a hidden-variable version of quantum mechanics would be impossible. The feat belonged to one of the greatest mathematicians of all time, the Hungarian John von Neumann (1903-57). And, as is not rare in the history of science, the argument from authority prevailed over the authority of the argument.

Von Neumann's theorem was perfect from a mathematical point of view, but "wrong, silly" and "childish" (as it came to be described) as physics, because it started from a mistaken premise. We know today that Einstein was suspicious of that premise: "Do we have to accept this as true?", he asked two colleagues. But he took it no further.

Von Neumann's theorem nevertheless served to all but trample the deterministic (and therefore hidden-variable) version of quantum mechanics put forward in 1927 by the French nobleman Louis de Broglie (1892-1987), winner of the 1929 Nobel Prize in Physics, who ended up abandoning that line of research.

For exactly two decades, Von Neumann's theorem and the ideas of Bohr, who gathered around himself an influential school of remarkable young physicists, discouraged attempts to seek a deterministic version of quantum mechanics.

But in 1952 the American physicist David Bohm (1917-92), inspired by De Broglie's ideas, presented a hidden-variable version of quantum mechanics, today called Bohmian quantum mechanics in homage to the researcher, who worked in the 1950s at the University of São Paulo (USP) after being persecuted in the US by McCarthyism.

Bohmian quantum mechanics had two essential features: 1) it was deterministic (that is, a hidden-variable theory); 2) it was non-local (that is, it admitted action at a distance), which made Einstein, a staunch localist, lose his initial interest in it.

THE PROTAGONIST

Enter the main character of this story: the Northern Irish physicist John Stewart Bell, who, on learning of Bohmian mechanics, became certain of one thing: the "impossible had been done". More than that: Von Neumann was wrong.

Bohm's quantum mechanics, ignored from the outset by the physics community, had just fallen on fertile ground: since his university days Bell had been mulling over, as a "hobby", the philosophical foundations of quantum mechanics (EPR, Von Neumann, De Broglie etc.). And he had taken sides in those debates: he was an avowed Einsteinian and found Bohr obscure.

Bell was born on June 28, 1928, in Belfast, into an Anglican family of modest means. He should have stopped studying at 14, but at the insistence of his mother, who saw the intellectual gifts of the second of her four children, he was sent to a technical secondary school, where he learned practical things (carpentry, building, librarianship etc.).

After finishing at 16, he tried office jobs, but fate decreed that he would end up as a technician preparing experiments in the physics department of Queen's University, also in Belfast.

The professors there soon noticed the technician's interest in physics and began to encourage him, suggesting readings and lectures. With a scholarship, Bell graduated in experimental physics in 1948 and, the following year, in mathematical physics. In both cases with honours.

From 1949 to 1960, Bell worked at the AERE (Atomic Energy Research Establishment) in Harwell, in the United Kingdom. There he would meet his future wife, the physicist Mary Ross, his interlocutor in several physics papers. "When I look through these papers again, I see her everywhere", Bell said at a tribute he received in 1987, three years before he died of a cerebral haemorrhage.

He defended his doctorate in 1956, after a period at the University of Birmingham under the supervision of the German-British physicist Rudolf Peierls (1907-95). The thesis includes a proof of a very important theorem in physics (the CPT theorem), which had been discovered shortly before by a contemporary of his.

THE THEOREM

Disagreeing with the direction research was taking at the AERE, the couple decided to trade stable jobs for temporary positions at the European Organization for Nuclear Research (CERN) in Geneva, Switzerland. He went to the theoretical physics division; she, to the accelerator division.

Bell spent 1963 and 1964 working in the US. There he found time to devote himself to his intellectual "hobby" and to gestate the result that would mark his career and, decades later, bring him fame.

He asked himself the following question: is the non-locality of Bohm's hidden-variable theory a feature of any realist theory of quantum mechanics? In other words, if things exist without being observed, must they necessarily establish among themselves that spooky action at a distance?

Bell's theorem, published in 1964, is also known as Bell's inequality. Its mathematics is not complex. In very simplified form, we can think of the theorem as an inequality: x ≤ 2 (x less than or equal to two), where "x" stands, for our purposes here, for the results of an experiment.

The most interesting consequences of Bell's theorem would arise if such an experiment violated the inequality, that is, showed that x > 2 (x greater than two). In that case, we would have to give up one of two assumptions: 1) realism (things exist without being observed); 2) locality (the quantum world allows no faster-than-light connections).
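
In modern experiments the "x" of this simplified inequality is the CHSH combination of four correlation measurements, which any local, realist description keeps at or below 2. A short numerical check (a sketch in Python using the standard singlet-state formalism, not the article's own notation) shows quantum mechanics predicting 2√2 ≈ 2.83 at suitably chosen detector angles, exactly the kind of violation the experiments described below went on to observe:

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    # Measurement along a direction at angle theta in the x-z plane
    return np.cos(theta) * sz + np.sin(theta) * sx

psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # singlet state (|01> - |10>)/sqrt(2)

def E(a, b):
    # Quantum correlation <A(a) B(b)> for the entangled pair
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))    # 2*sqrt(2) ~ 2.83 > 2: the bound is violated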

The paper containing the theorem did not make much of a splash; Bell had written another one before it, essential to reaching the result, but through an error by the journal's editor it ended up being published only in 1966.

REBELLION

The revival of Bell's ideas, and with them of EPR and of Bohm, gained momentum from factors outside physics. Many years after the turbulent late 1960s, the American physicist John Clauser would recall the period: "The Vietnam War dominated the political thinking of my generation. Being a young physicist in that revolutionary period, I naturally wanted to shake the world."

Science, like the rest of the world, ended up marked by the spirit of the peace-and-love generation; by the civil rights struggle; by May 1968; by Eastern philosophies; by psychedelic drugs; by telepathy. In a word: by rebellion. Which, translated into physics, meant devoting oneself to an area regarded as heretical in academia: the interpretations (or foundations) of quantum mechanics. But doing so considerably increased a young physicist's chances of ruining his career: EPR, Bohm and Bell were considered philosophical topics, not physics.

The final element that gave the taboo field of study some breathing room was the 1973 oil crisis, which reduced the supply of positions for young researchers, physicists included. To rebellion was added recession.

Clauser, with three colleagues, Abner Shimony, Richard Holt and Michael Horne, published his first ideas on the subject in 1969, under the title "Proposed Experiment to Test Local Hidden-Variable Theories". The quartet did so in part because they had noticed that Bell's inequality could be tested with photons, which are easier to generate. Until then, more complicated experimental arrangements had been envisaged.

In 1972 the proposal became an experiment, carried out by Clauser and Stuart Freedman (1944-2012), and Bell's inequality was violated.

The world seemed to be non-local (ironically, Clauser was a localist!). But it only seemed that way: for about a decade the experiment remained misunderstood and was therefore disregarded by the physics community. Still, those results helped reinforce something important: the foundations of quantum mechanics were not just philosophy. They were also experimental physics.

A CHANGE OF SCENE

Improvements in optical equipment (including lasers) allowed an experiment performed in 1982 to become a classic of the field.

Shortly before, the French physicist Alain Aspect had decided to begin a late doctorate, even though he was already an experienced experimentalist. He chose Bell's theorem as his topic and went to meet his Northern Irish colleague at CERN. In an interview with the physicist Ivan dos Santos Oliveira, of the Brazilian Center for Physics Research (CBPF) in Rio de Janeiro, and with the author of this article, Aspect recounted the following exchange between him and Bell. "Do you have a permanent position?", Bell asked. "Yes", said Aspect. Otherwise, "you would be under a lot of pressure not to do the experiment", Bell said.

The exchange recounted by Aspect tells us that, almost two decades after the seminal 1964 paper, the subject was still shrouded in prejudice.

In an experiment with pairs of entangled photons, nature once again showed its non-local character: Bell's inequality was violated. The data showed x > 2. In 2007, for example, the group of the Austrian physicist Anton Zeilinger verified the violation of the inequality using photons separated by… 144 km.

In the interview in Brazil, Aspect said that until then the theorem was barely known among physicists, but it would gain fame after his doctoral thesis, whose examining committee, incidentally, included Bell.

STRANGE

After all, why does nature allow Einstein's "telepathy" to exist? It is strange, to say the least, to think that a particle disturbed here can somehow alter the state of its partner at the far ends of the universe.

There are many ways to interpret the consequences of what Bell did. To begin with, some (very) mistaken ones: 1) non-locality cannot exist, because it violates relativity; 2) hidden-variable theories of quantum mechanics (Bohm, De Broglie etc.) are completely ruled out; 3) quantum mechanics really is indeterministic; 4) anti-realism, that is, the view that things exist only when observed, is the final word. The list goes on.

When the theorem was published, a shallow (and erroneous) reading held that it did not matter, since Von Neumann's theorem had already ruled out hidden variables, and quantum mechanics would therefore indeed be indeterministic. Among those who do not accept non-locality, there are even some who go so far as to say that Einstein, Bohm and Bell did not understand what they had done.

The American philosopher of physics Tim Maudlin, of New York University, offers a long list of such misconceptions in two excellent papers, "What Bell Did" (arxiv.org/abs/1408.1826) and "Reply to Werner" (a response to comments on the former, arxiv.org/abs/1408.1828).

For Maudlin, renowned in his field, Bell's theorem and its violation mean one thing only: nature is non-local ("spooky"), and there is therefore no hope for locality as Einstein would have liked; in that sense, one can say that Bell showed Einstein was wrong. Thus any deterministic (realist) theory that reproduces the experimental results obtained so far by quantum mechanics, incidentally the most precise theory in the history of science, will necessarily have to be non-local.

From Aspect to the present day, major technological developments have made possible something unthinkable a few decades ago: studying a single quantum entity (atom, electron, photon etc.) in isolation. And that gave rise to the field of quantum information, which encompasses the study of quantum cryptography, the kind that will allow absolute data security, and of quantum computers, extremely fast machines. In a way, it is philosophy turned into experimental physics.

Many of these advances are owed, at bottom, to the rebelliousness of a generation of young physicists who wanted to defy the "system".

A delightful account of that period can be found in "How the Hippies Saved Physics" (W. W. Norton & Company, 2011), by the American historian of physics David Kaiser. A detailed historical analysis is given in "Quantum Dissidents: Research on the Foundations of Quantum Theory circa 1970" (bit.ly/1xyipTJ, subscribers only), by the historian of physics Olival Freire Jr., of the Federal University of Bahia.

For those more interested in the philosophical angle, there are the two award-winning volumes of "Conceitos de Física Quântica" (Concepts of Quantum Physics; Editora Livraria da Física, 2003), by the physicist and philosopher Osvaldo Pessoa Jr., of USP.

PRIVACY

By now the reader may be wondering what Bell's theorem has to do with 100% guaranteed privacy.

In the future it is (quite) likely that information will be sent and received in the form of entangled photons. Recent research in quantum cryptography guarantees that it would be enough to subject these particles of light to the Bell-inequality test. If the inequality is violated, then there is no possibility that the message was improperly eavesdropped on. And the test does not depend on the equipment used to send or receive the photons. The theoretical basis for this can be found, for example, in "The Ultimate Physical Limits of Privacy", by Artur Ekert and Renato Renner (bit.ly/1gFjynG, subscribers only).

In a not-too-distant future, perhaps, Bell's theorem will become the most powerful weapon against espionage. That is tremendous comfort for a world that seems headed toward zero privacy. It is also an immense outgrowth of a philosophical question which, according to the American physicist Henry Stapp, a specialist in the foundations of quantum mechanics, became "the most profound result of science". Deservedly so. After all, why did nature opt for "spooky action at a distance"?

The answer is a mystery. It is a pity that the question is not even mentioned in undergraduate physics courses in Brazil.

CÁSSIO LEITE VIEIRA, 54, a journalist at the Instituto Ciência Hoje (RJ), is the author of "Einstein – O Reformulador do Universo" (Odysseus).
JOSÉ PATRÍCIO, 54, a visual artist from Pernambuco, is taking part in the exhibition "Asas a Raízes" at Caixa Cultural do Rio, from January 17 to March 15.

Earth Wrapped In ‘Star Trek Force Field’, Scientists Discover (Huff Post)

AP

Posted: 26/11/2014 20:17 GMT Updated: 26/11/2014 20:59 GMT

Scientists discover Earth shield

Earth is wrapped in an invisible force field that scientists have compared with the “shields” featured in Star Trek. A US team discovered the barrier, some 7,200 miles above the Earth’s surface, that blocks high energy electrons threatening astronauts and satellites.

Scientists identified an “extremely sharp” boundary within the Van Allen radiation belts, two large doughnut-shaped rings held in place by the Earth’s magnetic field that are filled with fast-moving particles. Lead researcher Professor Daniel Baker, from the University of Colorado at Boulder, said: “It’s almost like these electrons are running into a glass wall in space.

Artist's impression of a doughnut-shaped brick wall illustrating the invisible "shield" discovered by US scientists 7,200 miles above the Earth in the Van Allen radiation belts, which blocks high-energy electrons that threaten astronauts and satellites

“Somewhat like the shields created by force fields on Star Trek that were used to repel alien weapons, we are seeing an invisible shield blocking these electrons. It’s an extremely puzzling phenomenon.”

The team originally thought the high-energy electrons, which loop around the Earth at more than 100,000 miles per second, would slowly drift downward into the upper atmosphere. But a pair of probes launched in 2012 to investigate the Van Allen belts showed that the electrons are stopped in their tracks before they get that far.

The nature of the force field remains an unsolved mystery. It does not appear to be linked to magnetic field lines or human-generated radio signals, and scientists are not convinced that a cloud of cold electrically charged gas called the plasmasphere that stretches thousands of miles into the outer Van Allen belt can fully explain the phenomenon either.

Prof Baker added: “I think the key here is to keep observing the region in exquisite detail, which we can do because of the powerful instruments on the Van Allen probes.” The research is reported in the journal Nature.

Transitions between states of matter: It’s more complicated, scientists find (Science Daily)

Date: November 6, 2014

Source: New York University

Summary: The seemingly simple process of phase changes — those transitions between states of matter — is more complex than previously known. New work reveals the need to rethink one of science’s building blocks and, with it, how some of the basic principles underlying the behavior of matter are taught in our classrooms.

Melting ice. The seemingly simple process of phase changes — those transitions between states of matter — is more complex than previously known. Credit: © shefkate / Fotolia

The seemingly simple process of phase changes — those transitions between states of matter — is more complex than previously known, according to research based at Princeton University, Peking University and New York University.

Their study, which appears in the journal Science, reveals the need to rethink one of science’s building blocks and, with it, how some of the basic principles underlying the behavior of matter are taught in our classrooms. The researchers examined the way that a phase change, specifically the melting of a solid, occurs at a microscopic level and discovered that the transition is far more involved than earlier models had accounted for.

“This research shows that phase changes can follow multiple pathways, which is counter to what we’ve previously known,” explains Mark Tuckerman, a professor of chemistry and applied mathematics at New York University and one of the study’s co-authors. “This means the simple theories about phase transitions that we teach in classes are just not right.”

According to Tuckerman, scientists will need to change the way they think about and teach phase changes.

The work stems from a 10-year project at Princeton to develop a mathematical framework and computer algorithms to study complex behavior in systems, explained senior author Weinan E, a professor in Princeton’s Department of Mathematics and Program in Applied and Computational Mathematics. Phase changes proved to be a crucial test case for their algorithm, E said. E and Tuckerman worked with Amit Samanta, a postdoctoral researcher at Princeton now at Lawrence Livermore National Laboratory, and Tang-Qing Yu, a postdoctoral researcher at NYU’s Courant Institute of Mathematical Sciences.

“It was a test case for the rather powerful set of tools that we have developed to study hard questions about complex phenomena such as phase transitions,” E said. “The melting of a relatively simple atomic solid, such as a metal, proved to be enormously rich. With the understanding we have gained from this case, we next aim to probe more complex molecular solids such as ice.”

The findings reveal that phase transition can occur via multiple and competing pathways and that the transitions involve at least two steps. The study shows that, along one of these pathways, the first step in the transition process is the formation of point defects — local defects that occur at or around a single lattice site in a crystalline solid. These defects turn out to be highly mobile. In a second step, the point defects randomly migrate and occasionally meet to form large, disordered defect clusters.

This mechanism predicts that “the disordered cluster grows from the outside in rather than from the inside out, as current explanations suggest,” Tuckerman notes. “Over time, these clusters grow and eventually become sufficiently large to cause the transition from solid to liquid.”

Along an alternative pathway, the defects grow into thin lines of disorder (called “dislocations”) that reach across the system. Small liquid regions then pool along these dislocations; these regions expand outward from the dislocations, engulfing more and more of the solid, until the entire system becomes liquid.

This study modeled this process by tracing copper and aluminum metals from an atomic solid to an atomic liquid state. The researchers used advanced computer models and algorithms to reexamine the process of phase changes on a microscopic level.

“Phase transitions have always been something of a mystery because they represent such a dramatic change in the state of matter,” Tuckerman observes. “When a system changes from solid to liquid, the properties change substantially.”

He adds that this research shows the surprising incompleteness of previous models of nucleation and phase changes, and helps to fill in existing gaps in basic scientific understanding.

This work is supported by the Office of Naval Research (N00014-13-1-0338), the Army Research Office (W911NF-11-1-0101), the Department of Energy (DE-SC0009248, DE-AC52-07NA27344), and the National Science Foundation of China (CHE-1301314).


Journal Reference:

  1. A. Samanta, M. E. Tuckerman, T.-Q. Yu, W. E. Microscopic mechanisms of equilibrium melting of a solid. Science, 2014; 346 (6210): 729 DOI: 10.1126/science.1253810

You’re powered by quantum mechanics. No, really… (The Guardian)

For years biologists have been wary of applying the strange world of quantum mechanics, where particles can be in two places at once or connected over huge distances, to their own field. But it can help to explain some amazing natural phenomena we take for granted

Jim Al-Khalili and Johnjoe McFadden

The Observer, Sunday 26 October 2014

A European robin in flight

According to quantum biology, the European robin has a ‘sixth sense’ in the form of a protein in its eye sensitive to the orientation of the Earth’s magnetic field, allowing it to ‘see’ which way to migrate. Photograph: Helmut Heintges/Corbis

Every year, around about this time, thousands of European robins escape the oncoming harsh Scandinavian winter and head south to the warmer Mediterranean coasts. How they find their way unerringly on this 2,000-mile journey is one of the true wonders of the natural world. For unlike many other species of migratory birds, marine animals and even insects, they do not rely on landmarks, ocean currents, the position of the sun or a built-in star map. Instead, they are among a select group of animals that use a remarkable navigation sense – remarkable for two reasons. The first is that they are able to detect tiny variations in the direction of the Earth’s magnetic field – astonishing in itself, given that this magnetic field is 100 times weaker than even that of a measly fridge magnet. The second is that robins seem to be able to “see” the Earth’s magnetic field via a process that even Albert Einstein referred to as “spooky”. The birds’ in-built compass appears to make use of one of the strangest features of quantum mechanics.

Over the past few years, the European robin, and its quantum “sixth sense”, has emerged as the pin-up for a new field of research, one that brings together the wonderfully complex and messy living world and the counterintuitive, ethereal but strangely orderly world of atoms and elementary particles in a collision of disciplines that is as astonishing and unexpected as it is exciting. Welcome to the new science of quantum biology.

Most people have probably heard of quantum mechanics, even if they don’t really know what it is about. Certainly, the idea that it is a baffling and difficult scientific theory understood by just a tiny minority of smart physicists and chemists has become part of popular culture. Quantum mechanics describes a reality on the tiniest scales that is, famously, very weird indeed; a world in which particles can exist in two or more places at once, spread themselves out like ghostly waves, tunnel through impenetrable barriers and even possess instantaneous connections that stretch across vast distances.

But despite this bizarre description of the basic building blocks of the universe, quantum mechanics has been part of all our lives for a century. Its mathematical formulation was completed in the mid-1920s and has given us a remarkably complete account of the world of atoms and their even smaller constituents, the fundamental particles that make up our physical reality. For example, the ability of quantum mechanics to describe the way that electrons arrange themselves within atoms underpins the whole of chemistry, material science and electronics; and is at the very heart of most of the technological advances of the past half-century. Without the success of the equations of quantum mechanics in describing how electrons move through materials such as semiconductors we would not have developed the silicon transistor and, later, the microchip and the modern computer.

However, if quantum mechanics can so beautifully and accurately describe the behaviour of atoms with all their accompanying weirdness, then why aren’t all the objects we see around us, including us – which are after all only made up of these atoms – also able to be in two places at once, pass through impenetrable barriers or communicate instantaneously across space? One obvious difference is that the quantum rules apply to single particles or systems consisting of just a handful of atoms, whereas much larger objects consist of trillions of atoms bound together in mindboggling variety and complexity. Somehow, in ways we are only now beginning to understand, most of the quantum weirdness washes away ever more quickly the bigger the system is, until we end up with the everyday objects that obey the familiar rules of what physicists call the “classical world”. In fact, when we want to detect the delicate quantum effects in everyday-size objects we have to go to extraordinary lengths to do so – freezing them to within a whisker of absolute zero and performing experiments in near-perfect vacuums.

Quantum effects were certainly not expected to play any role inside the warm, wet and messy world of living cells, so most biologists have thus far ignored quantum mechanics completely, preferring their traditional ball-and-stick models of the molecular structures of life. Meanwhile, physicists have been reluctant to venture into the messy and complex world of the living cell; why should they when they can test their theories far more cleanly in the controlled environment of the lab where they at least feel they have a chance of understanding what is going on?

Erwin Schrödinger, whose book What is Life? suggested that the macroscopic order of life was based on order at its quantum level. Photograph: Bettmann/CORBIS

Yet, 70 years ago, the Austrian Nobel prize-winning physicist and quantum pioneer, Erwin Schrödinger, suggested in his famous book, What is Life?, that, deep down, some aspects of biology must be based on the rules and orderly world of quantum mechanics. His book inspired a generation of scientists, including the discoverers of the double-helix structure of DNA, Francis Crick and James Watson. Schrödinger proposed that there was something unique about life that distinguishes it from the rest of the non-living world. He suggested that, unlike inanimate matter, living organisms can somehow reach down to the quantum domain and utilise its strange properties in order to operate the extraordinary machinery within living cells.

Schrödinger’s argument was based on the paradoxical fact that the laws of classical physics, such as those of Newtonian mechanics and thermodynamics, are ultimately based on disorder. Consider a balloon. It is filled with trillions of molecules of air all moving entirely randomly, bumping into one another and the inside wall of the balloon. Each molecule is governed by orderly quantum laws, but when you add up the random motions of all the molecules and average them out, their individual quantum behaviour washes out and you are left with the gas laws that predict, for example, that the balloon will expand by a precise amount when heated. This is because heat energy makes the air molecules move a little bit faster, so that they bump into the walls of the balloon with a bit more force, pushing the walls outward a little bit further. Schrödinger called this kind of law “order from disorder” to reflect the fact that this apparent macroscopic regularity depends on random motion at the level of individual particles.

But what about life? Schrödinger pointed out that many of life’s properties, such as heredity, depend on molecules made of comparatively few particles – certainly too few to benefit from the order-from-disorder rules of thermodynamics. But life was clearly orderly. Where did this orderliness come from? Schrödinger suggested that life was based on a novel physical principle whereby its macroscopic order is a reflection of quantum-level order, rather than the molecular disorder that characterises the inanimate world. He called this new principle “order from order”. But was he right?

Up until a decade or so ago, most biologists would have said no. But as 21st-century biology probes the dynamics of ever-smaller systems – even individual atoms and molecules inside living cells – the signs of quantum mechanical behaviour in the building blocks of life are becoming increasingly apparent. Recent research indicates that some of life’s most fundamental processes do indeed depend on weirdness welling up from the quantum undercurrent of reality. Here are a few of the most exciting examples.

Enzymes are the workhorses of life. They speed up chemical reactions so that processes that would otherwise take thousands of years proceed in seconds inside living cells. Life would be impossible without them. But how they accelerate chemical reactions by such enormous factors, often more than a trillion-fold, has been an enigma. Experiments over the past few decades, however, have shown that enzymes make use of a remarkable trick called quantum tunnelling to accelerate biochemical reactions. Essentially, the enzyme encourages electrons and protons to vanish from one position in a biomolecule and instantly rematerialise in another, without passing through the gap in between – a kind of quantum teleportation.

And before you throw your hands up in incredulity, it should be stressed that quantum tunnelling is a very familiar process in the subatomic world and is responsible for such processes as radioactive decay of atoms and even the reason the sun shines (by turning hydrogen into helium through the process of nuclear fusion). Enzymes have made every single biomolecule in your cells and every cell of every living creature on the planet, so they are essential ingredients of life. And they dip into the quantum world to help keep us alive.
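
To get a feel for why tunnelling matters for electrons and protons but not for everyday objects, one can plug numbers into the standard rectangular-barrier estimate T ≈ exp(-2L·sqrt(2m(V-E))/ħ) (a generic textbook formula with illustrative numbers, not data about any particular enzyme):

import numpy as np

hbar = 1.055e-34            # J*s
eV = 1.602e-19              # J
barrier = 0.5 * eV          # barrier height above the particle's energy (illustrative)
width = 1.0e-10             # 1 angstrom, a typical molecular length scale (illustrative)

def tunnel_probability(mass_kg):
    kappa = np.sqrt(2.0 * mass_kg * barrier) / hbar
    return np.exp(-2.0 * kappa * width)

print("electron:", tunnel_probability(9.11e-31))   # ~0.5: tunnelling is routine
print("proton:  ", tunnel_probability(1.67e-27))   # ~3e-14: far rarer, but not zero

The exponential dependence on mass and barrier width is the point: swap in a heavier particle or a wider gap and the probability collapses, which is why whole molecules, let alone everyday objects, never tunnel.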

Another vital process in biology is of course photosynthesis. Indeed, many would argue that it is the most important biochemical reaction on the planet, responsible for turning light, air, water and a few minerals into grass, trees, grain, apples, forests and, ultimately, the rest of us who eat either the plants or the plant-eaters.

The initiating event is the capture of light energy by a chlorophyll molecule and its conversion into chemical energy that is harnessed to fix carbon dioxide and turn it into plant matter. The process whereby this light energy is transported through the cell has long been a puzzle because it can be so efficient – close to 100% and higher than any artificial energy transport process.

Sunlight shines through chestnut tree leaves. Quantum biology can explain why photosynthesis in plants is so efficient. Photograph: Getty Images/Visuals Unlimited

The first step in photosynthesis is the capture of a tiny packet of energy from sunlight that then has to hop through a forest of chlorophyll molecules to make its way to a structure called the reaction centre where its energy is stored. The problem is understanding how the packet of energy appears to so unerringly find the quickest route through the forest. An ingenious experiment, first carried out in 2007 in Berkeley, California, probed what was going on by firing short bursts of laser light at photosynthetic complexes. The research revealed that the energy packet was not hopping haphazardly about, but performing a neat quantum trick. Instead of behaving like a localised particle travelling along a single route, it behaves quantum mechanically, like a spread-out wave, and samples all possible routes at once to find the quickest way.

A third example of quantum trickery in biology – the one we introduced in our opening paragraph – is the mechanism by which birds and other animals make use of the Earth’s magnetic field for navigation. Studies of the European robin suggest that it has an internal chemical compass that utilises an astonishing quantum concept called entanglement, which Einstein dismissed as “spooky action at a distance”. This phenomenon describes how two separated particles can remain instantaneously connected via a weird quantum link. The current best guess is that this takes place inside a protein in the bird’s eye, where quantum entanglement makes a pair of electrons highly sensitive to the angle of orientation of the Earth’s magnetic field, allowing the bird to “see” which way it needs to fly.

All these quantum effects have come as a big surprise to most scientists who believed that the quantum laws only applied in the microscopic world. All delicate quantum behaviour was thought to be washed away very quickly in bigger objects, such as living cells, containing the turbulent motion of trillions of randomly moving particles. So how does life manage its quantum trickery? Recent research suggests that rather than avoiding molecular storms, life embraces them, rather like the captain of a ship who harnesses turbulent gusts and squalls to maintain his ship upright and on course.

Just as Schrödinger predicted, life seems to be balanced on the boundary between the sensible everyday world of the large and the weird and wonderful quantum world, a discovery that is opening up an exciting new field of 21st-century science.

Life on the Edge: The Coming of Age of Quantum Biology by Jim Al-Khalili and Johnjoe McFadden will be published by Bantam Press on 6 November.

‘Superglue’ for the atmosphere: How sulfuric acid increases cloud formation (Science Daily)

Date: October 8, 2014

Source: Goethe-Universität Frankfurt am Main

Summary: It has been known for several years that sulfuric acid contributes to the formation of tiny aerosol particles, which play an important role in the formation of clouds. A new study shows that dimethylamine can tremendously enhance new particle formation. The formation of neutral (i.e. uncharged) nucleating clusters of sulfuric acid and dimethylamine was observed for the first time.

Clouds. Credit: Copyright Michele Hogan

It has been known for several years that sulfuric acid contributes to the formation of tiny aerosol particles, which play an important role in the formation of clouds. The new study by Kürten et al. shows that dimethylamine can tremendously enhance new particle formation. The formation of neutral (i.e. uncharged) nucleating clusters of sulfuric acid and dimethylamine was observed for the first time.

Previously, it was only possible to detect neutral clusters containing up to two sulfuric acid molecules. However, in the present study molecular clusters containing up to 14 sulfuric acid and 16 dimethylamine molecules were detected and their growth by attachment of individual molecules was observed in real-time starting from just one molecule. Moreover, these measurements were made at concentrations of sulfuric acid and dimethylamine corresponding to atmospheric levels (less than 1 molecule of sulfuric acid per 1 × 10^13 molecules of air).
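
To put that mixing ratio into more familiar units, a rough back-of-the-envelope conversion (using a typical sea-level air number density, not a figure from the paper) gives a few million sulfuric acid molecules per cubic centimetre:

n_air = 2.5e19              # approximate air number density near the surface, molecules per cm^3
mixing_ratio = 1e-13        # one sulfuric acid molecule per 10^13 air molecules
print(n_air * mixing_ratio, "H2SO4 molecules per cm^3")   # ~2.5e6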

The capability of sulfuric acid molecules together with water and ammonia to form clusters and particles has been recognized for several years. However, clusters which form in this manner can vaporize under the conditions which exist in the atmosphere. In contrast, the system of sulfuric acid and dimethylamine forms particles much more efficiently because even the smallest clusters are essentially stable against evaporation. In this respect dimethylamine can act as “superglue” because when interacting with sulfuric acid every collision between a cluster and a sulfuric acid molecule bonds them together irreversibly. Sulfuric acid as well as amines in the present-day atmosphere have mainly anthropogenic sources.

Sulfuric acid is derived mainly from the oxidation of sulfur dioxide, while amines stem, for example, from animal husbandry. The method used to measure the neutral clusters utilizes a combination of a mass spectrometer and a chemical ionization source, which was developed by the University of Frankfurt and the University of Helsinki. The measurements were made by an international collaboration at the CLOUD (Cosmics Leaving OUtdoor Droplets) chamber at CERN (European Organization for Nuclear Research).

The results allow for very detailed insight into a chemical system which could be relevant for atmospheric particle formation. Aerosol particles influence Earth’s climate through cloud formation: Clouds can only form if so-called cloud condensation nuclei (CCN) are present, which act as seeds for condensing water molecules. Globally about half the CCN originate from a secondary process which involves the formation of small clusters and particles in the very first step followed by growth to sizes of at least 50 nanometers.

The observed process of particle formation from sulfuric acid and dimethylamine could also be relevant for the formation of CCN. A high concentration of CCN generally leads to the formation of clouds with a high concentration of small droplets, whereas fewer CCN lead to clouds with fewer, larger droplets. Earth’s radiation budget, climate as well as precipitation patterns can be influenced in this manner. The deployed method will also open a new window for future measurements of particle formation in other chemical systems.


Journal Reference:

  1. A. Kurten, T. Jokinen, M. Simon, M. Sipila, N. Sarnela, H. Junninen, A. Adamov, J. Almeida, A. Amorim, F. Bianchi, M. Breitenlechner, J. Dommen, N. M. Donahue, J. Duplissy, S. Ehrhart, R. C. Flagan, A. Franchin, J. Hakala, A. Hansel, M. Heinritzi, M. Hutterli, J. Kangasluoma, J. Kirkby, A. Laaksonen, K. Lehtipalo, M. Leiminger, V. Makhmutov, S. Mathot, A. Onnela, T. Petaja, A. P. Praplan, F. Riccobono, M. P. Rissanen, L. Rondo, S. Schobesberger, J. H. Seinfeld, G. Steiner, A. Tome, J. Trostl, P. M. Winkler, C. Williamson, D. Wimmer, P. Ye, U. Baltensperger, K. S. Carslaw, M. Kulmala, D. R. Worsnop, J. Curtius. Neutral molecular cluster formation of sulfuric acid-dimethylamine observed in real time under atmospheric conditions. Proceedings of the National Academy of Sciences, 2014; DOI: 10.1073/pnas.1404853111

New math and quantum mechanics: Fluid mechanics suggests alternative to quantum orthodoxy (Science Daily)

Date: September 12, 2014

Source: Massachusetts Institute of Technology

Summary: The central mystery of quantum mechanics is that small chunks of matter sometimes seem to behave like particles, sometimes like waves. For most of the past century, the prevailing explanation of this conundrum has been what’s called the “Copenhagen interpretation” — which holds that, in some sense, a single particle really is a wave, smeared out across the universe, that collapses into a determinate location only when observed. But some founders of quantum physics — notably Louis de Broglie — championed an alternative interpretation, known as “pilot-wave theory,” which posits that quantum particles are borne along on some type of wave. According to pilot-wave theory, the particles have definite trajectories, but because of the pilot wave’s influence, they still exhibit wavelike statistics. Now a professor of applied mathematics believes that pilot-wave theory deserves a second look.


Close-ups of an experiment conducted by John Bush and his student Daniel Harris, in which a bouncing droplet of fluid was propelled across a fluid bath by waves it generated. Credit: Dan Harris

The central mystery of quantum mechanics is that small chunks of matter sometimes seem to behave like particles, sometimes like waves. For most of the past century, the prevailing explanation of this conundrum has been what’s called the “Copenhagen interpretation” — which holds that, in some sense, a single particle really is a wave, smeared out across the universe, that collapses into a determinate location only when observed.

But some founders of quantum physics — notably Louis de Broglie — championed an alternative interpretation, known as “pilot-wave theory,” which posits that quantum particles are borne along on some type of wave. According to pilot-wave theory, the particles have definite trajectories, but because of the pilot wave’s influence, they still exhibit wavelike statistics.

John Bush, a professor of applied mathematics at MIT, believes that pilot-wave theory deserves a second look. That’s because Yves Couder, Emmanuel Fort, and colleagues at the University of Paris Diderot have recently discovered a macroscopic pilot-wave system whose statistical behavior, in certain circumstances, recalls that of quantum systems.

Couder and Fort’s system consists of a bath of fluid vibrating at a rate just below the threshold at which waves would start to form on its surface. A droplet of the same fluid is released above the bath; where it strikes the surface, it causes waves to radiate outward. The droplet then begins moving across the bath, propelled by the very waves it creates.

“This system is undoubtedly quantitatively different from quantum mechanics,” Bush says. “It’s also qualitatively different: There are some features of quantum mechanics that we can’t capture, some features of this system that we know aren’t present in quantum mechanics. But are they philosophically distinct?”

Tracking trajectories

Bush believes that the Copenhagen interpretation sidesteps the technical challenge of calculating particles’ trajectories by denying that they exist. “The key question is whether a real quantum dynamics, of the general form suggested by de Broglie and the walking drops, might underlie quantum statistics,” he says. “While undoubtedly complex, it would replace the philosophical vagaries of quantum mechanics with a concrete dynamical theory.”

Last year, Bush and one of his students — Jan Molacek, now at the Max Planck Institute for Dynamics and Self-Organization — did for their system what the quantum pioneers couldn’t do for theirs: They derived an equation relating the dynamics of the pilot waves to the particles’ trajectories.

In their work, Bush and Molacek had two advantages over the quantum pioneers, Bush says. First, in the fluidic system, both the bouncing droplet and its guiding wave are plainly visible. If the droplet passes through a slit in a barrier — as it does in the re-creation of a canonical quantum experiment — the researchers can accurately determine its location. By contrast, the only way to perform a measurement on an atomic-scale particle is to strike it with another particle, which changes its velocity.

The second advantage is the relatively recent development of chaos theory. Pioneered by MIT’s Edward Lorenz in the 1960s, chaos theory holds that many macroscopic physical systems are so sensitive to initial conditions that, even though they can be described by a deterministic theory, they evolve in unpredictable ways. A weather-system model, for instance, might yield entirely different results if the wind speed at a particular location at a particular time is 10.01 mph or 10.02 mph.
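
The wind-speed example can be made concrete with the very equations Lorenz studied. The sketch below (illustrative starting points and a deliberately simple Euler integrator, chosen for brevity rather than accuracy) starts two copies of the Lorenz system a hair's breadth apart and watches the separation grow:

import numpy as np

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One explicit Euler step of the Lorenz equations
    x, y, z = state
    return np.array([
        x + dt * sigma * (y - x),
        y + dt * (x * (rho - z) - y),
        z + dt * (x * y - beta * z),
    ])

a = np.array([10.01, 10.0, 25.0])   # 'wind speed' 10.01
b = np.array([10.02, 10.0, 25.0])   # 'wind speed' 10.02
for step in range(40001):
    if step % 10000 == 0:
        print(f"t = {step * 0.001:5.1f}   separation = {np.linalg.norm(a - b):.4f}")
    a, b = lorenz_step(a), lorenz_step(b)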

The fluidic pilot-wave system is also chaotic. It’s impossible to measure a bouncing droplet’s position accurately enough to predict its trajectory very far into the future. But in a recent series of papers, Bush, MIT professor of applied mathematics Ruben Rosales, and graduate students Anand Oza and Dan Harris applied their pilot-wave theory to show how chaotic pilot-wave dynamics leads to the quantumlike statistics observed in their experiments.

What’s real?

In a review article appearing in the Annual Review of Fluid Mechanics, Bush explores the connection between Couder’s fluidic system and the quantum pilot-wave theories proposed by de Broglie and others.

The Copenhagen interpretation is essentially the assertion that in the quantum realm, there is no description deeper than the statistical one. When a measurement is made on a quantum particle, and the wave function collapses, the determinate state that the particle assumes is totally random. According to the Copenhagen interpretation, the statistics don't just describe the reality; they are the reality.
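Stated concretely (in the standard textbook form, added here only to pin down what "the statistical description" means), a measurement of an observable with eigenstates $|a\rangle$ on a system prepared in state $|\psi\rangle$ yields outcome $a$ with probability

$$
P(a) = \lvert \langle a \mid \psi \rangle \rvert^{2} .
$$

Copenhagen takes this probability rule as the end of the story; pilot-wave approaches try to recover the same distribution as the statistics of underlying trajectories.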

But despite the ascendancy of the Copenhagen interpretation, the intuition that physical objects, no matter how small, can be in only one location at a time has been difficult for physicists to shake. Albert Einstein, who famously doubted that God plays dice with the universe, worked for a time on what he called a “ghost wave” theory of quantum mechanics, thought to be an elaboration of de Broglie’s theory. In his 1976 Nobel Prize lecture, Murray Gell-Mann declared that Niels Bohr, the chief exponent of the Copenhagen interpretation, “brainwashed an entire generation of physicists into believing that the problem had been solved.” John Bell, the Irish physicist whose famous theorem is often mistakenly taken to repudiate all “hidden-variable” accounts of quantum mechanics, was, in fact, himself a proponent of pilot-wave theory. “It is a great mystery to me that it was so soundly ignored,” he said.

Then there’s David Griffiths, a physicist whose “Introduction to Quantum Mechanics” is standard in the field. In that book’s afterword, Griffiths says that the Copenhagen interpretation “has stood the test of time and emerged unscathed from every experimental challenge.” Nonetheless, he concludes, “It is entirely possible that future generations will look back, from the vantage point of a more sophisticated theory, and wonder how we could have been so gullible.”

“The work of Yves Couder and the related work of John Bush … provides the possibility of understanding previously incomprehensible quantum phenomena, involving ‘wave-particle duality,’ in purely classical terms,” says Keith Moffatt, a professor emeritus of mathematical physics at Cambridge University. “I think the work is brilliant, one of the most exciting developments in fluid mechanics of the current century.”

Journal Reference:

  1. John W.M. Bush. Pilot-Wave Hydrodynamics. Annual Review of Fluid Mechanics, 2014 DOI: 10.1146/annurev-fluid-010814-014506

Quantum theory, multiple universes, and the fate of human consciousness after death (Biocentrism, Robert Lanza)

[Blog editor's note: the Portuguese title of this piece is not faithful to the original English title and is sensationalist in tone. Since this blog is an archive of clippings, I have not altered the title.]

Scientists prove human reincarnation (Duniverso)

n.d.; accessed September 14, 2014. For as long as the world has existed, we have debated and tried to discover what lies beyond death. This time, quantum science explains and proves that there is indeed (non-physical) life after the death of any human being. A book titled "Biocentrism: How Life and Consciousness Are the Keys to Understanding the Nature of the Universe" caused a stir on the Internet because it advanced the notion that life does not end when the body dies and can last forever. The author of the book, the scientist Dr. Robert Lanza, voted the third most important living scientist by the NY Times, has no doubt that this is possible.

Beyond time and space

Lanza is a specialist in regenerative medicine and scientific director of the Advanced Cell Technology Company. In the past he became known for his extensive research on stem cells and for several successful experiments on cloning endangered animal species. But not long ago the scientist became involved with physics, quantum mechanics and astrophysics. This explosive mixture gave birth to the new theory of biocentrism, which he has been preaching ever since. Biocentrism teaches that life and consciousness are fundamental to the universe: it is consciousness that creates the material universe, not the other way around. Lanza points to the structure of the universe itself and says that the laws, forces and constants of the universe appear to be fine-tuned for life, implying that intelligence existed prior to matter. He also claims that space and time are not objects or things, but rather tools of our animal understanding. Lanza says that we carry space and time around with us "like turtles with shells," meaning that when the shell comes off (space and time), we still exist.

The theory suggests that the death of consciousness simply does not exist. It exists only as a thought, because people identify themselves with their bodies. They believe that the body will die sooner or later and think that their consciousness will disappear with it. If the body generates consciousness, then consciousness dies when the body dies. But if the body receives consciousness in the same way that a cable box receives satellite signals, then of course consciousness does not end with the death of the physical vehicle. In fact, consciousness exists outside the constraints of time and space. It is able to be anywhere: in the human body and outside of it. In other words, it is non-local, in the same sense that quantum objects are non-local. Lanza also believes that multiple universes can exist simultaneously. In one universe the body may be dead while in another it continues to exist, absorbing the consciousness that migrated into that universe. This means that a dead person, travelling through the same tunnel, ends up not in hell or in heaven but in a world similar to the one he or she inhabited, this time alive. And so on, infinitely, almost like a cosmic afterlife effect.

Multiple worlds

It is not only mere mortals who want to live forever; some renowned scientists share Lanza's view. They are the physicists and astrophysicists who tend to accept the existence of parallel worlds and who suggest the possibility of multiple universes. The multiverse (multi-universe) is the scientific concept they defend: they believe that no physical laws exist which would prohibit the existence of parallel worlds.

The first to speak of this was the science fiction writer H.G. Wells, in 1895, with the story "The Door in the Wall". Sixty-two years later the idea was developed by Dr. Hugh Everett in his graduate thesis at Princeton University. It basically postulates that at any given moment the universe divides into countless similar instances, and in the next moment these "newborn" universes split in a similar fashion. In some of these worlds we may be present: reading this article in one universe and watching TV in another. In the 1980s Andrei Linde, a scientist at the Lebedev Institute of Physics, developed the theory of multiple universes. Now a professor at Stanford University, Linde explains: space consists of many inflating spheres that give rise to similar spheres, and those, in turn, produce spheres in even greater numbers, and so on to infinity. In the universe they are spaced apart; they are not aware of each other's existence, but they represent parts of the same physical universe. The physicist Laura Mersini-Houghton of the University of North Carolina and her colleagues argue that the anomalies in the cosmic background exist because our universe is influenced by other universes existing nearby, and that holes and gaps are a direct result of attacks on us by neighbouring universes.

Soul

So there is an abundance of places, or other universes, to which our soul could migrate after death, according to the theory of neo-biocentrism. But does the soul exist? Is there any scientific theory of consciousness that could accommodate such a claim? According to Dr. Stuart Hameroff, a near-death experience happens when the quantum information that inhabits the nervous system leaves the body and dissipates into the universe. Contrary to what materialists hold, Dr. Hameroff offers an alternative explanation of consciousness that can perhaps appeal to the rational scientific mind as well as to personal intuitions. Consciousness resides, according to Hameroff and the British physicist Sir Roger Penrose, in the microtubules of brain cells, which are the primary sites of quantum processing. Upon death this information is released from the body, meaning that your consciousness goes with it. They have argued that our experience of consciousness is the result of quantum-gravity effects in these microtubules, a theory they dubbed Orchestrated Objective Reduction (Orch-OR). Consciousness, or at least proto-consciousness, is theorized by them to be a fundamental property of the universe, present even at the first moment of the universe, during the Big Bang. "In one such scheme proto-conscious experience is a basic property of physical reality accessible to a quantum process associated with brain activity." Our souls are in fact constructed from the very fabric of the universe and may have existed since the beginning of time. Our brains are just receivers and amplifiers for the proto-consciousness that is intrinsic to the fabric of space-time. So there really is a part of your consciousness that is non-material and will live on after the death of your physical body.

Dr. Hameroff told the Science Channel's Through the Wormhole documentary: "Let's say the heart stops beating, the blood stops flowing and the microtubules lose their quantum state. The quantum information within the microtubules is not destroyed, it can't be destroyed, it just distributes and dissipates to the universe at large." Robert Lanza adds that not only does this information exist in our universe, it perhaps exists in another universe. If the patient is resuscitated, this quantum information can go back into the microtubules and the patient says, "I had a near-death experience." He adds: "If they're not revived, and the patient dies, it's possible that this quantum information can exist outside the body, perhaps indefinitely, as a soul." This account of quantum consciousness explains things like near-death experiences, astral projection, out-of-body experiences and even reincarnation without the need to appeal to religious ideology. The energy of your consciousness potentially gets recycled back into a different body at some point, and in the meantime it exists outside the physical body on some other level of reality, and possibly in another universe.

And you, what do you think? Do you agree with Lanza?

A big hug!

Suggested by: Pedro Lopes Martins. The article was originally published in English on the site SPIRIT SCIENCE AND METAPHYSICS.

*   *   *

Scientists Claim That Quantum Theory Proves Consciousness Moves To Another Universe At Death

STEVEN BANCARZ, JANUARY 7, 2014

A book titled “Biocentrism: How Life and Consciousness Are the Keys to Understanding the Nature of the Universe“ has stirred up the Internet, because it contained a notion that life does not end when the body dies, and it can last forever. The author of this publication, scientist Dr. Robert Lanza who was voted the 3rd most important scientist alive by the NY Times, has no doubts that this is possible.

Lanza is an expert in regenerative medicine and scientific director of Advanced Cell Technology Company. He had previously been known for his extensive research on stem cells, and also for several successful experiments on cloning endangered animal species. But not so long ago, the scientist became involved with physics, quantum mechanics and astrophysics. This explosive mixture has given birth to the new theory of biocentrism, which the professor has been preaching ever since. Biocentrism teaches that life and consciousness are fundamental to the universe. It is consciousness that creates the material universe, not the other way around.

Lanza points to the structure of the universe itself, and to the fact that the laws, forces, and constants of the universe appear to be fine-tuned for life, implying intelligence existed prior to matter. He also claims that space and time are not objects or things, but rather tools of our animal understanding. Lanza says that we carry space and time around with us "like turtles with shells," meaning that when the shell comes off (space and time), we still exist.

The theory implies that death of consciousness simply does not exist. It only exists as a thought because people identify themselves with their body. They believe that the body is going to perish, sooner or later, thinking their consciousness will disappear too. If the body generates consciousness, then consciousness dies when the body dies. But if the body receives consciousness in the same way that a cable box receives satellite signals, then of course consciousness does not end at the death of the physical vehicle. In fact, consciousness exists outside of the constraints of time and space. It is able to be anywhere: in the human body and outside of it. In other words, it is non-local in the same sense that quantum objects are non-local.

Lanza also believes that multiple universes can exist simultaneously. In one universe, the body can be dead, and in another it continues to exist, absorbing the consciousness which migrated into this universe. This means that a dead person, while traveling through the same tunnel, ends up not in hell or in heaven, but in a similar world he or she once inhabited, but this time alive. And so on, infinitely. It's almost like a cosmic Russian doll afterlife effect.

Multiple worlds

This hope-instilling, but extremely controversial theory by Lanza has many unwitting supporters, not just mere mortals who want to live forever, but also some well-known scientists. These are the physicists and astrophysicists who tend to agree with the existence of parallel worlds and who suggest the possibility of multiple universes. The multiverse (multi-universe) is the so-called scientific concept which they defend. They believe that no physical laws exist which would prohibit the existence of parallel worlds.

The first was the science fiction writer H.G. Wells, who raised the idea in 1895 in his story "The Door in the Wall". After 62 years, the idea was developed by Dr. Hugh Everett in his graduate thesis at Princeton University. It basically posits that at any given moment the universe divides into countless similar instances, and the next moment these "newborn" universes split in a similar fashion. In some of these worlds you may be present: reading this article in one universe, or watching TV in another. The triggering factor for these multiplying worlds is our actions, explained Everett. If we make some choices, instantly one universe splits into two with different versions of outcomes.

In the 1980s, Andrei Linde, a scientist from the Lebedev Institute of Physics, developed the theory of multiple universes. He is now a professor at Stanford University. Linde explained: space consists of many inflating spheres, which give rise to similar spheres, and those, in turn, produce spheres in even greater numbers, and so on to infinity. In the universe, they are spaced apart. They are not aware of each other's existence. But they represent parts of the same physical universe.

The fact that our universe is not alone is supported by data received from the Planck space telescope. Using the data, scientists have created the most accurate map of the microwave background, the so-called cosmic relic background radiation, which has remained since the inception of our universe. They also found that the universe has a lot of dark recesses represented by some holes and extensive gaps. Theoretical physicist Laura Mersini-Houghton from the University of North Carolina and her colleagues argue: the anomalies of the microwave background exist due to the fact that our universe is influenced by other universes existing nearby, and holes and gaps are a direct result of attacks on us by neighboring universes.

Soul

So, there is an abundance of places or other universes where our soul could migrate after death, according to the theory of neo-biocentrism. But does the soul exist? Is there any scientific theory of consciousness that could accommodate such a claim? According to Dr. Stuart Hameroff, a near-death experience happens when the quantum information that inhabits the nervous system leaves the body and dissipates into the universe. Contrary to materialistic accounts of consciousness, Dr. Hameroff offers an alternative explanation of consciousness that can perhaps appeal to both the rational scientific mind and personal intuitions.

Consciousness resides, according to Hameroff and British physicist Sir Roger Penrose, in the microtubules of the brain cells, which are the primary sites of quantum processing. Upon death, this information is released from your body, meaning that your consciousness goes with it. They have argued that our experience of consciousness is the result of quantum gravity effects in these microtubules, a theory which they dubbed orchestrated objective reduction (Orch-OR). Consciousness, or at least proto-consciousness, is theorized by them to be a fundamental property of the universe, present even at the first moment of the universe during the Big Bang. "In one such scheme proto-conscious experience is a basic property of physical reality accessible to a quantum process associated with brain activity." Our souls are in fact constructed from the very fabric of the universe, and may have existed since the beginning of time. Our brains are just receivers and amplifiers for the proto-consciousness that is intrinsic to the fabric of space-time. So is there really a part of your consciousness that is non-material and will live on after the death of your physical body?

Dr. Hameroff told the Science Channel's Through the Wormhole documentary: "Let's say the heart stops beating, the blood stops flowing, the microtubules lose their quantum state. The quantum information within the microtubules is not destroyed, it can't be destroyed, it just distributes and dissipates to the universe at large". Robert Lanza would add here that not only does it exist in the universe, it exists perhaps in another universe. If the patient is resuscitated, revived, this quantum information can go back into the microtubules and the patient says, "I had a near-death experience."

He adds: “If they’re not revived, and the patient dies, it’s possible that this quantum information can exist outside the body, perhaps indefinitely, as a soul.”

This account of quantum consciousness explains things like near-death experiences, astral projection, out of body experiences, and even reincarnation without needing to appeal to religious ideology.  The energy of your consciousness potentially gets recycled back into a different body at some point, and in the mean time it exists outside of the physical body on some other level of reality, and possibly in another universe. Robert Lanza on Biocentrism:

Sources: http://www.learning-mind.com/quantum-theory-proves-that-consciousness-moves-to-another-universe-after-death/ http://en.wikipedia.org/wiki/Biocentric_universe http://www.dailymail.co.uk/sciencetech/article-2225190/Can-quantum-physics-explain-bizarre-experiences-patients-brought-brink-death.html#axzz2JyudSqhB http://www.news.com.au/news/quantum-scientists-offer-proof-soul-exists/story-fnenjnc3-1226507686757 http://www.psychologytoday.com/blog/biocentrism/201112/does-the-soul-exist-evidence-says-yes http://www.hameroff.com/penrose-hameroff/fundamentality.html

– See more at: http://www.spiritscienceandmetaphysics.com/scientists-claim-that-quantum-theory-proves-consciousness-moves-to-another-universe-at-death/

Physicists, alchemists, and ayahuasca shamans: A study of grammar and the body (Cultural Admixtures)


Are there any common denominators that may underlie the practices of leading physicists and scientists, Renaissance alchemists, and indigenous Amazonian ayahuasca healers? There are obviously a myriad of things that these practices do not have in common. Yet through an analysis of the body and the senses, and of styles of grammar and social practice, these seemingly very different modes of existence may be triangulated to reveal a curious set of logics at play. Ways in which practitioners identify their subjectivities (or 'self') with nonhuman entities and 'natural' processes are detailed in the three contexts. A logic of identification illustrates similarities, and also differences, in the practices of advanced physics, Renaissance alchemy, and ayahuasca healing.

Physics and the “I” and “You” of experimentation


A small group of physicists at a leading American university in the early 1990s is investigating magnetic temporality and atomic spins in a crystalline lattice, undertaking experiments within the field of condensed matter physics. The scientists collaborate, presenting experimental or theoretical findings on blackboards, overhead projectors, printed pages and various other forms of visual media. Miguel, a researcher, describes to a colleague the experiments he has just conducted. He points down and then up across a visual representation of the experiment while describing an aspect of the experiment, "We lowered the field [and] raised the field". In response, his collaborator Ron replies using what is a common type of informal scientific language. The language-style identifies, conflates, or brings together the researcher with the object being researched. In the following reply, the pronoun 'he' refers to both Miguel and the object or process under investigation: Ron asks, "Is there a possibility that he hasn't seen anything real? I mean is there a [he points to the diagram]". Miguel sharply interjects "I-, i-, it is possible… I am amazed by his measurement because when I come down I'm in the domain state". Here Miguel is referring to a physical process of temperature change; a cooling that moves 'down' to the 'domain state'. Ron replies, "You quench from five to two tesla, a magnet, a superconducting magnet". What is central here, with regard to the common denominators explored in this paper, is the way in which the scientists collaborate with certain figurative styles of language that blur the borders between physicist and physical process or state.

The collaboration between Miguel and Ron was filmed and examined by linguistic ethnographers Elinor Ochs, Sally Jacoby, and Patrick Gonzales (1994, 1996:328). In the experiment, the physicists, Ochs et al. illustrate, refer to 'themselves as the thematic agents and experiencers of [the physical] phenomena' (Ochs et al. 1996:335). By employing the pronouns 'you', 'he', and 'I' to refer to the physical processes and states under investigation, the physicists identify their own subjectivities, bodies, and investigations with the objects they are studying.

In the physics laboratory, members are trying to understand physical worlds that are not directly accessible by any of their perceptual abilities. To bridge this gap, it seems, they take embodied interpretive journeys across and through see-able, touchable two-dimensional artefacts that conventionally symbolize those worlds… Their sensory-motor gesturing is a means not only of representing (possible) worlds but also of imagining or vicariously experiencing them… Through verbal and gestural (re)enactments of constructed physical processes, physicist and physical entity are conjoined in simultaneous, multiple constructed worlds: the here-and-now interaction, the visual representation, and the represented physical process. The indeterminate grammatical constructions, along with gestural journeys through visual displays, constitute physicist and physical entity as coexperiencers of dynamic processes and, therefore, as coreferents of the personal pronoun. (Ochs et al 1994:163,164)

When Miguel says "I am in the domain state" he is using a type of 'private, informal scientific discourse' that has been observed in many other types of scientific practice (Latour & Woolgar 1987; Gilbert & Mulkay 1984). This style of erudition and scientific collaboration has evidently become established in state-of-the-art universities given the utility it provides with regard to empirical problems and the development of scientific ideas.

What could this style of practice have in common with the healing practices of Amazonian shamans drinking the powerful psychoactive brew ayahuasca? Before moving on to an analysis of grammar and the body in types of ayahuasca use, the practice of Renaissance alchemy is introduced given the bridge or resemblance it offers between these scientific practices and certain notions of healing.

Renaissance alchemy, “As above so below”


Heinrich Khunrath: 1595 engraving Amphitheatre

Graduating from the Basel Medical Academy in 1588, the physician Heinrich Khunrath defended a thesis concerning a particular development of the relationship between alchemy and medicine. Inspired by the works of key figures in Roman and Greek medicine, by alchemists and practitioners of the hermetic arts, and by botanists, philosophers and others, Khunrath went on to produce innovative and influential texts and illustrations that informed various trajectories in medical and occult practice.

Alchemy flourished in the Renaissance period and was drawn upon by elites such as Queen Elizabeth I and the Holy Roman Emperor Rudolf II. Central to the practices of Renaissance alchemists was a belief that all metals sprang from one source deep within the earth, that this process could be reversed, and that every metal could potentially be turned into gold. The process of 'transmutation' or reversal of nature, it was claimed, could also lead to the elixir of life, the philosopher's stone, or eternal youth and immortality. It was a spiritual pursuit of purification and regeneration which depended heavily on natural science experimentation.

Alchemical experiments were typically undertaken in a laboratory and alchemists were often contracted by elites for pragmatic purposes related to mining, medical services, and the production of chemicals, metals, and gemstones (Nummedal 2007). Allison Coudert describes and distills the practice of Renaissance alchemy with a basic overview of the relationship between an alchemist and the ‘natural entities’ of his practice.

All the ingredients mentioned in alchemical recipes—the minerals, metals, acids, compounds, and mixtures—were in truth only one, the alchemist himself. He was the base matter in need of purification from the fire; and the acid needed to accomplish this transformation came from his own spiritual malaise and longing for wholeness and peace. The various alchemical processes… were steps in the mysterious process of spiritual regeneration. (cited in Hanegraaff 1996:395)

The physician-alchemist Khunrath worked within a laboratory/oratory that included various alchemical apparatuses, including 'smelting equipment for the extraction of metal from ore… glass vessels, ovens… [a] furnace or athanor… [and] a mirror'. Khunrath spoke of using the mirror as a 'physico-magical instrument for setting a coal or lamp-fire alight by the heat of the sun' (Forshaw 2005:205). Urszula Szulakowska argues that this use of the mirror embodies the general alchemical process and purpose of Khunrath's practice. The functions of his practice and his alchemical illustrations and glyphs (such as his engraving Amphitheatre above) are aimed towards various outcomes of transmutation or reversal of nature. Khunrath's engravings and illustrations, Szulakowska (2000:9) argues:

are intended to excite the imagination of the viewer so that a mystic alchemy can take place through the act of visual contemplation… Khunrath’s theatre of images, like a mirror, catoptrically reflects the celestial spheres to the human mind, awakening the empathetic faculty of the human spirit which unites, through the imagination, with the heavenly realms. Thus, the visual imagery of Khunrath’s treatises has become the alchemical quintessence, the spiritualized matter of the philosopher’s stone.

Khunrath called himself a ‘lover of both medicines’, referring to the inseparability of material and spiritual forms of medicine.  Illustrating the centrality of alchemical practice in his medical approach, he described his ‘down-to-earth Physical-Chemistry of Nature’ as:

[T]he art of chemically dissolving, purifying and rightly reuniting Physical Things by Nature’s method; the Universal (Macro-Cosmically, the Philosopher’s Stone; Micro-Cosmically, the parts of the human body…) and ALL the particulars of the inferior globe. (cited in Forshaw 2005:205).

In Renaissance alchemy, a certain kind of laboratory and visionary mixing takes place between the human body and temperaments and the 'entities' and processes of the natural world. This is condensed in the hermetic dictum "As above, so below", in which the signatures of nature ('above') may be found in the human body ('below'). The experiments involved particular practices of perception, contemplation, and language, undertaken in laboratory settings.

The practice of Renaissance alchemy, illustrated in recipes, glyphs, and instructional texts, includes styles of grammar in which minerals, metals, and other natural entities are animated with subjectivity and human temperaments. Lead “wants” or “desires” to transmute into gold; antimony feels a wilful “attraction” to silver (Kaiser 2010; Waite 1894). This form of grammar is entailed in the doctrine of medico-alchemical practice described by Khunrath above. Under certain circumstances and conditions, minerals, metals, and other natural entities may embody aspects of ‘Yourself’, or the subjectivity of the alchemist, and vice versa.

Renaissance alchemical language and practice bears a certain level of resemblance to the contemporary practices of physicists and scientists and the ways in which they identify themselves with the objects and processes of their experiments. The methods of physicists appear to differ considerably insofar as they use metaphors and trade spiritual for figurative approaches when 'journeying through' cognitive tasks, embodied gestures, and visual representations of empirical or natural processes. It is no coincidence that contemporary state-of-the-art scientists are employing forms of alchemical language and practice in advanced types of experimentation. Alchemical and hermetic thought and practice were highly influential in the emergence of modern forms of science (Moran 2006; Newman 2006; Hanegraaff 2013).

Ayahuasca shamanism and shapeshifting


Pablo Amaringo

In the Amazon jungle a radically different type of practice to the Renaissance alchemical traditions exists. Yet, as we will see, the practices of indigenous Amazonian shamans and Renaissance alchemists appear to include certain similarities — particularly in terms of the way in which ‘natural entities’ and the subjectivity of the practitioner may merge or swap positions — this is evidenced in the grammar and language of shamanic healing songs and in Amazonian cosmologies more generally.

In the late 1980s, Cambridge anthropologist Graham Townsley was undertaking PhD fieldwork with the indigenous Amazonian Yaminahua on the Yurua river. His research was focused on ways in which forms of social organisation are embedded in cosmology and the practice of everyday life. Yaminahua healing practices are embedded in broad animistic cosmological frames and at the centre of these healing practices is song. ‘What Yaminahua shamans do, above everything else, is sing’, Townsley explains, and this ritual singing is typically done while under the effects of the psychoactive concoction ayahuasca.

The psychoactive drink provides shamans with a means of drawing upon the healing assistance of benevolent spirit persons of the natural world (such as plant-persons, animal-persons, sun-persons etc.) and of banishing malevolent spirit persons that are affecting the wellbeing of a patient. The Yaminahua practice of ayahuasca shamanism resembles broader types of Amazonian shamanism. Shapeshifting, or the metamorphosis of human persons into nonhuman persons (such as jaguar-persons and anaconda-persons) is central to understandings of illness and to practices of healing in various types of Amazonian shamanism (Chaumeil 1992; Praet 2009; Riviere 1994).

The grammatical styles and sensory experiences of indigenous ayahuasca curing rituals and songs bear some similarities with the logic of identification noted in the sections on physics and alchemy above. Townsley (1993) describes a Yaminahua ritual where a shaman attempts to heal a patient who was still bleeding several days after giving birth. The healing songs that the shaman sings (called wai, a word that also means 'path' and 'myth', or abodes of the spirits) make very little reference to the illness they are meant to heal. The shaman's songs do not communicate meanings to the patient, but they embody complex metaphors and analogies, or what the Yaminahua call 'twisted language', a language only comprehensible to shamans. There are 'perceptual resemblances' that inform the logic of Yaminahua twisted language. For example, "white-collared peccaries" stands for fish, given the similarities between the gills of the fish and the designs on the peccary's neck. The use of visual or sensory resonance in shamanic song metaphors is not arbitrary but central to the practice of Yaminahua ayahuasca healing.

Ayahuasca typically produces a powerful visionary experience. The shaman’s use of complex metaphors in ritual song helps him shape his visions and bring a level of control to the visionary content. Resembling the common denominators and logic of identification explored above, the songs allow the shaman to perceive from the various perspectives that the meanings of the metaphors (or the spirits) afford.

Everything said about shamanic songs points to the fact that as they are sung the shaman actively visualizes the images referred to by the external analogy of the song, but he does this through a carefully controlled “seeing as” the different things actually named by the internal metaphors of his song. This “seeing as” in some way creates a space in which powerful visionary experience can occur. (Townsley 1993:460)

The use of analogies and metaphors provides a particularly powerful means of navigating the visionary experience of ayahuasca. There appears to be a kind of pragmatics involved in the use of metaphor over literal meanings. For instance, a shaman states, “twisted language brings me close but not too close [to the meanings of the metaphors]–with normal words I would crash into things–with twisted ones I circle around them–I can see them clearly” (Townsley 1993:460). Through this method of “seeing as”, the shaman embodies a variety of animal and nature spirits, or yoshi in Yaminahua, including anaconda-yoshi, jaguar-yoshi and solar or sun-yoshi, in order to perform acts of healing and various other shamanic activities.

While Yaminahua shamans use metaphors to control visions and shapeshift (or "see as"), they, and Amazonians more generally, reportedly understand shapeshifting in literal terms. For example, Lenaerts describes this notion of 'seeing like the spirits', and the 'physical' or literal view that the Ashéninka hold with regard to the practice of ayahuasca-induced shapeshifting.

What is at stake here is a temporary bodily process, whereby a human being assumes the embodied point of view of another species… There is no need to appeal to any sort of metaphoric sense here. A literal interpretation of this process of disembodiment/re-embodiment is absolutely consistent with all what an Ashéninka knowns and directly feels during this experience, in a quite physical sense. (2006, 13)

The practices of indigenous ayahuasca shamans are centred on an ability to shapeshift and ‘see nonhumans as they [nonhumans] see themselves’ (Viveiros de Castro 2004:468). Practitioners not only identify with nonhuman persons or ‘natural entities’ but they embody their point of view with the help of psychoactive plants and  ‘twisted language’ in song.

Some final thoughts

Through a brief exploration of techniques employed by advanced physicists, Renaissance alchemists, and Amazonian ayahuasca shamans, a logic of identification may be observed in which practitioners embody different means of transcending themselves and becoming the objects or spirits of their respective practices. While the physicists tend to embody secular principles and relate to this logic of identification in a purely figurative or metaphorical sense, Renaissance alchemists and Amazonian shamans embody epistemological stances that afford much more weight to the existential qualities and ‘persons’ or ‘spirits’ of their respective practices. A cognitive value in employing forms of language and sensory experience that momentarily take the practitioner beyond him or herself is evidenced by these three different practices. However, there is arguably more at stake here than values confined to cogito. The boundaries of bodies, subjectivities and humanness in each of these practices become porous, blurred, and are transcended while the contours of various forms of possibility are exposed, defined, and acted upon — possibilities that inform the outcomes of the practices and the definitions of the human they imply.

References

Chaumeil, Jean-Pierre 1992, 'Varieties of Amazonian shamanism'. Diogenes, Vol. 158, p. 101
Forshaw, P. 2008, '"Paradoxes, Absurdities, and Madness": Conflicts over Alchemy, Magic and Medicine in the Works of Andreas Libavius and Heinrich Khunrath'. Early Science and Medicine, Vol. 1, p. 53
Forshaw, P. 2006, 'Alchemy in the Amphitheatre: Some considerations of the alchemical content of the engravings in Heinrich Khunrath's Amphitheatre of Eternal Wisdom', in Jacob Wamberg (ed.) Art and Alchemy, pp. 195-221
Gilbert, G. N. & Mulkay, M. 1984, Opening Pandora's Box: A sociological analysis of scientists' discourse. Cambridge, Cambridge University Press
Hanegraaff, W. 2012, Esotericism and the Academy: Rejected knowledge in Western culture. Cambridge, Cambridge University Press
Hanegraaff, W. 1996, New Age Religion and Western Culture: Esotericism in the Mirror of Secular Thought. New York, SUNY Press
Latour, B. & Woolgar, S. 1987, Laboratory Life: The social construction of scientific facts. Cambridge, Harvard University Press
Lenaerts, M. 2006, 'Substance, relationships and the omnipresence of the body: an overview of Ashéninka ethnomedicine (Western Amazonia)'. Journal of Ethnobiology and Ethnomedicine, Vol. 2 (1), 49. http://www.ethnobiomed.com/content/2/1/49
Moran, B. 2006, Distilling Knowledge: Alchemy, Chemistry, and the Scientific Revolution. Harvard, Harvard University Press
Newman, W. 2006, Atoms and Alchemy: Chymistry and the Experimental Origins of the Scientific Revolution. Chicago, Chicago University Press
Nummedal, T. 2007, Alchemy and Authority in the Holy Roman Empire. Chicago, Chicago University Press
Ochs, E., Gonzales, P. & Jacoby, S. 1996, '"When I come down I'm in the domain state": grammar and graphic representation in the interpretive activities of physicists', in Ochs, E., Schegloff, E. & Thompson, S. (eds) Interaction and Grammar. Cambridge, Cambridge University Press
Ochs, E., Gonzales, P. & Jacoby, S. 1994, 'Interpretive Journeys: How Physicists Talk and Travel through Graphic Space'. Configurations, (1), p. 151
Praet, I. 2009, 'Shamanism and ritual in South America: an inquiry into Amerindian shape-shifting'. Journal of the Royal Anthropological Institute, Vol. 15, pp. 737-754
Riviere, P. 1994, 'WYSINWYG in Amazonia'. Journal of the Anthropological Society of Oxford, Vol. 25
Szulakowska, U. 2000, The Alchemy of Light: Geometry and Optics in Late Renaissance Alchemical Illustration. Leiden, Brill Press
Townsley, G. 1993, 'Song Paths: The ways and means of Yaminahua shamanic knowledge'. L'Homme, Vol. 33, p. 449
Viveiros de Castro, E. 2004, 'Exchanging perspectives: The Transformation of Objects into Subjects in Amerindian Ontologies'. Common Knowledge, Vol. 10 (3), pp. 463-484
Waite, A. 1894, The Hermetic and Alchemical Writings of Aureolus Philippus Theophrastus Bombast, of Hohenheim, called Paracelsus the Great. Cornell University Library, ebook

Quantum physics enables revolutionary imaging method (Science Daily)

Date: August 28, 2014

Source: University of Vienna

Summary: Researchers have developed a fundamentally new quantum imaging technique with strikingly counter-intuitive features. For the first time, an image has been obtained without ever detecting the light that was used to illuminate the imaged object, while the light revealing the image never touches the imaged object.

A new quantum imaging technique generates images with photons that have never touched the object — in this case a sketch of a cat. This alludes to the famous Schrödinger's cat paradox, in which a cat inside a closed box is said to be simultaneously dead and alive as long as there is no information outside the box to rule out one option over the other. Similarly, the new imaging technique relies on a lack of information regarding where the photons are created and which path they take. Credit: Copyright: Patricia Enigl, IQOQI

Researchers from the Institute for Quantum Optics and Quantum Information (IQOQI), the Vienna Center for Quantum Science and Technology (VCQ), and the University of Vienna have developed a fundamentally new quantum imaging technique with strikingly counterintuitive features. For the first time, an image has been obtained without ever detecting the light that was used to illuminate the imaged object, while the light revealing the image never touches the imaged object.

In general, to obtain an image of an object one has to illuminate it with a light beam and use a camera to sense the light that is either scattered or transmitted through that object. The type of light used to shine onto the object depends on the properties that one would like to image. Unfortunately, in many practical situations the ideal type of light for the illumination of the object is one for which cameras do not exist.

The experiment published in Nature this week for the first time breaks this seemingly self-evident limitation. The object (e.g. the contour of a cat) is illuminated with light that remains undetected. Moreover, the light that forms an image of the cat on the camera never interacts with it. In order to realise their experiment, the scientists use so-called “entangled” pairs of photons. These pairs of photons — which are like interlinked twins — are created when a laser interacts with a non-linear crystal. In the experiment, the laser illuminates two separate crystals, creating one pair of twin photons (consisting of one infrared photon and a “sister” red photon) in either crystal. The object is placed in between the two crystals. The arrangement is such that if a photon pair is created in the first crystal, only the infrared photon passes through the imaged object. Its path then goes through the second crystal where it fully combines with any infrared photons that would be created there.

With this crucial step, there is now, in principle, no possibility to find out which crystal actually created the photon pair. Moreover, there is now no information in the infrared photon about the object. However, due to the quantum correlations of the entangled pairs the information about the object is now contained in the red photons — although they never touched the object. Bringing together both paths of the red photons (from the first and the second crystal) creates bright and dark patterns, which form the exact image of the object.
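In outline (a simplified sketch in the spirit of the induced-coherence argument behind the experiment, written in my own notation rather than the paper's), write the object's amplitude transmission as $T = \lvert T\rvert e^{i\gamma}$. After the two crystals the photon pair is in a superposition of "born in crystal 1" and "born in crystal 2", and because the infrared (idler) paths are overlapped the two terms interfere in the red (signal) photon alone:

$$
|\psi\rangle \propto |r_1\rangle|i\rangle + T\,e^{i\phi}\,|r_2\rangle|i\rangle
\quad\Longrightarrow\quad
R_{\text{red}}(\phi) \propto 1 + \lvert T\rvert \cos(\phi + \gamma),
$$

so the red-photon count rate shows fringes whose visibility is set by the object's transmission $\lvert T\rvert$ and whose position encodes its phase $\gamma$, even though no detected photon ever touched the object.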

Stunningly, all of the infrared photons (the only light that illuminated the object) are discarded; the picture is obtained by only detecting the red photons that never interacted with the object. The camera used in the experiment is even blind to the infrared photons that have interacted with the object. In fact, very low light infrared cameras are essentially unavailable on the commercial market. The researchers are confident that their new imaging concept is very versatile and could even enable imaging in the important mid-infrared region. It could find applications where low light imaging is crucial, in fields such as biological or medical imaging.

 

Journal Reference:

  1. Gabriela Barreto Lemos, Victoria Borish, Garrett D. Cole, Sven Ramelow, Radek Lapkiewicz, Anton Zeilinger. Quantum imaging with undetected photons. Nature, 2014; 512 (7515): 409. DOI: 10.1038/nature13586

The Quantum Cheshire Cat: Can neutrons be located at a different place than their own spin? (Science Daily)

Date: July 29, 2014

Source: Vienna University of Technology, TU Vienna

Summary: Can neutrons be located at a different place than their own spin? A quantum experiment demonstrates a new kind of quantum paradox. The Cheshire Cat featured in Lewis Carroll's novel "Alice in Wonderland" is a remarkable creature: it disappears, leaving its grin behind. Can an object be separated from its properties? It is possible in the quantum world. In an experiment, neutrons travel along a different path than one of their properties — their magnetic moment. This "Quantum Cheshire Cat" could be used to make high precision measurements less sensitive to external perturbations.


The basic idea of the Quantum Cheshire Cat: In an interferometer, an object is separated from one of its properties – like a cat, moving on a different path than its own grin. Credit: Image courtesy of Vienna University of Technology, TU Vienna

Can neutrons be located at a different place than their own spin? A quantum experiment, carried out by a team of researchers from the Vienna University of Technology, demonstrates a new kind of quantum paradox.

The Cheshire Cat featured in Lewis Carroll's novel "Alice in Wonderland" is a remarkable creature: it disappears, leaving its grin behind. Can an object be separated from its properties? It is possible in the quantum world. In an experiment, neutrons travel along a different path than one of their properties — their magnetic moment. This "Quantum Cheshire Cat" could be used to make high precision measurements less sensitive to external perturbations.

At Different Places at Once

According to the laws of quantum physics, particles can be in different physical states at the same time. If, for example, a beam of neutrons is divided into two beams using a silicon crystal, it can be shown that the individual neutrons do not have to decide which of the two possible paths they choose. Instead, they can travel along both paths at the same time in a quantum superposition.
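In the usual textbook notation (a generic sketch, not the specific states prepared in this experiment), the neutron inside the interferometer is described by an equal-weight superposition of the two path states, and the count rate behind the recombining crystal oscillates with the relative phase $\chi$ accumulated between them:

$$
|\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|\mathrm{I}\rangle + e^{i\chi}\,|\mathrm{II}\rangle\big),
\qquad
I(\chi) \propto 1 + \cos\chi .
$$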

“This experimental technique is called neutron interferometry,” says Professor Yuji Hasegawa from the Vienna University of Technology. “It was invented here at our institute in the 1970s, and it has turned out to be the perfect tool to investigate fundamental quantum mechanics.”

To see if the same technique could separate the properties of a particle from the particle itself, Yuji Hasegawa brought together a team including Tobias Denkmayr, Hermann Geppert and Stephan Sponar, together with Alexandre Matzkin from CNRS in France, Professor Jeff Tollaksen from Chapman University in California and Hartmut Lemmel from the Institut Laue-Langevin to develop a brand new quantum experiment.

The experiment was done at the neutron source at the Institut Laue-Langevin (ILL) in Grenoble, where a unique kind of measuring station is operated by the Viennese team, supported by Hartmut Lemmel from ILL.

Where is the Cat …?

Neutrons are not electrically charged, but they carry a magnetic moment. They have a magnetic direction, the neutron spin, which can be influenced by external magnetic fields.

First, a neutron beam is split into two parts in a neutron interferometer. Then the spins of the two beams are shifted in different directions: the upper neutron beam has a spin parallel to the neutrons' trajectory, while the spin of the lower beam points in the opposite direction. After the two beams have been recombined, only those neutrons which have a spin parallel to their direction of motion are selected. All the others are just ignored. "This is called postselection," says Hermann Geppert. "The beam contains neutrons of both spin directions, but we only analyse part of the neutrons."

These neutrons, which are found to have a spin parallel to their direction of motion, must clearly have travelled along the upper path — only there do the neutrons have this spin state. This can be shown in the experiment. If the lower beam is sent through a filter which absorbs some of the neutrons, then the number of neutrons with spin parallel to their trajectory stays the same. If the upper beam is sent through a filter, then the number of these neutrons is reduced.

… and Where is the Grin?

Things get tricky when the system is used to measure where the neutron spin is located: the spin can be slightly changed using a magnetic field. When the two beams are recombined appropriately, they can amplify or cancel each other. This is exactly what can be seen in the measurement if the magnetic field is applied to the lower beam — but that is the path which the neutrons considered in the experiment are actually never supposed to take. A magnetic field applied to the upper beam, on the other hand, does not have any effect.

"By preparing the neutrons in a special initial state and then postselecting another state, we can achieve a situation in which both the possible paths in the interferometer are important for the experiment, but in very different ways," says Tobias Denkmayr. "Along one of the paths, the particles themselves couple to our measurement device, but only the other path is sensitive to magnetic spin coupling. The system behaves as if the particles were spatially separated from their properties."
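This separation can be stated compactly in the language of pre- and postselected weak measurements (a schematic summary of the idea, not the exact states of the ILL setup). With a preselected state $|\Psi_i\rangle$ and a postselected state $|\Psi_f\rangle$, the weak value of an observable $A$ is

$$
\langle A \rangle_{w} = \frac{\langle \Psi_{f} | A | \Psi_{i} \rangle}{\langle \Psi_{f} | \Psi_{i} \rangle} ,
$$

and for a suitable choice of the two states the path projectors come out as $\langle \Pi_{\text{upper}}\rangle_w = 1$ and $\langle \Pi_{\text{lower}}\rangle_w = 0$, while the spin-weighted projectors give $\langle \sigma\,\Pi_{\text{upper}}\rangle_w = 0$ and $\langle \sigma\,\Pi_{\text{lower}}\rangle_w = 1$: the neutron registers only on the upper path, its magnetic moment only on the lower one, which is exactly the pattern the filter and magnetic-field tests described above probe.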

High Hopes for High-Precision Measurements

This counter intuitive effect is very interesting for high precision measurements, which are very often based on the principle of quantum interference. “When the quantum system has a property you want to measure and another property which makes the system prone to perturbations, the two can be separated using a Quantum Cheshire Cat, and possibly the perturbation can be minimized,” says Stephan Sponar.

The idea of the Quantum Cheshire Cat was first developed by Prof. Jeff Tollaksen and Prof. Yakir Aharonov from the Chapman University. An experimental proposal was published last year. The measurements which have now been presented are the first experimental proof of this phenomenon.

Journal Reference:

  1. Tobias Denkmayr, Hermann Geppert, Stephan Sponar, Hartmut Lemmel, Alexandre Matzkin, Jeff Tollaksen, Yuji Hasegawa. Observation of a quantum Cheshire Cat in a matter-wave interferometer experiment. Nature Communications, 2014; 5. DOI: 10.1038/ncomms5492

Is the universe a bubble? Let’s check: Making the multiverse hypothesis testable (Science Daily)

Date: July 17, 2014

Source: Perimeter Institute

Summary: Scientists are working to bring the multiverse hypothesis, which to some sounds like a fanciful tale, firmly into the realm of testable science. Never mind the Big Bang; in the beginning was the vacuum. The vacuum simmered with energy (variously called dark energy, vacuum energy, the inflation field, or the Higgs field). Like water in a pot, this high energy began to evaporate — bubbles formed.

Screenshot from a video of Matthew Johnson explaining the related concepts of inflation, eternal inflation, and the multiverse (see http://youtu.be/w0uyR6JPkz4). Credit: Image courtesy of Perimeter Institute

Perimeter Associate Faculty member Matthew Johnson and his colleagues are working to bring the multiverse hypothesis, which to some sounds like a fanciful tale, firmly into the realm of testable science.

Never mind the big bang; in the beginning was the vacuum. The vacuum simmered with energy (variously called dark energy, vacuum energy, the inflation field, or the Higgs field). Like water in a pot, this high energy began to evaporate — bubbles formed.

Each bubble contained another vacuum, whose energy was lower, but still not nothing. This energy drove the bubbles to expand. Inevitably, some bubbles bumped into each other. It’s possible some produced secondary bubbles. Maybe the bubbles were rare and far apart; maybe they were packed close as foam.

But here’s the thing: each of these bubbles was a universe. In this picture, our universe is one bubble in a frothy sea of bubble universes.

That’s the multiverse hypothesis in a bubbly nutshell.

It’s not a bad story. It is, as scientists say, physically motivated — not just made up, but rather arising from what we think we know about cosmic inflation.

Cosmic inflation isn’t universally accepted — most cyclical models of the universe reject the idea. Nevertheless, inflation is a leading theory of the universe’s very early development, and there is some observational evidence to support it.

Inflation holds that in the instant after the big bang, the universe expanded rapidly — so rapidly that an area of space once a nanometer square ended up more than a quarter-billion light years across in just a trillionth of a trillionth of a trillionth of a second. It’s an amazing idea, but it would explain some otherwise puzzling astrophysical observations.
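A rough check of those numbers (round values of my own, not figures from the article): a quarter-billion light years is about $2.4\times10^{24}$ m, so stretching a nanometre to that size is an expansion factor of

$$
\frac{2.4\times10^{24}\ \text{m}}{10^{-9}\ \text{m}} \approx 2\times10^{33} \approx e^{77},
$$

roughly 75 to 80 "e-folds" of expansion, somewhat above the sixty or so e-folds usually quoted as the minimum inflation needs in order to explain the observations it was invented for.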

Inflation is thought to have been driven by an inflation field — which is vacuum energy by another name. Once you postulate that the inflation field exists, it’s hard to avoid an “in the beginning was the vacuum” kind of story. This is where the theory of inflation becomes controversial — when it starts to postulate multiple universes.

Proponents of the multiverse theory argue that it’s the next logical step in the inflation story. Detractors argue that it is not physics, but metaphysics — that it is not science because it cannot be tested. After all, physics lives or dies by data that can be gathered and predictions that can be checked.

That’s where Perimeter Associate Faculty member Matthew Johnson comes in. Working with a small team that also includes Perimeter Faculty member Luis Lehner, Johnson is working to bring the multiverse hypothesis firmly into the realm of testable science.

“That’s what this research program is all about,” he says. “We’re trying to find out what the testable predictions of this picture would be, and then going out and looking for them.”

Specifically, Johnson has been considering the rare cases in which our bubble universe might collide with another bubble universe. He lays out the steps: “We simulate the whole universe. We start with a multiverse that has two bubbles in it, we collide the bubbles on a computer to figure out what happens, and then we stick a virtual observer in various places and ask what that observer would see from there.”

Simulating the whole universe — or more than one — seems like a tall order, but apparently that’s not so.

“Simulating the universe is easy,” says Johnson. Simulations, he explains, are not accounting for every atom, every star, or every galaxy — in fact, they account for none of them.

“We’re simulating things only on the largest scales,” he says. “All I need is gravity and the stuff that makes these bubbles up. We’re now at the point where if you have a favourite model of the multiverse, I can stick it on a computer and tell you what you should see.”

That’s a small step for a computer simulation program, but a giant leap for the field of multiverse cosmology. By producing testable predictions, the multiverse model has crossed the line between appealing story and real science.
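To give a flavour of what "colliding two bubbles on a computer" involves at the very simplest level, the toy below (a 1+1-dimensional scalar field with an asymmetric double-well potential, no gravity, and nothing like the full general-relativistic code the team actually uses) nucleates two bubbles of true vacuum in a false-vacuum background and lets their walls expand and collide:

```python
import numpy as np

# Toy 1+1D scalar-field bubble collision (flat space, no gravity):
# false vacuum near phi = -1, true vacuum near phi = +1.
N, L, dt, steps = 1024, 100.0, 0.02, 3000
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)

def dV(phi, eps=0.1):
    # V(phi) = (phi^2 - 1)^2 / 4 - eps * phi / 2  (asymmetric double well)
    return phi * (phi ** 2 - 1.0) - eps / 2.0

def bubble(center, radius=8.0, width=1.0):
    # Thin-wall bubble profile: roughly +1 inside, -1 outside.
    return np.tanh((radius - np.abs(x - center)) / width)

# Start in the false vacuum and nucleate two well-separated bubbles.
phi = -np.ones(N)
for c in (-20.0, 20.0):
    phi += bubble(c) + 1.0
phi_old = phi.copy()  # zero initial field velocity

def laplacian(f):
    # Periodic second difference.
    return (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f) / dx ** 2

for step in range(steps):
    phi_new = 2.0 * phi - phi_old + dt ** 2 * (laplacian(phi) - dV(phi))
    phi_old, phi = phi, phi_new
    if step % 500 == 0:
        true_fraction = np.mean(phi > 0.0)
        print(f"t = {step * dt:6.1f}   fraction of space in true vacuum = {true_fraction:.2f}")
```

The printed fraction of space in the true vacuum grows as the walls accelerate outward and merge; the real simulations track the analogous process in three dimensions with general relativity included, and then ask what an observer inside one of the bubbles would see.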

In fact, Johnson says, the program has reached the point where it can rule out certain models of the multiverse: “We’re now able to say that some models predict something that we should be able to see, and since we don’t in fact see it, we can rule those models out.”

For instance, collisions of one bubble universe with another would leave what Johnson calls “a disk on the sky” — a circular bruise in the cosmic microwave background. That the search for such a disk has so far come up empty makes certain collision-filled models less likely.

Meanwhile, the team is at work figuring out what other kinds of evidence a bubble collision might leave behind. It’s the first time, the team writes in their paper, that anyone has produced a direct quantitative set of predictions for the observable signatures of bubble collisions. And though none of those signatures has so far been found, some of them are possible to look for.

The real significance of this work is as a proof of principle: it shows that the multiverse can be testable. In other words, if we are living in a bubble universe, we might actually be able to tell.

Video: https://www.youtube.com/watch?v=w0uyR6JPkz4

Journal References:

  1. Matthew C. Johnson, Hiranya V. Peiris, Luis Lehner. Determining the outcome of cosmic bubble collisions in full general relativity. Physical Review D, 2012; 85 (8). DOI: 10.1103/PhysRevD.85.083516
  2. Carroll L. Wainwright, Matthew C. Johnson, Hiranya V. Peiris, Anthony Aguirre, Luis Lehner, Steven L. Liebling. Simulating the universe(s): from cosmic bubble collisions to cosmological observables with numerical relativity. Journal of Cosmology and Astroparticle Physics, 2014; 2014 (03): 030. DOI: 10.1088/1475-7516/2014/03/030
  3. Carroll L. Wainwright, Matthew C. Johnson, Anthony Aguirre, Hiranya V. Peiris. Simulating the universe(s) II: phenomenology of cosmic bubble collisions in full General Relativity. Submitted to arXiv, 2014 [link]
  4. Stephen M. Feeney, Matthew C. Johnson, Jason D. McEwen, Daniel J. Mortlock, Hiranya V. Peiris. Hierarchical Bayesian detection algorithm for early-universe relics in the cosmic microwave background. Physical Review D, 2013; 88 (4). DOI: 10.1103/PhysRevD.88.043012

Experimento demonstra decaimento do bóson de Higgs em componentes da matéria (Fapesp)

Comprovação corrobora a hipótese de que o bóson é o gerador das massas das partículas constituintes da matéria. Descoberta foi anunciada na Nature Physics por grupo com participação brasileira (CMS)
02/07/2014

Por José Tadeu Arantes

Agência FAPESP – O decaimento direto do bóson de Higgs em férmions – corroborando a hipótese de que ele é o gerador das massas das partículas constituintes da matéria – foi comprovado no Grande Colisor de Hádrons (LHC, na sigla em inglês), o gigantesco complexo experimental mantido pela Organização Europeia para a Pesquisa Nuclear (Cern) na fronteira da Suíça com a França.

O anúncio da descoberta acaba de ser publicado na revista Nature Physics pelo grupo de pesquisadores ligado ao detector Solenoide Compacto de Múons (CMS, na sigla em inglês).

Da equipe internacional do CMS, composta por cerca de 4.300 integrantes (entre físicos, engenheiros, técnicos, estudantes e pessoal administrativo), participam dois grupos de cientistas brasileiros: um sediado no Núcleo de Computação Científica (NCC) da Universidade Estadual Paulista (Unesp), em São Paulo, e outro no Centro Brasileiro de Pesquisas Físicas, do Ministério da Ciência, Tecnologia e Inovação (MCTI), e na Universidade do Estado do Rio de Janeiro (Uerj), no Rio de Janeiro.

“O experimento mediu, pela primeira vez, os decaimentos do bóson de Higgs em quarks bottom e léptons tau. E mostrou que eles são consistentes com a hipótese de as massas dessas partículas também serem geradas por meio do mecanismo de Higgs”, disse o físico Sérgio Novaes, professor da Unesp, à Agência FAPESP.

Novaes é líder do grupo da universidade paulista no experimento CMS e pesquisador principal do Projeto Temático “Centro de Pesquisa e Análise de São Paulo” (Sprace), integrado ao CMS e apoiado pela FAPESP.

O novo resultado reforçou a convicção de que o objeto cuja descoberta foi oficialmente anunciada em 4 de julho de 2012 é realmente o bóson de Higgs, a partícula que confere massa às demais partículas, de acordo com o Modelo Padrão, o corpo teórico que descreve os componentes e as interações supostamente fundamentais do mundo material.

“Desde o anúncio oficial da descoberta do bóson de Higgs, muitas evidências foram coletadas, mostrando que a partícula correspondia às predições do Modelo Padrão. Foram, fundamentalmente, estudos envolvendo seu decaimento em outros bósons (partículas responsáveis pelas interações da matéria), como os fótons (bósons da interação eletromagnética) e o W e o Z (bósons da interação fraca)”, disse Novaes.

“Porém, mesmo admitindo que o bóson de Higgs fosse responsável pela geração das massas do W e do Z, não era óbvio que ele devesse gerar também as massas dos férmions (partículas que constituem a matéria, como os quarks, que compõem os prótons e os nêutrons; e os léptons, como o elétron e outros), porque o mecanismo é um pouco diferente, envolvendo o chamado ‘acoplamento de Yukawa’ entre essas partículas e o campo de Higgs”, prosseguiu.

Os pesquisadores buscavam uma evidência direta de que o decaimento do bóson de Higgs nesses campos de matéria obedeceria à receita do Modelo Padrão. Porém, essa não era uma tarefa fácil, porque, exatamente pelo fato de conferir massa, o Higgs tem a tendência de decair nas partículas mais massivas, como os bósons W e Z, por exemplo, que possuem massas cerca de 80 e 90 vezes superiores à do próton, respectivamente.

“Além disso, havia outros complicadores. No caso particular do quark bottom, por exemplo, um par bottom-antibottom pode ser produzido de muitas outras maneiras, além do decaimento do Higgs. Então era preciso filtrar todas essas outras possibilidades. E, no caso do lépton tau, a probabilidade de decaimento do Higgs nele é muito pequena”, contou Novaes.

“Para se ter ideia, a cada trilhão de colisões realizadas no LHC, existe um evento com bóson de Higgs. Destes, menos de 10% correspondem ao decaimento do Higgs em um par de taus. Ademais, o par de taus também pode ser produzido de outras maneiras, como, por exemplo, a partir de um fóton, com frequência muito maior”, disse.
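
A título de ilustração, o esboço abaixo em Python usa os números de ordem de grandeza citados na entrevista (um Higgs por trilhão de colisões) e assume uma fração de decaimento em pares de taus de cerca de 6%, valor aproximado e compatível com os “menos de 10%” mencionados:

```python
# Esboço de ordem de grandeza com os números citados acima (valores aproximados,
# apenas para ilustrar a raridade do sinal; não são os números oficiais do CMS).
colisoes = 1e15                    # número hipotético de colisões próton-próton
prob_higgs = 1e-12                 # ~1 bóson de Higgs por trilhão de colisões
razao_tau = 0.06                   # fração de Higgs que decai em par de taus (<10%)

eventos_higgs = colisoes * prob_higgs
eventos_tau = eventos_higgs * razao_tau
print(f"Higgs produzidos: ~{eventos_higgs:.0f}")
print(f"Decaimentos H -> tau tau: ~{eventos_tau:.0f}")
# Resultado: ~1000 Higgs e ~60 pares de taus em 10^15 colisões, antes de
# qualquer filtragem do enorme fundo de pares de taus de outras origens.
```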

Para comprovar com segurança o decaimento do bóson de Higgs no quark bottom e no lépton tau, a equipe do CMS precisou coletar e processar uma quantidade descomunal de dados. “Por isso nosso artigo na Nature demorou tanto tempo para sair. Foi literalmente mais difícil do que procurar uma agulha no palheiro”, afirmou Novaes.

Mas o interessante, segundo o pesquisador, foi que, mesmo nesses casos, em que se considerava que o Higgs poderia fugir à receita do Modelo Padrão, isso não ocorreu. Os experimentos foram muito coerentes com as predições teóricas.

“É sempre surpreendente verificar o acordo entre o experimento e a teoria. Durante anos, o bóson de Higgs foi considerado apenas um artifício matemático, para dar coerência interna ao Modelo Padrão. Muitos físicos apostavam que ele jamais seria descoberto. Essa partícula foi procurada por quase meio século e acabou sendo admitida pela falta de uma proposta alternativa, capaz de responder por todas as predições, com a mesma margem de acerto. Então, esses resultados que estamos obtendo agora no LHC são realmente espetaculares. A gente costuma se espantar quando a ciência não dá certo. Mas o verdadeiro espanto é quando ela dá certo”, disse Novaes.

“Em 2015, o LHC deverá rodar com o dobro de energia. A expectativa é chegar a 14 teraelétrons-volt (TeV) (14 trilhões de elétrons-volt). Nesse patamar de energia, os feixes de prótons serão acelerados a mais de 99,99% da velocidade da luz. É instigante imaginar o que poderemos descobrir”, afirmou.

O artigo Evidence for the direct decay of the 125 GeV Higgs boson to fermions (doi:10.1038/nphys3005), da colaboração CMS, pode ser lido em http://nature.com/nphys/journal/vaop/ncurrent/full/nphys3005.html

 

GLOSSÁRIO

Modelo Padrão

Modelo elaborado ao longo da segunda metade do século XX, a partir da colaboração de um grande número de físicos de vários países, com alto poder de predição dos eventos que ocorrem no mundo subatômico. Engloba três das quatro interações conhecidas (eletromagnética, fraca e forte), mas não incorpora a interação gravitacional. O Modelo Padrão baseia-se no conceito de partículas elementares, agrupadas em férmions (partículas constituintes da matéria), bósons (partículas mediadoras das interações) e o bóson de Higgs (partícula que confere massa às demais partículas).

Férmions

Assim chamados em homenagem ao físico italiano Enrico Fermi (1901-1954), prêmio Nobel de Física de 1938. Segundo o Modelo Padrão, são as partículas constituintes da matéria. Compõem-se de seis quarks (up, down, charm, strange, top, bottom), seis léptons (elétron, múon, tau, neutrino do elétron, neutrino do múon, neutrino do tau) e suas respectivas antipartículas. Os quarks agrupam-se em tríades para formar os bárions (prótons e nêutrons) e em pares quark-antiquark para formar os mésons. Em conjunto, bárions e mésons constituem os hádrons.

Bósons

Assim chamados em homenagem ao físico indiano Satyendra Nath Bose (1894-1974). Segundo o Modelo Padrão, os bósons vetoriais são as partículas mediadoras das interações. Compõem-se do fóton (mediador da interação eletromagnética); do W+, W− e Z (mediadores da interação fraca); e de oito tipos de glúons (mediadores da interação forte). O gráviton (suposto mediador da interação gravitacional) ainda não foi encontrado nem faz parte do Modelo Padrão.

Bóson de Higgs

Nome em homenagem ao físico britânico Peter Higgs (nascido em 1929). Segundo o Modelo Padrão, é o único bóson elementar escalar (os demais bósons elementares são vetoriais). De forma simplificada, diz-se que é a partícula que confere massa às demais partículas. Foi postulado para explicar por que todas as partículas elementares do Modelo Padrão possuem massa, exceto o fóton e os glúons. Sua massa, de 125 a 127 GeV/c² (gigaelétrons-volt divididos pela velocidade da luz ao quadrado), equivale a aproximadamente 133 a 135 vezes a massa do próton. Sendo uma das partículas mais massivas propostas pelo Modelo Padrão, só pode ser produzido em contextos de altíssima energia (como aqueles que teriam existido logo depois do Big Bang ou os agora alcançados no LHC), decaindo quase imediatamente em partículas de massas menores. Após quase meio século de buscas, desde a postulação teórica em 1964, sua descoberta foi oficialmente anunciada no dia 4 de julho de 2012. O anúncio foi feito, de forma independente, pelas duas principais equipes do LHC, ligadas aos detectores CMS e Atlas. Em reconhecimento à descoberta, a Real Academia Sueca concedeu o Prêmio Nobel de Física de 2013 a Peter Higgs e ao belga François Englert, dois dos propositores da partícula.
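
Um cálculo rápido, assumindo a massa do próton de aproximadamente 0,938 GeV/c², confirma a razão citada acima:

```python
# Conferência rápida da razão citada acima (massa do próton ~0,938 GeV/c²).
massa_higgs_min, massa_higgs_max = 125.0, 127.0   # GeV/c²
massa_proton = 0.938                              # GeV/c²

print(f"{massa_higgs_min / massa_proton:.0f} a {massa_higgs_max / massa_proton:.0f} "
      "vezes a massa do próton")
# Saída: "133 a 135 vezes a massa do próton"
```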

Decaimento

Processo espontâneo por meio do qual uma partícula se transforma em outras, dotadas de massas menores. Se as partículas geradas não são estáveis, o processo de decaimento pode continuar. No caso mencionado no artigo, o decaimento do bóson de Higgs em férmions (especificamente, no quark bottom e no lépton tau) é tomado como evidência de que o Higgs é o gerador das massas dessas partículas.

LHC

O Grande Colisor de Hádrons é o maior e mais sofisticado complexo experimental de que a humanidade já dispôs. Construído pelo Cern ao longo de 10 anos, entre 1998 e 2008, consiste basicamente em um túnel circular de 27 quilômetros de extensão, situado a 175 metros abaixo da superfície do solo, na fronteira entre a França e a Suíça. Nele, feixes de prótons são acelerados em sentidos contrários e levados a colidir em patamares altíssimos de energia, gerando, a cada colisão, outros tipos de partículas, que possibilitam investigar a estrutura da matéria. A expectativa, para 2015, é produzir colisões de 14 TeV (14 trilhões de elétrons-volt), com os prótons movendo-se a mais de 99,99% da velocidade da luz. O LHC é dotado de sete detectores, sendo os dois principais o CMS e o Atlas.

Com a corda no pescoço (Folha de S.Paulo)

São Paulo, domingo, 05 de novembro de 2006

Físico americano revela em livro a celeuma travada nos bastidores da academia em torno da teoria de cordas e argumenta que talvez o Universo não seja elegante, afinal

FLÁVIO DE CARVALHO SERPA
COLABORAÇÃO PARA A FOLHA

Há tempos a comunidade dos físicos está dividida numa guerra surda, abafada pelos muros da academia. Agora, pela primeira vez, dois livros trazem a público os detalhes dessa desavença, que põe em xeque o modo de produzir a ciência moderna, revelando uma doença que pode estar se espalhando por todo o edifício acadêmico.

“The Trouble With Physics” (“A Crise da Física”), livro lançado no mês passado nos EUA e ainda sem tradução no Brasil, do físico teórico Lee Smolin, abre uma discussão que muitos prefeririam manter longe do grande público: está a física moderna completamente estagnada há três décadas?

“A história que vou contar”, escreve Smolin, “pode ser lida como uma tragédia. Para ser claro e antecipar o desfecho: nós fracassamos”, diz ele, invocando o cargo de porta-voz de toda uma geração de cientistas. Pior: a razão da estagnação seria a formação de gangues de cientistas, incluindo as mentes mais brilhantes do mundo, para afastar dos postos acadêmicos os teóricos dissidentes.

Os principais acusados são os físicos adeptos da chamada teoria de cordas, que promete, desde o início da década de 1970, unificar todas as forças e partículas do Universo conhecido. “A teoria de cordas tem uma posição tão dominante na academia”, escreve Smolin, “que é praticamente suicídio de carreira para um jovem teórico não juntar-se à onda”.

Smolin, um polêmico e respeitado físico teórico, com PhD em Harvard e professorado em Yale, não está só. Também o físico matemático Peter Woit disparou contra os físicos das cordas uma acusação pesada que transparece já no título de seu livro: “Not Even Wrong” (“Nem Sequer Errado”). Esse era o pior insulto que o legendário físico Wolfgang Pauli reservava para os trabalhos e teses mal feitas. Afinal, se uma tese fica comprovadamente errada, ela tem o lado positivo de fechar becos sem saída na busca do caminho certo.

Mas o alerta de Smolin não está restrito ao desenvolvimento teórico da física. Para manter privilégios acadêmicos, a comunidade dos teóricos de cordas tomou conta das principais universidades e centros de pesquisas, barrando a carreira de pesquisadores com enfoques alternativos. Smolin, que já namorou a teoria de cordas, produzindo 18 artigos sobre o assunto, emerge na arena científica como uma espécie de mafioso desertor, disparando sua metralhadora giratória.

Modelo Padrão
O mais surpreendente é que a confusão tenha começado logo após décadas de avanços contínuos no século que começa com Einstein e a consolidação da mecânica quântica.

O último capítulo dessa epopéia -e a raiz da bagunça- foi o espetacular sucesso do chamado Modelo Padrão das Forças e Partículas Elementares. Essa formulação, obra de gênios como Richard Feynman, Freeman Dyson, Murray Gell-Mann e outros, teve como canto do cisne a comprovação teórica e experimental da unificação da força fraca com o eletromagnetismo, feita pelos Prêmios Nobel Abdus Salam e Steven Weinberg. A unificação de forças é o santo graal da física desde Johannes Kepler (unificação das órbitas celestes), passando por Isaac Newton (unificação da gravidade e do movimento orbital), James Maxwell (unificação da luz, eletricidade e magnetismo) e Einstein (unificação da energia e matéria).

Mas o portentoso edifício do Modelo Padrão tinha (e tem) graves rachaduras. Apesar de descrever todas as partículas e forças detectadas e previstas com incrível precisão, não incorporava a força da gravidade nem dizia nada sobre a histórica divisão entre os excludentes mundos da relatividade geral e da mecânica quântica.

Mas todos os físicos da área de partículas e altas energias, teóricos e experimentais, mergulharam nas furiosas calculeiras do Modelo Padrão. Absorvidas no que se chama o modo de produção da ciência normal (em oposição aos períodos de erupção revolucionária, como o da relatividade), as mais brilhantes mentes do mundo chegaram a um beco sem saída: quase todas as previsões experimentais do Modelo Padrão foram vitoriosamente testadas. O que fazer depois?

Boas vibrações
É quando emergem as cordas. Em vez de partículas pontuais quase sem dimensão como constituintes básicos da matéria, surge a idéia revolucionária de as entidades elementares serem, na verdade, literalmente cordas unidimensionais. Idênticas às dos violinos (no sentido matemático), só que de dimensões minúsculas (da ordem de um trilhão de vezes menores que um próton) e, mais espantoso, vibrando num Universo com mais do que as três dimensões habituais. Nas últimas formulações, nada menos que 11, incluindo o tempo.

No começo o progresso foi espantoso: a força da gravidade, uma deserdada da mecânica quântica e do Modelo Padrão, emergia naturalmente das harmonias de cordas, como ressuscitando as intuições pitagóricas. Todas as forças e partículas foram descritas matematicamente como formas particulares de oscilação de poucos tipos básicos de corda.

Mas logo as complicações começaram também a brotar descontroladamente das equações. Se o Modelo Padrão exigia 19 constantes, ajustadas na marra pelos teóricos para coincidir com a realidade, os desdobramentos da teoria de cordas passaram a exigir centenas delas.

No princípio, a beleza da teoria de cordas vinha de existir apenas um parâmetro: a tensão da corda. Cada partícula ou força seria apenas uma variação das cordas básicas, mudando sua tensão e seu modo de vibrar. A gravidade, por exemplo, seria uma corda fechada, como um elástico de borracha de prender cédulas. Elétrons seriam cordas oscilando com apenas uma extremidade presa.

Cada ajuste na geometria para tornar a teoria compatível com o Universo observável foi tornando o modelo cada vez mais complicado, de maneira parecida à do modelo cósmico do astrônomo egípcio Ptolomeu, com a adição de ciclos e epiciclos para explicar os movimentos dos planetas.

Macumba
Veio então a explosão final. Logo surgiram cinco alternativas de teorias de cordas. Depois, a conjectura de que existiria uma tal teoria M, que agruparia todas como casos especiais. Finalmente, a teoria de cordas, que prometia simplicidade de beleza tão clara como a célebre E = mc², revelou-se capaz de produzir nada menos que 10⁵⁰⁰ (o número 1 seguido de 500 zeros) soluções possíveis, cada uma delas representando um Universo alternativo, com forças e partículas diferentes. Ou seja, há mais soluções para as contas dos físicos de cordas do que há partículas e átomos no Universo inteiro.
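
A comparação final do parágrafo acima pode ser conferida em poucas linhas de Python; o valor de cerca de 10⁸⁰ átomos no Universo observável é uma estimativa usual de ordem de grandeza, adotada aqui apenas como referência:

```python
# Comparação de ordens de grandeza mencionada no texto. O valor de ~10^80
# átomos no Universo observável é uma estimativa usual, usada aqui só como
# referência de escala.
solucoes = 10 ** 500       # "paisagem" de soluções da teoria de cordas
atomos = 10 ** 80          # ordem de grandeza de átomos no Universo observável

print(f"solucoes / atomos = 10^{len(str(solucoes // atomos)) - 1}")
# Ou seja: cada átomo do Universo poderia "receber" 10^420 soluções diferentes.
```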

Pior, uma parcela mais maluca da comunidade dos teóricos de cordas acha isso muito natural e insinua agora que a necessidade de prova experimental é um ranço arcaico da ciência.

“Vale a pena tentar ensinar mecânica quântica para um cachorro?” -perguntam eles. Seria igualmente inútil para nossos cérebros tentar entender e provar experimentalmente a grande bagunça instalada na ciência nos últimos 30 anos?

É claro que a maioria dos mais brilhantes teóricos de cordas não endossa esse impasse epistemológico. O próprio Brian Greene, físico americano e principal divulgador da concepção de cordas, autor do best-seller (mais falado do que lido, é verdade) “O Universo Elegante”, escreveu um artigo para o jornal “The New York Times” ressaltando que a prova experimental é essencial e que a questão levantada por Smolin é procedente. “O rigor matemático e a elegância não bastam para demonstrar a relevância de uma teoria. Para ser considerada uma descrição correta do Universo, uma teoria deve fazer previsões confirmadas por experimentos.


E, como ressalta, com razão, um pequeno mas barulhento grupo de críticos, a teoria de cordas ainda não fez tais previsões. Essa é uma questão-chave e merece um escrutínio sério.”

Enquanto o diálogo entre Greene e Smolin tem sido diplomático, nos blogs das comunidades científicas o tom da guerra desce vários pontos. No diário on-line do físico Lubos Motl, de Harvard (motls.blogspot.com), por exemplo, já foram até excluídos posts da cosmóloga brasileira Christine Dantas (christinedantas.blogspot.com). “Na verdade não existe uma guerra entre os muros da academia”, ameniza Victor Rivelles, do Instituto de Física da USP. “O que é novo é que a internet, e particularmente os blogs, amplificam essa discussão, dando a impressão de que ela é muito maior do que na realidade é.”

Saída pela esquerda
Para contornar a questão apareceu o que se chama princípio antrópico: entre os incontáveis Universos possíveis, os observáveis seriam apenas os feitos sob medida para os humanos. Uma interpretação que resvala para o misticismo e devolve o homem ao centro do Universo, como na Idade Média.

Lamentavelmente, a física experimental, a juíza última das verdades desde os tempos de Galileu e Kepler, pouca coisa pode fazer. As dimensões das cordas elementares e as energias que elas envolvem para serem comprovadas estão fora de alcance. Um acelerador de partículas para produzi-las artificialmente, como foi feito na comprovação do Modelo Padrão, deveria ter uma dimensão maior que a do Sistema Solar.

Todas as esperanças de todos os físicos se voltam agora para o Grande Colisor de Hádrons (prótons ou nêutrons), a ser ligado a partir do ano que vem perto de Genebra, na fronteira da Suíça com a França, na sede do Cern (Centro Europeu de Pesquisas Nucleares). Pela primeira vez, esse acelerador, um túnel ultrafrio com 27 km de circunferência, vai atingir energias suficientes para produzir indícios indiretos da existência de uma quarta dimensão espacial. Lamentavelmente isso não prova nem refuta a teoria de cordas, pois o postulado de dimensões adicionais não é uma exclusividade desse modelo. A pendenga na comunidade dos físicos, portanto, pode persistir.

Pano de fundo
A linha teórica desenvolvida por Smolin, por outro lado, é igualmente nebulosa. Ele é um dos principais articuladores da gravitação quântica de laço, que pretende retomar o enfoque einsteniano de unificação. A teoria geral da relatividade, explica Smolin, não depende de uma geometria de fundo fixa para o espaço-tempo. Mas, para toda a teoria de cordas, e mesmo para o Modelo Padrão, as forças e partículas são como atores num cenário ou pano de fundo de uma paisagem espaço-temporal definida.

É o que ele chama de teorias dependentes do fundo. A gravitação quântica de laço, ao contrário, é independente do fundo. É uma conjectura arrojada: em vez de partículas e forças elementares, Smolin sugere que as entidades fundamentais são nós ou laços no tecido do espaço-tempo.

Assim como a teoria de cordas deriva todas as partículas e forças a partir de modos diferentes das cordas elementares vibrarem, Smolin acredita que essas entidades surjam de enroscos no tecido do espaço-tempo. Assim, as dimensões espaciais e a passagem do tempo emergem não como cenário do teatro das partículas, mas como sua gênese. Outra conseqüência da teoria é que o espaço-tempo não é contínuo: ele também é quantizado, existindo tamanhos mínimos, como átomos de espaço-tempo.

Lamentavelmente esses enroscos também são indetectáveis, mesmo nos mais poderosos aceleradores. No fim, pateticamente, Smolin admite que não se saiu melhor do que os teóricos de cordas e que seu livro “é uma forma de procrastinação”.

Mas as questões sociológicas colocadas nos últimos capítulos do livro de Smolin não podem mais ficar no limbo. A acusação da formação de gangues nos centros de pesquisa é agora uma questão pública, que envolve a aplicação do dinheiro dos impostos e a estagnação das ciências e, indiretamente, da tecnologia que ela deveria gerar.


LIVRO – “The Trouble With Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next” 
Lee Smolin; Houghton Mifflin, 392 páginas, US$ 26.

‘Dressed’ laser aimed at clouds may be key to inducing rain, lightning (Science Daily)

Date: April 18, 2014

Source: University of Central Florida

Summary: The adage “Everyone complains about the weather but nobody does anything about it” may one day be obsolete if researchers further develop a new technique to aim a high-energy laser beam into clouds to make it rain or trigger lightning. The technique could also be applied to long-distance sensors and spectrometers that identify chemical makeup.

Image credit: © Maksim Shebeko / Fotolia

The adage “Everyone complains about the weather but nobody does anything about it” may one day be obsolete if researchers at the University of Central Florida’s College of Optics & Photonics and the University of Arizona further develop a new technique to aim a high-energy laser beam into clouds to make it rain or trigger lightning.

The solution? Surround the beam with a second beam to act as an energy reservoir, sustaining the central beam to greater distances than previously possible. The secondary “dress” beam refuels and helps prevent the dissipation of the high-intensity primary beam, which on its own would break down quickly. A report on the project, “Externally refueled optical filaments,” was recently published in Nature Photonics.

Water condensation and lightning activity in clouds are linked to large amounts of static charged particles. Stimulating those particles with the right kind of laser holds the key to possibly one day summoning a shower when and where it is needed.

Lasers can already travel great distances but “when a laser beam becomes intense enough, it behaves differently than usual — it collapses inward on itself,” said Matthew Mills, a graduate student in the Center for Research and Education in Optics and Lasers (CREOL). “The collapse becomes so intense that electrons in the air’s oxygen and nitrogen are ripped off creating plasma — basically a soup of electrons.”

At that point, the plasma immediately tries to spread the beam back out, causing a struggle between the spreading and the collapsing of an ultra-short laser pulse. This struggle is called filamentation, and it creates a filament, or “light string,” that propagates only until the properties of air make the beam disperse.
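
For a sense of the threshold at which this collapse sets in, the sketch below evaluates the standard textbook estimate for the critical power of self-focusing of a Gaussian beam; the nonlinear refractive index of air used here is an assumed order-of-magnitude value, not a figure taken from the paper.

```python
# Back-of-the-envelope estimate of the power at which the collapse described
# above wins over diffraction: the critical power for self-focusing of a
# Gaussian beam, P_cr ~ 3.77 * lambda^2 / (8 * pi * n0 * n2). The nonlinear
# index n2 of air is an assumed order-of-magnitude value.
import math

wavelength = 800e-9      # m, typical Ti:sapphire laser wavelength
n0 = 1.0                 # linear refractive index of air (approximately)
n2 = 3e-23               # m^2/W, assumed nonlinear index of air

p_critical = 3.77 * wavelength ** 2 / (8 * math.pi * n0 * n2)
print(f"critical power for self-focusing in air: ~{p_critical / 1e9:.1f} GW")
# ~3 GW: femtosecond pulses easily exceed this, which is why filaments form.
```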

“Because a filament creates excited electrons in its wake as it moves, it artificially seeds the conditions necessary for rain and lightning to occur,” Mills said. Other researchers have caused “electrical events” in clouds, but not lightning strikes.

But how do you get close enough to direct the beam into the cloud without being blasted to smithereens by lightning?

“What would be nice is to have a sneaky way which allows us to produce an arbitrary long ‘filament extension cable.’ It turns out that if you wrap a large, low intensity, doughnut-like ‘dress’ beam around the filament and slowly move it inward, you can provide this arbitrary extension,” Mills said. “Since we have control over the length of a filament with our method, one could seed the conditions needed for a rainstorm from afar. Ultimately, you could artificially control the rain and lightning over a large expanse with such ideas.”

So far, Mills and fellow graduate student Ali Miri have been able to extend the pulse from 10 inches to about 7 feet. And they’re working to extend the filament even farther.

“This work could ultimately lead to ultra-long optically induced filaments or plasma channels that are otherwise impossible to establish under normal conditions,” said professor Demetrios Christodoulides, who is working with the graduate students on the project.

“In principle such dressed filaments could propagate for more than 50 meters or so, thus enabling a number of applications. This family of optical filaments may one day be used to selectively guide microwave signals along very long plasma channels, perhaps for hundreds of meters.”

The technique could also be applied to long-distance sensors and spectrometers that identify chemical makeup. Development of the technology was supported by a $7.5 million grant from the Department of Defense.

Journal Reference:

  1. Maik Scheller, Matthew S. Mills, Mohammad-Ali Miri, Weibo Cheng, Jerome V. Moloney, Miroslav Kolesik, Pavel Polynkin, Demetrios N. Christodoulides. Externally refuelled optical filaments. Nature Photonics, 2014; 8 (4): 297. DOI: 10.1038/nphoton.2014.47